
Training a custom model

Sensor AI provides decentralized GPU resources (via Golem/iExec) for training AI models on your datasets. You pay with $SENSE tokens, avoiding the high prices of centralized cloud providers.

Step-by-Step Guide

Step 1: Upload Your Dataset
Use your own annotated dataset or buy one from the marketplace.
Ensure data is in a supported format (COCO, YOLO, etc.).
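Before uploading, it can help to sanity-check that an annotation file at least has the top-level sections a COCO detection dataset needs. This is a generic stand-alone check, not part of Sensor AI's tooling:

```python
import json
import os
import tempfile

REQUIRED_KEYS = {"images", "annotations", "categories"}

def validate_coco(path):
    """Return True if the file parses as JSON and contains the
    top-level sections a COCO detection dataset requires."""
    with open(path) as f:
        data = json.load(f)
    missing = REQUIRED_KEYS - set(data)
    if missing:
        raise ValueError(f"missing COCO sections: {sorted(missing)}")
    return True

# Minimal example file to demonstrate the check.
sample = {
    "images": [{"id": 1, "file_name": "img1.jpg", "width": 640, "height": 480}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                     "bbox": [10, 20, 100, 80]}],
    "categories": [{"id": 1, "name": "car"}],
}
path = os.path.join(tempfile.gettempdir(), "sample_coco.json")
with open(path, "w") as f:
    json.dump(sample, f)
```

Catching a malformed file locally is cheaper than discovering the problem after a training job has already been paid for.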
Step 2: Choose a Model Framework
Sensor AI supports:
YOLOv8 (for object detection).
Mask R-CNN (for instance segmentation).
ResNet (for classification).
Custom PyTorch/TensorFlow scripts (upload your own).
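The task-to-framework mapping above can be summarized in a few lines. The helper below is purely illustrative (it is not part of Sensor AI's API); it just restates the list:

```python
# Illustrative mapping of vision task -> suggested framework,
# restating the supported options listed above.
FRAMEWORKS = {
    "object_detection": "YOLOv8",
    "instance_segmentation": "Mask R-CNN",
    "classification": "ResNet",
}

def pick_framework(task):
    """Return a suggested framework for a task, falling back to a
    custom PyTorch/TensorFlow script for anything else."""
    return FRAMEWORKS.get(task, "custom script")
```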
Step 3: Configure Training Parameters
Epochs: How many passes over the dataset (e.g., 50).
Batch Size: Limited by GPU memory (e.g., 16); lower it if you hit out-of-memory errors.
Learning Rate: Controls the optimizer's step size; too high and training diverges, too low and it crawls (e.g., 0.001).
Augmentations: Enable flip/rotate for better generalization.
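The parameters above can be gathered into a single configuration. This is a minimal sketch; the field names and defaults are assumptions for illustration, not Sensor AI's actual job schema:

```python
# Illustrative training configuration; field names and defaults are
# assumptions, not Sensor AI's actual job schema.
DEFAULTS = {
    "epochs": 50,          # passes over the dataset
    "batch_size": 16,      # reduce if the GPU runs out of memory
    "learning_rate": 1e-3, # optimizer step size
    "augmentations": ["flip", "rotate"],  # helps generalization
}

def make_config(**overrides):
    """Merge user overrides into the defaults, rejecting unknown keys
    so a typo in a parameter name fails loudly instead of silently."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise KeyError(f"unknown training parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}
```

Rejecting unknown keys is a small safeguard: a misspelled parameter would otherwise be ignored and the job would silently train with defaults.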
Step 4: Launch Training
Select a decentralized compute provider (e.g., Golem).
Pay with $SENSE tokens (cost depends on GPU hours).
Monitor progress in the "Training Jobs" dashboard.
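Since cost scales with GPU hours, it is worth estimating before launching. The arithmetic below is a hypothetical sketch; actual $SENSE rates are set by the compute provider and will differ:

```python
# Hypothetical cost estimate: the hourly rate used here is an assumed
# example, not an actual Golem/iExec price.
def estimate_cost(gpu_hours, sense_per_gpu_hour):
    """Training cost in $SENSE = GPU hours x provider's hourly rate."""
    return gpu_hours * sense_per_gpu_hour

# e.g. 8 GPU hours at an assumed rate of 12 $SENSE/hour:
cost = estimate_cost(8, 12)  # 96 $SENSE
```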
Step 5: Evaluate & Deploy
Download model weights (e.g., .pt or .h5).
Test performance using validation metrics (mAP, accuracy).
Deploy via:
Sensor AI’s API (for cloud inference).
Edge devices (export to ONNX/TFLite).
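Detection metrics such as mAP are built on intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal stand-alone sketch of the underlying computation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes don't overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice); mAP then averages precision over thresholds and classes.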
Pro Tips
⚡ Start with a small subset to test hyperparameters.
📉 Use early stopping to save costs.
🤖 Fine-tune pre-trained models for faster results.
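The early-stopping tip can be sketched as a patience counter over validation loss: stop once the loss has gone a fixed number of epochs without improving. This is a generic illustration, not Sensor AI's scheduler:

```python
def early_stopping(val_losses, patience=3):
    """Return the epoch index at which training should stop: the first
    epoch where validation loss has not improved for `patience` epochs,
    or the last epoch if it keeps improving."""
    best = float("inf")
    stale = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                return epoch
    return len(val_losses) - 1
```

On pay-per-GPU-hour compute, stopping three epochs after the loss plateaus directly translates into $SENSE saved.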