Model Engineering
Catvish integrates the Ultralytics YOLOv8 engine directly into the application, allowing you to train, validate, and optimize state-of-the-art object detection models without writing a single line of Python.
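For context, the steps on this page correspond to what you would otherwise do by hand with the Ultralytics Python API. A rough sketch of the workflow Catvish automates (file names here are hypothetical):

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")                  # pick a base model
model.train(data="data.yaml", epochs=100)   # train on your dataset
model.val()                                 # validate the best checkpoint
model.export(format="onnx")                 # export for deployment
```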
Starting a Training Run
Go to the Model > Training page. Click "New Training Job".
Configuration
Base Model
Select the starting architecture size:
- Nano (n): Fastest, lowest accuracy. Good for Raspberry Pi.
- Small (s): Balanced. Recommended starting point.
- Medium (m) / Large (l): High accuracy, requires powerful GPU.
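For reference, each size maps to a standard pretrained Ultralytics checkpoint of the same letter; a minimal sketch of selecting one via the stock API:

```python
from ultralytics import YOLO

# Nano -> yolov8n.pt, Small -> yolov8s.pt, Medium -> yolov8m.pt, Large -> yolov8l.pt
model = YOLO("yolov8s.pt")  # Small: the recommended starting point
```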
Hyperparameters
- Epochs: Number of complete passes over the training dataset (typically 50-300).
- Batch Size: Images per step. Auto-mode (`-1`) is recommended.
- Image Size: Input resolution (640px default).
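Concretely, these three settings correspond to the `epochs`, `batch`, and `imgsz` arguments of the Ultralytics training call; a minimal sketch, with a hypothetical dataset config path:

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")

# data.yaml is a hypothetical path to your dataset configuration
model.train(
    data="data.yaml",
    epochs=100,   # total passes over the training set
    batch=-1,     # -1 enables auto batch sizing based on available GPU memory
    imgsz=640,    # input resolution in pixels
)
```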
Running Training
Once started, you will see real-time graphs for:
- Box Loss: How accurately the model predicts bounding box coordinates. (Should decrease).
- Cls Loss: How accurate the class predictions are (Cat vs Dog). (Should decrease).
- mAP (Mean Average Precision): The overall accuracy score. (Should increase).
Catvish automatically saves the checkpoint with the highest fitness score as best.pt.
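If you want to inspect that checkpoint outside of Catvish, it can be loaded and re-validated with the Ultralytics API; a minimal sketch, assuming the default Ultralytics output layout (Catvish may store checkpoints in its own location):

```python
from ultralytics import YOLO

# Default Ultralytics layout; adjust to wherever Catvish saves best.pt
best = YOLO("runs/detect/train/weights/best.pt")

metrics = best.val(data="data.yaml")  # data.yaml: hypothetical dataset config
print(metrics.box.map50)              # mAP at IoU 0.50
print(metrics.box.map)                # mAP averaged over IoU 0.50-0.95
```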
Optimization (Export)
For production deployment, you should export your PyTorch (`.pt`) weights to an optimized format. Navigate to Model > Optimizing.
ONNX
Universal: Standard format supported by almost all inference engines (TensorRT, OpenCV, Web).
OpenVINO
Intel CPU: Optimized specifically for Intel CPUs and iGPUs. Up to 5x faster than stock PyTorch on Intel hardware.
FP16 Quantization: Enabling "Half-Precision" (FP16) reduces the model size by roughly 50%, often with negligible impact on accuracy, while significantly speeding up inference on GPUs.
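These export targets map to the `format` and `half` arguments of the Ultralytics export call; a minimal sketch, assuming default export settings otherwise:

```python
from ultralytics import YOLO

# Path is hypothetical; point it at the best.pt produced by your training run
model = YOLO("best.pt")

model.export(format="onnx")             # universal ONNX weights
model.export(format="openvino")         # OpenVINO IR for Intel CPUs/iGPUs
model.export(format="onnx", half=True)  # FP16 variant; may require a GPU device
```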