# Export

Use `export` to convert ReID models to deployment formats such as ONNX and TensorRT.
## Examples

Export to multiple formats:
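A minimal invocation might look like the following. The weights path is illustrative, and passing `--include` once per format is an assumption based on its tuple-style default `('onnx',)`:

```shell
# Sketch: export an OSNet ReID model to both ONNX and TensorRT on GPU 0.
# Point --weights at your own .pt file.
boxmot export --weights models/osnet_x0_25_msmt17.pt \
  --include onnx --include engine \
  --device 0 --half
```

The `engine` format requires a CUDA device, so `--device 0` is used here rather than the `cpu` default.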
## Typical use cases

- Deploy a ReID backbone outside BoxMOT
- Prepare ReID models for inference benchmarks
- Build an optimized runtime for a tracker that uses appearance features
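Once exported, the ONNX model can be run with any ONNX-compatible runtime. Below is a minimal sketch of preparing a person crop for the exported model, assuming a `1x3x256x128` float32 input normalized with ImageNet statistics (the exact input name, layout, and preprocessing should be verified against BoxMOT's own pipeline):

```python
import numpy as np

# ImageNet normalization constants commonly used by ReID backbones such as OSNet.
# Assumption: confirm these match the preprocessing used at training time.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(crop: np.ndarray) -> np.ndarray:
    """Turn an HxWx3 uint8 RGB crop (already resized to 256x128)
    into a 1x3x256x128 float32 batch for the exported model."""
    x = crop.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - MEAN) / STD                  # channel-wise normalization
    x = np.transpose(x, (2, 0, 1))        # HWC -> CHW
    return x[np.newaxis, ...]             # add batch dimension

# Running the exported model (requires `pip install onnxruntime`;
# the model filename here is illustrative):
# import onnxruntime as ort
# sess = ort.InferenceSession("osnet_x0_25_msmt17.onnx")
# name = sess.get_inputs()[0].name
# (embedding,) = sess.run(None, {name: preprocess(crop)})
```

The commented `onnxruntime` lines show the intended usage without requiring the runtime or a model file to be present.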
## CLI Arguments

### `boxmot export`

Export ReID models.

Usage: `boxmot export [OPTIONS]`

Options:
| Name | Type | Description | Default |
|---|---|---|---|
| `--batch-size` | integer | Batch size for export | `1` |
| `--imgsz`, `--img`, `--img-size` | text | Image size as `H,W` (e.g. `256,128`) | `256,128` |
| `--device` | text | CUDA device (e.g. `0`, `0,1,2,3`, or `cpu`) | `cpu` |
| `--optimize` | boolean | Optimize TorchScript for mobile (CPU export only) | `False` |
| `--dynamic` | boolean | Enable dynamic axes for ONNX/TF/TensorRT export | `False` |
| `--simplify` | boolean | Simplify the ONNX model | `False` |
| `--opset` | integer | ONNX opset version | `17` |
| `--workspace` | integer | TensorRT workspace size (GB) | `4` |
| `--verbose` | boolean | Enable verbose logging for TensorRT | `False` |
| `--weights` | Path | Path to the model weights (`.pt` file) | `/home/runner/work/boxmot/boxmot/models/osnet_x0_25_msmt17.pt` |
| `--half` | boolean | Enable FP16 half-precision export (GPU only) | `False` |
| `--include` | text | Export formats to include. Options: `torchscript`, `onnx`, `openvino`, `engine`, `tflite` | `onnx` |
| `--help` | boolean | Show this message and exit. | `False` |