Export

Use export to convert ReID models to deployment formats such as ONNX and TensorRT.

Examples

boxmot export --weights osnet_x0_25_msmt17.pt --include onnx

Export multiple formats:

boxmot export \
  --weights osnet_x0_25_msmt17.pt \
  --include onnx \
  --include engine \
  --dynamic
Python API:

from boxmot import Boxmot

boxmot = Boxmot(reid="osnet_x0_25_msmt17")
exported = boxmot.export(include=("onnx", "engine"), dynamic=True)
print(exported.files)
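Passing dynamic=True marks the batch (and, for image inputs, spatial) axes as symbolic, so one exported model can serve multiple input shapes. A minimal sketch of the kind of dynamic-axes mapping an ONNX exporter (e.g. torch.onnx.export) accepts; the tensor names "images" and "output" are illustrative assumptions, not BoxMOT's actual names:

```python
# Illustrative dynamic-axes mapping for an ONNX export.
# NOTE: the tensor names "images" and "output" are assumptions for this
# sketch; BoxMOT's exporter may use different names internally.
dynamic_axes = {
    "images": {0: "batch", 2: "height", 3: "width"},  # NCHW input
    "output": {0: "batch"},                           # per-image embeddings
}
```

Without dynamic axes, the exported graph is fixed to the --batch-size and --imgsz used at export time.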

Typical use cases

  • deploy a ReID backbone outside BoxMOT
  • prepare ReID models for inference benchmarks
  • build an optimized runtime for a tracker that uses appearance features
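For the last use case, a tracker with appearance features typically compares ReID embeddings by cosine similarity. A minimal plain-Python sketch (the toy vectors stand in for real embedding outputs):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for ReID features of two detections.
emb_a = [0.1, 0.3, 0.5]
emb_b = [0.1, 0.3, 0.5]
emb_c = [0.5, -0.3, 0.1]

print(cosine_similarity(emb_a, emb_b))  # identical vectors -> ~1.0
print(cosine_similarity(emb_a, emb_c))  # dissimilar vectors -> much lower
```

Trackers use scores like these to associate new detections with existing tracks.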

CLI Arguments

boxmot export

Export ReID models

Usage:

boxmot export [OPTIONS]

Options:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| --batch-size | integer | Batch size for export | 1 |
| --imgsz, --img, --img-size | text | Image size as H,W (e.g. 256,128) | 256,128 |
| --device | text | CUDA device (e.g., '0', '0,1,2,3', or 'cpu') | cpu |
| --optimize | boolean | Optimize TorchScript for mobile (CPU export only) | False |
| --dynamic | boolean | Enable dynamic axes for ONNX/TF/TensorRT export | False |
| --simplify | boolean | Simplify ONNX model | False |
| --opset | integer | ONNX opset version | 17 |
| --workspace | integer | TensorRT workspace size (GB) | 4 |
| --verbose | boolean | Enable verbose logging for TensorRT | False |
| --weights | Path | Path to the model weights (.pt file) | /home/runner/work/boxmot/boxmot/models/osnet_x0_25_msmt17.pt |
| --half | boolean | Enable FP16 half-precision export (GPU only) | False |
| --include | text | Export formats to include. Options: torchscript, onnx, openvino, engine, tflite | ('onnx',) |
| --help | boolean | Show this message and exit. | False |
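The --imgsz option takes the ReID input resolution as an H,W string. A hedged sketch of how such a value can be parsed into integers; parse_imgsz is a hypothetical helper for illustration, not part of BoxMOT's API:

```python
def parse_imgsz(value: str) -> tuple[int, int]:
    """Parse an 'H,W' string such as '256,128' into (height, width)."""
    parts = [int(p) for p in value.split(",")]
    if len(parts) != 2:
        raise ValueError(f"expected H,W, got {value!r}")
    height, width = parts
    return height, width

print(parse_imgsz("256,128"))  # (256, 128)
```

The default of 256,128 matches the tall, narrow crops typical of person ReID backbones such as OSNet.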