Artifact Registry

Model Artifacts

This page records which detector artifacts currently exist, what each format is for, and how each target path is produced from training outputs.

Scope

The current maintained detector set is:

- YOLO11s
- NanoDet

These are the chamber-zone detector models used for the April 6, 2026 cross-device benchmark work.

Canonical source runs

YOLO11s

NanoDet

Format matrix

| Format | Purpose | Current status | Canonical location |
| --- | --- | --- | --- |
| ONNX | Main interchange format and correctness reference on CPU paths | Approved | software/client/blob/device_benchmarks/chamber_zone_pair_bundle/models/.../model.onnx |
| NCNN | CPU deployment experiments on low-power devices | Built, but currently not quality-approved for these chamber-zone exports | software/client/blob/device_benchmarks/chamber_zone_pair_bundle/models/.../model.ncnn.param and model.ncnn.bin |
| CoreMLExecutionProvider | Fast local Mac acceleration path using ONNX Runtime | Approved for local benchmarking; no separate .mlpackage is maintained right now | Reuses the canonical ONNX models |
| RKNN | Orange Pi NPU deployment path | Experimental; current artifacts are not rebuilt from the exact current ONNX exports | Existing Orange Pi device-side artifacts under /root/bench/models |
| HEF | Raspberry Pi 5 AI HAT deployment path | Built for both models; NanoDet currently has the stronger quality result | software/client/blob/hailo_compile_bundles/.../results/ |
| HAR | Intermediate Hailo compiler output | Built and worth keeping with the matching HEF | software/client/blob/hailo_compile_bundles/.../results/ |
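As a concrete cross-check against the matrix above, a small script can report which formats each model directory in a bundle actually carries. This is an illustrative sketch, not existing project tooling: the function names and the assumption that each model lives in its own directory under models/ are mine; the per-format filenames come from the table (NCNN is the only format that ships as a file pair).

```python
from pathlib import Path

# Expected artifact filenames per format, per the format matrix above.
FORMAT_FILES = {
    "onnx": ("model.onnx",),
    "ncnn": ("model.ncnn.param", "model.ncnn.bin"),
}

def present_formats(model_dir: Path) -> list[str]:
    """Return the formats whose expected files are all present in model_dir."""
    return [
        fmt
        for fmt, names in FORMAT_FILES.items()
        if all((model_dir / name).is_file() for name in names)
    ]

def bundle_inventory(bundle_root: Path) -> dict[str, list[str]]:
    """Map each model directory under <bundle>/models to its available formats."""
    models = bundle_root / "models"
    return {d.name: present_formats(d) for d in sorted(models.iterdir()) if d.is_dir()}
```

Running `bundle_inventory` against a bundle directory makes a missing `.param`/`.bin` half of an NCNN export visible immediately, instead of failing later on-device.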

Current target mapping

Mac Mini M4

CPU reference:

Accelerated local path:

Raspberry Pi 5

CPU fallback and correctness path:

AI HAT path:

Orange Pi 5

CPU fallback and correctness path:

NPU path:

Conversion rule

The durable rule is:

training run -> canonical ONNX export -> target-specific compiled format

Do not compile target-specific formats from ad hoc copies whose ONNX provenance is unclear.
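One lightweight way to keep that provenance auditable is to record the SHA-256 of the canonical ONNX export next to each compiled artifact, so any compiled format can be traced back to the exact export it came from. This is a sketched convention, not something the repo currently does; the sidecar filename and JSON shape are assumptions:

```python
import hashlib
import json
from pathlib import Path

def record_provenance(onnx_path: Path, compiled_path: Path) -> Path:
    """Write <compiled>.provenance.json recording the source ONNX digest."""
    digest = hashlib.sha256(onnx_path.read_bytes()).hexdigest()
    meta = {"source_onnx": str(onnx_path), "source_sha256": digest}
    out = compiled_path.with_suffix(compiled_path.suffix + ".provenance.json")
    out.write_text(json.dumps(meta, indent=2))
    return out

def verify_provenance(onnx_path: Path, compiled_path: Path) -> bool:
    """True if the compiled artifact's recorded digest matches onnx_path."""
    meta_path = compiled_path.with_suffix(compiled_path.suffix + ".provenance.json")
    meta = json.loads(meta_path.read_text())
    return meta["source_sha256"] == hashlib.sha256(onnx_path.read_bytes()).hexdigest()
```

A `verify_provenance` failure is exactly the "ad hoc copy" situation the rule forbids: the compiled artifact no longer matches the canonical ONNX export.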

Conversion map

1. Training run to ONNX

The training and export runs live under:

The ONNX export from that run is the canonical source for downstream targets.

2. ONNX to benchmark bundle

Use:

cd software/client
uv run python scripts/device_detector_benchmark.py bundle \
  --preset chamber_zone_pair \
  --output blob/device_benchmarks/chamber_zone_pair_bundle \
  --archive

The bundle becomes the single benchmarking input across devices.

3. ONNX to Mac CPU, Mac CoreML, Pi CPU, Orange Pi CPU

Use:

uv run python scripts/device_detector_benchmark.py run \
  --bundle blob/device_benchmarks/chamber_zone_pair_bundle \
  --output-dir <target-output-dir> \
  --runtime onnx

For the Mac accelerated path:

uv run python scripts/device_detector_benchmark.py run \
  --bundle blob/device_benchmarks/chamber_zone_pair_bundle \
  --output-dir <target-output-dir> \
  --runtime coreml
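Since the CoreML path runs through ONNX Runtime (see the format matrix), `--runtime coreml` presumably amounts to an ordered execution-provider list with a CPU fallback. The helper below sketches that ordering logic as a pure function; the mapping from runtime flag to provider list is my assumption about how the benchmark script behaves, not its actual implementation:

```python
# Provider preference per benchmark runtime flag. CoreML falls back to CPU so
# runs still complete on machines without Apple hardware acceleration.
PREFERENCES = {
    "onnx": ["CPUExecutionProvider"],
    "coreml": ["CoreMLExecutionProvider", "CPUExecutionProvider"],
}

def select_providers(runtime: str, available: set[str]) -> list[str]:
    """Keep the preferred providers, in order, that this install actually offers."""
    chosen = [p for p in PREFERENCES[runtime] if p in available]
    if not chosen:
        raise RuntimeError(f"no usable providers for runtime {runtime!r}")
    return chosen
```

With onnxruntime installed, `available` would come from `onnxruntime.get_available_providers()`, and the resulting list is what `onnxruntime.InferenceSession(..., providers=...)` accepts.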

4. ONNX to NCNN

The benchmark bundle already carries NCNN exports, but the current chamber-zone NCNN results are not approved yet.

5. ONNX to RKNN

The Orange Pi NPU path needs:

Current status: runtime-side RKNN execution exists, but the current chamber-zone RKNN artifacts are not yet quality-approved.

6. ONNX to Hailo HEF

Use:

Detailed instructions live in Hailo HEF Workflow.

Current maintained results:

Update policy

When a new detector export replaces the current chamber-zone models, update:

  1. this page
  2. Runtime Status
  3. Device Benchmarking if the benchmark preset or runtime rules change
  4. Hailo HEF Workflow if the Hailo compile flow changes