Runtime Findings

Detector Runtime Status

This page is the durable summary of what we learned from the detector experiments completed on April 6, 2026.

Quality reference

The current quality reference is the local Mac Mini M4 CPU run on the shared 50-image chamber-zone benchmark bundle.

Current validated findings

| Platform / Runtime | Model | Match vs Mac CPU | Notes |
| --- | --- | --- | --- |
| Mac Mini M4 CPU | NanoDet, YOLO11s | Reference | Baseline for all comparisons |
| Mac Mini M4 CoreMLExecutionProvider | NanoDet, YOLO11s | Decision and count parity are effectively exact | YOLO11s is much faster than CPU; NanoDet is slower |
| Orange Pi 5 CPU (ONNX) | NanoDet, YOLO11s | Exact on this benchmark bundle | Safe correctness path on Orange Pi |
| Orange Pi 5 RKNN | NanoDet, YOLO11s | Not close enough yet | Current .rknn files were pre-existing and were not rebuilt from the exact current ONNX exports |
| Raspberry Pi 5 CPU (ONNX Runtime 1.23.2) | NanoDet, YOLO11s | Exact on this benchmark bundle | Safe correctness path on Pi 5 |
| Raspberry Pi 5 Hailo-8 | NanoDet | Very close | Best current accelerated target |
| Raspberry Pi 5 Hailo-8 | YOLO11s | Good at the decision level, weaker on count and box parity | Promising, but not yet the closest target |
| Raspberry Pi 5 NCNN | NanoDet, YOLO11s | Not acceptable yet | Treat as experimental for these exports |
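The parity terms used above (decision, count, and box parity) can be made concrete with a small sketch. Everything here is illustrative rather than the project's actual tooling: the per-image record format (a list of `(x1, y1, x2, y2)` boxes, already thresholded and NMS-filtered), the 0.5 IoU threshold, and the greedy matching are all assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def parity(reference, candidate, iou_thresh=0.5):
    """Per-image decision, count, and box parity rates (0.0 to 1.0).

    reference and candidate are parallel lists: one list of boxes per image,
    where the reference run is the Mac CPU baseline.
    """
    decision = count = box = 0
    for ref_boxes, cand_boxes in zip(reference, candidate):
        # Decision parity: both runs agree on "anything detected at all".
        decision += (bool(ref_boxes) == bool(cand_boxes))
        # Count parity: same number of detections.
        count += (len(ref_boxes) == len(cand_boxes))
        # Box parity: greedy one-to-one matching; every reference box must
        # find a distinct candidate box with IoU above the threshold.
        unmatched = list(cand_boxes)
        matched = 0
        for rb in ref_boxes:
            best = max(unmatched, key=lambda cb: iou(rb, cb), default=None)
            if best is not None and iou(rb, best) >= iou_thresh:
                unmatched.remove(best)
                matched += 1
        box += (matched == len(ref_boxes) and not unmatched)
    n = len(reference)
    return {"decision": decision / n, "count": count / n, "box": box / n}
```

A stricter version would also compare confidence scores and class labels; this sketch checks geometry only.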

Throughput takeaways

Current recommendations

If the priority is correctness

Use:

  • Orange Pi 5 CPU (ONNX) for NanoDet and YOLO11s
  • Raspberry Pi 5 CPU (ONNX Runtime 1.23.2) for NanoDet and YOLO11s

These paths currently mirror the reference exactly on the shared benchmark bundle.

If the priority is accelerated deployment on the Pi 5 AI HAT

Use:

  • NanoDet compiled to HEF on the Hailo-8

Why:

  • On the shared benchmark bundle it is very close to the Mac CPU reference, making it the best current accelerated target.

Treat YOLO11s HEF as the next tuning candidate.

If the priority is accelerated deployment on Orange Pi

Do not treat the current RKNN artifacts as release-ready.

Before using Orange Pi NPU results for product decisions, rebuild the .rknn files from the exact current ONNX exports and rerun the benchmark bundle.
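One way to keep the stale-.rknn problem from recurring is to record the hash of the source ONNX export when each .rknn is built, and refuse to trust NPU results when the current export no longer matches. This is a hypothetical guard, not existing project tooling; the sidecar file layout and helper names are assumptions.

```python
# Stale-artifact guard: record the sha256 of the ONNX export at conversion
# time, then check it before trusting NPU results built from that artifact.
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Hex sha256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def record_source_hash(onnx_path, rknn_path):
    """Write a sidecar JSON next to the .rknn at conversion time."""
    meta = {"source_onnx": str(onnx_path), "sha256": sha256_of(onnx_path)}
    Path(str(rknn_path) + ".meta.json").write_text(json.dumps(meta))

def rknn_is_current(onnx_path, rknn_path):
    """True only if the .rknn was built from this exact ONNX export."""
    sidecar = Path(str(rknn_path) + ".meta.json")
    if not sidecar.exists():
        return False  # pre-existing artifact with unknown provenance
    meta = json.loads(sidecar.read_text())
    return meta["sha256"] == sha256_of(onnx_path)
```

Under this scheme the current pre-existing .rknn files would fail the check immediately, since they carry no provenance for the current exports.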

Canonical local artifacts to keep

Keep only the current canonical set and treat everything else under software/client/blob/ as disposable scratch unless a document promotes it.

Benchmark inputs and reference

Current result sets

Current comparison outputs

Current summary reports

Current concurrency summaries

Current Hailo deliverables

Policy for future work

  1. Keep stable conclusions here in the site.
  2. Keep only the latest canonical local artifacts under software/client/blob/.
  3. Regenerate reports from benchmark JSONs instead of treating every HTML file as permanent.
  4. Add new target conclusions only after the target has both:
    • a quality comparison against the Mac CPU reference
    • a sustained-throughput measurement
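Policy item 3 (regenerate reports from benchmark JSONs) can be sketched as a small renderer. The JSON schema below is an assumption for illustration, not the project's actual result format: one file per run with `target`, `model`, `match`, and `fps` fields.

```python
# Regenerate a plain-text summary from per-run benchmark JSONs instead of
# treating rendered HTML reports as permanent artifacts.
import json
from pathlib import Path

def summarize(json_paths):
    """Render one summary line per benchmark result, sorted by target."""
    rows = [json.loads(Path(p).read_text()) for p in json_paths]
    rows.sort(key=lambda r: (r["target"], r["model"]))
    width = max(len(r["target"]) for r in rows)
    return "\n".join(
        f"{r['target']:<{width}}  {r['model']:<8}  {r['match']}  {r['fps']:.1f} fps"
        for r in rows
    )
```

Because the summary is derived entirely from the JSONs, any HTML or text report can be deleted and rebuilt at will, which is the point of keeping only the benchmark JSONs canonical.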