Implementation of YOLOs-CPP by Mecatron.
- YOLOs-CPP: https://github.com/Geekgineer/YOLOs-CPP
- ros2_yolos_cpp: https://github.com/Geekgineer/ros2_yolos_cpp.git
ros2_yolos_cpp brings the speed and unified API of YOLOs-CPP to the Robot Operating System (ROS 2). It provides composable, lifecycle-managed nodes for the entire YOLO family (v5, v8, v11, v26, etc.).
- Zero-Copy Transport: Optimized for high-throughput image pipelines using `rclcpp::Subscription`.
- Lifecycle Management: Full support for `configure`, `activate`, `deactivate`, and `shutdown` transitions.
- Composable Nodes: Run multiple models in a single container for efficient resource usage.
- All Tasks Supported: Detection, Segmentation, Pose, OBB, and Classification.
- Production Ready: CI/CD tested, strictly typed parameters, and standardized messages (`vision_msgs`).
| Requirement | Version | Notes |
|---|---|---|
| C++ Compiler | C++17 | GCC 9+, Clang 10+, MSVC 2019+ |
| CMake | β₯ 3.16 | |
| OpenCV (CPP) | β₯ 4.5 | Core, ImgProc, HighGUI |
| ONNX Runtime | β₯ 1.16 | Auto-downloaded by build script |
```bash
# Check GCC version
g++ --version
# Check CMake version
cmake --version
# Check OpenCV (C++)
pkg-config --modversion opencv4 2>/dev/null || pkg-config --modversion opencv
```
- CUDA 12.x is the default version for ONNX Runtime GPU packages on PyPI.
- For other CUDA versions, refer to ONNX_CUDA.
| ONNX Runtime Version | CUDA | cuDNN |
|---|---|---|
| 1.20.x | 12.x | 9.x |
### Installation of CUDA
```bash
# Download and run the CUDA 12.8 installer
wget https://developer.download.nvidia.com/compute/cuda/12.8.0/local_installers/cuda_12.8.0_570.86.10_linux.run
sudo sh cuda_12.8.0_570.86.10_linux.run
# Create symlink (required for most tools to find CUDA)
sudo ln -s /usr/local/cuda-12.8 /usr/local/cuda
# Add to ~/.bashrc
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
# Check installation
nvcc --version
```

### Installation of cuDNN
```bash
# Install cuDNN 9 for CUDA 12
wget https://developer.download.nvidia.com/compute/cudnn/9.19.0/local_installers/cudnn-local-repo-ubuntu2204-9.19.0_1.0-1_amd64.deb
sudo dpkg -i cudnn-local-repo-ubuntu2204-9.19.0_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-ubuntu2204-9.19.0/cudnn-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cudnn9-cuda-12
```
### Installation of ONNX Runtime
```bash
# Clone
git clone https://github.com/Geekgineer/YOLOs-CPP.git
cd YOLOs-CPP
# Build (auto-downloads ONNX Runtime)
./build.sh 1.20.1 0   # CPU build
./build.sh 1.20.1 1   # GPU build (requires CUDA - pick this)
```

### Installation of ros2_yolos_cpp
```bash
# Create workspace
mkdir -p ros2_ws/src
cd ros2_ws/src
# Clone package
git clone https://github.com/Geekgineer/ros2_yolos_cpp.git
# Install dependencies
cd ..
rosdep update && rosdep install --from-paths src --ignore-src -y
# Build (Release mode recommended for performance)
colcon build --packages-select ros2_yolos_cpp --cmake-args -DCMAKE_BUILD_TYPE=Release
source install/setup.bash
```

This package provides a launch file for each task. You must provide paths to your ONNX model and (optionally) a labels file.
```bash
# Install and run a USB camera driver
sudo apt install ros-humble-usb-cam
ros2 run usb_cam usb_cam_node_exe --ros-args \
  -p video_device:=/dev/video0 \
  -p image_width:=640 \
  -p image_height:=480 \
  -p framerate:=30.0 \
  -p pixel_format:=mjpeg2rgb \
  -p io_method:=mmap \
  -r __ns:=/camera
```

Detector: Publishes `vision_msgs/Detection2DArray` with bounding boxes and class IDs.
```bash
ros2 launch ros2_yolos_cpp detector.launch.py \
  model_path:=src/ros2_yolos_cpp/models/yolo11n.onnx \
  labels_path:=src/ros2_yolos_cpp/models/coco.names \
  use_gpu:=true \
  image_topic:=/camera/image_raw
```

Segmentor: Publishes `vision_msgs/Detection2DArray` and a synchronized mask image.
```bash
ros2 launch ros2_yolos_cpp segmentor.launch.py \
  model_path:=/path/to/yolo11n-seg.onnx \
  labels_path:=/path/to/coco.names \
  image_topic:=/camera/image_raw
```

Pose: Publishes `vision_msgs/Detection2DArray` with keypoints.
```bash
ros2 launch ros2_yolos_cpp pose.launch.py \
  model_path:=/path/to/yolo11n-pose.onnx \
  image_topic:=/camera/image_raw
```

OBB: Publishes custom `ros2_yolos_cpp/OBBDetection2DArray` with rotated bounding boxes.
```bash
ros2 launch ros2_yolos_cpp obb.launch.py \
  model_path:=/path/to/yolo11n-obb.onnx \
  labels_path:=/path/to/dota.names \
  image_topic:=/camera/image_raw
```

Classifier: Publishes `vision_msgs/Classification`.
```bash
ros2 launch ros2_yolos_cpp classifier.launch.py \
  model_path:=/path/to/yolo11n-cls.onnx \
  labels_path:=/path/to/imagenet.names \
  image_topic:=/camera/image_raw
```

- ros2_yolos_cpp nodes are lifecycle-managed.
- After launching the launch file:

```bash
# Step 1: Configure (loads model, allocates GPU memory)
ros2 lifecycle set /yolos_detector configure
# Step 2: Activate (starts inference and publishes detections)
ros2 lifecycle set /yolos_detector activate
```

To stop and tear down the node:

```bash
# 1. Deactivate: stops inference (active → inactive)
ros2 lifecycle set /yolos_detector deactivate
# 2. Cleanup: releases GPU/memory (inactive → unconfigured)
ros2 lifecycle set /yolos_detector cleanup
# 3. Shutdown: fully terminates node (unconfigured → finalized)
ros2 lifecycle set /yolos_detector shutdown
```

Nodes can be configured via launch arguments or a YAML parameter file. See `config/default_params.yaml` for a template.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model_path` | string | Required | Absolute path to `.onnx` model file. |
| `labels_path` | string | `""` | Path to text file with class names (one per line). |
| `use_gpu` | bool | `false` | Enable CUDA acceleration (requires GPU build). |
| `conf_threshold` | double | `0.4` | Minimum confidence score to output a detection. |
| `nms_threshold` | double | `0.45` | IoU threshold for Non-Maximum Suppression. |
| `image_topic` | string | `/camera/image_raw` | Topic to subscribe to for input images. |
| `publish_debug_image` | bool | `true` | Publish standard `sensor_msgs/Image` with visualizations. |
| Node Type | Subscriptions | Publications |
|---|---|---|
| Detector | `~/image_raw` | `~/detections` (Detection2DArray), `~/debug_image` (Image) |
| Segmentor | `~/image_raw` | `~/detections` (Detection2DArray), `~/masks` (Image), `~/debug_image` |
| Pose | `~/image_raw` | `~/detections` (Detection2DArray), `~/debug_image` |
| OBB | `~/image_raw` | `~/detections` (OBBDetection2DArray), `~/debug_image` |
| Classifier | `~/image_raw` | `~/classification` (Classification), `~/debug_image` |
Run the stack without installing dependencies locally.
```bash
# Build Docker image
docker build -t ros2_yolos_cpp .
# Run with GPU support
docker run --gpus all -it --rm \
  -v /path/to/models:/models \
  ros2_yolos_cpp \
  ros2 launch ros2_yolos_cpp detector.launch.py model_path:=/models/yolov8n.onnx
```