# Voyager SDK
The Voyager SDK makes it easy to build high-performance inferencing applications with Axelera AI Metis devices.
This is a production-ready release of the Voyager SDK. Software components and features still in development are marked [Beta], indicating tested functionality that will continue to grow in future releases, or [Experimental], indicating early-stage features with limited testing.

## Finding your way around
- Getting Started walks you through installing your hardware and verifying your setup. Start here if this is your first time with a Metis device.
- User Guides cover day-to-day tasks: installing the SDK, running your first inference, updating firmware, working with LLMs, and monitoring your device. These are step-by-step and assume no prior experience with the SDK.
- Tutorials go deeper: video sources, custom weights, cascaded pipelines, Python API usage, and code examples you can run and modify. Come here once you're up and running and want to build something.
- Model Zoo lists every pre-trained model in this release with performance data, supported tasks, and licensing. Use it to find the right model for your application.
- Reference holds the technical detail: CLI tools, pipeline configuration, compiler options, API specifications, and system internals. Look things up here when you need exact flags, parameters, or architecture details.
- Glossary defines the terms used throughout: AIPU, pipeline, mAP, and other SDK-specific vocabulary.
## Install SDK and get started
| Document | Description |
|---|---|
| Hardware installation | Install your Metis M.2, PCIe, or Compute Board |
| SDK installation | Clone the repo, run the installer, activate your environment |
| Verify setup | Confirm the device is detected and the stack is working |
| Windows setup | Install Voyager SDK and run a model in Windows 11 (WSL2 + native) |
| AxDevice manual | List all Metis boards connected to your system and configure their settings |
| Firmware update | Update your board firmware |
## Deploy models on Metis devices
| Document | Description |
|---|---|
| Model zoo | All models supported by this release of the Voyager SDK |
| Deployment manual (deploy.py) | All options provided by the command-line deployment tool |
| Custom weights | Deploy a model using your own weights |
| Custom model | Deploy a custom model architecture |
| Compiler CLI | Compiler command-line interface [Beta] |
| Compiler API | Python compiler API [Experimental] |
| Compiler configuration | TOML and Python configuration options |
## Run models on Metis devices
| Document | Description |
|---|---|
| First inference | Run object detection on a camera or video file |
| Run inference in Python | InferenceStream API with worked examples |
| Video sources | Cameras, RTSP streams, video files, and multiple inputs |
| Measure accuracy | Benchmark a model against a validation dataset |
| Inferencing manual (inference.py) | All options provided by the command-line inferencing tool |
| Cascaded pipelines | Chain models together (e.g. detect then classify) |
| LLM inference | Run large language models on Metis devices [Experimental] |
## Two ways to build pipelines
The Voyager SDK provides two pipeline approaches. Use whichever fits your workflow, or combine them.
| | YAML Pipeline | Pipeline Builder [Experimental] |
|---|---|---|
| Best for | Production deployment, standard workflows | Custom inter-stage logic, rapid prototyping |
| Define pipelines in | YAML configuration files | Python code (axelera.runtime.op) |
| Strengths | Optimized GStreamer throughput, battle-tested | Composable operators, Jupyter-friendly, full Python control |
| Maturity | Stable — production systems run on this today | Core operators stable; cascade and streaming APIs in development |
Hybrid approach: Many teams use YAML pipelines for primary inference (detection, segmentation) via InferenceStream, then hand off to Pipeline Builder operators for tracking, filtering, and custom business logic in Python.
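As a sketch of that hybrid pattern, the snippet below drives a YAML-defined detection pipeline through InferenceStream and hands each result to a Pipeline Builder stage. The `op.Filter` operator, the detection fields, and the network and source names are illustrative assumptions, not the verified API; the real operators live under `axelera.runtime.op` and are covered in the Pipeline Builder documentation.

```python
# Hybrid sketch: a YAML-defined pipeline handles detection via InferenceStream,
# then a Pipeline Builder operator applies custom filtering in Python.
# The op.Filter operator and the detection fields below are assumptions for
# illustration, not the verified API; see the Pipeline Builder tutorial for
# the real operators under axelera.runtime.op.
from axelera.app.stream import create_inference_stream
from axelera.runtime import op  # Pipeline Builder operators [Experimental]

stream = create_inference_stream(
    network="yolov5s-v7-coco",      # assumed model zoo network name
    sources=["media/traffic.mp4"],  # assumed local video source
)

# Assumed: a filter-style operator that keeps detections by class label.
keep_vehicles = op.Filter(lambda det: det.label in {"car", "truck", "bus"})

for frame_result in stream:
    vehicles = keep_vehicles(frame_result.meta)  # custom inter-stage logic
    # ...hand off to tracking, counting, or other business logic here

stream.stop()
```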
## Application integration APIs
The Voyager SDK allows you to develop inferencing pipelines and end-user applications at different levels of abstraction.
| API | Description |
|---|---|
| Pipeline Builder | Pythonic API for composable ML pipelines using axelera.runtime.op operators [Experimental] |
| InferenceStream (high level) | Python library for reading images and inference metadata from a pipeline within your application (see the sketch below) |
| AxRuntime (low level) | Python API for manually constructing, configuring and executing pipelines |
| GStreamer plugins | Plugins for integrating Metis inferencing within a GStreamer pipeline |
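To make the levels concrete, here is a minimal sketch of the high-level InferenceStream path. The network name, source string, and result attributes are assumptions for illustration; the exact API is documented in the Run inference in Python guide and the application examples below.

```python
# Minimal high-level integration sketch using InferenceStream. The network
# name, source string, and result attributes are assumptions for illustration;
# see "Run inference in Python" for the exact API.
from axelera.app.stream import create_inference_stream

stream = create_inference_stream(
    network="yolov5s-v7-coco",  # assumed model zoo network name
    sources=["usb:0"],          # assumed USB camera source
)

for frame_result in stream:
    # Each result pairs the decoded image with inference metadata.
    print(frame_result.meta)    # assumed metadata attribute

stream.stop()
```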
## Code examples
Complete, runnable examples demonstrating different integration patterns.
| Example | What it shows |
|---|---|
| Application (basic) | Simplest InferenceStream integration |
| Application (extended) | Runtime telemetry, hardware decoding, dynamic render settings |
| Application (tensor) | Direct tensor access for custom post-processing |
| AxInferenceNet (basic) | C++ low-level model integration |
| AxInferenceNet (cascaded) | C++ cascaded model pipeline |
| AxInferenceNet (tensor) | C++ direct tensor access |
| Classification | Image classification with generator-based input |
| Cross-line counting | Vehicle counting across a virtual line |
| Multiple pipelines | Dynamic pipeline management and hot-swapping |
| Remote monitor | TCP broadcast of real-time JSON telemetry |
## Reference
| Document | Description |
|---|---|
| Pipeline basics | How inference pipelines work |
| Model formats | The model.json file and compiled output structure |
| YAML operators | Pipeline operators for YAML configuration |
| GStreamer operators | GStreamer pipeline plugins reference |
| Inference configuration | Advanced inference settings |
| Compiler configuration reference | Full compiler config field reference |
| ONNX operator support | ONNX operators supported by the Axelera AI compiler |
| Model adapters | Custom dataset adapters and evaluators |
| Additional models | Models beyond the standard zoo |
| AxRunmodel | Run deployed models with DMA buffers, double buffering, and multiple cores |
| install.sh | Installer options and what it installs |
| Hardware | Metis hardware specifications and capabilities |
| Environment variables | SDK environment variables reference |
| Virtual environments | Why activation is required and how it works |
| Thermal and power | Thermal behavior, power management, and monitoring |
| AxMonitor | Real-time device monitoring tool |
| Performance | Performance benchmarking methodology |
| Accuracy metrics | Understanding mAP, precision, recall |
| Glossary | Definitions for terms used throughout the SDK docs |
## Support
- Axelera AI Community — forums, projects, technical support
- Customer Portal — technical documents and support tickets
- GitHub Issues — SDK bugs and feature requests