Voyager SDK
Effortless deployment of AI applications to the Edge

End-to-end integrated software stack
Voyager SDK is currently only available for members of our Early Access Program.
Optimized deployment
The SDK makes it easy to build high-performance applications for edge devices. Developers describe their end-to-end application pipelines in a high-level declarative language (YAML), including one or more neural networks along with the corresponding pre- and post-processing tasks as well as sophisticated image-processing operations. The SDK automatically compiles, optimizes and deploys the entire pipeline, making use of host processors (CPU, embedded GPU or media accelerator) as required. Voyager supports a wide range of host architectures and platforms to meet the requirements of diverse Edge application environments. In addition, the SDK can flexibly embed a pipeline into an inference service to provide a variety of preconfigured, out-of-the-box solutions, ranging from fully embedded use cases to distributed processing of multiple 4K streams.
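To make the declarative approach concrete, a pipeline description might look like the sketch below. Note that all key names and values here are illustrative assumptions, not the SDK's actual YAML schema:

```yaml
# Hypothetical pipeline description -- key names are illustrative only.
pipeline:
  - input:
      source: /dev/video0          # camera, file, or network stream
  - preprocess:
      resize: [640, 640]
      normalize: imagenet          # mean/std normalization preset
  - model:
      name: yolov5s                # e.g. a network imported from the Model Zoo
      precision: int8              # quantized for the Metis AIPU
  - postprocess:
      operation: nms               # non-maximum suppression of detections
      confidence_threshold: 0.25
```

The point of such a description is that the developer states *what* the pipeline does, and the SDK decides *how* each stage is compiled and on which processor it runs.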

Model Zoo
Voyager comes with a Model Zoo, a catalog of state-of-the-art AI models and turnkey pipelines for real-world use cases including image classification, object detection, segmentation, keypoint detection, face recognition and other Computer Vision tasks. This catalog is continuously updated with support for the latest models to cover a broad range of market verticals and use cases, and is freely accessible to all our users. Importantly, developers can easily modify any of the offered models to work with their own datasets or to better fit their application requirements. In addition to access via the web, the Model Zoo is seamlessly integrated within the SDK, enabling effortless importing of Model Zoo content into the application under development.
Under the hood
Voyager builds on industry-standard APIs and open-source frameworks, enhanced with advanced capabilities by Axelera’s R&D team. Voyager’s Machine Learning compiler, built on the Apache TVM compiler framework, automates the compilation and optimization of models for the Metis AI Processing Unit. The compiler takes as input models pretrained in industry-standard frameworks such as PyTorch, and outputs code tuned for Metis hardware. During compilation, it quantizes the model using proprietary, state-of-the-art algorithms and partitions it for optimal execution on the Metis AIPU. Without manual intervention, the compiler generates code with an accuracy practically indistinguishable from that of the original model. Similarly, pipelines use GStreamer, an open-source media framework; while the vast majority of users need not understand any of the internals of generated pipelines, the open nature of the stack allows expert users to customize the generated code for a specific use case.
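For readers unfamiliar with quantization, the sketch below illustrates the general idea behind mapping FP32 weights to INT8. This is the textbook affine quantization scheme implemented with NumPy, shown purely for intuition; it is not Axelera's proprietary algorithm:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine (asymmetric) quantization of a float tensor to signed INT8.

    Textbook scheme for illustration only: real values are mapped to
    integers via x_q = round(x / scale) + zero_point.
    """
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    x_q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return x_q, scale, zero_point

def dequantize(x_q, scale, zero_point):
    # Recover an approximation of the original float values.
    return (x_q.astype(np.float32) - zero_point) * scale

np.random.seed(0)
weights = np.random.randn(4, 4).astype(np.float32)
w_q, scale, zp = quantize(weights)
# Worst-case round-trip error is bounded by one quantization step.
error = np.abs(dequantize(w_q, scale, zp) - weights).max()
```

In practice, production compilers go well beyond this per-tensor scheme (per-channel scales, calibration datasets, accuracy-aware partitioning), which is why the document can claim near-FP32 accuracy without manual tuning.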
Applications deployed with Voyager onto the Metis AIPU achieve high performance and energy efficiency, while retaining accuracy equivalent to that of the original FP32 model.

Read on
AI at the Edge: A fast, accurate and effortless journey with Voyager SDK
