In a major step forward for machine learning and edge computing, Myrtle.ai has announced official support for its ultra-low latency VOLLO inference accelerator on Napatech’s NT400D1x series SmartNICs. This strategic move positions VOLLO as a game-changer for organizations demanding real-time inference capabilities with minimal latency.
Unlike traditional accelerators, VOLLO delivers machine learning inference in less than one microsecond, setting a new benchmark in the industry. This integration allows users to deploy ML inference directly on the SmartNIC, right where data enters and leaves the network, making it ideal for time-sensitive applications.
Moreover, VOLLO supports a diverse range of machine learning models. These include Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNNs), Multi-Layer Perceptrons (MLPs), and popular decision tree-based models such as Random Forests and Gradient Boosting Machines. This flexibility ensures that developers can apply VOLLO to a wide variety of use cases without compromising on speed or efficiency.
More broadly, this release caters to industries where every microsecond matters. From financial trading and cybersecurity to telecom infrastructure and network management, companies operating in high-speed, data-intensive environments now have a powerful tool to stay ahead. By slashing inference latency, VOLLO offers tangible improvements in security, profitability, operational efficiency, and cost management.
Peter Baldwin, CEO of Myrtle.ai, expressed his enthusiasm about the collaboration: “We’re excited to team up with Napatech, the global leader in SmartNIC sales, to bring game-changing latency performance to machine learning inference. Our clients are constantly pushing the boundaries on performance, and this integration allows them to fully exploit VOLLO’s speed advantage.”
Adding to this, Jarrod J.S. Siket, Chief Product & Marketing Officer at Napatech, highlighted the value this brings to financial institutions: “As more firms in finance embrace ML for automated trading, we saw an opportunity to provide unmatched inference latency. The VOLLO compiler is extremely developer-friendly, making it easier to adopt our SmartNICs and expanding the power of our product portfolio.”
For ML engineers and organizations eager to explore this technology, the VOLLO compiler is now available for download at vollo.myrtle.ai. This offers a direct opportunity to test and optimize model performance on the NT400D1x SmartNICs, unlocking new levels of speed and responsiveness.
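To make that workflow concrete, the sketch below defines a small LSTM classifier in PyTorch, one of the model families VOLLO supports, and times it on a CPU as a rough point of comparison against the sub-microsecond figure quoted above. The layer sizes, input shape, and timing harness are illustrative assumptions only; the actual compile-and-deploy steps are specific to the VOLLO compiler and are documented at vollo.myrtle.ai rather than reproduced here.

```python
# Illustrative sketch only: a compact PyTorch LSTM classifier of the kind VOLLO
# targets. Dimensions and the timing harness are assumptions; the real
# compile-and-deploy flow comes from the VOLLO compiler documentation at
# vollo.myrtle.ai and is not shown here.
import time
import torch
import torch.nn as nn


class TinyLSTMClassifier(nn.Module):
    """A small streaming model: LSTM feature extractor plus a linear head."""

    def __init__(self, n_features: int = 32, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, sequence_length, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the last timestep


if __name__ == "__main__":
    model = TinyLSTMClassifier().eval()
    x = torch.randn(1, 16, 32)  # one 16-step window of 32 features

    # Rough CPU baseline to contrast with the accelerator's sub-microsecond
    # claim; absolute numbers will vary with the host machine.
    with torch.no_grad():
        model(x)  # warm-up
        start = time.perf_counter()
        for _ in range(1000):
            model(x)
        per_inference_us = (time.perf_counter() - start) / 1000 * 1e6
    print(f"CPU baseline: ~{per_inference_us:.1f} microseconds per inference")
```

Running the same kind of model through the VOLLO toolchain, rather than on a host CPU, is where the latency gap described in this announcement would show up.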





