Mirantis, a leader in open-source cloud infrastructure and platform engineering, announced that Netherlands-based private cloud service provider Nebul has deployed open source k0rdent to deliver an on-demand service that enables customers to run production AI inference workloads — that is, applying a trained model to new data to generate predictions or decisions.
Nebul has always focused on privacy and sovereignty with a broad European reach, and is a pioneer in high-performance computing, artificial intelligence (AI), and machine learning technologies. The company — an NVIDIA elite partner, NVIDIA Cloud Partner, and elite NVIDIA Solution Provider — is leveraging k0rdent's Kubernetes-native multi-cluster management, integrated with the NVIDIA GPU Operator and Gcore Everywhere Inference.
Nebul can support distributed AI inference across its NVIDIA-accelerated infrastructure, delivering low latency and high performance, with processing resources provisioned dynamically to meet demand and policy-driven automation that optimizes GPU utilization for maximum efficiency.
“We believe open source is the enabler for infrastructure to support AI,” said Alex Freedland, co-founder and CEO, Mirantis, the maintainer of k0rdent. “Nebul is demonstrating the enormous potential of open technologies to solve one of the most complex challenges in IT today — delivering AI workloads reliably at scale.”
Launched last month, k0rdent helps platform engineers manage infrastructure sprawl and operational complexity across cloud service providers, on-premises infrastructure, and edge devices. It simplifies maintenance with declarative automation, centralized policy enforcement, and production-ready templates optimized for modern workloads. k0rdent is fully composable and leverages the open source Cluster API, so that new Kubernetes clusters can be created — and existing clusters attached — anywhere.
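To illustrate the declarative model described above: Cluster API (which k0rdent builds on) represents each cluster as a Kubernetes resource that controllers reconcile against real infrastructure. The following is a minimal sketch of a standard Cluster API `Cluster` manifest, not a k0rdent-specific template; the cluster name and the OpenStack provider references are illustrative assumptions.

```yaml
# Sketch of a Cluster API "Cluster" resource (cluster.x-k8s.io/v1beta1).
# Names and the OpenStack infrastructure provider are illustrative only.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: inference-cluster        # hypothetical cluster name
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # pod network CIDR
  controlPlaneRef:                      # reference to the control-plane object
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: inference-cluster-control-plane
  infrastructureRef:                    # reference to the provider-specific object
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: OpenStackCluster
    name: inference-cluster
```

Because the desired state lives in a manifest like this, a management plane can apply the same template across clouds, bare metal, and edge sites, and enforce policy centrally by reconciling drift.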
“As demand for AI services grows, our challenge was transitioning our existing infrastructure,” said Arnold Juffer, CEO and founder at Nebul. “Using k0rdent enables us to effectively unify our diverse infrastructure across OpenStack and bare-metal Kubernetes, while sunsetting the VMware technology stack and fully transforming to open source to streamline operations and accelerate our shift to Inference-as-a-Service for enterprise customers. Now, they can bring their trained AI model to their data and just run it with assurance of privacy and sovereignty in accordance with regulations. It’s as simple as that.”
“As Nebul is demonstrating, AI inference at scale requires infrastructure that dynamically adapts to end customer needs, ensuring AI applications meet the business requirements from a cost and performance standpoint,” said Seva Vayner, product director of Edge Cloud and Edge AI at Gcore. “Our Everywhere Inference product provides a super-simple portal to deploy and manage AI inference, while Smart Routing ensures inference tasks are sent to the nearest GPUs for the lowest latency.”
According to NVIDIA, “AI models are rapidly expanding in size, complexity, and diversity — pushing the boundaries of what’s possible. For the successful use of AI inference, organizations need a full-stack approach that supports the end-to-end AI life cycle and tools that enable teams to meet their goals in the new scaling laws era.”
Mirantis and Gcore announced an agreement at NVIDIA GTC on March 18.
Source – businesswire