Mirantis, a leader in open-source cloud infrastructure and platform engineering, and Gcore, a global provider of edge AI, cloud, network, and security solutions, announced an agreement to facilitate the deployment of artificial intelligence (AI) workloads. The cornerstone of this collaboration is the integration of Gcore Everywhere Inference with the Mirantis k0rdent open-source platform, enabling users to scale AI inference workloads globally.

“Enterprise AI adoption has entered a new phase and open source has a critical role to play – bridging public, private and managed service clouds, so that users can maintain autonomy and control over their global infrastructure,” said Alex Freedland, CEO, Mirantis. “Combining our expertise and commitment to open source technologies with Gcore’s AI expertise will accelerate our ability to create solutions and critical capabilities to address these issues for MLOps and platform engineers.”


Mirantis recently launched k0rdent for large-scale application management across any infrastructure and intends to integrate Gcore Everywhere Inference to help global organizations deliver AI inference wherever it is needed. The integration aims to optimize compute resource allocation, simplify AI model deployment, enhance performance monitoring and cost management, and streamline compliance with regional data sovereignty requirements. The technology can be deployed in the cloud, on-premises, or in hybrid and edge environments.

“The collaboration between Mirantis and Gcore addresses today’s AI inference challenges by combining scalable infrastructure management with efficient workload deployment,” said Seva Vayner, product director, Edge Cloud and AI, Gcore. “Mirantis’ recently announced k0rdent project provides platform engineers with a Kubernetes-native, open-source solution for managing infrastructure sprawl and operational complexity across multi-cloud and hybrid environments. With the integration of Gcore Everywhere Inference, an accelerator-agnostic solution for managing AI inference workloads, the project will provide businesses with an easy-to-use platform for deploying and operating distributed AI inference at scale.”


“Deploying AI at scale today can be time- and resource-consuming: under the wrong setup, it can take businesses a long time to onboard new GPUs, and hours or even days to deploy new models,” said Misch Strotz, CEO and co-founder of LetzAI. “Partnerships like the one between Gcore and Mirantis can simplify this: model deployment can be done in a few clicks and new GPUs can be onboarded within hours, enabling infrastructure and ML teams to be much more productive.”

As a global AI infrastructure provider, Gcore has helped enterprises navigate AI adoption. Now, with increased demand for AI inference, Gcore Everywhere Inference helps businesses use their resources efficiently when deploying AI inference, improving time-to-market and ROI for AI projects.

Mirantis and Gcore will demonstrate their combined technologies at upcoming industry events, including GTC 2025 and KubeCon Europe.


Source: Businesswire

To share your insights, please write to us at sudipto@intentamplify.com