Liquid AI, a leader in foundational model innovation, has launched LEAP v0 (Liquid Edge-AI Platform), a developer platform that allows AI applications to run directly on devices like smartphones, laptops, drones, wearables, and even cars, without depending on cloud infrastructure. Alongside LEAP, the company introduced Apollo, a lightweight, iOS-native application that showcases the power of private AI built using Liquid AI’s latest foundational models.

To address the increasing frustrations developers face around complexity, scalability, and privacy in edge AI development, Liquid AI took a bold step. Co-founder and CEO Ramin Hasani emphasized, “Developers need solutions that don’t compromise performance or privacy. LEAP offers a streamlined, powerful, and privacy-first platform to build edge-native AI apps effortlessly.”


Importantly, LEAP v0 combines a compact Small Language Model (SLM) library with a developer-centric interface and a platform-agnostic toolchain. In just ten lines of code, developers can integrate foundational AI models into both Android and iOS applications, making the platform accessible to inference engineers, full-stack developers, and even AI beginners.
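The article does not show the LEAP SDK's actual API, so the class and method names below (LeapClient, loadModel, generate) are hypothetical stand-ins; a minimal stub takes the place of the real SDK so the sketch is self-contained. It is meant only to illustrate how short an on-device integration of the kind described above could be:

```java
// Hypothetical sketch of a compact on-device model integration.
// LeapClient, loadModel, and generate are assumed names, not the real
// LEAP SDK API; the stub bodies stand in for actual inference.
class LeapClient {
    private final String bundlePath;

    private LeapClient(String bundlePath) {
        this.bundlePath = bundlePath;
    }

    // Stub: a real client would load an SLM bundle shipped with the app.
    static LeapClient loadModel(String bundlePath) {
        return new LeapClient(bundlePath);
    }

    // Stub: a real client would run local inference; this just echoes the
    // prompt so the control flow is visible.
    String generate(String prompt) {
        return "on-device response to: " + prompt;
    }
}

public class Main {
    public static void main(String[] args) {
        // The integration itself stays short: load a bundled model, prompt it.
        LeapClient model = LeapClient.loadModel("assets/model.bundle");
        String reply = model.generate("Summarize today's notes.");
        System.out.println(reply);
    }
}
```

Because everything runs in-process, no prompt or response ever crosses the network, which is the privacy property the platform emphasizes.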

To complement LEAP, Liquid AI has significantly enhanced Apollo, an application originally developed by Aaron Ng. Now transformed into a robust iOS-native interface, Apollo enables users to interact with AI entirely on-device. This advancement ensures secure, private, and low-latency AI experiences, free from cloud reliance or internet constraints.

Furthermore, developers now gain immediate access to Liquid Foundation Models (LFM2), the company’s latest open-source edge AI models, via LEAP and Apollo. These models have already achieved remarkable benchmarks for speed, efficiency, and instruction-following capabilities, setting new industry records for edge-native deployment.

Unlike traditional transformer-based models, LFM2 relies on Liquid AI’s first-principles approach. It leverages structured adaptation operators that lead to faster inference, efficient training, and improved generalization, especially in long-context processing and low-resource environments.


As AI continues to move closer to the edge, Liquid AI’s latest innovations in private AI and edge AI platforms redefine what’s possible, unlocking a future where intelligence is local, secure, and accessible to all.

FAQs

1. What is an Edge AI platform, and why is LEAP significant?

An Edge AI platform enables AI to run directly on local devices without relying on cloud servers. LEAP stands out by simplifying deployment with minimal code and offering privacy-focused, efficient performance across various hardware.

2. How does Apollo support private AI applications?

Apollo runs entirely on-device, ensuring data never leaves the user’s hardware. This design guarantees privacy, ultra-low latency, and functionality even without internet access, making it ideal for sensitive or mission-critical applications.

3. What makes Liquid Foundation Models (LFM2) unique for edge AI?

LFM2 models use structured adaptation operators instead of transformers, resulting in faster, more energy-efficient processing. They’re designed for resource-limited environments, making them perfect for edge devices like phones and drones.


To share your insights, please write to us at sudipto@intentamplify.com