Vespa Delivers 5X Infrastructure Cost Savings with Unmatched Query Efficiency Across Hybrid, Vector, and Lexical Search Types
Vespa.ai, developer of the leading platform for building and deploying large-scale, real-time AI applications powered by big data, has released a new benchmark report showcasing its superior performance, scalability, and efficiency in comparison to Elasticsearch. The comprehensive, reproducible study tested both systems on an e-commerce search application using a dataset of 1 million products, evaluating write operations (document ingestion and updates) and multiple query strategies: lexical matching, vector similarity, and hybrid approaches.
These efficiency gains mirror the experience of Vinted.com, a leading platform for second-hand items. Facing growing operational costs and hardware demands with Elasticsearch, Vinted Engineering conducted its own evaluation and, seeking a single solution for both vector and traditional search, migrated to Vespa in 2023. For a deeper look at their evaluation and migration, read the Vinted Engineering blog post, “Search Scaling Chapter 8: Goodbye Elasticsearch. Hello Vespa Search Engine.”
Key Findings of the Vespa Benchmark
- Performance Across Query Types
- Hybrid Queries: Vespa achieved 8.5X higher throughput per CPU core than Elasticsearch.
- Vector Searches: Vespa demonstrated up to 12.9X higher throughput per CPU core.
- Lexical Searches: Vespa delivered 6.5X better throughput per CPU core.
- Updates
- Steady-State Efficiency: Vespa is 4X more efficient for in-place updates, handling queries and updates more effectively after the initial bootstrapping phase.
- Bootstrap: While Elasticsearch showed high efficiency in the initial ingestion phase (from 0 to 1M documents), Vespa stood out in long-term, steady-state operations.
- Infrastructure Cost Savings
- Due to higher query throughput and more efficient CPU usage, Vespa can reduce infrastructure costs by up to 5X, as detailed in section 10 of the report.
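To see how per-core throughput multipliers translate into infrastructure cost, consider a minimal back-of-the-envelope sketch. The multipliers below are the ratios reported above; the baseline per-core throughput and target load are hypothetical placeholders, not figures from the report.

```python
import math

def cores_needed(target_qps: float, qps_per_core: float) -> int:
    """Number of CPU cores required to sustain target_qps."""
    return math.ceil(target_qps / qps_per_core)

# Hypothetical placeholders (not from the report):
BASELINE_QPS_PER_CORE = 10.0  # assumed Elasticsearch per-core throughput
TARGET_QPS = 1000.0           # assumed application query load

# Reported Vespa-vs-Elasticsearch throughput-per-core ratios
throughput_ratio = {"lexical": 6.5, "hybrid": 8.5, "vector": 12.9}

for query_type, ratio in throughput_ratio.items():
    es_cores = cores_needed(TARGET_QPS, BASELINE_QPS_PER_CORE)
    vespa_cores = cores_needed(TARGET_QPS, BASELINE_QPS_PER_CORE * ratio)
    print(f"{query_type}: {es_cores} cores vs {vespa_cores} cores")
```

Since provisioned cores scale inversely with per-core throughput, a system needing several times fewer cores for the same query load pays proportionally less for compute, which is the mechanism behind the cost figure cited above.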
“As companies demand ever-faster search results and the ability to handle continuous updates, it is vital to choose a solution that performs robustly at scale and remains cost-effective,” said Jon Bratseth, CEO and Founder of Vespa. “Our benchmark shows that Vespa excels not just in pure query speed but in how efficiently it utilizes resources, which translates directly into measurable infrastructure cost savings.”
Source – businesswire