As artificial intelligence becomes more ingrained in daily life, Americans are growing less tolerant of its errors and more willing to hold AI platforms accountable, even to the point of legal action. A new study released by Pearl.com, an AI search platform that combines language models with human experts, reveals a significant shift in public sentiment towards AI accountability.
The inaugural AI Accountability & Trust Report from Pearl.com indicates that a majority of U.S. adults (57%) believe AI platforms should be legally responsible for inaccuracies. Furthermore, a substantial 39% stated they would consider suing an AI provider if it furnished harmful or incorrect information. This data underscores a rising demand for trust, accuracy, and accountability in the rapidly evolving AI landscape.
The study, conducted by Censuswide, surveyed over 2,000 Americans nationwide in December 2024. The findings underscore how fragile public trust remains: 47% of respondents said they have greater confidence in AI answers that have been validated by human experts. This desire for human oversight is a core component of Pearl.com's approach, which integrates a network of over 12,000 professionals to verify AI-generated responses.
Interestingly, the study also found that 42% of those surveyed would be willing to pay for AI services guaranteeing higher accuracy. However, the report noted that even a marginal 10% improvement in AI accuracy could cost the industry over $1 trillion, emphasizing the need for efficient and innovative solutions like human-validated AI.
“AI companies are at a critical juncture,” stated Andy Kurtzig, CEO of Pearl.com. “Consumers want the convenience AI offers, but they also demand accuracy and are prepared to take legal action if their expectations aren’t met. Our data demonstrates that Pearl is significantly more helpful than other GPTs, particularly for important questions requiring professional expertise. Instead of investing heavily in small accuracy gains, businesses can adopt human-validated AI now to foster trust, minimize legal risks, and provide genuine value.”
Pearl.com distinguishes itself by combining an advanced large language model (LLM) with a network of vetted human experts across various fields, including legal, medical, veterinary, IT, and home improvement. The platform aims to provide real-time, verified answers to high-risk, professional service questions. Pearl reports accuracy rates 22% higher than leading models like ChatGPT in professional services.
FAQs
Q1: What is the key takeaway from the Pearl.com AI Accountability & Trust Report?
A1: The main conclusion is that Americans are increasingly holding AI accountable for its mistakes and are even willing to consider legal action against AI providers that furnish harmful or incorrect information. There’s a strong desire for greater accuracy and trust in AI-driven services.
Q2: How does Pearl.com address the concerns about AI accuracy and trustworthiness?
A2: Pearl.com utilizes a “human-in-the-loop” approach. It combines an advanced AI language model with a network of over 12,000 vetted human experts who verify AI-generated responses, particularly for high-risk, professional service questions.
Q3: Why is this report important for businesses utilizing or developing AI?
A3: The report highlights the growing legal and reputational risks associated with inaccurate AI outputs. It suggests that businesses should prioritize accuracy and trustworthiness in their AI implementations, potentially by incorporating human validation processes, to mitigate risks and build consumer confidence.
To share your insights, please write to us at news@intentamplify.com