Patronus AI launches world’s first self-serve API to stop AI hallucinations

A customer service chatbot confidently describes a product that doesn’t exist. A financial AI invents market data. A healthcare bot provides dangerous medical advice. These AI hallucinations, once dismissed as amusing quirks, have become million-dollar problems for companies rushing to deploy artificial intelligence.

Today, Patronus AI, a San Francisco startup that recently secured $17 million in Series A funding, launched what it calls the first self-serve platform to detect and prevent AI failures in real time. Think of it as a sophisticated spell-checker for AI systems, catching errors before they reach users.

Inside the AI safety net: How it works

“Many companies are grappling with AI failures in production, facing issues like hallucinations, security vulnerabilities, and unpredictable behavior,” said Anand Kannappan, Patronus AI’s CEO, in an interview with VentureBeat. The stakes are high: Recent research by the company found that leading AI models like GPT-4 reproduce copyrighted content 44% of the time when prompted, while even advanced models generate unsafe responses in over 20% of basic safety tests.

The timing couldn’t be more critical. As companies rush to implement generative AI capabilities — from customer service chatbots to content generation systems — they’re discovering that existing safety measures fall short. Current evaluation tools like Meta’s LlamaGuard score below 50% accuracy, making them little better than a coin flip.

Patronus AI’s solution introduces several innovations that could reshape how businesses deploy AI. Perhaps most significant is its “judge evaluators” feature, which allows companies to create custom rules in plain English.

“You can customize evaluation to exactly [meet] your product needs,” Varun Joshi, Patronus AI’s product lead, told VentureBeat. “We let customers write out in English what they want to evaluate and check for.” A financial services company might specify rules about regulatory compliance, while a healthcare provider could focus on patient privacy and medical accuracy.
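For illustration only, here is a minimal sketch of what a plain-English judge-evaluator call might look like. The endpoint, field names, and response shape below are assumptions made for this example, not Patronus AI’s documented API.

```python
import requests

# Hypothetical sketch: the endpoint, request fields, and response shape
# are illustrative assumptions, not Patronus AI's documented API.
API_URL = "https://api.example.com/v1/evaluate"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def evaluate_output(model_output: str, rule: str) -> dict:
    """Check a model output against a rule written in ordinary English."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"output": model_output, "criteria": rule},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# e.g., a financial-services compliance rule, phrased in plain English:
verdict = evaluate_output(
    "This fund is guaranteed to double your money in a year.",
    "The response must not promise guaranteed investment returns.",
)
print(verdict)  # assumed shape: {"pass": false, "explanation": "..."}
```

The point is the interface: the rule itself is ordinary English, so compliance or product teams can author checks without writing evaluation code.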

From detection to prevention: The technical breakthrough

The system’s cornerstone is Lynx, a hallucination detection model that outperforms GPT-4 by 8.3% at detecting medical inaccuracies. The platform operates at two speeds: a quick-response version for real-time monitoring and a more thorough version for deeper analysis. “The small versions can be used for real-time guardrails, and the large ones might be more appropriate for offline analysis,” Joshi told VentureBeat.
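That split suggests a simple deployment pattern: gate live traffic with the fast evaluator and queue everything for the slower, more thorough one. The sketch below illustrates the idea with placeholder functions; it is not Patronus AI’s actual interface.

```python
# Illustrative sketch of the two-speed pattern Joshi describes: a small,
# fast evaluator gates responses in the request path, while full outputs
# are queued for deeper offline analysis by a larger model. Both
# evaluator functions here are placeholders, not real interfaces.
from queue import Queue

offline_queue: Queue[str] = Queue()

def small_evaluator(text: str) -> bool:
    """Placeholder for a low-latency hallucination check."""
    return "guaranteed" not in text.lower()  # toy heuristic

def guarded_response(model_output: str) -> str:
    # Real-time guardrail: block or replace a failing response immediately.
    if not small_evaluator(model_output):
        model_output = "Sorry, I can't verify that claim."
    # Offline analysis: enqueue everything for the larger, slower model.
    offline_queue.put(model_output)
    return model_output
```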

Beyond traditional error checking, the company has developed specialized tools like CopyrightCatcher, which detects when AI systems reproduce protected content, and FinanceBench, the industry’s first benchmark for evaluating AI performance on financial questions. These tools work in concert with Lynx to provide comprehensive coverage against AI failures.

Beyond simple guardrails: Reshaping AI safety

The company has adopted a pay-as-you-go pricing model, starting at $10 per 1000 API calls for smaller evaluators and $20 per 1000 API calls for larger ones. This pricing structure could dramatically increase access to AI safety tools, making them available to startups and smaller businesses that previously couldn’t afford sophisticated AI monitoring.
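Those prices make cost planning straightforward. As a back-of-envelope illustration, the sketch below uses the article’s per-call rates; the traffic volumes are invented for the example.

```python
# Back-of-envelope cost estimate using the published prices: $10 per
# 1,000 calls to small evaluators and $20 per 1,000 calls to large ones.
# The call volumes are hypothetical inputs, not reported figures.
SMALL_RATE = 10 / 1_000   # dollars per call, small evaluators
LARGE_RATE = 20 / 1_000   # dollars per call, large evaluators

small_calls = 500_000  # real-time guardrail checks (hypothetical volume)
large_calls = 50_000   # offline deep analysis (hypothetical volume)

cost = small_calls * SMALL_RATE + large_calls * LARGE_RATE
print(f"Estimated bill: ${cost:,.2f}")  # $6,000.00
```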

Early adoption suggests major enterprises see AI safety as a critical investment, not just a nice-to-have feature. The company has already attracted clients including HP, AngelList, and Pearson, along with partnerships with tech giants like Nvidia, MongoDB, and IBM.

What sets Patronus AI apart is its focus on improvement rather than just detection. “We can actually highlight the span of the specific piece of text where the hallucination is,” Kannappan explained. This precision allows engineers to quickly identify and fix problems, rather than just knowing something went wrong.
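To make that concrete, here is a hypothetical illustration of what span-level output could look like. The result shape, with character offsets into the evaluated text, is an assumption based on Kannappan’s description, not a published schema.

```python
# Hypothetical illustration of span-level hallucination output. The
# result format (character offsets into the evaluated text) is assumed
# for this example, not taken from Patronus AI's documentation.
output = "The Model X ships with a solar-powered engine and seats five."

result = {
    "hallucination": True,
    "spans": [{"start": 25, "end": 45, "label": "unsupported_claim"}],
}

for span in result["spans"]:
    flagged = output[span["start"]:span["end"]]
    print(f"Flagged ({span['label']}): {flagged!r}")
    # -> Flagged (unsupported_claim): 'solar-powered engine'
```

With offsets like these, an engineer can jump straight to the offending phrase instead of rereading the whole response.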

The race against AI hallucinations

The launch comes at a pivotal moment in AI development. As large language models like GPT-4 and Claude become more powerful and widely used, the risks of AI failures grow correspondingly larger. A hallucinating AI system could expose companies to legal liability, damage customer trust, or worse.

Recent regulatory moves, including President Biden’s AI executive order and the EU’s AI Act, suggest that companies will soon face legal requirements to ensure their AI systems are safe and reliable. Tools like Patronus AI’s platform could become essential for compliance.

“Good evaluation is not just protecting against a bad outcome — it’s deeply about improving your models and improving your products,” Joshi emphasized. This philosophy reflects a maturing approach to AI safety, moving from simple guardrails to continuous improvement.

The real test for Patronus AI will not be catching today’s mistakes but keeping pace with AI’s breakneck evolution. As language models grow more sophisticated, their hallucinations may become harder to spot, like increasingly convincing forgeries.

The stakes couldn’t be higher. Every time an AI system invents facts, recommends dangerous treatments, or generates copyrighted content, it erodes the trust these tools need to transform business. Without reliable guardrails, the AI revolution risks stumbling before it truly begins.

In the end, it’s a simple truth: If artificial intelligence can’t stop making things up, it may be humans who end up paying the price.


