The Future of AI Verification
The Open Hallucination Index is the first independent, open-source platform for detecting and verifying AI hallucinations. We build trust in a world where AI-generated content is becoming ubiquitous.
Our Mission
Truth as an API Endpoint
Large Language Models have revolutionized how we interact with information. But with great power comes great responsibility: up to 27% of LLM outputs contain factual errors, so-called hallucinations.
We have made it our mission to close this trust gap. Not through censorship or restriction, but through transparent, traceable verification.
Claim Decomposition: Atomic Analysis
Knowledge Graph: Hybrid Verification
Citation Trace: Traceable
Real-time: <50ms Latency
Technology
How Verification Works
A multi-step process that combines linguistic analysis with knowledge graph technology.
Claim Decomposition
The Input Processor breaks down complex texts into atomic subject-predicate-object triplets. Each individual claim is isolated and analyzed independently.
Knowledge Graph Matching
The Verification Oracle matches each triplet against a hybrid index: trusted domains (government data, scientific sources) combined with semantic consensus graphs.
Scoring & Citation
Each claim receives a HallucinationScore (0.0–1.0) and a complete Citation Trace with direct links to confirming or refuting sources.
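One plausible shape for the scored output, assuming (this direction is our assumption, not documented) that 0.0 means fully supported and 1.0 means unsupported. The class, field names, and scoring rule below are illustrative only.

```python
from dataclasses import dataclass, field


@dataclass
class VerifiedClaim:
    # Illustrative result record; field names are assumptions.
    claim: str
    hallucination_score: float  # assumed: 0.0 = supported, 1.0 = unsupported
    citations: list[str] = field(default_factory=list)  # the Citation Trace


def score(confirming: int, refuting: int) -> float:
    """Toy score: fraction of refuting evidence among all evidence;
    1.0 when no evidence exists at all."""
    total = confirming + refuting
    if total == 0:
        return 1.0
    return refuting / total


result = VerifiedClaim(
    claim="Marie Curie discovered polonium",
    hallucination_score=score(confirming=4, refuting=0),
    citations=["https://example.gov/nobel-archive"],
)
print(result)
```

Attaching the citation list directly to the scored claim is what makes each verdict traceable: a consumer can follow every link to the confirming or refuting source.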
Our Values
Principles That Guide Us
Trust Through Transparency
Our entire methodology is openly documented. Every verification is traceable and based on verifiable sources.
Scientific Foundation
Our algorithms are based on peer-reviewed research and are continuously improved through academic insights.
Privacy First
Your data belongs to you. We don't store sensitive content, and we are fully GDPR-compliant.
Open Source Mission
We believe in democratizing AI security. Our core components are freely available.
Ready for Verified AI?
Get started with the Open Hallucination Index today and bring trust to your AI applications.