Hallucination Leaderboard is an open research project that tracks and compares the tendency of large language models to produce hallucinated or inaccurate information when generating summaries. The project provides a standardized benchmark that evaluates different models using a dedicated hallucination detection system known as the Hallucination Evaluation Model. Each model is tested on document summarization tasks to measure how often generated responses introduce information that is not supported by the original source material. The results are published as a leaderboard that allows researchers and developers to compare model reliability and factual consistency. By focusing on hallucination rates rather than traditional metrics such as accuracy or fluency, the benchmark highlights an important aspect of AI system safety and trustworthiness. The leaderboard is regularly updated as new models are released and evaluation methods evolve.

Features

  • Benchmark that measures hallucination frequency in language model outputs
  • Evaluation framework based on document summarization tasks
  • Leaderboard comparing hallucination rates across multiple LLMs
  • Automated scoring using a dedicated hallucination evaluation model
  • Public dataset and evaluation pipeline for reproducible testing
  • Regular updates tracking performance of newly released models
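The scoring and ranking steps above can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual pipeline: the `EvalRecord` type and the per-summary `consistent` verdict stand in for whatever output the real hallucination evaluation model produces, and the function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EvalRecord:
    """One scored summarization example (hypothetical schema)."""
    source: str        # original document
    summary: str       # model-generated summary
    consistent: bool   # verdict from a hallucination detection model

def hallucination_rate(records):
    """Fraction of summaries flagged as unsupported by their source."""
    if not records:
        raise ValueError("no records to score")
    flagged = sum(1 for r in records if not r.consistent)
    return flagged / len(records)

def leaderboard(results_by_model):
    """Rank models by ascending hallucination rate (lower is better)."""
    return sorted(
        ((model, hallucination_rate(recs))
         for model, recs in results_by_model.items()),
        key=lambda item: item[1],
    )

# Toy example with made-up verdicts:
results = {
    "model-a": [EvalRecord("doc", "sum", True), EvalRecord("doc", "sum", False)],
    "model-b": [EvalRecord("doc", "sum", True), EvalRecord("doc", "sum", True)],
}
print(leaderboard(results))  # model-b ranks first with rate 0.0
```

In practice the consistency verdict is the hard part and comes from a trained evaluation model; the aggregation into a leaderboard is the simple division and sort shown here.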

License

Apache License 2.0



Additional Project Details

Programming Language

Python

Related Categories

Python, Large Language Models (LLM)

Registered

2026-03-05