Show HN: Phare: A Safety Probe for Large Language Models

2 weeks ago 5

[Submitted on 16 May 2025 (v1), last revised 19 May 2025 (this version, v2)]

View PDF HTML (experimental)

Abstract:Ensuring the safety of large language models (LLMs) is critical for responsible deployment, yet existing evaluations often prioritize performance over identifying failure modes. We introduce Phare, a multilingual diagnostic framework to probe and evaluate LLM behavior across three critical dimensions: hallucination and reliability, social biases, and harmful content generation. Our evaluation of 17 state-of-the-art LLMs reveals patterns of systematic vulnerabilities across all safety dimensions, including sycophancy, prompt sensitivity, and stereotype reproduction. By highlighting these specific failure modes rather than simply ranking models, Phare provides researchers and practitioners with actionable insights to build more robust, aligned, and trustworthy language systems.

Submission history

From: Pierre Le Jeune [view email]
[v1] Fri, 16 May 2025 15:31:08 UTC (4,584 KB)
[v2] Mon, 19 May 2025 09:01:44 UTC (6,739 KB)

Read Entire Article