Analyzing the correlation between Hallucinations and Knowledge Conflicts in Large Language Models (Jupyter Notebook; updated Oct 26, 2025)
[EMNLP 2023] A Causal View of Entity Bias in (Large) Language Models
Official implementation of "CSKS: Continuously Steering LLMs Sensitivity to Contextual Knowledge with Proxy Models" (EMNLP 2025)