A Data-Centric Approach to Generate Faithful and High Quality Patient Summaries with Large Language Models

Abstract

Patients often face difficulties in understanding their hospitalizations, while healthcare workers have limited resources to provide explanations. In this work, we investigate the potential of large language models to generate patient summaries based on doctors' notes and study the effect of training data on the faithfulness and quality of the generated summaries. To this end, we release (i) a rigorous labeling protocol for errors in medical texts and (ii) a publicly available dataset of annotated hallucinations in 100 doctor-written and 100 generated summaries. We show that fine-tuning on hallucination-free data effectively reduces hallucinations from 2.60 to 1.55 per summary for Llama 2, while preserving relevant information. We observe a similar effect on GPT-4 (0.70 to 0.40), when the few-shot examples are hallucination-free. We also conduct a qualitative evaluation using hallucination-free and improved training data. We find that common quantitative metrics do not correlate well with faithfulness and quality. Finally, we test GPT-4 for automatic hallucination detection, which clearly outperforms common baselines.

Publication
Conference on Health, Inference, and Learning (CHIL) 2024
Stefan Hegselmann
Visiting Student
Shannon Shen
PhD Student

My research lies at the intersection of NLP and HCI. I am interested in understanding language in scientific, legal, and clinical texts that are authored and used by domain experts. I study how newly developed NLP approaches can enable better human-AI collaboration to assist experts in these high-stakes settings.

Monica Agrawal
PhD Student

Assistant Professor, Duke

David Sontag
Professor of EECS

My research focuses on advancing machine learning and artificial intelligence, and on using these to transform health care.
