New Scoring Systems Increase Accuracy of AI-Generated Radiology Reports
Artificial intelligence (AI) tools that efficiently produce detailed narrative reports of CT scans or X-rays can significantly lighten the workload of busy radiologists. These AI reports go beyond simple identification of abnormalities and instead provide complex diagnostic information, detailed descriptions, nuanced findings, and appropriate degrees of uncertainty, much as a human radiologist would describe scan results. While several AI models capable of generating such detailed medical imaging reports have emerged, the automated scoring systems meant to assess these tools gauge their performance poorly, according to a new study.
In the study, researchers at Harvard Medical School (Boston, MA, USA) tested various scoring metrics on AI-generated narrative reports and had six human radiologists read the same reports. The analysis revealed that the automated scoring systems performed poorly compared with the human radiologists when evaluating AI-generated reports, misinterpreting and in some cases missing entirely significant clinical errors made by the AI tools. Ensuring the reliability of scoring systems is crucial if AI tools are to keep improving and gain clinicians' trust. However, the metrics tested in the study failed to reliably identify clinical errors in the AI reports, highlighting an urgent need for high-fidelity scoring systems that accurately monitor tool performance.
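For context, one common way to quantify how well an automated metric agrees with human judgment is to check whether its scores track the number of errors radiologists flag in each report. The sketch below illustrates that idea in Python; the data and the choice of Kendall's rank correlation are illustrative assumptions, not the study's actual protocol.

# A minimal sketch (not the study's analysis) of checking an automated metric
# against radiologist judgment: compute the rank correlation between the
# metric's scores and the number of errors human readers flagged per report.
from scipy.stats import kendalltau

# Hypothetical per-report values
metric_scores = [0.91, 0.84, 0.77, 0.70, 0.66]   # higher = "better" per the metric
radiologist_errors = [1, 3, 2, 5, 4]             # errors flagged by human readers

# A trustworthy metric should correlate strongly (and negatively) with error counts
tau, p_value = kendalltau(metric_scores, radiologist_errors)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")

A weak or unstable correlation in a check like this is one signal that the metric is not capturing the clinical errors that matter to radiologists.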
To create better scoring metrics, the research team designed a new metric called RadGraph F1 for evaluating the performance of AI tools that generate radiology reports from medical images. They also created a composite evaluation tool called RadCliQ, which combines multiple metrics into a single score that more closely aligns with how a human radiologist would judge an AI model's performance. Using these new scoring tools, the researchers evaluated several state-of-the-art AI models and found a notable gap between the models' actual scores and the best possible scores.
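The general idea behind a composite score such as RadCliQ is to combine several automated metrics into a single number that tracks radiologist judgment. The following sketch shows one simple way such a combination could be fit; the component metrics, weights, and data are hypothetical, and the published RadCliQ formulation may combine different metrics in a different way.

# A minimal sketch, under stated assumptions, of a composite report-quality
# score: fit a simple model mapping several automated metric scores to
# radiologist-assessed error counts, then use its prediction as one score.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-report metric scores: [BLEU, BERTScore, RadGraph F1]
X = np.array([
    [0.21, 0.85, 0.62],
    [0.15, 0.80, 0.55],
    [0.30, 0.88, 0.70],
    [0.10, 0.74, 0.48],
])
# Radiologist-annotated error counts for the same reports (lower = better)
y = np.array([3, 5, 1, 6])

composite = LinearRegression().fit(X, y)
predicted_errors = composite.predict(np.array([[0.25, 0.86, 0.65]]))
print(f"Predicted error count (composite score): {predicted_errors[0]:.2f}")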
Going forward, the researchers envision building generalist medical AI models capable of performing various complex tasks, including solving novel problems. Such AI systems could effectively communicate with radiologists and physicians about medical images, assisting in diagnosis and treatment decisions. The team also aims to develop AI assistants that can explain imaging findings directly to patients using everyday language, enhancing patient understanding and engagement. Ultimately, these advancements could revolutionize medical imaging practices, improving efficiency, accuracy, and patient care.
“Accurately evaluating AI systems is the critical first step toward generating radiology reports that are clinically useful and trustworthy,” said study senior author Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS. “By aligning better with radiologists, our new metrics will accelerate development of AI that integrates seamlessly into the clinical workflow to improve patient care.”