Vision-based ChatGPT shows deficits interpreting radiologic images

September 3, 2024

OAK BROOK, Ill. – Researchers evaluating the performance of ChatGPT-4 Vision found that the model performed well on text-based radiology exam questions but struggled to answer image-related questions accurately. The study’s results were published today in Radiology, a journal of the Radiological Society of North America (RSNA).

ChatGPT-4 Vision is the first version of the large language model that can interpret both text and images.

“ChatGPT-4 has shown promise for assisting radiologists in tasks such as simplifying patient-facing radiology reports and identifying the appropriate protocol for imaging exams,” said Chad Klochko, M.D., musculoskeletal radiologist and artificial intelligence (AI) researcher at Henry Ford Health in Detroit, Michigan. “With image processing capabilities, GPT-4 Vision allows for new potential applications in radiology.”

For the study, Dr. Klochko’s research team used retired questions from the American College of Radiology’s Diagnostic Radiology In-Training Examinations, a series of tests used to benchmark the progress of radiology residents. After excluding duplicates, the researchers used 377 questions across 13 domains, including 195 questions that were text-only and 182 that contained an image.

GPT-4 Vision answered 246 of the 377 questions correctly, achieving an overall score of 65.3%. The model correctly answered 81.5% (159) of the 195 text-only queries and 47.8% (87) of the 182 questions with images.
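The reported percentages follow directly from the raw counts quoted above; a quick arithmetic check (the helper function is illustrative, not from the study):

```python
# Verify the accuracy figures reported in the study from the raw counts.
def accuracy(correct, total):
    """Return accuracy as a percentage rounded to one decimal place."""
    return round(100 * correct / total, 1)

overall = accuracy(246, 377)     # all questions
text_only = accuracy(159, 195)   # text-only questions
image_based = accuracy(87, 182)  # image-containing questions

print(overall, text_only, image_based)  # 65.3 81.5 47.8
```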

“The 81.5% accuracy for text-only questions mirrors the performance of the model’s predecessor,” he said. “This consistency on text-based questions may suggest that the model has a degree of textual understanding in radiology.”

Genitourinary radiology was the only subspecialty for which GPT-4 Vision performed better on questions with images (67%, or 10 of 15) than text-only questions (57%, or 4 of 7). The model performed better on text-only questions in all other subspecialties.

The model performed best on image-based questions in the chest and genitourinary subspecialties, correctly answering 69% and 67% of the image-containing questions, respectively. The model performed lowest on image-containing questions in the nuclear medicine domain, correctly answering only 2 of 10 questions.

The study also evaluated the impact of various prompts on the performance of GPT-4 Vision.

  • Original: You are taking a radiology board exam. Images of the questions will be uploaded. Choose the correct answer for each question. 
  • Basic: Choose the single best answer in the following retired radiology board exam question. 
  • Short instruction: This is a retired radiology board exam question to gauge your medical knowledge. Choose the single best answer letter and do not provide any reasoning for your answer. 
  • Long instruction: You are a board-certified diagnostic radiologist taking an examination. Evaluate each question carefully and if the question additionally contains an image, please evaluate the image carefully in order to answer the question. Your response must include a single best answer choice. Failure to provide an answer choice will count as incorrect. 
  • Chain of thought: You are taking a retired board exam for research purposes. Given the provided image, think step by step for the provided question. 
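To make the comparison concrete, prompt variants like these are typically supplied as the system message, with the question text and any exam image attached in the user turn. The sketch below shows one plausible way to assemble such a payload in an OpenAI-style chat message format; the prompt texts are quoted from the study, but the function name, dictionary layout, and example question are hypothetical, not the authors' actual harness:

```python
# Illustrative sketch: pairing a prompting style with an exam question in an
# OpenAI-style chat message payload. Prompt texts are from the study; the
# helper and example inputs are hypothetical.
PROMPTS = {
    "original": "You are taking a radiology board exam. Images of the questions "
                "will be uploaded. Choose the correct answer for each question.",
    "basic": "Choose the single best answer in the following retired radiology "
             "board exam question.",
    # ...the short-instruction, long-instruction, and chain-of-thought prompts
    # listed above would be added here in the same way.
}

def build_messages(prompt_style, question_text, image_url=None):
    """Assemble a chat-completion message list for one exam question."""
    content = [{"type": "text", "text": question_text}]
    if image_url is not None:
        # Image-containing questions attach the figure as an image_url part.
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    return [
        {"role": "system", "content": PROMPTS[prompt_style]},
        {"role": "user", "content": content},
    ]

msgs = build_messages("basic", "Which finding is most consistent with ...?")
```

Each of the five prompt styles can then be swapped in by name, holding the question payload fixed, which is what makes the per-prompt accuracy comparison possible.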

Although the model correctly answered 183 of 265 questions with a basic prompt, it declined to answer 120 questions, most of which contained an image.

“The phenomenon of declining to answer questions was something we hadn’t seen in our initial exploration of the model,” Dr. Klochko said.

The short instruction prompt yielded the lowest accuracy (62.6%).

On text-based questions, chain-of-thought prompting outperformed long instruction by 6.1%, basic by 6.8%, and original prompting style by 8.9%. There was no evidence to suggest performance differences between any two prompts on image-based questions.

“Our study showed evidence of hallucinatory responses when interpreting image findings,” Dr. Klochko said. “We noted an alarming tendency for the model to provide correct diagnoses based on incorrect image interpretations, which could have significant clinical implications.”

Dr. Klochko said his study’s findings underscore the need for more specialized and rigorous evaluation methods to assess large language model performance in radiology tasks.

“Given the current challenges in accurately interpreting key radiologic images and the tendency for hallucinatory responses, the applicability of GPT-4 Vision in information-critical fields such as radiology is limited in its current state,” he said.

###

“Performance of GPT-4 with Vision on Text- and Image-based ACR Diagnostic Radiology In-Training Examination Questions.” Collaborating with Dr. Klochko were Nolan Hayden, M.D., Spencer Gilbert, B.S., Laila M. Poisson, Ph.D., and Brent Griffith, M.D.

Radiology is edited by Linda Moy, M.D., New York University, New York, N.Y., and owned and published by the Radiological Society of North America, Inc. (https://pubs.rsna.org/journal/radiology)

RSNA is an association of radiologists, radiation oncologists, medical physicists and related scientists promoting excellence in patient care and health care delivery through education, research and technologic innovation. The Society is based in Oak Brook, Illinois. (RSNA.org)

For patient-friendly information on medical imaging, visit RadiologyInfo.org.



Journal

Radiology

Subject of Research

Not applicable

Article Title

Performance of GPT-4 with Vision on Text- and Image-based ACR Diagnostic Radiology In-Training Examination Questions

Article Publication Date

3-Sep-2024
