When to trust an AI model

July 12, 2024

CAMBRIDGE, MA – Because machine-learning models can give false predictions, researchers often equip them with the ability to tell a user how confident they are about a certain decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications.

But a model’s uncertainty quantifications are only useful if they are accurate. If a model says it is 49% confident that a medical image shows a pleural effusion, then 49% of the time, the model should be right.
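
Calibration in this sense can be checked empirically. The sketch below is illustrative only, not the authors' code, and uses made-up numbers: it bins a model's stated confidences and compares each bin's average confidence with the fraction of predictions that were actually correct, a standard reliability check known as expected calibration error.

```python
# A minimal reliability check (illustrative only, not the authors' code):
# bin predictions by stated confidence and compare each bin's average
# confidence with its observed accuracy.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: model's probability for its chosen label, in [0, 1].
    correct: 1 if that prediction was right, 0 otherwise."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by the bin's share of points
    return ece

# Hypothetical numbers: a well-calibrated model should have a small ECE.
conf = np.array([0.49, 0.55, 0.70, 0.90, 0.95])
hit = np.array([0, 1, 1, 1, 1])
print(round(expected_calibration_error(conf, hit), 3))
```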

MIT researchers have introduced a new approach that can improve uncertainty estimates in machine-learning models. Their method not only generates more accurate uncertainty estimates than other techniques, but does so more efficiently.

In addition, because the technique is scalable, it can be applied to huge deep-learning models that are increasingly being deployed in health care and other safety-critical situations.

This technique could give end users, many of whom lack machine-learning expertise, better information they can use to determine whether to trust a model’s predictions and whether the model should be deployed for a particular task.

“It is easy to see these models perform really well in scenarios where they are very good, and then assume they will be just as good in other scenarios. This makes it especially important to push this kind of work that seeks to better calibrate the uncertainty of these models to make sure they align with human notions of uncertainty,” says lead author Nathan Ng, a graduate student at the University of Toronto who is a visiting student at MIT.

Ng wrote the paper with Roger Grosse, an assistant professor of computer science at the University of Toronto; and senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems. The research will be presented at the International Conference on Machine Learning.

Quantifying uncertainty

Uncertainty quantification methods often require complex statistical calculations that don’t scale well to machine-learning models with millions of parameters. These methods also require users to make assumptions about the model and data used to train it.

The MIT researchers took a different approach. They use what is known as the minimum description length principle (MDL), which does not require the assumptions that can hamper the accuracy of other methods. MDL is used to better quantify and calibrate uncertainty for test points the model has been asked to label.

The technique the researchers developed, known as IF-COMP, makes MDL fast enough to use with the kinds of large deep-learning models deployed in many real-world settings.

MDL involves considering all possible labels a model could give a test point. If there are many alternative labels for this point that fit well, its confidence in the label it chose should decrease accordingly.

“One way to understand how confident a model is would be to tell it some counterfactual information and see how likely it is to believe you,” Ng says.

For example, consider a model that says a medical image shows a pleural effusion. If the researchers tell the model this image shows an edema, and it is willing to update its belief, then the model should be less confident in its original decision.

With MDL, if a model is confident when it labels a datapoint, it should use a very short code to describe that point. If it is uncertain about its decision because the point could have many other labels, it uses a longer code to capture these possibilities.

The amount of code used to label a datapoint is known as stochastic data complexity. If the researchers ask the model how willing it is to update its belief about a datapoint given contrary evidence, the stochastic data complexity should decrease if the model is confident.
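
The article does not include code, but the code-length intuition can be illustrated with a toy example: under a Shannon-style code, a label the model assigns high probability costs few bits to encode, while a label drawn from a spread-out distribution costs more. The snippet below is only that illustration, with hypothetical probabilities, and is not the IF-COMP procedure itself.

```python
# Toy illustration of the description-length idea (an assumption for
# exposition, not IF-COMP): under a Shannon code, a label with probability p
# costs -log2(p) bits, so confident predictions correspond to short codes.
import numpy as np

def code_length_bits(p):
    return -np.log2(p)

# Hypothetical class probabilities for one test point under two models.
confident = np.array([0.97, 0.02, 0.01])   # few plausible alternatives
uncertain = np.array([0.40, 0.35, 0.25])   # many labels fit almost as well

print(code_length_bits(confident[0]))  # ~0.04 bits: short code
print(code_length_bits(uncertain[0]))  # ~1.32 bits: noticeably longer code
```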

But testing each datapoint using MDL would require an enormous amount of computation.

Speeding up the process

With IF-COMP, the researchers developed an approximation technique that can accurately estimate stochastic data complexity using a special function, known as an influence function. They also employed a statistical technique called temperature-scaling, which improves the calibration of the model’s outputs. This combination of influence functions and temperature-scaling enables high-quality approximations of the stochastic data complexity.
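
Temperature scaling is a standard post-hoc calibration step, so a generic sketch can illustrate it; the influence-function machinery of IF-COMP is not reproduced here. In the assumed example below, a single temperature T is fit on hypothetical held-out logits by minimizing negative log-likelihood and then used to rescale logits before the softmax.

```python
# Generic temperature-scaling sketch (a standard calibration step; this is
# not the authors' IF-COMP implementation). A single temperature T is fit
# on held-out data and used to rescale logits before the softmax.
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll_at_temperature(T, logits, labels):
    probs = softmax(logits / T)
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def fit_temperature(val_logits, val_labels):
    # Pick T > 0 that minimizes negative log-likelihood on held-out data.
    res = minimize_scalar(nll_at_temperature, bounds=(0.05, 10.0),
                          args=(val_logits, val_labels), method="bounded")
    return res.x

# Hypothetical held-out set: 3 classes, logits loosely aligned with labels.
rng = np.random.default_rng(0)
val_labels = rng.integers(0, 3, size=200)
val_logits = 4.0 * np.eye(3)[val_labels] + rng.normal(size=(200, 3))
T = fit_temperature(val_logits, val_labels)
print("fitted temperature:", round(T, 2))
```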

In the end, IF-COMP can efficiently produce well-calibrated uncertainty quantifications that reflect a model’s true confidence. The technique can also determine whether the model has mislabeled certain data points or reveal which data points are outliers.

The researchers tested their system on these three tasks and found that it was faster and more accurate than other methods.

“It is really important to have some certainty that a model is well-calibrated, and there is a growing need to detect when a specific prediction doesn’t look quite right. Auditing tools are becoming more necessary in machine-learning problems as we use large amounts of unexamined data to make models that will be applied to human-facing problems,” Ghassemi says.

IF-COMP is model-agnostic, so it can provide accurate uncertainty quantifications for many types of machine-learning models. This could enable it to be deployed in a wider range of real-world settings, ultimately helping more practitioners make better decisions.

“People need to understand that these systems are very fallible and can make things up as they go. A model may look like it is highly confident, but there are a ton of different things it is willing to believe given evidence to the contrary,” Ng says.

In the future, the researchers are interested in applying their approach to large language models and studying other potential use cases for the minimum description length principle.



DOI: 10.48550/arXiv.2406.02745
