SCIENMAG: Latest Science and Health News

2022 NSF CAREER award recipient Sihong Xie seeks to pull back the curtain on the intelligence behind AI

May 11, 2022

[Image: Sihong Xie. Credit: Douglas Benedict/Academic Image for Lehigh University]

If algorithms are going to impact nearly every aspect of our lives, they should meet the highest standards of accountability. Yet little is known—particularly to those outside the field of computer science—about how these algorithms actually operate. 

Making this form of artificial intelligence more transparent to the lay audience is the driving force behind the work of Lehigh University faculty member Sihong Xie. “Creating accountable machine learning is the ultimate goal of our research,” he says.

Xie, an assistant professor of computer science and engineering in the P.C. Rossin College of Engineering and Applied Science, recently won support from the National Science Foundation’s Faculty Early Career Development (CAREER) program for his proposal to make machine learning models more transparent, more robust, and more fair.

The prestigious NSF CAREER award is given annually to junior faculty members across the U.S. who exemplify the role of teacher-scholars through outstanding research, excellent education, and the integration of those two pursuits. Each award provides stable support at the level of approximately $500,000 for a five-year period.  

Machine learning models are capable of solving complex problems by analyzing an enormous amount of data. But how they computationally solve the problem is a mystery. “It’s difficult for humans to make sense of the reasoning process of the program,” says Xie. “How do we know that the artificial intelligence is analyzing the data like a human domain expert would analyze it?”

For example, he says, scientists may turn to a machine learning model to tell them which molecular combinations to test in a particular experiment. The model could analyze thousands of molecules in seconds and come up with a list of promising candidates.

“So the model says, try A, B, and C. But the researchers might not be confident in that list because they don’t know why they should try A, B, and C,” says Xie. “Conducting experiments in such a domain can be highly expensive, and I want the machine to also tell chemical engineers that A, B, and C have certain characteristics that make them more promising for whatever reason, and that’s why they should try that combination.”
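As a rough illustration of the kind of explanation Xie describes (a generic sketch, not his method), a model can return not just a ranked list of candidates but the per-feature contributions behind each score. The example below uses a plain linear scorer, whose score decomposes exactly into feature contributions; the descriptors, weights, and candidates are hypothetical.

    import numpy as np

    # Hypothetical molecular descriptors (rows: candidates A, B, C).
    features = ["solubility", "binding_affinity", "toxicity_risk"]
    X = np.array([[0.8, 0.9, 0.1],   # candidate A
                  [0.6, 0.7, 0.2],   # candidate B
                  [0.7, 0.5, 0.1]])  # candidate C
    candidates = ["A", "B", "C"]

    # Weights of a linear scorer; in practice these would be learned.
    w = np.array([1.0, 2.0, -3.0])

    for name, x in zip(candidates, X):
        contrib = w * x                  # each feature's share of the score
        driver = features[int(np.argmax(np.abs(contrib)))]
        print(f"{name}: score={contrib.sum():.2f}, driven mainly by {driver}",
              dict(zip(features, contrib.round(2))))

A ranked list alone would only say “try A”; the contribution breakdown adds the “because its binding affinity dominates” that Xie argues experimenters need.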

He says such explainability of machine learning algorithms will give human users greater confidence and trust in the models. To establish that explainability, he’ll work with domain experts to combine human knowledge with machine learning programs. He’ll incorporate the constraints that guide these professionals in their decision-making into the development of algorithms that more closely reflect human domain knowledge and reasoning patterns.

“If the model can satisfy those constraints, then they behave pretty much like the human is expected to behave,” he says. “The technical intent should be general enough to apply to many different domains, including cybersecurity, computational neuroscience, and smart cities.”
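One generic way to fold such constraints into training (a minimal sketch, not necessarily the formulation in Xie’s proposal) is to add a penalty to the loss whenever the model violates an expert rule, steering the optimizer toward solutions that respect domain knowledge. The rule here, that a “toxicity” feature must never raise the score, is a hypothetical stand-in.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression data. The second feature plays the role of "toxicity":
    # the raw data would reward it, but experts say its weight must be <= 0.
    X = rng.normal(size=(200, 2))
    y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

    w = np.zeros(2)
    lam, lr = 10.0, 0.05                      # penalty strength, step size

    for _ in range(500):
        grad = X.T @ (X @ w - y) / len(y)     # gradient of mean squared error
        # Gradient of the penalty lam * max(0, w[1])**2, which punishes
        # any violation of the expert constraint w[1] <= 0.
        grad[1] += 2 * lam * max(0.0, w[1])
        w -= lr * grad

    print("learned weights:", w.round(3))     # w[1] held near zero despite the data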

In practice, few human experts can dedicate the time needed to fully compile the constraints around any one question. Xie therefore intends to automate the creation of such checklists by mining relevant data from the experts instead.

“They have years’ worth of data, so the idea is to have my machine run through it all, and create the checklist for us. And of course, that’s the problem,” he says. “That list may not be 100 percent accurate.”

It could be missing things. It could introduce noise. In short, he says, the checklist of constraints the model develops on its own (the checklist it will then use to answer a question like “which combination of molecules should I study?”) could be too sensitive, or not sensitive enough. Xie and his team will design another algorithm to find what he calls the sweet spot in this checklist creation: one sensitive enough to detect subtle but useful positives, yet not so sensitive that it generates too many false positives.
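One simple reading of that sweet spot is a precision-recall trade-off: sweep the sensitivity of the learned checklist on held-out data and pick the operating point that balances missed positives against false alarms. The sketch below tunes a decision threshold by F1 score; the scores and labels are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(1)

    # Held-out validation data: checklist scores and true labels (synthetic).
    scores = rng.uniform(size=1000)
    labels = (scores + rng.normal(scale=0.25, size=1000)) > 0.6

    def f1_at(threshold):
        pred = scores >= threshold           # lower threshold = more sensitive
        tp = np.sum(pred & labels)
        fp = np.sum(pred & ~labels)
        fn = np.sum(~pred & labels)
        if tp == 0:
            return 0.0
        precision, recall = tp / (tp + fp), tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    thresholds = np.linspace(0.05, 0.95, 19)
    best = max(thresholds, key=f1_at)
    print(f"best threshold {best:.2f}, F1 = {f1_at(best):.3f}")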

“Real-world data are dirty,” he says. “We want the machine program to be robust enough so that if it’s dealing with reasonably dirty data, it will still generate reliable output.”
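A basic way to probe that robustness (a generic check, not the project’s method) is to corrupt a fraction of the inputs and measure how far the model’s outputs drift.

    import numpy as np

    rng = np.random.default_rng(2)

    def model(X):
        """Stand-in model: a fixed linear scorer."""
        return X @ np.array([1.0, -2.0, 0.5])

    X_clean = rng.normal(size=(500, 3))
    X_dirty = X_clean.copy()
    mask = rng.random(500) < 0.1             # corrupt 10% of the rows
    X_dirty[mask] += rng.normal(scale=5.0, size=(int(mask.sum()), 3))

    drift = np.abs(model(X_dirty) - model(X_clean)).mean()
    print(f"mean output drift with 10% corrupted rows: {drift:.3f}")

A robust model keeps this drift small; a brittle one lets a few dirty rows swing its output wildly.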

Along with questions about accountability come concerns about algorithmic fairness: If machine learning algorithms now influence what we read in our social feeds, which job postings we see, and how our loan applications are processed, how can we be sure that choices are made ethically? 

To address those concerns, Xie will use multi-objective optimization to find the most efficient solutions across competing perspectives on what counts as fair.

“Different people, different organizations, different countries, they all have their own definition of fairness,” he says. “So we have to explore all the possible trade-offs between these different definitions, and that’s the technical challenge, because there are so many different ways to trade off. The computer has to actually search for how much each of these fairness standards has to be respected.” 

He will provide algorithmic solutions that can efficiently search such trade-offs. “We did recognize that trading one objective for another is a ubiquitous situation in the broader field of accountable machine learning,” he adds, “for example, trading transparency for accuracy, or weighing multiple forms of explanations.”
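The trade-off search he describes can be sketched as a multi-objective problem: enumerate candidate solutions and keep the Pareto set, the trade-offs that no other solution beats on every fairness criterion at once. The two “fairness losses” below are hypothetical placeholders for competing definitions.

    import numpy as np

    # Two competing (hypothetical) fairness losses over a model parameter t:
    # each definition prefers a different setting, so they trade off.
    def loss_a(t):                  # e.g., a demographic-parity-style gap
        return (t - 0.2) ** 2

    def loss_b(t):                  # e.g., an equalized-odds-style gap
        return (t - 0.8) ** 2

    points = [(loss_a(t), loss_b(t), t) for t in np.linspace(0.0, 1.0, 101)]

    # Keep Pareto-optimal points: ones no other point matches or beats on
    # both objectives, so each survivor is a defensible trade-off.
    pareto = [p for p in points
              if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
    print(f"{len(pareto)} Pareto-optimal trade-offs, t from "
          f"{pareto[0][2]:.2f} to {pareto[-1][2]:.2f}")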

The implications of this research could be profound. Xie says that, eventually, experts could have much more confidence in artificial intelligence, and algorithms could become more responsive to social norms.

“The biggest motivation for me in conducting this research is that it has the potential to make a real social impact,” he says. “And because we always have humans in the loop, we’re going to ensure that these models inspire more confidence and treat people fairly.”

About Sihong Xie

Sihong Xie is an assistant professor in the Department of Computer Science and Engineering at Lehigh University. His research interests include misinformation detection in adversarial environments, interpretable and fair graphical models, and human–machine learning collaboration in data annotation.

Xie received his PhD from the Department of Computer Science at the University of Illinois Chicago in 2016. He holds bachelor’s and master’s degrees from the School of Software Engineering at Sun Yat-sen University in China.

Xie has published over 60 papers in major data mining venues, including the conferences KDD, ICDM, WWW, AAAI, IJCAI, WSDM, and SDM and the journal TKDE, with over 2,000 citations and an h-index of 17. He serves on the Senior Program Committee for AAAI and is a program committee member for other machine learning and AI conferences, including KDD, ICLR, ICDM, and SIGIR.

Related Links 

  • Faculty Profile: Sihong Xie
  • NSF Award (2145922): CAREER: Bilevel Optimization for Accountable Machine Learning on Graphs
  • Rossin College News: “Sihong Xie: Increasing transparency in machine learning decision-making”

