Correcting biases in image generator models

June 24, 2024
in Technology and Engineering

Image generator models, systems that produce new images from textual descriptions, have become common and widely known over the past year. Their continuous improvement, driven largely by advances in artificial intelligence, makes them an important resource in many fields.

To achieve good results, these models are trained on vast amounts of image-text pairs – for example, matching the text “picture of a dog” to a picture of a dog, repeated millions of times. Through this training, the model learns to generate original images of dogs.

However, as noted by Hadas Orgad, a doctoral student from the Henry and Marilyn Taub Faculty of Computer Science, and Bahjat Kawar, a graduate of the same Faculty, “since these models are trained on a lot of data from the real world, they acquire and internalize assumptions about the world during the training process. Some of these assumptions are useful, for example, ‘the sky is blue,’ and they allow us to obtain beautiful images even with short and simple descriptions. On the other hand, the model also encodes incorrect or irrelevant assumptions about the world, as well as societal biases. For example, if we ask Stable Diffusion (a very popular image generator) for a picture of a CEO, we will only get pictures of women in 4% of cases.”
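
The bias described above can be probed directly by sampling the model many times with the same prompt and inspecting the results. The sketch below is a minimal illustration that assumes the Hugging Face diffusers library, a CUDA GPU, and the publicly released Stable Diffusion v1.4 checkpoint; none of these specifics, nor the sample size, come from the article.

```python
# Minimal bias probe: generate many images for one prompt and inspect them.
# Assumes the `diffusers` library, a CUDA GPU, and the public SD v1.4 weights
# (illustrative choices; the article does not specify a setup).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a picture of a CEO"
for i in range(16):  # small, arbitrary sample size
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"ceo_{i:02d}.png")  # count by hand how often women appear
```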

Another problem these models face is that the world around them keeps changing, and they cannot adapt to those changes once training is complete. As Dana Arad, also a doctoral student at the Taub Faculty of Computer Science, explains, “during their training process, models also learn a lot of factual knowledge about the world. For example, models learn the identities of heads of state, presidents, and even actors who portrayed popular characters in TV series. Such models are no longer updated after their training process, so if we ask a model today to generate a picture of the President of the United States, we might still reasonably receive a picture of Donald Trump, who of course has not been the president in recent years. We wanted to develop an efficient way to update the information without relying on expensive actions.”

The “traditional” solution to these problems is constant data correction by the user, retraining, or fine-tuning. However, these fixes are costly in money, labor, result quality, and environmental impact (due to the longer operation of computer servers). Moreover, applying them does not guarantee control over unwanted assumptions, or over new ones that may arise. “Therefore,” the researchers explain, “we would like a precise method to control the assumptions that the model encodes.”

The methods developed by the doctoral students under the guidance of Dr. Yonatan Belinkov address this need. The first, developed by Orgad and Kawar and called TIME (Text-to-Image Model Editing), allows quick and efficient correction of biases and assumptions: it requires no fine-tuning, no retraining, and no changes to the language model or the text-interpretation components, only an edit of about 1.95% of the model’s parameters, and the edit itself takes less than a second. In follow-up research based on TIME, called UCE and developed in collaboration with Northeastern University and MIT, they proposed a way to control a variety of the model’s ethically problematic behaviors, such as copyright infringement or social biases, by removing unwanted associations from the model, for example offensive content or the artistic styles of particular artists.
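
The article does not spell out how TIME’s edit works internally, but the general idea of changing a small slice of weights in closed form, with no retraining or fine-tuning, can be sketched as follows. The objective, the regularization strength, and the choice of which projection matrix to edit are illustrative assumptions here, not the authors’ exact formulation.

```python
# Sketch of a closed-form weight edit (illustrative, not TIME's exact math):
# nudge one projection matrix W so that "source" text embeddings are mapped
# the way "destination" embeddings used to be, while staying close to W.
import numpy as np

def closed_form_edit(W, c_src, c_dst, lam=0.1):
    """Solve  min_W' sum_i ||W' c_src_i - W c_dst_i||^2 + lam ||W' - W||_F^2.

    W     : (d_out, d_in) projection matrix taken from the model.
    c_src : (d_in, n) embeddings of the prompts whose behavior should change.
    c_dst : (d_in, n) embeddings of prompts showing the desired behavior.
    The minimizer has a closed form, so the edit needs no gradient steps.
    """
    d_in = W.shape[1]
    A = W @ c_dst @ c_src.T + lam * W             # (d_out, d_in)
    B = c_src @ c_src.T + lam * np.eye(d_in)      # (d_in, d_in)
    return A @ np.linalg.inv(B)

# Toy usage with random stand-ins for real text embeddings.
rng = np.random.default_rng(0)
W = rng.standard_normal((320, 768))
c_src = rng.standard_normal((768, 4))
c_dst = rng.standard_normal((768, 4))
W_edited = closed_form_edit(W, c_src, c_dst)
print(W_edited.shape)  # (320, 768): same shape, only this one block changes
```

Editing only a handful of such matrices touches a small fraction of the model’s parameters and involves no optimization loop, which is consistent with the roughly 1.95% figure and sub-second edit time reported above.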

Another method, developed subsequently by Arad and Orgad, is called ReFACT. It uses a different algorithm for editing parameters and achieves more precise results. ReFACT edits an even smaller share of the model’s parameters, only 0.25%, and manages to perform a wider variety of edits, including cases where previous methods failed, while maintaining image quality and preserving the facts and assumptions of the model that should remain untouched.
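
As the article title below indicates, ReFACT works by editing the text encoder of the text-to-image model. For a rough sense of scale, the sketch here counts how large a single attention projection is relative to Stable Diffusion’s CLIP text encoder; the particular layer is an arbitrary choice for illustration, not the set of weights ReFACT actually edits, and the resulting ratio differs from the 0.25% figure, which refers to the full model.

```python
# Rough scale check: one projection matrix vs. the whole CLIP text encoder
# used by Stable Diffusion v1.x (illustrative layer choice only).
from transformers import CLIPTextModel

text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
total = sum(p.numel() for p in text_encoder.parameters())

# One key-projection matrix in one transformer layer, as an example of the
# scale of sub-module such editing methods touch.
layer = text_encoder.text_model.encoder.layers[9].self_attn.k_proj
edited = sum(p.numel() for p in layer.parameters())

print(f"{edited:,} of {total:,} text-encoder parameters ({edited / total:.2%})")
```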

The methods take as input the fact or assumption the user wants to edit. For implicit assumptions, the method receives a “source” on which the model bases an implicit assumption (e.g., “roses,” which the model assumes by default are red) and a “target” describing the same thing with the desired attribute (e.g., “blue roses”). For role editing, the method receives an editing request (e.g., “President of the United States”) together with a “source” and a “target” (“Donald Trump” and “Joe Biden,” respectively). The researchers collected about 200 facts and assumptions on which they tested the editing methods and showed that these methods update information and correct biases efficiently.
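
As a concrete illustration of these inputs, an edit could be written down as a small source/target record like the sketch below; the class and field names are hypothetical and do not reflect the actual interfaces of TIME or ReFACT.

```python
# Hypothetical shape of an editing request (field names are illustrative).
from dataclasses import dataclass

@dataclass
class EditRequest:
    prompt: str  # the text whose behavior should change
    source: str  # what the model currently assumes
    target: str  # what the user wants it to assume instead

edits = [
    # Implicit assumption: the default color of roses.
    EditRequest(prompt="a bouquet of roses",
                source="a bouquet of red roses",
                target="a bouquet of blue roses"),
    # Role editing: who holds a given position.
    EditRequest(prompt="the President of the United States",
                source="Donald Trump",
                target="Joe Biden"),
]

for e in edits:
    print(f"edit '{e.prompt}': {e.source} -> {e.target}")
```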

TIME was presented in October 2023 at ICCV, one of the leading conferences in computer vision and machine learning. UCE was recently presented at the WACV conference. ReFACT was presented in Mexico at NAACL, one of the leading conferences in natural language processing research.

The research was supported by the Israel Science Foundation (ISF), the Azrieli Foundation, Open Philanthropy, FTX Future Fund, the Crown Family Foundation, and the Council for Higher Education. Hadas Orgad is an Apple AI doctoral fellow.



Method of Research: Meta-analysis

Article Title: ReFACT: Updating Text-to-Image Models by Editing the Text Encoder

Article Publication Date: 16-Jun-2024
