{"id":11912,"date":"2024-06-25T20:55:30","date_gmt":"2024-06-25T20:55:30","guid":{"rendered":"https:\/\/scienmag.com\/researchers-develop-new-training-technique-that-aims-to-make-ai-systems-less-socially-biased\/"},"modified":"2024-06-25T20:55:30","modified_gmt":"2024-06-25T20:55:30","slug":"researchers-develop-new-training-technique-that-aims-to-make-ai-systems-less-socially-biased","status":"publish","type":"post","link":"https:\/\/scienmag.com\/researchers-develop-new-training-technique-that-aims-to-make-ai-systems-less-socially-biased\/","title":{"rendered":"Researchers develop new training technique that aims to make AI systems less socially biased"},"content":{"rendered":"

CORVALLIS, Ore. – An Oregon State University doctoral student and researchers at Adobe have created a new, cost-effective training technique for artificial intelligence systems that aims to make them less socially biased.

\"Eric<\/p>\n

Credit: Johanna Carson, OSU College of Engineering<\/p>\n


Eric Slyman of the OSU College of Engineering and the Adobe researchers call the novel method FairDeDup, an abbreviation for fair deduplication. Deduplication means removing redundant information from the data used to train AI systems, which lowers the high computing costs of the training.

Datasets gleaned from the internet often contain biases present in society, the researchers said. When those biases are codified in trained AI models, they can serve to perpetuate unfair ideas and behavior.

By understanding how deduplication affects bias prevalence, it's possible to mitigate negative effects – such as an AI system serving up only photos of white men when asked to show a picture of a CEO or doctor, even though the intended use case is to show diverse representations of people.

“We named it FairDeDup as a play on words for an earlier cost-effective method, SemDeDup, which we improved upon by incorporating fairness considerations,” Slyman said. “While prior work has shown that removing this redundant data can enable accurate AI training with fewer resources, we find that this process can also exacerbate the harmful social biases AI often learns.”

Slyman presented the FairDeDup algorithm last week in Seattle at the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

FairDeDup works by thinning datasets of image captions collected from the web through a process known as pruning. Pruning means choosing a subset of the data that's representative of the whole dataset; done in a content-aware manner, it allows informed decisions about which parts of the data stay and which go.
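To make the idea concrete, here is a minimal sketch of embedding-based semantic deduplication in the spirit the release describes: cluster the examples, then keep only the most representative member of each cluster of near-duplicates. This is an illustration under stated assumptions, not the authors' implementation; the embeddings input, the cluster count, and the keep-one-per-cluster rule are all placeholders.

# Minimal sketch of semantic dataset pruning. Assumptions: `embeddings` is an
# (N, D) array of image-caption embeddings (e.g., from a CLIP-style encoder);
# the cluster count and keep-one-per-cluster rule are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def semantic_dedup(embeddings: np.ndarray, n_clusters: int = 100) -> np.ndarray:
    """Cluster the embeddings and keep the sample nearest each centroid."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
    kept = []
    for c in range(n_clusters):
        members = np.flatnonzero(km.labels_ == c)
        if members.size == 0:
            continue
        # Members of one cluster are near-duplicates; the sample closest to
        # the centroid stands in for the rest, which are pruned away.
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        kept.append(members[np.argmin(dists)])
    return np.array(sorted(kept))

Pruning the dataset to these kept indices is what lowers training cost; the fairness question is which member of each near-duplicate cluster gets to survive.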

“FairDeDup removes redundant data while incorporating controllable, human-defined dimensions of diversity to mitigate biases,” Slyman said. “Our approach enables AI training that is not only cost-effective and accurate but also more fair.”
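As a loose illustration of how a human-defined diversity dimension might steer which near-duplicates survive pruning, consider the sketch below. The attribute labels, the greedy balancing rule, and the function name are hypothetical; the release does not spell out FairDeDup's actual selection criterion.

# Hedged sketch: within one cluster of near-duplicate samples, keep up to k
# samples while balancing a human-defined attribute. The labels and greedy
# rule are hypothetical illustrations, not the FairDeDup algorithm itself.
from collections import Counter
import numpy as np

def fair_select(member_idx: np.ndarray, attributes: dict,
                dists: np.ndarray, k: int) -> list:
    """Greedily keep k cluster members, nearest-to-centroid first, preferring
    whichever attribute value is least represented among those kept so far."""
    order = list(member_idx[np.argsort(dists)])  # nearest to centroid first
    kept, counts = [], Counter()
    while order and len(kept) < k:
        # Among remaining candidates (already in distance order), take the
        # first one whose attribute value currently has the lowest count.
        best = min(order, key=lambda i: counts[attributes[i]])
        order.remove(best)
        kept.append(best)
        counts[attributes[best]] += 1
    return kept

Under a rule like this, a cluster of CEO photos that is overwhelmingly men would still contribute images of women to the pruned set – the kind of outcome the article's CEO example describes.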

In addition to occupation, race and gender, other biases perpetuated during training can include those related to age, geography and culture.

“By addressing biases during dataset pruning, we can create AI systems that are more socially just,” Slyman said. “Our work doesn’t force AI into following our own prescribed notion of fairness but rather creates a pathway to nudge AI to act fairly when contextualized within some settings and user bases in which it’s deployed. We let people define what is fair in their setting instead of the internet or other large-scale datasets deciding that.”

Collaborating with Slyman were Stefan Lee, an assistant professor in the OSU College of Engineering, and Scott Cohen and Kushal Kafle of Adobe.
Method of Research

Imaging analysis

Subject of Research

Not applicable

Article Title

FairDeDup: Detecting and Mitigating Vision-Language Fairness Disparities in Semantic Dataset Deduplication

Article Publication Date

17-Jun-2024

