Scienmag

Governance frameworks should address the prospect of AI systems that cannot be safely tested

April 18, 2024
in Technology and Engineering

In this Policy Forum, Michael Cohen and colleagues highlight the unique risks posed by a particular class of artificial intelligence (AI) systems: reinforcement learning (RL) agents that plan more effectively than humans over long horizons. “Giving [such] an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop,” write Cohen and colleagues. This incentive also arises for long-term planning agents (LTPAs) more generally, say the authors, and in ways empirical testing is unlikely to cover. It is thus critical to address extinction risk from these systems, say Cohen et al., and doing so will require new forms of government intervention. Although governments have expressed some concern about existential risks from AI and have taken promising first steps, in the U.S. and U.K. in particular, regulatory proposals to date do not adequately address this class of risk: losing control of advanced LTPAs. Even empirical safety testing – the prevailing regulatory approach for AI – is likely to be either dangerous or uninformative for a sufficiently capable LTPA, say the authors. Accordingly, Cohen and colleagues propose that developers not be permitted to build sufficiently capable LTPAs, and that the resources required to build them be subject to stringent controls. On the question of how capable counts as “sufficiently capable” for an LTPA, the authors offer insight to guide regulators and policymakers.
The authors note that they do not believe existing AI systems exhibit existentially dangerous capabilities, nor do they exhibit several of the capabilities mentioned in President Biden’s recent executive order on AI, “and it is very difficult to predict when they could.” They add that although their proposal for governing LTPAs fills an important gap, “further institutional mechanisms will likely be needed to mitigate the risks posed by advanced artificial agents.”
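The incentive the authors describe can be illustrated with a toy calculation (this sketch is not from the paper; every number in it is an arbitrary assumption): a reward-maximizing agent compares complying with human oversight, under which reward is sometimes withheld, against paying a hypothetical one-time cost to disable that oversight. The advantage of removal grows with the planning horizon.

```python
def expected_return(reward_per_step: float, horizon: int,
                    p_withheld: float, discount: float = 0.99) -> float:
    """Expected discounted return when each step's reward may be withheld."""
    return sum((discount ** t) * reward_per_step * (1.0 - p_withheld)
               for t in range(horizon))

REMOVAL_COST = 10.0  # hypothetical one-time cost of disabling oversight

for horizon in (10, 1000):
    # Comply: humans withhold reward 20% of the time (arbitrary figure).
    comply = expected_return(1.0, horizon, p_withheld=0.2)
    # Defect: pay the removal cost once, then reward is never withheld.
    defect = expected_return(1.0, horizon, p_withheld=0.0) - REMOVAL_COST
    print(horizon, "removing oversight pays off:", defect > comply)
```

With these arbitrary numbers, removing oversight loses money at the 10-step horizon but pays off at the 1000-step horizon, which is why the authors single out agents that plan over long horizons.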




Journal: Science
DOI: 10.1126/science.adl0625
Article Title: Regulating advanced artificial agents
Article Publication Date: 5-Apr-2024
