Thursday, April 9, 2026
Science
No Result
View All Result
  • Login
  • HOME
  • SCIENCE NEWS
  • CONTACT US
  • HOME
  • SCIENCE NEWS
  • CONTACT US
No Result
View All Result
Scienmag
No Result
View All Result
Home Science News Technology and Engineering

Governance frameworks should address the prospect of AI systems that cannot be safely tested

April 18, 2024
in Technology and Engineering
Reading Time: 3 mins read
0
66
SHARES
604
VIEWS
Share on FacebookShare on Twitter
ADVERTISEMENT

In this Policy Forum, Michael Cohen and colleagues highlight the unique risks presented by a particular class of artificial intelligence (AI) systems: reinforcement learning (RL) agents that plan more effectively than humans over long horizons. “Giving [such] an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop,” write Cohen and colleagues. This incentive also arises for long-term planning agents (LTPAs) more generally, say the authors, and in ways empirical testing is unlikely to cover. It is thus critical to address extinction risk from these systems, say Cohen et al., and this will require new forms of government intervention. Although governments have expressed some concern about existential risks from AI and taken promising first steps in the U.S. and U.K, in particular, regulatory proposals to date do not adequately address this particular class of risk – losing control of advanced LTPAs. Even empirical safety testing – the prevailing regulatory approach for AI – is likely to be either dangerous or uninformative, for a sufficiently capable LTPA, say the authors. Accordingly, Cohen and colleagues propose that developers not be permitted to build sufficiently capable LTPAs, and that the resources required to build them be subject to stringent controls. When it comes to determining how capable is “sufficiently capable,” for an LTPA,  the authors offer insight to guide regulators and policymakers. They note they do not believe that existing AI systems exhibit existentially dangerous capabilities, nor do they exhibit several of the capabilities mentioned in President Biden’s recent executive order on AI, “and it is very difficult to predict when they could.” The authors note that although their proposal for governing LTPAs fills an important gap, “further institutional mechanisms will likely be needed to mitigate the risks posed by advanced artificial agents.”

In this Policy Forum, Michael Cohen and colleagues highlight the unique risks presented by a particular class of artificial intelligence (AI) systems: reinforcement learning (RL) agents that plan more effectively than humans over long horizons. “Giving [such] an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop,” write Cohen and colleagues. This incentive also arises for long-term planning agents (LTPAs) more generally, say the authors, and in ways empirical testing is unlikely to cover. It is thus critical to address extinction risk from these systems, say Cohen et al., and this will require new forms of government intervention. Although governments have expressed some concern about existential risks from AI and taken promising first steps in the U.S. and U.K, in particular, regulatory proposals to date do not adequately address this particular class of risk – losing control of advanced LTPAs. Even empirical safety testing – the prevailing regulatory approach for AI – is likely to be either dangerous or uninformative, for a sufficiently capable LTPA, say the authors. Accordingly, Cohen and colleagues propose that developers not be permitted to build sufficiently capable LTPAs, and that the resources required to build them be subject to stringent controls. When it comes to determining how capable is “sufficiently capable,” for an LTPA,  the authors offer insight to guide regulators and policymakers. They note they do not believe that existing AI systems exhibit existentially dangerous capabilities, nor do they exhibit several of the capabilities mentioned in President Biden’s recent executive order on AI, “and it is very difficult to predict when they could.” The authors note that although their proposal for governing LTPAs fills an important gap, “further institutional mechanisms will likely be needed to mitigate the risks posed by advanced artificial agents.”



Journal

Science

DOI

10.1126/science.adl0625

Article Title

Regulating advanced artificial agents

Article Publication Date

5-Apr-2024

Share26Tweet17
Previous Post

Sylvester physician co-authors global plan to combat prostate cancer

Next Post

Cells engineered to produce immune-boosting amino acids in prizewinning research

Related Posts

Technology and Engineering

Machine Learning Predicts Class III Malocclusion Treatment

April 9, 2026
blank
Technology and Engineering

AI-Driven Plastic Waste Management: Paving the Way to Zero-Waste Cities

April 9, 2026
blank
Technology and Engineering

Smart Polymer Films Revolutionize Electronics: Pioneering Flexible Circuit Boards Unveiled at Hannover Messe

April 9, 2026
blank
Medicine

Ambiphilic Cross-Coupling via Aryl-Bismuth Reagents

April 9, 2026
blank
Technology and Engineering

Heterojunction and Doping Engineering Synergy Drives Breakthrough in Oxygen Evolution Catalyst

April 9, 2026
blank
Technology and Engineering

Researchers Create Deep Learning Framework Using Spatiotemporal Correlations to Correct Biases in Atmospheric and Oceanic Data

April 9, 2026
Next Post

Cells engineered to produce immune-boosting amino acids in prizewinning research

  • Mothers who receive childcare support from maternal grandparents show more parental warmth, finds NTU Singapore study

    Mothers who receive childcare support from maternal grandparents show more parental warmth, finds NTU Singapore study

    27633 shares
    Share 11050 Tweet 6906
  • University of Seville Breaks 120-Year-Old Mystery, Revises a Key Einstein Concept

    1036 shares
    Share 414 Tweet 259
  • Bee body mass, pathogens and local climate influence heat tolerance

    675 shares
    Share 270 Tweet 169
  • Researchers record first-ever images and data of a shark experiencing a boat strike

    537 shares
    Share 215 Tweet 134
  • Groundbreaking Clinical Trial Reveals Lubiprostone Enhances Kidney Function

    523 shares
    Share 209 Tweet 131
Science

Embark on a thrilling journey of discovery with Scienmag.com—your ultimate source for cutting-edge breakthroughs. Immerse yourself in a world where curiosity knows no limits and tomorrow’s possibilities become today’s reality!

RECENT NEWS

  • Rocahepevirus Ratti: Evolution, Zoonosis, and Health Impact
  • Amphibole Rims Reveal Shear in Rising Magma
  • STAT3-Driven ITGB4 Upregulation Lowers Bladder Cancer Cisplatin Sensitivity
  • Obsessive–Compulsive Disorder Risk Post-Traumatic Events

Categories

  • Agriculture
  • Anthropology
  • Archaeology
  • Athmospheric
  • Biology
  • Biotechnology
  • Blog
  • Bussines
  • Cancer
  • Chemistry
  • Climate
  • Earth Science
  • Editorial Policy
  • Marine
  • Mathematics
  • Medicine
  • Pediatry
  • Policy
  • Psychology & Psychiatry
  • Science Education
  • Social Science
  • Space
  • Technology and Engineering

Subscribe to Blog via Email

Enter your email address to subscribe to this blog and receive notifications of new posts by email.

Join 5,146 other subscribers

© 2025 Scienmag - Science Magazine

Welcome Back!

Login to your account below

Forgotten Password?

Retrieve your password

Please enter your username or email address to reset your password.

Log In
No Result
View All Result
  • HOME
  • SCIENCE NEWS
  • CONTACT US

© 2025 Scienmag - Science Magazine

Discover more from Science

Subscribe now to keep reading and get access to the full archive.

Continue reading