A unified objective for dynamics model and policy learning in model-based reinforcement learning

September 4, 2024

Model-based reinforcement learning has recently been considered a crucial approach for applying reinforcement learning in the physical world, primarily because of its efficient use of samples. However, the model learned by supervised learning, which generates rollouts for policy optimization, suffers from compounding errors that hinder policy performance. To address this problem, the research team led by Yang YU published new research on 15 August 2024 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
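The compounding-error problem can be illustrated with a toy example (our illustration, not from the paper): a learned model whose one-step prediction error is tiny can still drift far from the true dynamics over a long rollout, because each step's error feeds into the next prediction.

```python
import numpy as np

# Toy illustration (not from the paper): a learned model with a small
# one-step error drifts far from the true dynamics over a long rollout.

def true_step(s):
    return 0.9 * s + 1.0   # true linear dynamics

def model_step(s):
    return 0.92 * s + 1.0  # learned model with a slight coefficient error

def rollout(step, s0, horizon):
    s, traj = s0, [s0]
    for _ in range(horizon):
        s = step(s)
        traj.append(s)
    return np.array(traj)

true_traj = rollout(true_step, s0=1.0, horizon=50)
model_traj = rollout(model_step, s0=1.0, horizon=50)
errors = np.abs(true_traj - model_traj)

print(f"error after 1 step:   {errors[1]:.3f}")   # small one-step error
print(f"error after 50 steps: {errors[-1]:.3f}")  # error has compounded
```

A policy optimized against such long model rollouts is effectively optimized against the wrong dynamics, which is the failure mode the paper targets.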

The team proposed a novel model-based learning approach, the Model Gradient algorithm (MG), which unifies the objectives of model learning and policy learning by directly maximizing the policy's performance in the real world. Compared with existing model-based methods, this approach achieves both higher sample efficiency and better final performance.

This research identifies a key limitation of current model-based reinforcement learning methods that fit the model by supervised learning: model inaccuracy leads to compounding error. The authors propose to address the problem by modifying the model-learning objective. A supervised model-learning objective is not designed to help the policy achieve better performance, because it does not align with the ultimate goal of reinforcement learning, i.e., maximizing real-world policy performance. This research therefore unifies the objectives of model learning and policy learning, starting from the policy gradient. By maximizing the real-world performance of the policy learned in the model, it derives a gradient for the model that points in the direction of policy improvement; this gradient takes the form of increasing the similarity between the policy gradient computed in the real environment and the policy gradient computed in the model. Adopting this model-update rule, the authors develop a novel model-based reinforcement learning algorithm called the Model Gradient algorithm (MG).
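A minimal numerical sketch of this gradient-matching idea (our simplification with made-up toy dynamics and names, not the paper's exact algorithm): the model parameter is updated by ascending a similarity objective between the policy gradient estimated from real-environment data and the one estimated under the model.

```python
import numpy as np

# Sketch of the gradient-matching model update (our toy simplification, not
# the paper's exact algorithm). Policy: a ~ N(theta, 1); dynamics: s' = c * a;
# reward: r = -(s')**2. The model's coefficient starts wrong and is updated
# so that its policy gradient matches the real environment's.

rng = np.random.default_rng(0)
theta = 2.0                                   # current policy parameter
actions = theta + rng.standard_normal(5000)   # fixed action sample (common random numbers)

def policy_grad(coef):
    """REINFORCE-style estimate: mean of grad log pi(a) * reward."""
    rewards = -(coef * actions) ** 2
    score = actions - theta                   # grad_theta log N(a; theta, 1)
    return np.mean(score * rewards)

true_coef = 1.0                               # real environment
model_coef = 0.3                              # poorly learned model
g_real = policy_grad(true_coef)               # policy gradient in the real environment

def similarity(coef):
    # Higher when the model's policy gradient matches the real one.
    return -(g_real - policy_grad(coef)) ** 2

eps, lr = 1e-2, 1e-3                          # finite-difference step, learning rate
for _ in range(300):
    g = (similarity(model_coef + eps) - similarity(model_coef - eps)) / (2 * eps)
    model_coef += lr * g                      # ascend the similarity objective

print(f"model coefficient after gradient matching: {model_coef:.2f}")
```

In this toy problem the model coefficient recovers the true value, because matching policy gradients forces the model to agree with the real environment exactly where it matters for policy improvement; the paper develops this idea for general dynamics models and policies.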

Experimental results demonstrate that MG outperforms other model-based reinforcement learning baselines that use supervised model fitting on multiple continuous-control tasks. MG exhibits especially stable performance on sparse-reward tasks, even compared with state-of-the-art Dyna-style model-based reinforcement learning methods that use short-horizon rollouts.

As future work, the authors consider extending this formulation to other policy-optimization settings, such as off-policy methods.

DOI: 10.1007/s11704-023-3150-5
 

The processing flow of Model Gradient

Credit: Chengxing JIA, Fuxiang ZHANG, Tian XU, Jing-Cheng PANG, Zongzhang ZHANG, Yang YU




Journal

Frontiers of Computer Science

DOI

10.1007/s11704-023-3150-5

Method of Research

Experimental study

Subject of Research

Not applicable

Article Title

A unified objective for dynamics model and policy learning in model-based reinforcement learning

Article Publication Date

15-Aug-2024
