Using photos or videos, these AI systems can conjure simulations that train robots to function in physical spaces

August 7, 2024
in Technology and Engineering

Researchers working on large artificial intelligence models like ChatGPT have vast swaths of internet text, photos and videos to train systems. But roboticists training physical machines face barriers: Robot data is expensive, and because there aren’t fleets of robots roaming the world at large, there simply isn’t enough data easily available to make them perform well in dynamic environments, such as people’s homes.

Some researchers have turned to simulations to train robots. Yet even that process, which often involves a graphic designer or engineer, is laborious and costly.

Two new studies from University of Washington researchers introduce AI systems that use either video or photos to create simulations that can train robots to function in real settings. This approach could significantly lower the cost of preparing robots to work in complex environments.

In the first study, a user quickly scans a space with a smartphone to record its geometry. The system, called RialTo, can then create a “digital twin” simulation of the space, where the user can enter how different things function (opening a drawer, for instance). A robot can then virtually repeat motions in the simulation with slight variations to learn to do them effectively. In the second study, the team built a system called URDFormer, which takes images of real environments from the internet and quickly creates physically realistic simulation environments where robots can train.
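
Neither paper's code appears in this article, but the workflow described above maps onto a simple pipeline. The following is a hypothetical Python outline, with invented names such as scan_to_digital_twin, annotate_articulation and train_in_simulation, meant only to illustrate the scan, annotate and train stages; it is not the RialTo implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ArticulatedObject:
    """A movable part the user labels in the digital twin (e.g. a drawer)."""
    name: str
    joint_type: str                    # "prismatic" for drawers, "revolute" for doors
    limits: tuple[float, float]

@dataclass
class DigitalTwin:
    """Simulated copy of a scanned room."""
    mesh_file: str
    objects: list[ArticulatedObject] = field(default_factory=list)

def scan_to_digital_twin(phone_scan_path: str) -> DigitalTwin:
    """Hypothetical: reconstruct room geometry from a quick phone scan."""
    return DigitalTwin(mesh_file=phone_scan_path + ".mesh")

def annotate_articulation(twin: DigitalTwin) -> DigitalTwin:
    """Hypothetical: the user marks how parts move through a GUI."""
    twin.objects.append(ArticulatedObject("drawer", "prismatic", (0.0, 0.4)))
    return twin

def train_in_simulation(twin: DigitalTwin, task: str, episodes: int = 1000) -> None:
    """Hypothetical: run reinforcement learning in the twin (see the loop sketched later)."""
    print(f"training '{task}' for {episodes} episodes in {twin.mesh_file}")

if __name__ == "__main__":
    twin = annotate_articulation(scan_to_digital_twin("kitchen_scan"))
    train_in_simulation(twin, task="open the drawer")
```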

The teams presented their studies, the first on July 16 and the second on July 19, at the Robotics: Science and Systems conference in Delft, the Netherlands.

“We’re trying to enable systems that cheaply go from the real world to simulation,” said Abhishek Gupta, a UW assistant professor in the Paul G. Allen School of Computer Science & Engineering and co-senior author on both papers. “The systems can then train robots in those simulation scenes, so the robot can function more effectively in a physical space. That’s useful for safety — you can’t have poorly trained robots breaking things and hurting people — and it potentially widens access. If you can get a robot to work in your house just by scanning it with your phone, that democratizes the technology.”

While many robots are currently well suited to structured environments like assembly lines, teaching them to interact with people and operate in less structured environments remains a challenge.

“In a factory, for example, there’s a ton of repetition,” said lead author of the URDFormer study Zoey Chen, a UW doctoral student in the Allen School. “The tasks might be hard to do, but once you program a robot, it can keep doing the task over and over and over. Whereas homes are unique and constantly changing. There’s a diversity of objects, of tasks, of floorplans and of people moving through them. This is where AI becomes really useful to roboticists.”

The two systems approach these challenges in different ways.

RialTo, which Gupta created with a team at the Massachusetts Institute of Technology, has someone pass through an environment and take video of its geometry and moving parts. For instance, in a kitchen, they'll open the cabinets, the toaster and the fridge. The system then uses existing AI models, along with some quick work by a human in a graphical user interface to show how things move, to create a simulated version of the kitchen shown in the video. A virtual robot trains itself through trial and error in the simulated environment by repeatedly attempting tasks such as opening the toaster, a method called reinforcement learning.

By going through this process in the simulation, the robot improves at that task and works around disturbances or changes in the environment, such as a mug placed beside the toaster. The robot can then transfer that learning to the physical environment, where it’s nearly as accurate as a robot trained in the real kitchen.
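
The article does not give the environment or training code, but the trial-and-error process it describes is reinforcement learning under randomized disturbances. The toy sketch below is not RialTo's code: it uses a one-dimensional "open the toaster door" task whose start state is perturbed each episode (standing in for disturbances like a mug beside the toaster), with tabular Q-learning as a stand-in for the real algorithm.

```python
import random

# Toy task: pull a toaster door from angle ~0.0 (closed) to >= 1.0 (open).
ACTIONS = [0.05, 0.15, 0.30]           # how hard to pull per step
N_BINS = 10                            # discretize the door angle for tabular Q-learning

def reset() -> float:
    return random.uniform(0.0, 0.2)    # randomized disturbance of the initial state

def step(angle: float, pull: float) -> tuple[float, float, bool]:
    angle = min(angle + pull + random.gauss(0, 0.01), 1.2)   # noisy dynamics
    done = angle >= 1.0
    reward = 1.0 if done else -0.01                          # small time penalty
    return angle, reward, done

def bin_of(angle: float) -> int:
    return min(int(angle * N_BINS), N_BINS - 1)

# Q-table: rows are state bins, columns are actions.
Q = [[0.0] * len(ACTIONS) for _ in range(N_BINS)]
alpha, gamma, epsilon = 0.1, 0.95, 0.2

for episode in range(2000):
    angle, done = reset(), False
    while not done:
        s = bin_of(angle)
        a = (random.randrange(len(ACTIONS)) if random.random() < epsilon
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        angle, reward, done = step(angle, ACTIONS[a])
        s2 = bin_of(angle)
        target = reward + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])   # trial-and-error update

print("learned preferred pull per state bin:",
      [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])] for s in range(N_BINS)])
```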

The other system, URDFormer, is focused less on relatively high accuracy in a single kitchen; instead, it quickly and cheaply conjures hundreds of generic kitchen simulations. URDFormer scans images from the internet and pairs them with existing models of how, for instance, those kitchen drawers and cabinets will likely move. It then predicts a simulation from the initial real-world image, allowing researchers to quickly and inexpensively train robots in a huge range of environments. The trade-off is that these simulations are significantly less accurate than those that RialTo generates.
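
URDFormer's name points to URDF, the Unified Robot Description Format, an XML format that common robot simulators read to define rigid links and the joints connecting them. The sketch below is purely illustrative and not the paper's pipeline: it prints a minimal URDF description of a cabinet with a sliding drawer, randomizing the articulation parameter that the real system would instead predict from an internet image.

```python
import random
from textwrap import dedent

def cabinet_urdf(name: str, drawer_travel: float) -> str:
    """Hypothetical stand-in for a predicted scene: a minimal URDF description
    of a cabinet body with one drawer on a prismatic (sliding) joint."""
    return dedent(f"""\
        <robot name="{name}">
          <link name="body"/>
          <link name="drawer"/>
          <joint name="drawer_slide" type="prismatic">
            <parent link="body"/>
            <child link="drawer"/>
            <axis xyz="1 0 0"/>
            <limit lower="0.0" upper="{drawer_travel:.2f}" effort="10" velocity="0.5"/>
          </joint>
        </robot>
        """)

# In the spirit of URDFormer: conjure many cheap, generic variants rather than
# one faithful reconstruction. Here the drawer travel is simply randomized;
# in the real system such parameters are predicted from real-world images.
if __name__ == "__main__":
    for i in range(3):
        print(cabinet_urdf(f"kitchen_cabinet_{i}", drawer_travel=random.uniform(0.3, 0.5)))
```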

“The two approaches can complement each other,” Gupta said. “URDFormer is really useful for pre-training on hundreds of scenarios. RialTo is particularly useful if you’ve already pre-trained a robot, and now you want to deploy it in someone’s home and have it be maybe 95% successful.”
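
Gupta's point about the two systems complementing each other is essentially a pretrain-then-fine-tune recipe. The toy sketch below is invented and not either paper's API: it trains a single numeric "policy" parameter broadly on many noisy URDFormer-style scene targets, then fine-tunes it on one accurate RialTo-style target.

```python
import random

def train(policy: float, scene_targets: list[float], lr: float, steps: int) -> float:
    """Toy 'training': nudge one policy parameter toward randomly sampled scene targets.
    Stands in for running a full RL loop in each simulated scene."""
    for _ in range(steps):
        target = random.choice(scene_targets)
        policy += lr * (target - policy)
    return policy

# 1) Pretrain broadly on many cheap, roughly-right generic scenes.
# 2) Fine-tune on one accurate digital twin of the target home.
generic_scenes = [random.uniform(0.8, 1.2) for _ in range(200)]   # noisy targets
home_twin = [1.0]                                                 # accurate target

policy = 0.0
policy = train(policy, generic_scenes, lr=0.05, steps=500)   # pretraining
policy = train(policy, home_twin, lr=0.05, steps=200)        # fine-tuning
print(f"policy parameter after pretrain + fine-tune: {policy:.3f}")
```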

Moving forward, the RialTo team wants to deploy its system in people's homes (it's largely been tested in a lab), and Gupta said he wants to incorporate small amounts of real-world training data into the systems to improve their success rates.

“Hopefully, just a tiny amount of real-world data can fix the failures,” Gupta said. “But we still have to figure out how best to combine data collected directly in the real world, which is expensive, with data collected in simulations, which is cheap, but slightly wrong.”
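
One common way to combine a small amount of expensive real-world data with a large amount of cheap simulation data is to oversample the real data when assembling training batches. The sketch below is a hypothetical illustration; the mixing ratio and dataset names are invented.

```python
import random

def mixed_batch(real_data: list, sim_data: list, batch_size: int = 32,
                real_fraction: float = 0.25) -> list:
    """Sample a training batch that oversamples the small real-world dataset
    relative to its size, so cheap-but-slightly-wrong sim data doesn't drown it out."""
    n_real = max(1, int(batch_size * real_fraction))
    batch = random.choices(real_data, k=n_real)            # with replacement: real set is tiny
    batch += random.choices(sim_data, k=batch_size - n_real)
    random.shuffle(batch)
    return batch

# Illustrative data: a handful of real trajectories vs. thousands of simulated ones.
real = [f"real_traj_{i}" for i in range(20)]
sim = [f"sim_traj_{i}" for i in range(5000)]
print(mixed_batch(real, sim)[:8])
```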

Additional co-authors on the URDFormer paper include the UW's Aaron Walsman, Marius Memmel and Alex Fang, all doctoral students in the Allen School; Karthikeya Vemuri, an undergraduate in the Allen School; Alan Wu, a master's student in the Allen School; and Kaichun Mo, a research scientist at NVIDIA. Dieter Fox, a professor in the Allen School, was a co-senior author. Additional co-authors on the RialTo paper include MIT's Marcel Torne, Anthony Simeonov and Tao Chen, all doctoral students; Zechu Li, a research assistant; and April Chan, an undergraduate. Pulkit Agrawal, an assistant professor at MIT, was a co-senior author. The URDFormer research was partially funded by Amazon Science Hub. The RialTo research was partially funded by the Sony Research Award, the U.S. Government and Hyundai Motor Company.

For more information, contact Gupta at abhgupta@cs.washington.edu and Chen at qiuyuc@uw.edu.



Article Title

URDFormer: A Pipeline for Constructing Articulated Simulation Environments from Real-World Images

Article Publication Date

19-Jul-2024
