As artificial intelligence is inevitably woven into the workplace, teams of humans will increasingly collaborate with robots on complex design problems, such as those in the auto, aviation, and space industries.
“Right now, design is mainly done by humans, and it’s based on their expertise and intuitive decision-making, which is learned over time,” says A. Emrah Bayrak, an assistant professor of mechanical engineering and mechanics in Lehigh University’s P.C. Rossin College of Engineering and Applied Science. “Usually, you’re not creating something totally new. You take something that works already, understand how it works, and make incremental changes. But introducing AI could make the process a lot faster—and potentially more innovative.”
However, best practices for integrating AI in a way that maximizes both productivity and the job satisfaction of human workers remain unclear. Bayrak recently won support from the National Science Foundation’s Faculty Early Career Development (CAREER) program for his proposal to allocate portions of complex design problems to human and AI teams based on their capabilities and preferences.
The prestigious NSF CAREER award is given annually to junior faculty members across the U.S. who exemplify the role of teacher-scholars through outstanding research, excellent education, and the integration of education and research. Each award provides approximately $500,000 in support over a five-year period.
Bayrak will explore the problem of dividing a complex task between human designers and AI from both a computational and experimental perspective. For the former, he’ll use models that predict how a rational human being would explore the design of, say, the powertrain in an electric vehicle.
“We know that decision-making is a sequential process,” he says. “People will make a decision, look at the outcome, and revise their next decision accordingly. In order to maximize the range of an EV, when humans consider the design of the powertrain, they have to make decisions about gear ratios, motor size, and battery size. These are all mathematical variables that we can feed into a model to predict what the next decision should be if a human is a rational person.”
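The sequential process Bayrak describes can be sketched in code. The snippet below is a minimal illustration, not his model: it uses a made-up surrogate function for EV range and revises one design variable at a time, observing the outcome before moving to the next decision. All variable names and numbers are assumptions chosen for the example.

```python
def estimated_range(gear_ratio, motor_kw, battery_kwh):
    """Toy surrogate for EV range in km (an assumption, not a real powertrain model)."""
    drivetrain_efficiency = 0.9 - 0.02 * abs(gear_ratio - 8.0)   # best near an 8:1 ratio
    consumption_kwh_per_km = 0.15 + 0.0005 * motor_kw            # bigger motor draws more
    return battery_kwh * drivetrain_efficiency / consumption_kwh_per_km

# Sequential decision-making: hold other variables fixed, pick the best value
# for one variable, observe the outcome, then move on to the next decision.
design = {"gear_ratio": 6.0, "motor_kw": 150.0, "battery_kwh": 60.0}
choices = {
    "gear_ratio": [6.0, 7.0, 8.0, 9.0],
    "motor_kw": [100.0, 150.0, 200.0],
    "battery_kwh": [50.0, 60.0, 75.0],
}
for var, options in choices.items():
    best = max(options, key=lambda v: estimated_range(**{**design, var: v}))
    design[var] = best  # revise the design in light of the observed outcome

print(design, round(estimated_range(**design), 1))
```

A fully rational designer in this toy setting ends up at the 8:1 gear ratio, the smallest motor, and the largest battery; real designers deviate from that path, which is exactly what the predictive models aim to capture.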
AI, in contrast, makes decisions based on training data. Feed it data on good decisions regarding gears, motors, and batteries, and it can then estimate possible vehicle designs that will yield an acceptable range. AI could also use that knowledge to think about what the next design decision should be.
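As a rough illustration of that idea, the sketch below fits a simple least-squares surrogate to a handful of past designs and uses it to screen a new candidate. The training data is synthetic (exactly linear, so the fit is clean) and stands in for the "good decisions" an AI would actually be trained on; none of it comes from Bayrak's work.

```python
import numpy as np

# Synthetic past designs: (gear_ratio, motor_kw, battery_kwh) -> observed range (km).
# These numbers are illustrative assumptions, generated from an exact linear rule.
X = np.array([
    [8.0, 100.0, 50.0],
    [7.0, 150.0, 60.0],
    [9.0, 120.0, 75.0],
    [8.0, 200.0, 60.0],
    [6.0, 100.0, 75.0],
])
y = np.array([260.0, 270.0, 341.0, 270.0, 315.0])

# Fit a linear surrogate by least squares (with an intercept column).
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The trained surrogate can now estimate whether a candidate design
# yields an acceptable range before anyone builds or simulates it.
candidate = np.array([8.0, 120.0, 70.0, 1.0])
predicted_range = candidate @ coef
print(f"predicted range: {predicted_range:.0f} km")
```

A real system would use a richer model and real data, but the workflow is the same: learn from past designs, then use the learned model to inform the next decision.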
Bayrak’s model will also contain different human archetypes. For example, a person who trusts AI completely versus one who does not, and those who hover somewhere in the middle. The model will combine the mathematical variables that represent decision-making with the full range of archetypes to determine strategies for the division of labor between humans and AI.
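One simple way to make such archetypes concrete, purely as an assumed illustration rather than Bayrak's formulation, is a trust weight that blends a designer's own estimate with the AI's suggestion:

```python
# Archetypes expressed as trust levels: how much weight a designer gives the
# AI's suggestion when revising a shared design variable. The convex-combination
# form and the specific weights are assumptions for illustration.
ARCHETYPES = {"skeptic": 0.1, "moderate": 0.5, "full_trust": 1.0}

def revised_value(own_estimate: float, ai_suggestion: float, trust: float) -> float:
    """Blend the designer's own estimate with the AI's suggestion by trust weight."""
    return (1.0 - trust) * own_estimate + trust * ai_suggestion

# Example: the designer thinks the battery should be 60 kWh; the AI suggests 75 kWh.
for name, trust in ARCHETYPES.items():
    print(name, revised_value(60.0, 75.0, trust))
```

Sweeping such a parameter across its range is one way a computational model can ask how the division of labor should change for skeptics versus full trusters.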
Bayrak will then test those findings experimentally. Study participants will be asked to work together with AI to design a vehicle in a virtual environment.
“We give them a design problem and tell the people which decisions they’re responsible for making and which are the responsibility of the AI. They work together, and the goal is to collect the data and see if the computational results reflect the experimental findings. In other words, do designers act as predicted by the computational models? And do designers who don’t fully trust AI end up satisfied with the division of labor?” says Bayrak.
The ultimate goal, he says, is not to replace humans in the workplace. Rather, it’s to develop principles for how and to what extent AI should be integrated into complex design projects. And those guidelines will reflect different priorities—for example, a team may want to incorporate AI as merely an assistant, or it may want to give it significant responsibility. Teams may want to prioritize quick decision-making, innovation, or job satisfaction.
“The idea is that we’ll have quantitative evidence that reveals which practices work well to achieve specific objectives and which do not,” he says. “This work could potentially shape how organizations are structured in the future, and that is very exciting.”
About A. Emrah Bayrak
Alparslan Emrah Bayrak is an assistant professor in the Department of Mechanical Engineering and Mechanics in the P.C. Rossin College of Engineering and Applied Science at Lehigh University. He joined Lehigh in January 2024.
Bayrak’s research focuses on bridging computational methods and human cognition to develop human-computer collaboration architectures for the design and control of smart products and systems. He is particularly interested in developing artificial intelligence systems that can effectively collaborate with humans, considering the unique capabilities of humans and computational systems. His research uses methods from design, controls, game theory, and machine learning, as well as human-subject experiments in virtual environments such as video games.
Bayrak earned his MS and PhD in mechanical engineering from the University of Michigan and a BS in mechatronics engineering from Sabanci University (Turkey).
Related Links
- Faculty Profile: A. Emrah Bayrak
- NSF Award Abstract (# 2339546): CAREER: Problem Partitioning and Division of Labor for Human-Computer Collaboration in Engineering Design