In recent years, the escalating power demands of artificial intelligence (AI) and data center operations have raised significant concerns about energy consumption and sustainability. In 2024, AI and data centers in the United States alone reportedly consumed approximately 415 terawatt-hours (TWh) of electricity, a substantial share (over 10%) of the nation's total electricity consumption. This figure is projected to double by 2030, underscoring an urgent need for more energy-efficient AI solutions. Addressing this challenge, researchers at the School of Engineering have developed an innovative neuro-symbolic AI system that promises to drastically reduce energy usage while enhancing performance, especially in robotics applications.
The newly developed approach uniquely integrates conventional neural network architectures with symbolic reasoning methods, mimicking the human cognitive strategy of breaking tasks and concepts into hierarchical steps and categorical abstractions. This hybrid model, pioneered under the guidance of Matthias Scheutz, Karol Family Applied Technology Professor, achieves energy efficiency improvements as high as a hundredfold over standard AI systems while delivering superior accuracy in complex task execution. The team plans to present the research at the upcoming International Conference on Robotics and Automation, reinforcing the crucial intersection of AI, robotics, and sustainable energy use.
Unlike mainstream large language models (LLMs) such as ChatGPT or Gemini, which primarily operate within screen-based environments, this research focuses on vision-language-action (VLA) models. VLAs extend conventional LLMs with visual and motor capabilities, enabling robots to perceive their environment through cameras and execute physical actions, such as moving wheels, arms, or fingers, in response to language commands. Combining visual perception with linguistic processing is essential for robots to interact meaningfully with humans and their surroundings, but it traditionally comes at vast computational and energy expense.
In real-world scenarios, resource-intensive VLA models frequently struggle with seemingly straightforward tasks. For example, a robot tasked with stacking blocks to build a tower must identify each block’s location, shape, and orientation while interpreting the instruction with high precision. Conventional neural networks may falter due to ambiguous visual inputs like shadows or reflections, resulting in errors such as incorrectly grasping or placing the blocks, potentially causing instability in the tower. Such failures parallel inaccuracies seen in LLM text outputs, where hallucinations—fabricated or false information—can degrade reliability drastically.
The neuro-symbolic system addresses these limitations by introducing a symbolic reasoning layer that leverages abstract grouping and rule-based generalization. Instead of relying purely on large-scale pattern recognition, this AI applies logical constraints and puzzle-specific heuristics that guide decision-making. This approach yields more robust and energy-efficient learning, minimizing the trial-and-error cycles that typically consume excessive computational power in conventional VLA models.
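The researchers' actual implementation is not detailed in this article, but the propose-then-vet pattern described above can be sketched in a few lines. In this hypothetical example, candidate actions from a neural perception module (represented here by a hard-coded list) are checked against symbolic constraints before any motor command would be issued; the function name, toy domain, and rules are all illustrative, not the team's code.

```python
# Hypothetical sketch of the hybrid pattern: a neural module proposes
# actions, and a symbolic layer vets them against hard rules before acting.
def symbolic_filter(proposals, rules):
    """Keep only candidate actions that satisfy every symbolic constraint."""
    return [action for action in proposals if all(rule(action) for rule in rules)]

# Toy block-stacking domain: an action is a (block, destination) pair.
proposals = [("red", "table"), ("blue", "blue"), ("green", "red")]
rules = [
    lambda a: a[0] != a[1],      # a block cannot rest on itself
    lambda a: a[0] != "table",   # the table itself is never picked up
]
print(symbolic_filter(proposals, rules))  # [('red', 'table'), ('green', 'red')]
```

Because impossible actions are rejected outright rather than being discovered through failed attempts, this kind of rule-based vetting is one plausible source of the reduced trial-and-error cost the article describes.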
One compelling demonstration of this system’s superior performance involved the Tower of Hanoi—a classical puzzle used in AI research. Testing revealed the neuro-symbolic VLA achieved an impressive 95% success rate on the standard puzzle configuration, significantly outperforming the 34% success rate of traditional VLAs. Even more striking, when challenged with a novel, more complex variation of the puzzle it had never encountered before, the neuro-symbolic model maintained a 78% success rate, whereas the conventional models failed entirely to solve the puzzle. These results highlight the system’s potential for robust generalization beyond the training scenarios.
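For readers unfamiliar with the benchmark, the Tower of Hanoi has a well-known recursive solution, which is exactly the kind of compact symbolic plan a rule-based reasoner can exploit instead of rediscovering move patterns by brute-force pattern matching. The sketch below is the textbook algorithm, not the team's system:

```python
def hanoi(n, src, dst, aux):
    """Yield the optimal move sequence for n disks: a purely symbolic plan
    that a robot's motor layer could then execute move by move."""
    if n == 0:
        return
    yield from hanoi(n - 1, src, aux, dst)  # clear the way to the largest disk
    yield (src, dst)                        # move the largest remaining disk
    yield from hanoi(n - 1, aux, dst, src)  # restack the smaller disks on top

moves = list(hanoi(3, "A", "C", "B"))
print(len(moves))  # 7: the provably minimal plan length, 2**n - 1
```

The plan's guaranteed optimality is the point: a symbolic layer encodes the puzzle's structure once, whereas a purely neural policy must approximate it from examples.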
Beyond accuracy gains, the energy savings realized through neuro-symbolic AI are transformative. Training the neuro-symbolic VLA required only 34 minutes, a drastic reduction from the more than 36 hours needed for the conventional model’s training. Furthermore, the energy consumption for this training was merely 1% of that used by the standard approach. During operational use, the neuro-symbolic system continued to consume just around 5% of the energy needed by a conventional VLA, demonstrating remarkable efficiency across both learning and task execution stages.
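The reported training-time savings can be sanity-checked directly. Note that the 1% energy figure cannot be derived from training time alone, since average power draw differs between the two systems; this only shows the time ratio is of the same order:

```python
# Figures as reported in the article.
neuro_symbolic_minutes = 34
conventional_minutes = 36 * 60  # "more than 36 hours", so this is a lower bound

ratio = neuro_symbolic_minutes / conventional_minutes
print(f"{ratio:.1%}")  # 1.6% of the conventional training time, at most
```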
The implications of adopting neuro-symbolic frameworks extend well beyond robotics. Scheutz points to analogous inefficiencies in popular LLM-powered applications, such as AI-driven search engine summaries, which can expend up to 100 times more energy than generating the underlying webpage content itself. These disproportionate energy costs raise broader concerns about the sustainability and scalability of current AI architectures, especially as demand increases exponentially and models become larger and more complex.
At present, the relentless drive toward ever-larger AI models and data center capacity fuels a competitive arms race, with monumental server facilities consuming as much power as entire small cities. Examples include Sandia National Laboratories' servers, xAI's Colossus in Memphis, and massive projects such as Microsoft and OpenAI's Stargate, now under construction. Such infrastructures require hundreds of megawatts to operate, posing a fundamental challenge to the sustainability of AI deployment if the underlying computing models remain energy inefficient.
Given these realities, the researchers argue that while large language models and conventional VLAs have garnered immense popularity and demonstrated efficacy in many applications, they may be hitting a resource-driven bottleneck. It is likely that continuing to expand their scale and complexity without addressing fundamental computational inefficiencies will lead to untenable environmental and economic costs. Instead, a shift to hybrid neuro-symbolic AI architectures represents a promising alternative pathway that balances high performance with dramatically reduced energy footprints.
In summary, this innovative neuro-symbolic approach not only advances AI reliability and accuracy but pioneers a path toward environmentally sustainable artificial intelligence technologies. By mimicking human-like cognitive processes through the integration of neural networks with symbolic reasoning, the new system enables faster learning, greater task generalization, and substantial energy savings. As AI becomes increasingly embedded in everyday robotics and industrial automation, adopting such efficient methodologies will be critical to meet future performance demands without exacerbating global energy consumption and environmental impacts.
The upcoming presentation in Vienna at the International Conference on Robotics and Automation will further detail these insights and experimental validations, drawing attention to the pressing need for energy-conscious AI design. This research could serve as a blueprint for the next generation of AI systems: ones that are not only smarter but also more sustainable and responsible in their resource use. The days of AI powered by sprawling, power-hungry data centers may soon give way to leaner, neuro-symbolic intelligences capable of thriving in resource-constrained environments while exceeding current performance benchmarks.
Subject of Research:
Not applicable
Article Title:
The Price is Not Right: Neuro-Symbolic Approaches Significantly Outperform VLAs in Performance as well as Computational Cost and Energy Efficiency
News Publication Date:
5-Jun-2026
Image Credits:
Sandia National Laboratories
Keywords:
Artificial intelligence, Energy resources, Energy resources conservation, Water resources, Robotics

