As artificial intelligence (AI) becomes more deeply embedded in critical infrastructure, from energy management systems to self-driving vehicles, the need to ensure these technologies operate safely and reliably has never been more urgent. The stakes are no longer hypothetical: lives, resources, and national security depend on dependable AI. A central challenge for scientists and engineers is developing robust methods to rigorously guarantee that AI controllers behave as intended in dynamic, complex environments.
At the University of Waterloo, a pioneering research group is tackling this challenge by blending the rigor of applied mathematics with the capabilities of modern machine learning. Their work focuses on providing mathematically sound verification for AI systems that govern time-varying physical processes. Dr. Jun Liu, a professor of applied mathematics and Canada Research Chair in Hybrid Systems and Control, leads this effort by leveraging classical tools such as differential equations to model dynamic systems characterized by continuous change, from the intricate flow of electricity across power grids to the nuanced movements of autonomous vehicles.
Central to their approach is the deployment of Lyapunov functions, a mathematical concept dating back to the late 19th century, which serve as certificates of stability for complex dynamical systems. If one imagines the behavior of a system as analogous to a ball rolling within a landscape, a Lyapunov function acts like the contours of a bowl that ensure the ball eventually settles into a stable equilibrium at the bottom. However, finding an appropriate Lyapunov function for modern, nonlinear systems controlled by AI has historically been a formidable mathematical problem, often requiring manual derivation and deep expertise.
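For concreteness, the standard conditions can be sketched in generic textbook notation (not drawn from the paper itself): for a system dx/dt = f(x) with an equilibrium at the origin, a continuously differentiable function V serves as a Lyapunov certificate if

```latex
V(0) = 0, \qquad V(x) > 0 \ \text{for } x \neq 0, \qquad
\dot{V}(x) = \nabla V(x)^{\top} f(x) \le 0,
```

with a strict decrease (dV/dt < 0 away from the origin) giving asymptotic stability, that is, the guarantee that the "ball" eventually settles at the bottom of the bowl.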
To circumvent this bottleneck, Liu’s team turned to neural networks, a class of machine learning models well suited to approximating complex functions. The innovation lies in training a neural network to satisfy the stringent mathematical constraints that define a valid Lyapunov function: the network learns a function certifying that the system’s state will converge to a safe, stable operating point over time. Training such networks involves embedding the underlying physics and control laws of the system into the learning process itself, an approach known as physics-informed neural networks.
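As a rough illustration of the learning step, the sketch below trains a small network to respect those conditions on sampled states of an assumed pendulum-like system; the dynamics, network architecture, and loss weights are placeholders chosen for this example, not the formulation used in the Automatica paper.

```python
# Minimal sketch of learning a neural Lyapunov candidate V_theta(x) in PyTorch.
# The dynamics f(x), network size, and loss margins below are illustrative
# assumptions, not taken from the paper.
import torch
import torch.nn as nn

def f(x):
    # Example nonlinear dynamics (damped pendulum): state = (angle, angular velocity)
    theta, omega = x[:, 0:1], x[:, 1:2]
    return torch.cat([omega, -torch.sin(theta) - 0.5 * omega], dim=1)

V = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                  nn.Linear(32, 32), nn.Tanh(),
                  nn.Linear(32, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for step in range(5000):
    x = 4.0 * (torch.rand(256, 2) - 0.5)              # sample states near the equilibrium
    x.requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    v_dot = (grad_v * f(x)).sum(dim=1, keepdim=True)  # dV/dt along the system dynamics

    # Penalize violations of the Lyapunov conditions (with small margins):
    # V(x) positive away from the origin, dV/dt negative, and V(0) close to zero.
    loss = (torch.relu(-v + 0.01 * (x ** 2).sum(dim=1, keepdim=True)).mean()
            + torch.relu(v_dot + 0.01).mean()
            + V(torch.zeros(1, 2)).pow(2).mean())

    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the physics-informed spirit described above, the dynamics f(x) enter the loss directly through the dV/dt term, so the network is shaped by the governing equations rather than by labeled data alone.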
However, the use of neural networks in safety-critical control systems introduces new questions about verification. Standard AI models can be opaque, leading to a lack of trust in their outputs. To address this, the Waterloo team supplemented their approach with a separate logic-driven reasoning system, a form of AI specializing in formal verification techniques. This reasoning layer rigorously checks whether the neural network’s output adheres to the mathematical criteria of safety, creating a closed loop of learning and verification that collectively provides a rare mathematical guarantee of system stability.
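The article does not name the verification component, so the sketch below uses the Z3 SMT solver purely to illustrate the idea of machine-checked certificates, applied to a hand-written polynomial Lyapunov candidate for simple polynomial dynamics rather than to a trained network (verifying networks with nonlinear activations generally calls for specialized tools).

```python
# Illustrative certificate check with the Z3 SMT solver (pip install z3-solver).
# The system, candidate function, and region below are assumptions made for
# this example, not the setup verified in the paper.
from z3 import Reals, Solver, And, unsat

x1, x2 = Reals("x1 x2")

# Example dynamics: dx1/dt = -x1 + x2, dx2/dt = -x1 - x2^3
f1 = -x1 + x2
f2 = -x1 - x2**3

# Candidate Lyapunov function and its derivative along trajectories
V = x1**2 + x2**2
V_dot = 2*x1*f1 + 2*x2*f2

# Ask the solver for a counterexample: a state in an annular region around the
# origin where V fails to decrease. If no such state exists, the check passes.
s = Solver()
s.add(And(V >= 1e-4, V <= 1.0, V_dot >= 0))

if s.check() == unsat:
    print("Certificate verified: V strictly decreases throughout the region.")
else:
    print("Counterexample state:", s.model())
```

Either the solver confirms that no violating state exists, yielding a guarantee over the whole region rather than over a finite set of test points, or it returns a concrete counterexample that can be fed back into the learning loop.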
One remarkable outcome of this dual-AI approach, in which one AI designs controllers and another verifies them, is a substantial reduction in the traditionally labor-intensive work of controller design and safety-proof construction. What once demanded painstaking manual derivation and exhaustive analytical effort can now be carried out far more efficiently, without compromising the level of rigor required by critical applications.
Importantly, this approach does not advocate for the replacement of humans in the decision-making loop but rather for augmenting human capabilities. Dr. Liu emphasizes that ethical considerations and higher-order judgments remain the province of human experts. The AI systems are designed to offload the computationally demanding and error-prone tasks of mathematical proof generation and real-time control optimization. This symbiosis enables researchers and engineers to concentrate on oversight, interpretation, and policy, domains that intrinsically require human values and intuition.
To validate their framework, the researchers applied their combined machine learning and formal verification toolbox to several challenging control scenarios. These case studies demonstrated that their method either matched or outperformed classical techniques in ensuring system safety and stability. By incorporating physics knowledge directly into the learning architecture, they achieved more reliable and interpretable results than conventional black-box machine learning controllers could offer.
Looking forward, the team is actively developing their framework into an open-source software toolbox. This initiative promises to democratize access to advanced verification tools for the broader scientific and engineering communities, accelerating innovation in safe AI control applications. Furthermore, collaborations with industry partners are underway, aiming to translate these theoretical advances into practical, high-impact solutions for sectors where AI safety is paramount.
This research aligns closely with global efforts to foster transparent, responsible, and trustworthy AI technologies. At Waterloo, the project benefits from synergies with initiatives such as the TRuST Scholarly Network, which brings together interdisciplinary researchers to ensure AI systems adhere to ethical and safety standards. Concurrently, federal programs aimed at promoting accountable AI underscore the timeliness and societal importance of this work.
The technical breakthrough reported in the study, “Physics-informed neural network Lyapunov functions: PDE characterization, learning, and verification,” published in the journal Automatica, reflects a significant step toward integrating rigorous mathematical theory with state-of-the-art AI methods. By bridging these domains, the research outlines a promising pathway to endow AI-driven systems with verifiable safety properties, thus bolstering confidence in their deployment in real-world applications.
In summary, the University of Waterloo team’s novel approach represents a paradigm shift in ensuring the safe operation of AI controllers in dynamic physical systems. By combining the strengths of neural networks in function approximation with formal logic-based verification, they provide a comprehensive framework that addresses the long-standing challenge of guaranteeing AI safety. Their work underscores the potential of interdisciplinary innovation to solve some of the most pressing technological problems of our age, paving the way for a future in which AI-driven systems can be both powerful and trustworthy.
Subject of Research: Safe and trustworthy AI for dynamic physical systems through mathematical verification and machine learning
Article Title: Physics-informed neural network Lyapunov functions: PDE characterization, learning, and verification
News Publication Date: Not specified
Web References:
- https://uwaterloo.ca/applied-mathematics/profiles/jun-liu
- https://uwaterloo.ca/trust-research-undertaken-science-technology-scholarly-network/
- https://www.sciencedirect.com/science/article/pii/S000510982500086X
References:
- Liu, J. et al. (2025). Physics-informed neural network Lyapunov functions: PDE characterization, learning, and verification. Automatica.
Keywords:
Applied mathematics, Machine learning, Autonomous vehicles, Autonomous robots, Artificial neural networks, Neural networks, Applied sciences and engineering, Systems theory, Adaptive systems