ITU researcher secures grant to improve safety of AI systems
At the National Institute of Advanced Industrial Science and Technology (AIST) in Japan, Associate Professor Alessandro Bruni from ITU is conducting research on the mathematical foundations of verifiably correct machine learning frameworks. The project is supported by the Carlsberg Foundation.
Written 19 March 2025, 09:33 by Theis Duelund Jensen
The project, VeriFunAI, focuses on the mathematical foundations of verifiably correct machine learning frameworks and aims to ensure the robustness of AI systems. It addresses the pressing issue of AI safety, now a legal requirement in the European Union with the introduction of the AI Act.
“AI systems built using machine learning, particularly neural networks, need to be safe and robust to ensure they do not harm the environment in which they operate. Machine learning frameworks are known to contain errors that can cause safety issues, as they result in incorrect computations rather than direct system failures,” says Alessandro Bruni.
To address this problem, the VeriFunAI project aims to develop verifiably correct machine learning frameworks by creating a solid mathematical foundation rooted in functional analysis and probability theory.
In addition to Alessandro Bruni, who specialises in formalised mathematics, statistics, and machine learning, the project's research team consists of Professor Ekaterina Komendantskaya (University of Southampton), whose work focuses on AI, machine learning, logic, and programming languages, and Reynald Affeldt, a chief senior research scientist at AIST in Japan and an expert in verified information theory and probabilistic programming.
The research team will contribute to two main research threads: the use of Interactive Theorem Provers (ITPs) for formalising mathematics, and the development of Differentiable Logics (DLs) for Neuro-Symbolic AI. ITPs are software tools that help verify the correctness of mathematical proofs, and the team will rely on the Rocq ITP and the MathComp library for their formalisations. DLs allow the training of neural networks to satisfy specific safety specifications, such as robustness to small perturbations in the input.
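To give a flavour of the second thread, a Differentiable Logic turns a logical safety specification into a smooth, differentiable quantity that can be minimised during training. The sketch below is a minimal illustration using product fuzzy-logic semantics; the connective definitions, the `close` predicate, and the toy "network" are illustrative assumptions, not the project's actual formalisation.

```python
# Minimal sketch of a Differentiable Logic (DL) translation, assuming
# product fuzzy-logic semantics. Truth values live in [0, 1]; logical
# connectives become smooth functions, so a safety specification can
# act as a training loss for a neural network.

def dl_and(a, b):      # conjunction as the product t-norm
    return a * b

def dl_or(a, b):       # disjunction as the probabilistic sum
    return a + b - a * b

def dl_not(a):         # negation
    return 1.0 - a

def dl_implies(a, b):  # implication rewritten as (not a) or b
    return dl_or(dl_not(a), b)

def close(x, y, eps):  # soft predicate: "x is within eps of y"
    return max(0.0, 1.0 - abs(x - y) / eps)

def robustness_truth(f, x, x_pert, eps_in, eps_out):
    """Robustness spec: if the inputs are close, the outputs are close."""
    return dl_implies(close(x, x_pert, eps_in),
                      close(f(x), f(x_pert), eps_out))

# Loss = 1 - truth value; minimising it during training pushes the
# network toward satisfying the specification.
f = lambda v: 2.0 * v  # toy stand-in for a trained network
loss = 1.0 - robustness_truth(f, 0.5, 0.51, eps_in=0.1, eps_out=0.5)
```

Because every connective is differentiable almost everywhere, the same construction works with tensors and gradient descent, which is what lets robustness properties such as the small-perturbation requirement mentioned above be trained for rather than only checked afterwards.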
“The Carlsberg Foundation grant will enable our research team to make significant advancements in the development of safe and robust AI systems, contributing to the overall safety of AI technologies in the European Union and beyond,” says Alessandro Bruni.
Theis Duelund Jensen, Press Officer, phone +45 2555 0447, email