ITU researcher wants to make AI more trustworthy
Associate Professor Christian Hardmeier has been granted DKK 7.18 million from the Independent Research Fund Denmark. The grant funds a project that investigates how large language models can better communicate uncertainty to users.
Written 21 November 2025, 07:28 by Theis Duelund Jensen
Large language models such as ChatGPT have become popular tools for answering questions quickly – even complex ones that require specialised knowledge. But while these systems often sound convincing, their answers are not always correct. This can mislead users, including experts, into trusting inaccurate information.
A research project, led by Christian Hardmeier at the IT University of Copenhagen, aims to make AI systems more transparent and trustworthy by improving how they express uncertainty.
“When we interact with conversational agents, we rarely know how much we can trust their answers,” says Christian Hardmeier. “Current systems tend to sound very certain, even when they are not. Our goal is to make these systems more honest about what they know and don’t know.”
The research combines machine learning and linguistics. On the technical side, the team will develop methods to quantify uncertainty in large language models using Bayesian approaches. These methods need to scale to models with billions of parameters – a major challenge for current techniques.
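To make the technical idea concrete, here is a minimal, illustrative sketch of one scalable Bayesian approximation, Monte Carlo dropout, applied to a toy next-word predictor. The miniature model, the number of samples and the entropy-based uncertainty measures are assumptions chosen for illustration; the article does not say which Bayesian methods the project will actually develop.

```python
# A minimal sketch of one scalable Bayesian approximation, Monte Carlo dropout:
# dropout is kept active at inference time, and the spread across repeated
# stochastic forward passes is read as model uncertainty. The tiny model below
# is a stand-in, not the project's architecture or method.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB = 100  # toy vocabulary size

model = nn.Sequential(          # stand-in for a language-model output head
    nn.Linear(64, 256),
    nn.ReLU(),
    nn.Dropout(p=0.1),          # source of stochasticity at test time
    nn.Linear(256, VOCAB),
)

hidden = torch.randn(1, 64)     # placeholder for a context representation

model.train()                   # keep dropout active ("MC dropout")
with torch.no_grad():
    samples = torch.stack(
        [model(hidden).softmax(-1) for _ in range(50)]  # 50 stochastic passes
    )

mean_probs = samples.mean(0)                        # averaged predictive distribution
total_unc = -(mean_probs * mean_probs.log()).sum()  # predictive entropy
expected_unc = -(samples * samples.log()).sum(-1).mean()  # mean per-sample entropy
epistemic = total_unc - expected_unc                # the model's own uncertainty

print(f"predictive entropy: {total_unc:.3f} nats")
print(f"epistemic component: {epistemic:.3f} nats")
```

The gap between the total predictive entropy and the average per-sample entropy isolates the model's own uncertainty, which is the quantity a system needs before it can honestly say "I'm not sure".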
But numbers alone are not enough. “A 73% confidence score might make sense to a statistician, but not to a 10-year-old,” Hardmeier explains. “We want to use linguistic strategies – like hedging with words such as ‘perhaps’ or ‘likely’ – so that uncertainty is communicated in a way people can understand. And crucially, these expressions must match the model’s actual confidence.”
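The quote describes a mapping from numeric confidence to verbal hedges of matching strength. The sketch below shows one hypothetical form such a mapping could take; the thresholds and phrases are invented for illustration, and calibrating them against how people actually interpret these words is precisely what the project's linguistic experiments are meant to do.

```python
# A hypothetical mapping from a confidence estimate to a verbal hedge of
# matching strength. The bands and phrases are illustrative assumptions,
# not the project's actual mapping.
HEDGES = [
    (0.95, "almost certainly"),
    (0.80, "very likely"),
    (0.60, "likely"),
    (0.40, "perhaps"),
    (0.20, "unlikely"),
    (0.00, "very unlikely"),
]

def verbalize(confidence: float, answer: str) -> str:
    """Render an answer with a hedge whose strength matches the confidence."""
    for threshold, hedge in HEDGES:
        if confidence >= threshold:
            return f"It is {hedge} that {answer}."
    raise ValueError("confidence must be in [0, 1]")

print(verbalize(0.73, "the treatment is effective"))
# -> "It is likely that the treatment is effective."
```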
The project will focus on healthcare applications, where clarity and trust are critical. Working with partners such as the Virtu Research Group from the Capital Region of Denmark, the team will study how the expression of uncertainty must be tailored to different users, whether patients or healthcare professionals. For example, a doctor may need to know about rare but serious risks, while a patient might be better served by reassurance than by alarming low-probability scenarios.
To achieve this, the project will also draw on experimental linguistics. Professor Hannah Rohde from the University of Edinburgh will lead studies on how people interpret expressions of uncertainty, ensuring that linguistic cues align with numerical confidence estimates.
“Ultimately, this is about trustworthiness,” says Hardmeier. “If AI systems always sound certain, regardless of their actual confidence, users cannot rely on them – especially in high-stakes domains like healthcare.”
The project is entitled “Conveying Caution & Confidence: Quantification and Communication of Uncertainty in Large Language Models.” It will be conducted at the IT University of Copenhagen in collaboration with DTU Compute and the University of Edinburgh.
Theis Duelund Jensen, Press Officer, phone +45 2555 0447, email thej@itu.dk