New research project explores human-AI entanglement to promote responsible use
Jichen Zhu, professor at ITU, has secured 7.19 million kroner from the Independent Research Fund Denmark (IRFD) for a new project that investigates how people interact with artificial intelligence in highly subjective domains such as emotion recognition – and how to design tools that support more responsible use.
Artificial intelligence systems are increasingly embedded in everyday life, from recommendation engines to emotion recognition apps. But as these technologies evolve, researchers warn of a growing risk: over-reliance on AI predictions, even in areas where there is no clear “right” answer.
“When we ask AI to tell us how we feel, we risk forming an unhealthy relationship with technology. If users start believing the AI knows their emotions better than they do, that changes what it means to be human,” says Jichen Zhu, professor at ITU, whose research project Reducing Over-reliance and Fostering Responsible use of Emotion Tracking through Self-Reflection has recently been awarded a 7.19 million kroner grant from IRFD.
Current research on over-reliance often assumes an objective truth – for example, whether an AI correctly predicts a medical diagnosis. But in subjective domains like emotion detection, there is no such objective baseline. “If people adopt the AI’s prediction as fact, then by definition the model is never wrong,” Jichen Zhu explains. “That creates a new kind of entanglement between humans and machines.”
Jichen Zhu’s project has two main goals. First, to understand how and when over-reliance occurs. Through qualitative user studies, the team will observe how people engage with emotion recognition models and identify factors that influence trust and dependency. Second, to design interaction mechanisms that encourage self-reflection. Rather than simply improving model accuracy, the research focuses on empowering users to critically evaluate AI predictions instead of accepting them at face value.
“We want to slow things down,” says the researcher. “Instead of instantly showing a prediction, can we create moments for reflection? Can we provide explanations that help users think for themselves?”
The work is technology-agnostic, emphasising human behaviour and interaction design over model performance. Explainable AI features will be developed, giving users insight into why a model predicts a particular emotion.
Ultimately, the project aims to establish design principles and best practices for responsible AI use – guidelines that can be applied across domains where AI decisions intersect with human experience.
“AI is here to stay,” Jichen Zhu concludes. “The question is not whether we use it, but how we use it in ways that respect and support what it means to be human.”