Research suggests that predictive modelling is not a good replacement for caseworkers’ assessment at job centres
Using artificial intelligence to predict long-term unemployment among welfare seekers can be problematic in practice. That is the conclusion drawn in a new research article entitled “We Would Never Write That Down”: Classifications of Unemployed and Data Challenges for AI, published in the Proceedings of the ACM on Human-Computer Interaction (PACM HCI) as part of the leading international conference on computer-supported cooperative work (CSCW 2021).
Written 22 April 2021, 08:51, by Jari Kickbusch
Artificial intelligence, or AI as it is popularly known, is integrated into all aspects of our everyday lives, from predicting our shopping habits to our taste in film and music. Basically, the predictions made by AI technology are informed by the analysis of vast amounts of data, and in a day and age where electronic data is stored and preserved on an unprecedented scale, AI can provide us with insights that were unthinkable only a few decades ago. For instance, AI technologies are used to predict how the Covid-19 virus will spread and which demographic groups are particularly at risk.
Even though AI is one of the most hyped technologies of our day, it is, of course, not the answer to all the world’s challenges. The recently published article “We Would Never Write That Down”: Classifications of Unemployed and Data Challenges for AI, in which researchers have focused on the process of assessing welfare seekers at Danish job centres, is a case in point.
- At the job centres, caseworkers organise welfare seekers according to certain categories, also known as ‘match groups’. This process is tremendously important because the categories determine what type of benefits a welfare seeker is entitled to and what rights they have in the system. There has been a great push to use historical data from welfare seekers to predict, for instance, the likelihood of long-term unemployment. That is what our research has focused on, says Anette C.M. Petersen, who is a PhD student at the IT University and authored the article alongside Lars Rune Christensen (ITU), Richard Harper (Lancaster University), and Thomas Hildebrandt (University of Copenhagen).
Binary perceptions
The article is based on fieldwork conducted by Anette C. M. Petersen in Gladsaxe Municipality. Here, she studied how caseworkers process cases concerning welfare seekers, with the intention of finding out whether artificial intelligence can streamline and improve decision-making. In the municipality, welfare seekers are divided into two match groups: 2.2, consisting of the “ready to work” applicants, and 2.3, consisting of applicants who are not ready to take on employment but are “ready for activation measures” (these match groups have subsequently been renamed 6.2 and 6.3). However, the reality is far more complex than the two match groups imply, says Anette C. M. Petersen:
- A caseworker told me that a welfare seeker basically has to be ‘almost dead’ to qualify for 2.3. It is incredibly difficult for the caseworkers to document and prove that citizens are challenged in their lives to the point of qualifying for that match group. At the same time, there is mounting political pressure to place more welfare seekers in the ‘ready to work’ match group to expedite their way to employment and out of the system … What we see is that, in practice, most welfare seekers do not actually belong in either of the two match groups. Instead, they occupy a grey area: they are neither well enough to take on employment nor able to document their illness sufficiently to be excused from working. So, caseworkers handle these cases by using their own terminology: for example, a welfare seeker may be further classified as either a “light,” “heavy,” or “hard” 2.2.
The person behind the label
According to Anette C.M. Petersen, the caseworkers’ own system of definitions helps them provide welfare seekers with the proper service and support (which is not restricted to job hunting), and the article concludes that caseworkers’ assessment cannot simply be replaced by AI.
- In terms of implementing AI at job centres, there is too much important information about what it means to be a citizen in this system that AI systems are not privy to and thus cannot use when predicting, for instance, long-term unemployment … Naturally, our results have raised questions about whether we can find patterns in the data that correspond to the caseworkers’ classifications and identify, for instance, ‘light’ or ‘heavy’ 2.2s. These are valid questions, but the caseworker has physical contact with the welfare seeker. For example, caseworkers can smell the alcohol on someone’s breath or determine whether a welfare seeker is upset by asking questions about their health. There are many physical clues that technology cannot pick up, says Anette C.M. Petersen.
- According to the caseworkers, their terminology is not compatible with a bureaucratic system, regardless of whether AI is introduced. Caseworkers do not write everything down, because they are working with humans, and humans are complicated and changeable beings. When you start employing historical data to predict the future, you run the risk of further entrenching these people in their circumstances, or even making those circumstances permanent. Caseworkers are very aware of that concern.
How, then, can we use artificial intelligence?
While the article’s conclusion takes a critical stance on employing artificial intelligence to assess the risk of long-term unemployment among welfare seekers, it remains optimistic about using AI as a tool to support caseworkers in their work. That is why Anette C.M. Petersen hopes to engender a discussion about the possibilities and limitations of AI by publishing the article in PACM HCI.
- Even though our research was conducted in a Danish context, it is very relevant abroad, too, and in different contexts. Our research highlights important challenges that may occur when working with AI, challenges that may also be relevant, for instance, in connection with predicting crime. With this article, we hope to gain the attention of those developing the systems, open up opportunities for constructive dialogue and enhance collaboration across disciplines. In that respect, PACM HCI is an excellent venue for our article.
Ecoknow
The research article is part of a major research project, Ecoknow, which aims to examine how digital casework processes can become more intelligent, flexible, and transparent. Thomas Hildebrandt, professor at the University of Copenhagen, head of the research project and one of the article’s co-authors, is delighted that the researchers got the article published in PACM HCI. But that is not all he is happy about: the article will appear alongside another Ecoknow article, Street-Level Algorithms and AI in Bureaucratic Decision-Making: A Caseworker Perspective, which focuses on caseworkers’ views of AI-supported work.
- I am thrilled that our work with Ecoknow is getting a lot of exposure, and that there is so much interest in AI research that does not simply start out with a technical solution to a problem but instead takes a step back to observe the caseworkers at work and interview them about their relationship to the technology. That way, we can collaborate to identify problems and challenges that can be solved by AI, says Thomas Hildebrandt.
Find more information about Ecoknow, which is financed by Innovation Fund Denmark, at the research project’s website.
Read the research article on ACM’s webpage.

Contact: Jari Kickbusch, phone 7218 5304, email jark@itu.dk