Austria: Using AI in asylum cases brings risks


AI is already playing a role in the processing of asylum cases in Austria. Researchers at the University of Graz see more risks than benefits.

Austrian Ministry of the Interior. The authors of a new report also looked at what technologies might come into use in the future. (Source: IMAGO / CHROMORANGE)

Automated mobile phone analysis and translations: in Austria, so-called artificial intelligence is being used at various stages of the process of evaluating asylum claims. A paper published as part of the University of Graz’s A.I.SYL research project examines the use of the technology. According to the report, the use of additional tools in the future is conceivable.

The report, “Künstliche Intelligenz im Asylverfahren” (“Artificial Intelligence in Asylum Cases”), is the result of a year-long research effort and is meant to be of use to legal and social counseling groups. The authors of the report provide an overview of which tools are already in use in Austria and which, in their estimation, could come into use in the future.

The authors write that many tools promise the same benefits, such as making aspects of the process of assessing claims more efficient or lowering costs by doing away with expensive evaluations. But their verdict is clear: in every instance they investigated, the risks outweigh the benefits.

Controversial device analysis

Asylum seekers in Austria may for example be required to hand over their phones and submit to an analysis of the phones’ data. The purpose of this analysis is to determine a person’s identity, the circumstances of their flight, and the country responsible for processing their asylum claim, if this cannot be otherwise determined. According to the authors of the report, this measure has been legally regulated since 2018. Even at the time it was first implemented, the practice drew criticism from several observers.

Software is used to analyze the device and the result is evaluated by a police officer. Incoming and outgoing calls stored in the phone’s records are among the data that are processed. The country codes of the phone’s contacts may allow evaluators to determine a person’s country of origin and the route they took when fleeing. Apps and the usernames a person uses may also give clues as to the person’s identity and nationality. According to the Austrian Ministry of the Interior, photo data and system settings are also analyzed.
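To make this step more concrete, here is a minimal sketch in Python of how stored contact numbers could in principle be grouped by country calling code. It is purely illustrative: the sample contact list and the use of the open-source phonenumbers library are assumptions for the example, not a description of the software the Austrian authorities actually use.

```python
# Illustrative sketch only -- not the authorities' actual analysis software.
# Groups stored contact numbers by the country their calling code points to.
from collections import Counter

import phonenumbers  # open-source library: pip install phonenumbers

# Hypothetical contact list, standing in for data read out of a device.
contacts = ["+93 70 123 4567", "+963 944 567 890", "+43 660 1234567"]

regions = Counter()
for raw in contacts:
    try:
        number = phonenumbers.parse(raw, None)  # expects international format
    except phonenumbers.NumberParseException:
        continue  # entries without a usable calling code are skipped
    region = phonenumbers.region_code_for_number(number)  # e.g. "AF", "SY", "AT"
    if region:
        regions[region] += 1

# The output is only a frequency count of contact regions; it says nothing
# about who used the device or why a number was saved.
print(regions.most_common())
```

Even in this toy form, the limitation the authors point to is visible: the result is a frequency count of whatever numbers happen to be stored on the device, which by itself says nothing about who actually used it.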

The authors of the report see this practice as a severe breach of asylum seekers’ fundamental rights to privacy and data protection. Mobile phones frequently store a large quantity of private information. In analyzing these devices, no distinction is made between various types of data, meaning that private photos and dating profiles can also be included in the analysis. It can also be difficult to connect the data to a particular individual: due to the circumstances of their travel or displacement, people frequently share devices, so a single phone may have been used by several different people.

Experiences from Germany have also shown that the method is not very reliable, the researchers note. In 2023, Germany’s Federal Administrative Court declared the practice of the Federal Office for Migration and Refugees unlawful. In Austria, meanwhile, there are also plans to establish a legal basis for allowing AI to analyze photos.

Translation tools can make mistakes

The authors also cast a critical eye on the automated translation tools used by Austria’s Federal Office for Immigration and Asylum (BFA). The tools are used in assessing the credibility of asylum applicants and whether they face persecution in their home country. There are also reports of Frontex, the European Union’s border agency, and Austrian border police using such tools to communicate with refugees. Counseling organizations also use them.

The authors found that there is always a risk of mistranslation, even when dealing with languages for which the tools have a large database to draw on. For less common languages, translations are less reliable, because the programs are only “trained” on a limited database. Unlike human interpreters, translation programs do not factor in context – the software doesn’t ask follow-up questions, but merely delivers the most likely translation. In this way, translation errors find their way into communications between officials and asylum seekers and into translated documents. For this reason, a human review is necessary.
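As a rough illustration of that point, the sketch below runs a sentence through an open-source machine translation model. The Hugging Face transformers library and the Helsinki-NLP model named here are stand-ins chosen for the example; the report does not specify which translation systems the Austrian authorities rely on.

```python
# Illustrative sketch with an open-source model -- not the BFA's actual tool.
from transformers import MarianMTModel, MarianTokenizer

# Publicly available German->English model, chosen here only as an example.
model_name = "Helsinki-NLP/opus-mt-de-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# An ambiguous statement: "Bank" can mean a bench or a financial institution.
sentence = "Ich habe auf der Bank gewartet."

inputs = tokenizer([sentence], return_tensors="pt", padding=True)
outputs = model.generate(**inputs)

# The model emits its single most likely translation; unlike an interpreter,
# it cannot ask which meaning was intended.
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

The model simply returns its highest-scoring output and moves on. For languages with far less training data, that same one-shot behavior comes with far higher error rates, which is the report’s central concern here.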

The BFA also uses external AI systems to “identify relevant sources, produce working translations and process texts.” According to the Interior Ministry, the programs used include Perplexity and Copilot. These programs are trained on large quantities of data to enable them to summarize texts. The authors of the report note, however, that these systems are “not fact checkers”: they can hallucinate and generate false information.

While these tools make it possible to reduce the time outlay required for manual research, they can make mistakes. What’s more, the systems are trained on data sets that contain social and historical prejudices and can exacerbate patterns of discrimination.

Systems are a “black box”

Another problem is that it is often impossible to understand or explain why the systems generated the results that they did. In asylum cases, however, the evidence has to be weighed in a logical and transparent manner, otherwise the principle of equal treatment is violated, the authors warn.

They see additional risks in the use of automated facial recognition, another tool employed to verify individuals’ identities. The use of such technology is a severe breach of the rights to data protection and privacy. If biometric data is collected under coercion, this could even violate human dignity. The technology can also lead to discrimination, because it is less accurate at identifying people with darker skin tones and generally performs worse for women than for men.

The reform of the Common European Asylum System (CEAS) will lead to expanded use of facial recognition technology. The fingerprints of asylum seekers are already stored in the central Eurodac database – but now facial images will also be stored.

Plans for dialect recognition

Automated dialect recognition systems are not yet in use in Austria, though the current government plans to introduce them. These tools are meant to determine an asylum seeker’s country of origin by means of a speech sample.

The danger with such systems is that they may incorrectly identify the language or dialect spoken by an individual – and that the individual’s application might even be denied as a result of this misidentification. Linguists criticize the notion that people speak clearly distinguishable languages and dialects that can easily be ascribed to specific geographic regions or territories. The authors also point out that dialect recognition systems, like automated translation programs, are limited by the quality of the database they draw on.

In Germany, so-called dialect recognition is already in use. The human rights organization Amnesty International has voiced criticism, pointing out that the program has never been evaluated by independent experts. According to the University of Graz’s paper, the European Union Agency for Asylum (EUAA) also plans to make these tools available throughout the EU.

No positive assessment

In their research for the A.I.SYL project, the authors of the report consulted numerous sources, including a response from the Ministry of the Interior, interviews with experts, existing research and case law, and media reports. In their view, none of the tools used in processing asylum cases can be said without reservation to help shape the asylum process “in a way that is more fair and in line with human rights.” In particular, the tools used for determining asylum seekers’ identities or countries of origin are either too prone to error or represent too great a breach of privacy to allow for a positive assessment.

Co-author Laura Jung warned that asylum seekers could become “guinea pigs.” Tools are tested out on them that Austrian citizens “just wouldn’t accept.”

At the same time, the use of such applications could further increase. The Ministry of the Interior is currently developing its own chatbot and is working on a tool (the “OSIF-Tool”) to prepare, contextualize, and sort information. In the authors’ estimation, such a tool would fall into the high-risk category laid out in the EU’s AI Act. Also of concern is that in the future, so-called AI could be used as a lie detector, which would likewise qualify as high-risk. (js)
