To manage increasing volumes of visa, work permit, study permit, and other temporary residence applications, governments are turning to artificial intelligence (AI) and adjacent technologies to improve the efficiency of their immigration regimes. These technologies are now shaping international migration governance in complex ways, some of which raise ethical concerns.
Migration governance
Migration governance is defined by the International Organization for Migration as “the combined frameworks of legal norms, laws and regulations, policies and traditions as well as organizational structures (subnational, national, regional and international) and the relevant processes that shape and regulate States’ approaches with regard to migration in all its forms.” From a critical perspective, migration governance is linked to ongoing colonialism and capitalism as it regulates uneven global relationships through border imperialism, or the institutions, discourses, and systems entrenching controls against migrants and determining whom the state includes. These determinations carry life-altering consequences for individual migrants and their families. They are also increasingly made through multi-institutional, decentralized processes involving offshore, subnational, and non-state actors. In today’s diffused migration governance model, both private-sector and public actors, such as airlines, employers, and post-secondary institutions, play de facto gatekeeping and surveillance roles on behalf of the state.
How AI is used in migration governance
AI and adjacent technologies are incorporated into migration governance at multiple levels. One well-known example is the automation of governments’ administrative application triaging and decision-making systems. These systems use historical data sets and risk flags based on often unchangeable facts, such as an applicant’s citizenship, age, and/or marital status. Their functions range from generating generic language for visa officers to use as justifications for decisions to fully automating eligibility determinations.
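To make the triage concept concrete, the sketch below shows what a simple rule-based risk-flagging system could look like. It is a hypothetical illustration only: the attributes, flags, weights, and thresholds are invented for this example and do not describe IRCC’s or any other government’s actual system.

```python
# Hypothetical sketch of a rule-based application triage system.
# All attributes, weights, and thresholds are invented for illustration;
# they do not reflect IRCC's or any other government's actual model.

from dataclasses import dataclass


@dataclass
class Application:
    citizenship: str
    age: int
    marital_status: str


# Invented "risk flags" keyed to largely unchangeable applicant facts.
HIGH_VOLUME_OFFICES = {"Country A", "Country B"}  # placeholder labels


def risk_score(app: Application) -> int:
    """Sum simple, human-authored risk flags into a triage score."""
    score = 0
    if app.citizenship in HIGH_VOLUME_OFFICES:
        score += 2  # flag derived from historical refusal rates
    if app.age < 22:
        score += 1  # flag based on age bracket
    if app.marital_status == "single":
        score += 1  # flag based on marital status
    return score


def triage(app: Application) -> str:
    """Route the file: low scores are streamed to faster processing,
    high scores are sent to an officer for detailed review."""
    return "manual_review" if risk_score(app) >= 3 else "streamlined"


if __name__ == "__main__":
    print(triage(Application("Country A", 21, "single")))   # manual_review
    print(triage(Application("Country C", 35, "married")))  # streamlined
```

Each flag in a system like this encodes a human judgement about which applicant characteristics signal “risk,” which is where the subjectivity discussed below enters.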
Limited public information about governments’ advances makes it difficult to conclusively determine such technologies’ full functional scopes and impacts. However, in Canada – where we, the co-authors, reside – the federal government’s Immigration, Refugees and Citizenship Canada (IRCC) department publicly cites significant efficiency gains and has developed an internal “Policy Playbook on Automated Support for Decision-making” to resolve “novel questions about privacy, data governance, security, transparency, procedural fairness, and human-machine interaction.”
Less attention has been paid to the ways non-state actors’ use of AI and related technologies also governs migrants while facing less oversight and regulation. For example, in the growing EdTech industry, agent aggregators connect international students and post-secondary institutions through AI-enabled recruitment platforms, while similarly AI-enabled verification platforms corroborate the authenticity of application documents such as post-secondary acceptance letters and language proficiency tests. As a result, international students, education institutions, and governments alike now rely on proprietary algorithms to facilitate, at least in part, migration application processes. As Global North countries increasingly depend on temporary residents – international students in particular – as a source of economic immigrants, this can have major downstream impacts on the makeup of future immigrant pools.
The trouble with AI
AI-related technologies are frequently legitimized by an assumption of objectivity, neutrality, and/or consistency. Yet while these technologies are not inherently problematic, they are inherently biased because they rely on human-controlled, and thus unavoidably subjective, inputs. AI amplifies whatever systemic racism, sexism, classism, and other forms of discrimination are ingrained within its programming and data, which naturally reflect the existing conscious and unconscious biases of its programmers, data sources, and any broader structuring systems. Our research on AI’s impacts on international student applicants in the Canadian context, for example, found that those most negatively impacted tended to be racialized applicants from high-volume, Global South visa offices. This is an unsurprising finding given the Canadian federal government’s documented “racial biases in the application of (IRCC’s) programs, policies and client service” and “administrative practices that introduce biases or the potential for bias.”
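A minimal sketch can show how this amplification happens once past decisions become training data. In the fabricated example below, a naive “risk model” learns only from historical refusal rates by visa office; the office names and numbers are invented, and the approach is deliberately simplistic.

```python
# Hypothetical sketch of how a model trained on biased historical decisions
# reproduces and amplifies that bias. All data and office names are fabricated.

from collections import Counter

# Fabricated historical decisions keyed by visa office. The skew encodes past
# (possibly discriminatory) human decision patterns, not applicant merit.
history = (
    [("Office X", "refused")] * 70 + [("Office X", "approved")] * 30 +
    [("Office Y", "refused")] * 20 + [("Office Y", "approved")] * 80
)


def refusal_rate(office: str) -> float:
    """Historical refusal rate observed for a given office."""
    counts = Counter(outcome for o, outcome in history if o == office)
    return counts["refused"] / sum(counts.values())


def predict(office: str, threshold: float = 0.5) -> str:
    """Naive 'model': flag every application from any office whose
    historical refusal rate exceeds the threshold."""
    return "flag_for_refusal" if refusal_rate(office) > threshold else "clear"


if __name__ == "__main__":
    # A 70% historical refusal rate becomes a 100% flag rate: the pattern in
    # the data is not just reproduced but hardened into a blanket rule.
    print("Office X:", predict("Office X"))  # flag_for_refusal
    print("Office Y:", predict("Office Y"))  # clear
```

The point is not that real systems are this crude, but that any model fit to historical outcomes inherits the judgements, and prejudices, embedded in those outcomes.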
Many AI algorithms are also hidden within ‘black boxes’, especially those used in the private sector, making it difficult for the public to understand and judge their structures. Even when AI systems are themselves governed and thus meant to be made transparent, such as by the Canadian government’s Directive on Automated Decision-Making, their functions can still suffer from a lack of explainability so that humans cannot determine how decisions were made. This has major implications for the automation of complex, high-stakes decision-making tasks involved in migration governance.
Of course, humans – who have been making migration governance-related decisions for centuries – are also inherently biased. Hypothetically, if an algorithm’s bias is well understood and controlled for, AI could be used to mitigate visa officers’ explicit and implicit bias by reducing the reliance on prejudiced human decision-making.
In the meantime, the cost of errors is disproportionately borne by non-citizens with limited recourse. In the UK, for example, the use of voice recognition technology to assess potential cheating among international students on language proficiency tests was found to be likely flawed, yet it had already resulted in the deportation of students. Would-be migrants who are refused entry from abroad, on the other hand, often face insurmountable hurdles in challenging their failed applications.
Looking to the future
AI is predicted to significantly impact migration in the future, from individual rights to long-term international patterns of movement. To ensure this does not exacerbate an already inequitable and inherently discriminatory global sorting system, robust discussion and oversight of AI’s role in migration governance’s many decision-making tasks are required. Greater transparency around the rules and applications of AI and other digital technologies would be a good start. Given the diffusion of migration governance across territories and sectors, ensuring ethical approaches among non-state actors will also be a crucial, albeit challenging, piece of the puzzle.