Federal use of A.I. in visa applications could breach human rights, report says
OTTAWA — A new report is warning about the federal government’s interest in using artificial intelligence to screen and process immigrant files, saying it could create discrimination, as well as privacy and human rights breaches.
The research, conducted by the University of Toronto’s Citizen Lab, outlines the impacts of automated decision-making on immigration applications and how errors and assumptions within the technology could lead to “life-and-death ramifications” for immigrants and refugees.
The report’s authors issue seven recommendations calling for greater transparency, public reporting and oversight of the government’s use of artificial intelligence and predictive analytics to automate certain activities involving immigrant and visitor applications.
“We know that the government is experimenting with the use of these technologies … but it’s clear that without appropriate safeguards and oversight mechanisms, using A.I. in immigration and refugee determinations is very risky because the impact on people’s lives is quite real,” said report co-author Petra Molnar, a research associate in the university’s international human rights program.