
Night after night this winter in Minneapolis, Minnesota, the area outside a hotel where U.S. Immigration and Customs Enforcement (ICE) agents were staying was engulfed in loud protests. About 2,500 demonstrators gathered near the University of Minnesota's Twin Cities campus, chanting slogans and confronting the agents through the hotel's glass entrance.
Yet something unusual stood out. While the protesters, armed with trumpets, drums, electric guitars, even empty paint cans and buckets, threw up an overwhelming wall of noise to deny the agents any rest, the ICE officers facing the crowd stood almost completely still, only occasionally adjusting their angle. It was only after noticing that most protesters near the glass wore masks or face coverings that attention turned to the small black devices attached to the agents' uniforms. The agents were deliberately minimizing movement so that their body cameras could capture the protesters' faces steadily.
This concern is far from exaggerated. According to the U.S. Department of Homeland Security's recently released "AI Use Case Inventory," ICE already uses facial recognition–based AI systems extensively in field operations and identity verification. Civil rights groups warn that these surveillance technologies are excessively broad, and there have been instances suggesting that authorities may be relying on AI to bypass due process.
In one case, Nicole Cleland, a Minnesota resident, testified that while she was filming ICE agents during a protest, one of them stepped out of a vehicle and called her by name: “Nicole?” She had never seen the agent before. He reportedly told her, “We have facial recognition technology, and our body cameras are recording.”
Three days later, Cleland received an email from the Department of Homeland Security informing her that her Trusted Traveler Program privileges had been revoked. No reason was provided. She told The New York Times that “anger turned into fear.”
A key AI system used by ICE is called "Fortify." Through this mobile application, agents can scan a person's face, fingerprints, or iris and instantly match the scan against massive databases. A single facial image can pull up criminal records, immigration numbers, and more. In effect, AI is being used to identify, and potentially detain, individuals without a court-issued warrant, a troubling prospect in a country founded on the separation of powers.
The algorithm in Fortify that converts facial features into numerical data was developed by NEC, a Japanese company that has consistently ranked near the top of NIST's facial recognition benchmarks. The matching step draws on Clearview AI's vast facial database, which is notorious for scraping billions of images from the internet and social media without consent.
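What this pipeline looks like in principle is worth pausing on. A face-recognition model reduces each face to a fixed-length vector of numbers, and matching is a nearest-neighbor search over millions of stored vectors. The sketch below is a minimal, generic illustration of that technique; it is not NEC's or Clearview AI's actual code, and every name, dimension, and threshold in it is an assumption for illustration only.
```python
# A minimal, generic sketch of embedding-based face matching, the technique
# described above. NEC's model and Clearview AI's index are proprietary;
# everything here (names, the 512-dim embedding, the 0.6 threshold) is an
# illustrative assumption, not their implementation.
import numpy as np

def match(probe: np.ndarray, gallery: np.ndarray, threshold: float = 0.6):
    """Compare one face embedding against a gallery of enrolled embeddings;
    return (best_index, score) on a match, or (None, score) otherwise."""
    # A face-recognition network would first map each face image to a
    # fixed-length vector. Starting from those vectors, normalize them so
    # that dot products equal cosine similarity.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe            # similarity to every enrolled face
    best = int(np.argmax(scores))
    # The output is a similarity score, not a fact. The threshold is a
    # tunable policy knob; below it, the honest answer is "no match."
    if scores[best] >= threshold:
        return best, float(scores[best])
    return None, float(scores[best])

# Toy demonstration with synthetic embeddings.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1_000, 512))                 # 1,000 enrolled templates
probe = gallery[42] + rng.normal(scale=0.1, size=512)   # noisy re-capture of #42
print(match(probe, gallery))                            # -> (42, score near 1.0)
```
The crucial detail is the matcher's return value: a similarity score, not a fact. Whether agents treat a high score as an investigative lead or as proof of identity is a policy decision, not a property of the mathematics, and that distinction is exactly where the following dispute lies.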
The issue is that the system's output is not treated as merely advisory but as decisive evidence of identity. Even if someone carries valid citizenship documents, agents may proceed with detention and disregard those documents if the algorithm flags the person as a "foreign national" or "deportable individual." Although ICE denies this, Ars Technica reported on January 30, citing internal documents and testimony from current and former employees, that Fortify's matches are indeed used as "decisive grounds" for identity determination.
Clearview AI claims its purpose is to identify criminals, but reports are mounting that the tool is also being used to identify protesters, observers, and journalists. If a privately built facial database becomes embedded as an "unofficial surveillance infrastructure" of state power, it could serve as a potent instrument for suppressing freedom of expression.
If facial recognition–based enforcement threatens civil liberties, the use of AI in national security presents another layer of ethical conflict. The U.S. Department of Defense recently sidelined Anthropic after the company refused to permit unrestricted military use of its AI models. Anthropic's model, Claude, was reportedly the only AI authorized for use in certain classified military systems, but the company opposed applying its technology to mass domestic surveillance or autonomous weapons.
When the Pentagon demanded broader access for military applications, Anthropic held its ground, insisting that its technology should not be used for large-scale surveillance of Americans or for fully autonomous lethal weapons. In response, the company was designated a "supply chain risk," effectively cutting it off from contracts not only with the Department of Defense but with all affiliated contractors and partners as well.
Anthropic CEO Dario Amodei stated publicly, "We cannot, in good conscience, agree to these requests." The underlying tension is that collecting social media posts, location information, or financial records may each be legal in isolation, yet combining them through AI turns them into a powerful surveillance tool. This is a stark example of how rapidly advancing AI can exploit gaps in legal frameworks that have yet to catch up with its ethical implications.
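Amodei's point about combination can be made concrete. The sketch below, using entirely invented data and hypothetical source names, shows how three individually mundane record sets become a dossier the moment they are joined on a shared identifier:
```python
# Invented toy data: each source is innocuous on its own.
posts = {"subject_417": ["attended downtown rally", "reposted protest flyer"]}
locations = {"subject_417": [("2026-01-14 21:03", 44.974, -93.227)]}
payments = {"subject_417": [("2026-01-14", "transit fare", 2.50)]}

def dossier(person: str) -> dict:
    """Fuse per-source records into a single profile keyed on identity.
    Performed at population scale, this join is the step that turns
    lawful, piecemeal collection into a surveillance capability."""
    return {
        "speech": posts.get(person, []),
        "movement": locations.get(person, []),
        "spending": payments.get(person, []),
    }

print(dossier("subject_417"))
```
Nothing in that join is technically novel; what AI adds is the capacity to perform it continuously, at population scale, and across sources that were never designed to be linked.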

Within Silicon Valley, tech workers are increasingly alert to the risks that arise when state power intersects with AI. In January, internal backlash erupted at Palantir after it was revealed that the company was providing AI-driven data analysis to U.S. immigration authorities. Last month, nearly 1,000 Google employees formed a coalition called “No Tech for ICE,” criticizing Google Cloud for supporting Customs and Border Protection (CBP) surveillance systems, as well as infrastructure linked to Palantir’s tools used in immigration enforcement.
Tech companies often proclaim ethical AI principles, such as prohibitions on use in weapons or unlawful surveillance, yet they are not free from criticism. According to The Washington Post, a whistleblower complaint filed with the U.S. Securities and Exchange Commission alleged that in July 2024, Google supported Israeli military operations by applying AI to analyze drone surveillance footage at the request of Israeli officials.
Google and Amazon had already secured a $1.2 billion cloud contract with the Israeli government in 2021 under “Project Nimbus,” while Microsoft has also provided cloud computing services to the Israeli government.
AI ethicist Deborah Raji warned in a “Frontier AI Policy Report” submitted to the state of California in June last year that AI could significantly strengthen mechanisms of social control and surveillance, thereby threatening democratic freedoms. “Facial recognition–based surveillance quietly erodes the foundations of democracy,” she has repeatedly cautioned in academic forums and U.S. Senate hearings since 2023.
Her warning suggests that technology has already entered a phase in which it can classify, predict, and control individuals' daily lives and political behavior. Those winter nights in Minneapolis offer a glimpse of what happens when state power wields AI, and of the risks that poses to democracy.
SAM KIM
US ASIA JOURNAL



