US DHS attempts to use "AI"
Three more use cases where synthetic text is not appropriate, now paid for with tax dollars
By Emily
The DHS announced today that they are going to "test uses of the technologies that deliver meaningful benefits to the American public and advance homeland security, while ensuring that individuals’ privacy, civil rights, and civil liberties are protected." "The technologies" here refers to "Artificial Intelligence", which of course doesn't refer to a coherent set of technologies. The three projects, which they characterize as "innovative", are (quoting from the press release):
Homeland Security Investigations (HSI) will test AI to enhance investigative processes focused on detecting fentanyl and increasing efficiency of investigations related to combatting child sexual exploitation.
The Federal Emergency Management Agency (FEMA) will deploy AI to help communities plan for and develop hazard mitigation plans to build resilience and minimize risks.
United States Citizenship and Immigration Services (USCIS) will use AI to improve immigration officer training.
What does "AI" refer to in these three use cases? In the first, it's "a LLM-based system designed to enhance the efficiency and accuracy of summaries investigators rely upon". In the second, it's "a GenAI pilot to create efficiencies for the hazard mitigation planning process". The third is "an interactive application that uses GenAI to improve the way the agency trains immigration officer personnel" by "ensur[ing] the best possible knowledge and training on a wide range of current policies and laws relevant to their jobs".
So, in other words: they're planning on putting synthetic text, which is only ever accurate by chance, into a) the information scanned by investigators working on fentanyl-related networks and child exploitation; b) the drafting of community emergency preparedness plans; and c) the information about the laws and regulations that immigration officers are supposed to uphold.
I searched the roadmap linked to the press release for "accuracy", "false", "misleading", and "hallucination" to see if there was any discussion of the fact that the output of synthetic text extruding machines is ungrounded in reality or communicative intent and therefore frequently misleading. None of those terms turned up any hits. Is the DHS even aware that LLMs are even worse than "garbage in, garbage out" in that they'll make papier-mâché out of whatever is put into them? (The US immigration service's history of relying on MT in handling asylum cases suggests that DHS should not be trusted to handle this technology appropriately.)
I also searched for "bias" to see what they're doing about the extremely well-documented propensity of LLMs to absorb, reproduce & amplify the biases of their training data. All I found was repeated assurances that the systems will be "rigorously tested" to make sure they "avoid inappropriate bias", but no discussion of how that will be achieved.
It's worth thinking for a bit about what the impacts of the bias might be in these three use cases. Some likely outcomes are the "GenAI" associating names from oversurveilled and overpoliced communities with criminality, the system for helping communities write emergency preparedness plans embedding biases about which communities are "safe" (and therefore "deserve safety"), and officers in training being presented with scenarios that confirm the biases they (and we) are all swimming in.
The press release includes the usual incantations of "unprecedented speed and potential" and "enormous opportunities", which have to be balanced against "risks". So, this seems like a good moment to remind everyone that the "potential" and "opportunities" always promised for "AI" are speculative, and that the "unprecedented speed" is either illusory, if it's speed of progress (no, LLMs aren't "smart", nor are they getting "smarter"), or BAD if it's speed of consolidation of wealth and power and despoiling of the physical and information environments. The press release also reassures us that the "DHS is committed to ensuring that its use of AI fully respects privacy, civil liberties, and civil rights, is rigorously tested to avoid bias, disparate impact, privacy harms, and other risks, and that it is understandable to the people we serve." Coming from DHS, I guess we call that civil liberties theater?
The New York Times reports that these pilot programs represent "partnerships with OpenAI, Anthropic and Meta." In other words, more tax dollars being funneled to Big Tech for useless tech. What if instead of paying OpenAI et al. to adapt an LLM to help communities write grants to get emergency preparedness funds, we hired people to work with those communities to help them navigate FEMA's requirements? What if instead of paying Anthropic and company to somehow work an LLM into the surveillance of drug trafficking, we put those funds into repairing the holes in the social safety net through which people fall into opioid dependence? As usual, when "AI" is proposed as the solution, the "solvers" are taking an overly narrow view of the problem.