Some law enforcement agencies have a policy, some don't, and at least one says its policy isn't available for public scrutiny.
The recent case of Angela Lipps, a Tennessee grandmother detained and later released by the Fargo Police Department, is a stark example of the need for clear guidelines on law enforcement's use of artificial intelligence (AI). Her arrest was reportedly justified solely by an AI facial recognition match, exposing a vacuum in both public debate and official policy around this powerful technology. At the time, the Fargo Police Department had no formal policy addressing facial recognition, underscoring a reactive rather than proactive approach to integrating it into police work. That absence of regulation led directly to AI being used as the sole basis for a serious law enforcement action, a practice since widely criticized as a travesty for Lipps and a dangerous precedent.
In the immediate aftermath of the Lipps debacle, the Fargo Police Department moved to close the gap by adopting FPD Policy 610 on March 25, which sets out the parameters and expectations for using facial recognition technology (FRT). A crucial provision of Policy 610 declares that information obtained from an FRT search is merely an "investigative lead," not a "positive identification," and cannot be the sole basis for any law enforcement action. Any potential identification derived from FRT must instead be corroborated through "additional investigative means and resources." The change acknowledges the technology's limitations and the inherent risks of relying solely on AI for criminal identification, a lesson learned through the very public and problematic Lipps case.
Notably, the West Fargo Police Department already had a facial recognition policy identical to the one Fargo adopted after the Lipps incident. It stipulated that FRT matches be treated as intelligence leads only, not as conclusive evidence for arrest. That policy shaped West Fargo's actions: the department ran the facial recognition search but chose not to arrest Lipps, determining there wasn't enough corroborating evidence. It did, however, share the AI-generated lead with the Fargo Police Department at Fargo's request. What followed was finger-pointing: Fargo officials, including the former chief, claimed they were unaware the match was based on a fake ID rather than surveillance footage, suggesting a miscommunication that made matters worse. Regardless of the source material, the core problem was Fargo's treatment of the AI match as a positive identification rather than an investigative lead, contradicting both the spirit of West Fargo's policy and the policy Fargo itself later adopted.
The varying approaches to AI and facial recognition extend beyond Fargo and West Fargo, revealing a fragmented landscape across North Dakota law enforcement. The North Dakota Highway Patrol, for instance, admits it has no policies on artificial intelligence, even though it acknowledges it may receive AI-developed information from other jurisdictions, leaving a gap in how such data is handled and integrated into its operations. More concerning still is the North Dakota Bureau of Criminal Investigation (BCI), which actively uses AI but refuses to make its policies public. Citing N.D.C.C. § 44-04-18.4, which exempts "trade secret, proprietary, commercial, financial, and research information" from disclosure, the BCI asserts confidentiality over its AI usage guidelines. The bureau does tell officers that AI matches are only investigative leads, but the public's inability to scrutinize these policies raises serious questions about transparency, accountability, and the potential for misuse of powerful AI tools in the state's criminal justice system.
This patchwork of policies, and absence of policies, among North Dakota's law enforcement agencies demands legislative action. The Lipps fiasco and the divergent stances of Fargo, West Fargo, the Highway Patrol, and the BCI expose a critical void in statewide governance of AI. Agencies now operate under different rules, some under none, and some keep their rules secret from the public. That inconsistency creates a confusing and potentially dangerous environment for citizens, and for officers who may unknowingly misuse AI-generated evidence. The Legislature should establish clear, transparent guidelines that apply uniformly to every law enforcement agency in North Dakota: laws that define ethical boundaries, dictate responsible use, and ensure public oversight, so that AI can serve the pursuit of justice while civil liberties are safeguarded and future miscarriages of justice prevented. The onus is on the Legislature to provide this framework.