The Pandemic Response Accountability Committee (PRAC) is deploying AI tools to combat fraudsters who are increasingly using the same technology, raising the stakes for government oversight.
PRAC is leveraging artificial intelligence to significantly enhance its oversight capabilities. AI tools let the committee efficiently collect, analyze, and communicate vast amounts of federal spending data. The technology acts as a force multiplier for investigators and auditors, enabling them to work with greater speed and effectiveness than people could alone and to flag potentially fraudulent spending for investigation more rapidly.
A primary objective for PRAC is to move from a reactive 'pay and chase' approach to proactive fraud prevention. The committee aims to use AI tools and data to give grant officers and contracting officials the information they need to conduct due diligence and halt suspicious payments before funds are disbursed. PRAC has developed an AI-enabled 'fraud prevention engine,' trained on millions of pandemic relief applications, that can review thousands of applications per second and flag anomalies. By stopping fraud on the front end, this strategy avoids costly and lengthy prosecutions and demonstrates that agencies can achieve both rapid disbursement and robust payment integrity.
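To make the idea concrete, here is a minimal sketch of the kind of rule-based anomaly screening such an engine might perform on incoming applications. All field names, thresholds, and rules below are hypothetical illustrations, not PRAC's actual model, which is trained on real application data.

```python
from collections import Counter

def flag_anomalies(applications):
    """Flag applications that trip simple, illustrative red-flag rules:
    a Social Security number reused across applications, a bank account
    shared by multiple applicants, or a requested amount sitting just
    under a reporting threshold."""
    ssn_counts = Counter(a["ssn"] for a in applications)
    acct_counts = Counter(a["bank_account"] for a in applications)
    flagged = []
    for a in applications:
        reasons = []
        if ssn_counts[a["ssn"]] > 1:
            reasons.append("duplicate SSN")
        if acct_counts[a["bank_account"]] > 1:
            reasons.append("shared bank account")
        if 9500 <= a["amount"] < 10000:  # hypothetical threshold
            reasons.append("just-under-threshold amount")
        if reasons:
            flagged.append((a["id"], reasons))
    return flagged

apps = [
    {"id": 1, "ssn": "111-11-1111", "bank_account": "A", "amount": 9800},
    {"id": 2, "ssn": "222-22-2222", "bank_account": "B", "amount": 5000},
    {"id": 3, "ssn": "111-11-1111", "bank_account": "C", "amount": 4000},
]
print(flag_anomalies(apps))
```

A production system would learn such patterns statistically from millions of applications rather than hard-coding them, but the output is the same in spirit: each suspect application is surfaced with the reasons it was flagged, before any money moves.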
Despite executive orders granting broader access to federal datasets like the Do Not Pay files and the Death Master File, agencies often encounter legal obstacles to sharing data directly for fraud prevention. PRAC addresses this challenge with simple yes/no validation methods. For instance, it asks agencies such as the Social Security Administration simply to confirm whether a Social Security number was issued and whether the associated name and date of birth match their records, without exchanging any sensitive data. This demonstration project identified 1.4 million potentially invalid Social Security numbers linked to approximately $79 billion in pandemic funding, highlighting the significant fraud prevention potential of pre-award vetting.
As government agencies adopt AI to combat fraud, criminals are simultaneously advancing their own AI capabilities, intensifying the challenge for oversight bodies. Fraudsters are now using artificial intelligence to generate highly realistic false documentation, such as receipts, contracts, and bank statements, making it exceptionally difficult for auditors and investigators to discern legitimacy. Furthermore, they deploy bots and other technologies to overwhelm and exploit weaknesses in agency programs. This necessitates that oversight agencies continuously adapt and innovate their own AI strategies to keep pace with the rapidly evolving and sophisticated tactics employed by fraudsters.