NCUA provides resources for evaluating and performing due diligence on third-party vendors offering artificial intelligence services. This page helps credit unions navigate the opportunities and risks associated with AI technologies, covering implementation, risk management, data security, use cases, and cybersecurity.
The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) offers 'NIST AI Resources' with practical recommendations for AI design, development, governance, and usage, assisting credit unions in managing related risks. Additionally, the U.S. Department of the Treasury, alongside the Artificial Intelligence Executive Oversight Group (AIEOG), provides 'AIEOG Resources' – industry-aligned tools supporting the safe, effective, and responsible adoption of AI technologies in financial institutions. These resources are crucial for credit unions aiming to build trustworthy AI systems that align with their cooperative principles and mission.
The Committee of Sponsoring Organizations (COSO) of the Treadway Commission published a research paper titled 'Realize the Full Potential of Artificial Intelligence,' which presents a structured framework for understanding and managing AI-related risks while exploring strategic opportunities. This document offers valuable insights into governance structures, risk assessment methodologies, and performance monitoring approaches for credit unions considering AI applications in areas like member services, fraud detection, and operational efficiency. It provides considerations for board oversight, defining risk appetite, and implementing controls aligned with credit unions’ missions and core values, supporting informed decision-making.
The U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) serves as a valuable resource for credit unions, providing guidance on protecting the data that powers AI systems throughout their entire life cycle. CISA's 'Cybersecurity Information Sheet on AI Data Security' specifically addresses AI data supply chain security, methods to protect against maliciously modified data, and the mitigation of AI data drift risks. Credit unions can leverage this resource to establish robust data security frameworks that safeguard sensitive member information, ensure the integrity of AI systems, and maintain the accuracy of AI-driven decisions essential for effective member service.
The U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) also published 'Deploying AI Systems Securely,' a document offering comprehensive security methods for deploying and operating AI systems developed by external entities. For credit unions exploring AI technologies to enhance member services, fraud detection, and operational efficiency, this resource highlights critical cybersecurity considerations specific to AI system deployment and maintenance. It addresses unique security challenges, including the protection of model weights, secure API implementation, and continuous monitoring protocols, helping credit unions establish AI security frameworks that protect sensitive member data and maintain system integrity.
The U.S. Department of the Treasury issued a report, 'Artificial Intelligence in Financial Services,' that examines both traditional AI applications and emerging generative AI technologies. This report addresses critical areas such as data privacy and security standards, challenges related to bias and explainability, consumer protection considerations, concentration risks, and third-party vendor management. Credit unions can leverage this resource to gain a better understanding of the evolving regulatory landscape, best practices for AI implementation, and effective risk mitigation strategies as they evaluate and adopt AI technologies within their operations.
The U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) published a report titled 'Fraud Schemes Involving Deepfake Media Targeting Financial Institutions.' This report is a crucial resource for credit unions, providing information on how to identify fraudulent activities that involve AI-generated deepfake content. It highlights the increasing threat of criminals using AI tools to create fake identity documents, photos, and videos to circumvent customer verification processes and commit fraud. The resource includes specific red flag indicators to help credit unions detect suspicious deepfake activity and offers best practices for strengthening identity verification procedures, enhancing fraud detection, protecting members, and ensuring proper reporting of suspicious activities.
The NCUA has developed Frequently Asked Questions to clarify its supervisory approach to AI and other innovations. Credit unions are permitted to use AI tools and technologies, provided they are implemented in a safe, sound, and compliant manner. While the NCUA has not issued AI-specific regulations, existing technology-neutral regulations apply to AI use. The agency supervises AI within its existing framework, focusing on risk management practices such as identifying, monitoring, and mitigating unique AI risks. Credit unions are expected to conduct appropriate due diligence when relying on third-party AI vendors, and the NCUA encourages feedback on regulatory barriers or areas needing clarity to support responsible AI adoption.