In 2025, state and federal officials continued scrutinizing the data privacy aspects of artificial intelligence (AI) systems, with enforcement agencies focusing on deceptive marketing claims, opaque data-use disclosures, and potential risks to children stemming from chatbots and AI-enabled customer-interaction tools.
In 2025, as AI systems became more deeply integrated into consumer-facing products and services, regulators and private plaintiffs intensified their efforts to determine how existing privacy and consumer-protection laws apply to the collection, use, and commercialization of personal data in AI development and deployment. Federal and state enforcement agencies focused particularly on unsubstantiated marketing claims, opaque or inaccurate data-use disclosures, and the potential risks AI poses to children. Concurrently, private litigants introduced a wide array of novel legal theories, challenging practices ranging from AI model training to the treatment of automated chatbots under established electronic-communications statutes. Courts delivered mixed results, offering early but significant guidance on disclosure obligations, consent, and the limits of applying legacy privacy laws to rapidly evolving AI technologies.
Throughout 2025, state attorneys general (AGs) significantly heightened their focus on AI-related consumer-protection and privacy risks. AI chatbots were a prominent area of concern, prompting a bipartisan coalition of state AGs to issue a joint warning to leading AI developers. The warning underscored that companies would be held accountable for harms arising from AI systems’ access to and use of consumer data, particularly in contexts involving children. Separately, the Texas AG launched an investigation into alleged representations that chatbots could serve therapeutic purposes. These efforts demonstrate that, even absent comprehensive federal AI legislation, state regulators are prepared to leverage existing consumer-protection tools to influence AI product design and data-governance practices.
At the federal level, the Federal Trade Commission (FTC) continued to exercise its consumer-protection authority to scrutinize companies developing or deploying AI tools, focusing primarily on allegedly deceptive or unsubstantiated marketing claims. Late in the Biden administration, the FTC launched “Operation AI Comply,” an enforcement initiative designed to curb false or misleading representations about AI capabilities and outcomes. Although the initiative largely continued into the subsequent Trump administration, the agency signaled some willingness to revisit prior decisions, finding in at least one instance that an earlier consent order “unduly burden[ed] innovation in the nascent AI industry.” The FTC brought several Section 5 actions alleging unfair or deceptive conduct against companies accused of overstating AI product benefits, seeking injunctions, monetary relief, and, in some cases, a permanent ban on offering AI-related services. The agency also distributed over $15 million in connection with allegations that an AI developer stored, used, and sold consumer information without consent. Beyond enforcement, the FTC used its Section 6 investigatory authority to examine AI-powered chatbots and companion tools, requesting detailed information on data-collection practices, model training, retention policies, and safeguards for minors, with particular attention to compliance with the Children’s Online Privacy Protection Act (COPPA).
Private plaintiffs, for their part, advanced increasingly novel consumer-protection theories in lawsuits challenging AI development and deployment practices. In one lawsuit, for example, a plaintiff alleged that a company had unlawfully exploited the “cognitive labor” generated through user interactions with its AI system by capturing and using that data without offering compensation. Although the court ultimately dismissed these claims for failing to articulate a cognizable legal theory, the case illustrates the creative, and occasionally expansive, approaches plaintiffs have adopted in attempting to characterize AI data practices as unfair or deceptive. These actions highlight the ongoing effort to apply existing legal frameworks to new and complex AI-related scenarios.
A second, and increasingly consequential, area of AI-privacy litigation in 2025 involved concerted efforts to extend existing electronic-communications and privacy statutes to AI-enabled tools and related data-collection practices. Courts were asked to determine whether long-standing prohibitions against the unauthorized interception, disclosure, or misuse of personal information could accommodate technologies that replace or augment human interaction, collect data at unprecedented scale, and repurpose that data for AI model development or improvement. This body of litigation reflects a central legal challenge: adapting traditional privacy principles to the sophisticated and evolving capabilities of AI systems.
Several cases in 2025 examined whether the use of AI chatbots in customer-service or consumer-interaction settings constituted unlawful interception under state and federal electronic-communications laws. A notable example is *Taylor v. ConverseNow Technologies*, in which a federal court permitted a putative class action under the California Invasion of Privacy Act (CIPA) to proceed past the motion-to-dismiss stage against a SaaS company that uses AI assistants to process customer phone calls for restaurants. The court focused on whether the chatbot provider could be considered a “third party” interceptor, distinguishing between data used exclusively for the consumer’s benefit and data leveraged for the provider’s own commercial purposes, including system improvement, and found plausible grounds for liability where the data served both roles. In contrast, other courts, such as in *Rodriguez v. ByteDance*, were more skeptical, dismissing claims under CIPA and the federal Electronic Communications Privacy Act on the ground that allegations of using personal data to train AI systems were too speculative without more concrete evidence of interception or disclosure.
Other lawsuits involved allegations that companies collected or repurposed consumer data for AI training without adequate disclosure or explicit consent. In *Riganian v. LiveRamp*, for instance, a putative class of consumers survived early dismissal after alleging that a data broker used AI tools to collect, combine, and sell personal information sourced from both online and offline channels. The court determined that the plaintiffs had plausibly alleged invasive and nonconsensual data practices sufficient to support common-law privacy claims under California law, alongside claims under CIPA and the federal Wiretap Act. The case highlights the legal risks associated with opaque collection and use of data for AI training, especially where transparency and consent are perceived to be lacking.
Beyond the courtroom, 2025 also saw significant developments from state legislatures and court systems that are poised to shape future privacy-related AI litigation. State legislatures across the country actively pursued AI regulation, with California, Colorado, and Texas enacting new laws specifically addressing AI systems. In addition, over half of the states passed legislation targeting privacy harms arising from the creation and dissemination of “deepfakes,” malicious digital alterations of a person’s body or voice. Lawmakers broadly addressed AI-related privacy and data-transparency issues, including those involving customer-service bots and potentially discriminatory AI model outputs. State legislators and AGs also continued to oppose federal preemption of state AI laws, advocating for states to retain a significant role in AI governance. Courts themselves emerged as important institutional actors; the Arkansas Supreme Court, for example, adopted a rule requiring legal professionals to verify that AI tools used in court work do not retain or reuse confidential data, warning that noncompliance could constitute professional misconduct. Other jurisdictions, including New York and Pennsylvania, issued similar guidance restricting uses of generative AI that might compromise client confidentiality or judicial integrity.