Several recent class actions filed against Tempus AI, Inc., a health care technology company that combines AI with molecular and clinical data to develop precision medicine services, are the latest in a series of cases illustrating a fast-growing legal risk: the repurposing of genetic and clinical data — collected for diagnostic or treatment purposes — for artificial intelligence (AI) model training, analytics, and downstream commercialization following corporate acquisitions. At the same time, state genetic privacy regulation is expanding rapidly, with Utah and South Dakota being the most recent states to enact new statutes, and legislation advancing in several additional states. Organizations holding genetic datasets need to treat data governance as a core enterprise risk issue, not a downstream compliance matter.
The litigation follows Tempus AI's 2025 acquisition of Ambry Genetics Corporation, a genetic testing firm. The complaints allege that, as a result of the acquisition, Tempus AI used class members' genetic testing information in ways that required notice and written authorization, and that their data was later disclosed through commercial life sciences relationships without adequate consent. The cases reflect an accelerating focus by plaintiffs' lawyers on whether legacy consent language and de-identification practices are legally adequate when sensitive health and genetic data is repurposed for AI training, analytics, and licensing following a corporate transaction. These lawsuits are not isolated events. They are a predictable consequence of a market in which data-rich health care and diagnostics companies are acquired precisely because of the value of their datasets, and in which acquirers often intend post-close uses that go well beyond the purposes for which the data was originally collected. The 23andMe bankruptcy is a recent high-profile example of the issues that arise when a company holding a large genetic database faces acquisition and questions emerge about whether consent obtained for consumer genetic testing covers the acquirer's intended downstream uses of that data.
A recurring theme in these cases is the challenge to de-identification as a practical and legal safeguard for genetic information. Plaintiffs argue that genetic information is inherently identifying, and that removing conventional direct identifiers may not eliminate re-identification risk, particularly when genetic data can be compared against public reference resources, linked through familial relationships, or, in certain circumstances, re-identified through inference techniques. Plaintiffs pair that scientific argument with statutory theories that, depending on the jurisdiction, may impose consent, use, and disclosure restrictions even where an organization characterizes data as de-identified. State laws do not take a uniform approach. Some statutes incorporate HIPAA de-identification concepts: Illinois's Genetic Information Privacy Act (GIPA), for example, includes a pathway for de-identified information created in accordance with HIPAA requirements. Other state laws, particularly direct-to-consumer frameworks, use entirely different definitions and consent structures. In this environment, organizations should assume that de-identification will be scrutinized both technically and legally, and should support it with documented methodology, contractual restrictions (including anti-re-identification provisions), and jurisdiction-specific analysis.
State legislatures are actively expanding genetic privacy regulation in a consistent direction: more granular consent requirements, stronger restrictions on secondary use and downstream transfer, and meaningful enforcement mechanisms including private rights of action and per-violation damages. Illinois's GIPA remains the most litigation-active statute in this space. South Dakota and Utah both enacted new genetic privacy laws in early 2026, and California is advancing legislation that would add criminal penalties to its existing civil framework. Connecticut, Rhode Island, and West Virginia have bills in progress covering direct-to-consumer (DTC) style consent requirements and foreign-adversary access restrictions. Compliance with HIPAA alone is increasingly insufficient, and the scope of HIPAA-based exemptions under state genetic privacy statutes varies materially across jurisdictions.
Treat AI training as a distinct data use. Consent obtained for clinical or diagnostic purposes should not be assumed to extend to AI model development, commercialization, or licensing. Authorizations should be assessed for whether they affirmatively cover use in AI models and algorithms, post-acquisition transfers, and third-party commercialization.

Reassess de-identification positions. Evaluate methods, validation practices, access controls, and downstream contractual restrictions for consistency across public statements, privacy notices, and technical implementation, and against the state statutes that actually govern your data, not only the HIPAA standard.

Review commercial agreements with life sciences partners. Key provisions include purpose limitations, restrictions on onward transfer, audit rights, breach notification obligations, compliance with applicable federal and state legal requirements (including state genetic privacy statutes), and allocation of responsibility for patient authorization representations.

Build state-law compliance into data governance programs. Map genetic data flows against applicable state statutes on a jurisdiction-by-jurisdiction basis, and do not assume HIPAA compliance is sufficient.

Address genetic data governance in M&A diligence. Where an acquirer's intended post-close uses differ from the target's historic uses, the consent mismatch is a predictable litigation flashpoint. Diligence should treat consent scope and state-law compliance as asset constraints with direct bearing on deal value and post-close risk.
Our Privacy, Cybersecurity, and Life Sciences teams advise organizations on genetic data governance, AI data use controls, and commercialization strategies. We support clients with privilege-protected assessments of genetic and clinical data flows used in AI development; multi-state genetic privacy compliance reviews; transaction diligence playbooks for health data assets; and the negotiation and drafting of data-sharing and AI partnership agreements. We also help clients build defensible governance programs designed to withstand litigation and regulatory scrutiny, including documentation, vendor oversight, and incident readiness.