Artificial intelligence (AI) is increasingly used in claims handling through predictive analytics, automation, fraud detection, and cost estimation. While these tools deliver speed and consistency, they also create significant litigation risk, particularly around allegations of bad faith.
Lokken v. UnitedHealth Group, Inc.
The Lokken case highlights how plaintiffs may plead that an AI tool, 'nH Predict,' effectively supplanted physicians' judgment in denying post-acute care coverage, pointing to alleged harm and high reversal rates on appeal. While most state-law claims were preempted, the claims for breach of contract and breach of the implied covenant of good faith and fair dealing survived, demonstrating judicial willingness to scrutinize AI's role in claims decisions.
Where AI Risk Shows Up
AI tools in claims handling present several distinct risks. They can be framed as replacing individualized professional judgment if adjusters over-rely on them. Explainability quickly becomes a discovery problem, inviting demands for model configuration, training data, and override rates. Poor data quality and embedded bias can produce systematic errors that repeat across many claims at once. And operational incentives that functionally penalize adjusters for deviating from AI outputs can be recast as institutional pressure favoring cost containment over accuracy, fueling bad-faith allegations.
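To illustrate why a metric like an override rate is so readily quantifiable in discovery, consider a minimal sketch, purely hypothetical, in which the ClaimDecision structure and all field names are assumptions rather than any vendor's or carrier's actual system:

```python
from dataclasses import dataclass

# Hypothetical log entry pairing the model's recommendation with the
# adjuster's final call on a claim. Field names are illustrative only.
@dataclass
class ClaimDecision:
    claim_id: str
    ai_recommendation: str  # e.g., "approve" or "deny"
    adjuster_decision: str  # the adjuster's final determination

def override_rate(decisions: list[ClaimDecision]) -> float:
    """Fraction of claims where the adjuster departed from the AI output."""
    if not decisions:
        return 0.0
    overrides = sum(
        d.adjuster_decision != d.ai_recommendation for d in decisions
    )
    return overrides / len(decisions)

# Example: two of three decisions track the model exactly.
log = [
    ClaimDecision("C-001", "deny", "deny"),
    ClaimDecision("C-002", "deny", "approve"),
    ClaimDecision("C-003", "approve", "approve"),
]
print(override_rate(log))  # ~0.33
```

A near-zero rate computed this way across thousands of files is exactly the kind of simple statistic plaintiffs can use to argue that adjusters rubber-stamped the model rather than exercising individualized judgment.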
What Is an Insurer to Do?
Insurers must maintain thorough documentation showing the facts gathered, the policy language applied, what the AI tool contributed, and the adjuster's own reasoning, especially where the adjuster deviated from an AI recommendation. Adjusters need to document their thought process and any reliance on, or departure from, AI outputs so that a neutral reviewer can reconstruct the decision. AI will be evaluated as an integral part of the claim-handling process, and its outputs will be discoverable; insurers must therefore be able to demonstrate a reasonable investigation, policy-based decision-making, and good-faith conduct. One way to operationalize this record-keeping is sketched below.
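As a minimal sketch of what such a record could look like, where the ClaimFileEntry structure and every field name are assumptions chosen for illustration rather than a description of any actual claims platform, a structured entry can force the deviation rationale to be captured at the moment of decision:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical claim-file record capturing what a neutral reviewer would
# look for: facts, policy language, the AI's contribution, and the
# adjuster's own reasoning. All names here are illustrative assumptions.
@dataclass
class ClaimFileEntry:
    claim_id: str
    facts_gathered: list[str]     # investigation findings
    policy_provisions: list[str]  # policy language actually applied
    ai_output: str                # what the model recommended
    adjuster_rationale: str       # the adjuster's reasoning, in plain terms
    deviated_from_ai: bool        # did the adjuster override the model?
    deviation_reason: str = ""    # explanation required when overriding
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def __post_init__(self) -> None:
        # Enforce the practice described above: any departure from the AI
        # recommendation must carry a documented reason.
        if self.deviated_from_ai and not self.deviation_reason:
            raise ValueError(
                "Record why the adjuster departed from the AI output."
            )
```

The design choice is deliberate: making the deviation reason a required field means the explanation exists contemporaneously with the decision, rather than being reconstructed after a bad-faith allegation surfaces.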