Courts are beginning to weigh in on allegations that payors are improperly using automated decision-making tools as part of the claims review process. On March 31, the District Court for the Eastern District of California denied in part a motion to dismiss a putative class action complaint challenging a national payor’s alleged use of an automated algorithm to make medical necessity determinations.
Plaintiffs – current and former health plan members – contend that their insurer relied on an algorithm to facilitate coverage decisions en masse without individualized scrutiny based on required criteria. The members allege that, in contravention of statutory and plan terms, clinical reviewers used the tool to automatically reject hundreds or thousands of coverage decisions without opening patient files.
As discussed in a previous Reed Smith client alert, this case is one of several class action lawsuits filed against managed care organizations (MCOs) by members in federal district courts across the nation related to the use of automated decision-making tools (e.g., automated algorithms and, more recently, AI) in claims review. These emerging cases collectively provide payors with more insight into the specific allegations, causes of action, and proposed forms of relief that courts may accept, as well as hint at best practices that payors may want to adopt as they seek to reduce their litigation risk.
Tellingly, while the Eastern District of California dismissed some of the members’ claims in the above case, the court allowed others to proceed past a motion to dismiss. The claims reviewed include:
- Standing: The court dismissed certain members’ claims under ERISA and California’s Unfair Competition Law for lack of standing, finding that those members’ claims for benefits had not been subjected to algorithmic review. The court reasoned that, because the benefit determinations made in response to those members’ submitted claims were not decided using the algorithm, the denial of their claims was not traceable to the payor’s challenged conduct.
- Claim for relief under ERISA: The court dismissed members’ claim for relief under ERISA for wrongful denial of benefits, reasoning that members failed to identify the specific plan terms that entitle them to benefits. The court found that the terms identified were too “general” to demonstrate that the defendants breached the plan terms. However, the court allowed members’ claims to proceed under their claim for breach of fiduciary duty, finding that members adequately alleged that their insurer had violated the plan terms when it entrusted medical determinations to an algorithm.
- Alleged breach of fiduciary duty: The court effectively ruled on the case’s underlying merits by rejecting the payor’s interpretation of the plan language – which, it asserted, permitted use of an algorithm during review – on the grounds that the plan required a medical director to deny claims. While it is possible that further factual development during discovery could result in a different outcome on the merits, such a result is unlikely, as the payor acknowledged that its medical directors adopted algorithm-provided recommendations without reviewing any underlying records.
- Injunctive relief: The court found that injunctive relief was available to the members under both ERISA and their prayer for relief.
The court’s decision highlights the defenses MCOs should consider when faced with claims involving the use of automated decision-making tools in claims review processing, as well as potential challenges to those defenses. The latter may include members pursuing claims under causes of action such as breach of fiduciary duty and seeking injunctive relief as a remedy. Notably, the court in this case was willing to essentially rule on the merits of the plaintiffs’ breach of fiduciary duty allegations, which could indicate that other courts may similarly classify payors’ use of automated decision-making tools as a breach of fiduciary duty if the methods used by clinical reviewers conflict with plan requirements (i.e., if clinical reviewers accept algorithm-provided recommendations without conducting the individual review of each claim that a plan’s terms may require).
A quick note on terminology is warranted. The phrase “automated decision-making tools” as used here encompasses both traditional algorithms – which payors have used to assist in claims review for many years – and artificial intelligence (AI). Because both tool types are used for the same purpose (e.g., categorizing submitted claims for approval, denial, or other action) and rely on algorithms to function, the definitions in new and existing laws and regulations – and courts’ interpretations of those definitions – could apply to traditional computer algorithms as well as to newer technologies such as AI. Careful attention should be paid to existing laws, regulations, and court interpretations to evaluate whether they apply only to newer technologies (i.e., AI) or also to the use of more traditional algorithms.
In recent years, federal regulators and state lawmakers have increasingly scrutinized MCOs’ use of automated decision-making tools in claims review processing. At the federal level, CMS has already clarified how Medicare Advantage organizations can use such tools when making coverage determinations in an FAQ memo and CMS Final Rule, CMS-4201-F. In the past year, a few states, including Illinois (H2472) and California (SB 1120), have enacted legislation clarifying how and to what extent payors can use automated decision-making tools to analyze medical necessity in utilization reviews affecting members. Payors may soon face more regulation in other states as well.
Since the beginning of 2025, bills intended to clarify limitations on MCOs’ use of automated decision-making tools have been introduced in several states, including Washington (H.B. 1566), Nebraska (L.B. 77), and North Carolina (S.B. 315). While the proposed laws differ in their details, many share commonalities with laws enacted in Illinois and California, including:
- Prohibiting the sole use of automated algorithms (a category that explicitly encompasses AI and other types of algorithms in all three bills) when making adverse benefit determinations
- Requiring medical necessity determinations to be based on individual clinical circumstances and not solely on group data sets
- Requiring payors to disclose how they use automated algorithms in prior authorization and utilization review processes to relevant stakeholders (including plan enrollees)
While it remains to be seen whether these proposed bills will be enacted, legislators are clearly interested in the use of AI and other algorithms. MCOs would be wise to prepare for the possibility of more regulation and, thereafter, more litigation. By taking proactive steps to define, document, and communicate how automated algorithms and AI tools are used during claims review, MCOs can assess their compliance with existing state and federal regulations and better understand how their procedures may conflict with proposed requirements. In-house payor counsel may also benefit from confirming that they are fully apprised of how clinical reviewers are using automated decision-making tools during claims adjudication.
We encourage all MCOs who want to prepare for pending legislation to review our recommendations for minimizing AI risk and schedule a conversation with our attorneys.
Client Alert 2025-108