2017 could be big for litigation over “Big Data” applications in insurance. This past year saw the filing of several “price optimization” class actions, and claims against a fraud detection tool similar to those used by insurers survived an initial round of motions. Zynda v. Arwood, 175 F.Supp.3d 791 (E.D. Mich. 2016). But even as the industry braces for a wave of suits over cutting-edge models and analytics, battles over less exotic systems are still being fought. Last month, in Johnson v. GEICO Cas. Co., No. 16-1132 (3d Cir. Nov. 29, 2016), the U.S. Court of Appeals for the Third Circuit addressed several key theories and arguments that have been used against some early applications of machine learning—tools that identify excessive medical charges or unnecessary procedures. While ruling in favor of the insurer, the court left the status of those theories and arguments essentially unresolved. Because the same contentions are sure to resurface when plaintiffs go after tools that rely on telematics, clickstreams and social media, Johnson provides a useful window into lawsuits yet to come.
Horse-And-Buggy Analysis
In August 2004 – long before Big Data went viral – Sharon Anderson was involved in a motor vehicle accident and was treated for headaches, neck pain and back pain. Nearly a year later, in June 2005, she complained of similar symptoms during a visit to her physician; he prescribed 12 sessions of physical therapy, which Ms. Anderson received over the next four months. Both the doctor and the physical therapist submitted bills for their services directly to Ms. Anderson’s automobile insurer, seeking reimbursement under her policy’s Personal Injury Protection (“PIP”) feature. PIP coverage is mandatory in Delaware, where Ms. Anderson resides, and it requires insurers to pay certain “reasonable and necessary expenses” for medical services arising out of a covered accident. 21 Del. C. § 2118(a)(2).
In the course of processing these bills, the insurer had them reviewed by an automated system. One component of that system is a database, to which multiple insurers contribute information about the millions of bills they receive for healthcare services and equipment. By accessing the database, the system can compare the prices on a given bill with the prices for the same services that are charged by other providers in the same geographic area.
The system then performs a machine learning function known as “classification”: prices that exceed the 80th percentile in that area (i.e., prices that are higher than the ones charged by 80% of the area’s providers) are classified as unreasonably high. (In many jurisdictions, they are said to exceed the “usual, customary and reasonable,” or “UCR” rate.) Like most carriers, Ms. Anderson’s insurer responded to such bills by making a “UCR reduction”—paying only the 80th-percentile amount. That practice (the “geographic reduction rule”) resulted in a $31 reduction of the bill for Ms. Anderson’s June 2005 doctor visit.
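For readers who want a concrete picture, here is a minimal sketch of how a percentile-based UCR check might be expressed in code. It is only an illustration: the charge data, the simple percentile calculation and the dollar figures are assumptions, not details from the record in Johnson.

```python
# Illustrative sketch only -- not the vendor's actual bill-review system.
# Assumes a list of charges billed by providers in the same geographic
# area for the same procedure.

from typing import List


def ucr_threshold(area_charges: List[float], percentile: float = 0.80) -> float:
    """Return the charge at the given percentile of area prices."""
    ordered = sorted(area_charges)
    # Index of the value at (or just below) the requested percentile.
    index = max(0, int(len(ordered) * percentile) - 1)
    return ordered[index]


def review_charge(billed: float, area_charges: List[float]) -> float:
    """Classify a billed amount and apply a UCR reduction if it exceeds
    the 80th-percentile rate; otherwise reimburse the amount as billed."""
    threshold = ucr_threshold(area_charges)
    if billed > threshold:
        return threshold  # pay only the 80th-percentile amount
    return billed


# Hypothetical figures: a $131 charge reviewed against area prices whose
# 80th percentile is $100 would be paid at $100 -- a $31 "UCR reduction."
area = [70.0, 80.0, 85.0, 90.0, 95.0, 98.0, 99.0, 100.0, 120.0, 150.0]
print(review_charge(131.0, area))  # -> 100.0
```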
The bill review system also implemented computerized “rules” to identify services that are unlikely to have been medically necessary. These rules (known as a “knowledge base”) were designed to reflect reimbursement policies which the insurer had formulated independently of the bill review system. (Applications that apply rules in this way are called “expert systems”; they were used widely in the 1970s and ’80s.) One such rule (the “passive modality rule”) directed the system to flag certain procedures, if they were performed more than eight weeks after the date of the accident. It caused Ms. Anderson’s insurer to deny payment for several of her physical therapy sessions, on the ground that “physical therapy modalities … provide no therapeutic benefit during the chronic period [and] are not reimbursable.” The insurer provided a similar explanation for denying additional bills for “stimulation” and “hot/cold pack treatment.”
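The “passive modality rule” lends itself to a similar illustration. The sketch below shows how such a knowledge-base rule might look in code; the procedure names, the sample dates and the way the eight-week cutoff is applied are assumptions drawn only from the court’s summary, not from the actual system.

```python
# Illustrative sketch of a rule-based ("expert system") check -- not the
# actual knowledge base described in the opinion.

from datetime import date, timedelta
from typing import NamedTuple


class BillLine(NamedTuple):
    procedure: str      # e.g. "physical_therapy", "stimulation", "hot_cold_pack"
    service_date: date


# Hypothetical set of "passive modality" procedures covered by the rule.
PASSIVE_MODALITIES = {"physical_therapy", "stimulation", "hot_cold_pack"}

# Services this long after the accident fall in the "chronic period."
CHRONIC_PERIOD_START = timedelta(weeks=8)


def passive_modality_rule(line: BillLine, accident_date: date) -> bool:
    """Return True if the line should be flagged for denial: a passive
    modality billed more than eight weeks after the accident date."""
    if line.procedure not in PASSIVE_MODALITIES:
        return False
    return line.service_date - accident_date > CHRONIC_PERIOD_START


# Hypothetical example: therapy billed some ten months after an
# August 2004 accident would be flagged under this rule.
accident = date(2004, 8, 15)
line = BillLine("physical_therapy", date(2005, 6, 20))
print(passive_modality_rule(line, accident))  # -> True
```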
Challenging Technology With Class Consciousness
In 2006, Ms. Anderson brought an action over her medical bills in a Delaware state court, and the insurer removed it to the District of Delaware. (The case was called Johnson v. GEICO, because Ms. Anderson had a co-plaintiff, Mr. Johnson, who dismissed his claims in 2015.) Ms. Anderson’s case was one of dozens of challenges to similar bill-review systems that were brought in the first decade of this century.
The complaint in Johnson alleged breach of contract—a claim Ms. Anderson could win by proving (i) that the charges submitted by her medical providers had been reasonable and necessary to treat injuries caused by her covered accident, and (ii) that her insurer had failed to pay them. The total amount of the alleged underpayments was slightly more than $1,000. Ms. Anderson, however, also asserted claims on behalf of three putative classes, including classes consisting of all other policyholders affected by either the geographic reduction rule or the passive modality rule. Where the certification of a class is a real possibility, the value of the case increases exponentially, and the plaintiff gains considerable leverage to force a settlement.
Establishing that every member of a putative class received medically necessary services at a reasonable price would be cumbersome and expensive. Moreover, if questions about individual bills “predominate” over common questions, they can also prevent a court from certifying a class (a problem that is discussed below). Ms. Anderson’s complaint, like those in many similar cases, said little about the reasonableness of the providers’ bills, but focused instead on the proposition that the insurer’s bill review system was inherently unreasonable—making it inevitable that the insurer would “systematically and arbitrarily” reduce or deny payments that its policies required. Relatedly, the plaintiff contended that the computerized system was inconsistent with policyholders’ reasonable expectations about how their claims would be processed.
- Breach of Contract
One possible way to establish that a system is inherently unreasonable is to contend that it uses flawed data, or that the data is in some way mishandled. In the past, allegations about systemic defects have been made against bill review systems of the type at issue in Johnson—especially those that used the database formerly known as “Ingenix,” which was itself the target of several class actions, as well as a highly publicized investigation by New York’s Attorney General. See, e.g., M.W. Widoff, P.C. v. Encompass Ins. Co. of Am., No. 10 C 8159 (N.D. Ill. March 2, 2012) (alleging “selective data contribution,” “data scrubbing” and “flawed algorithms”); McGovern Physical Therapy Assocs. v. Metropolitan Prop. & Cas. Ins. Co., 802 F.Supp.2d 306 (D. Mass. 2011) (alleging system was “flawed, biased and unreliable”).
Ms. Anderson, however, chose a different approach. Rather than claim that her insurer’s system processed data incorrectly (a charge that might be difficult and expensive to prove), she argued, as most plaintiffs in her position have, that the system was flawed in principle: because even a very high price can be reasonable in some circumstances, and even a contraindicated treatment can be medically necessary in some cases, a system that categorically reduces or denies payment on such bills is necessarily overinclusive. See, e.g., Halvorson v. Auto-Owners Ins. Co., 85 Fed.R.Serv.3d 1491 (8th Cir. 2013) (alleging...