UnitedHealth’s AI wrongly denies claims, lawsuit says


For years, vital decisions about who gets Medicare coverage have been made in the back offices of health insurance companies. Now, some of those life-changing decisions are being made by AI software.

At least that’s the claim of two families who sued UnitedHealth Group this week, saying the insurance giant used emerging technology to deny or shorten rehabilitation periods for two elderly men in the months before their deaths.

The families say UnitedHealth’s artificial intelligence, or AI, makes “rigid and unrealistic” projections of what it will take for patients to recover from serious illnesses and denies them care at skilled nursing and rehabilitation centers that should be covered under Medicare Advantage plans, according to a federal lawsuit filed in Minnesota by the estates of two elderly Wisconsin patients. The lawsuit, which seeks class-action status, says it is illegal to let artificial intelligence override doctors’ recommendations for these men and patients like them; such evaluations, the families say, should be made by medical professionals.

The families allege in the lawsuit that the insurance company is depriving elderly patients of care, counting on them not to fight back, even though evidence shows that the artificial intelligence does a lackluster job of assessing people’s needs. They say the company used algorithms to set coverage plans and override doctors’ recommendations despite the AI program’s staggeringly high error rate.

More than 90% of the patients’ claim denials that were appealed were overturned through internal appeals or by a federal administrative law judge, according to court documents. In reality, though, only a tiny fraction of patients, about 0.2%, chose to fight claim denials through the appeals process. The vast majority of people insured through UnitedHealth’s Medicare Advantage plans “either pay out-of-pocket costs or forego the remainder of their prescribed post-acute care,” the suit says.


Lawyers representing the families suing the Minnesota-based insurance giant said the high rate of denials is part of the insurer’s strategy.

“They’re putting their own profits ahead of the people they’ve contracted with and are legally obligated to cover,” said Ryan Clarkson, a California attorney whose law firm has filed several cases against companies that use artificial intelligence. “It’s that simple. It’s just greed.”

naviHealth’s AI software, which was cited in the lawsuit, is not used to make coverage decisions, UnitedHealth told USA TODAY in a statement.

“The tool is used as a guide to help us inform providers, families and other caregivers about the type of assistance and care a patient may need both in the facility and after returning home,” the company said.

The company said coverage decisions are based on Centers for Medicare and Medicaid Services criteria and the terms of the consumer’s insurance plan.

The company said: “This lawsuit has no merit, and we will defend ourselves vigorously.”

Lawsuits of this kind are not new; they are part of a growing body of litigation over companies’ use of artificial intelligence.

In July, Clarkson’s law firm filed a lawsuit against Cigna Healthcare alleging the insurer used artificial intelligence to automate claims denials. The firm has also filed cases against ChatGPT maker OpenAI and against Google.

Families pay for expensive care that is denied by the insurance company

The plaintiffs in this week’s lawsuit are relatives of two deceased Wisconsin residents, Gene B. Lokken and Dale Henry Tetzloff, both of whom were insured through UnitedHealth’s private Medicare plans.

In May 2022, Lokken, 91, fell at home and broke his leg and ankle, requiring a short hospital stay followed by a month in a rehabilitation facility while the fractures healed. Lokken’s doctor then recommended physical therapy so he could regain his strength and balance. The Wisconsin man spent less than three weeks in physical therapy before his insurer terminated his coverage and recommended he be discharged from the facility to recover at home.

A physical therapist described Lokken’s condition as “paralyzed” and “weak,” but the family’s pleas to continue covering treatment were denied, according to the lawsuit.


His family chose to continue treatment despite the denial. Without coverage, the family had to pay $12,000 to $14,000 a month for nearly a year of treatment at the facility, where Lokken died in July 2023.

The other man’s family raised similar concerns that the AI algorithm denied necessary rehabilitation services.

Tetzloff was recovering from a stroke in October 2022, and his doctors recommended that the 74-year-old be transferred from the hospital to a rehabilitation facility for at least 100 days. The insurance company initially sought to end his coverage after 20 days, but the family appealed. The insurance company then extended Tetzloff’s stay for another 20 days.

The man’s doctor had recommended additional physical and occupational therapy, but his coverage ended after 40 days. The family spent more than $70,000 on his care over the next 10 months. Tetzloff spent his final months in a nursing facility, where he died on October 11.

10 appeals for fractured hip rehabilitation

The legal action comes after Medicare advocates began raising concerns about the routine use of artificial intelligence technology to deny or reduce care for seniors in private Medicare plans.

In 2022, the Center for Medicare Advocacy examined several insurance companies’ use of AI software in rehabilitation and home health settings. The advocacy group’s report concluded that the AI programs often made coverage decisions more restrictive than Medicare would have allowed, and that the decisions lacked the granularity needed to evaluate the unique circumstances of each case.

“We’ve seen more care that would have been covered under traditional Medicare being denied outright or prematurely terminated,” said David Lipschutz, associate director and chief policy attorney for the Center for Medicare Advocacy.

Some seniors who appeal a denial may win a reprieve, only to have coverage cut off again, Lipschutz said. He cited the example of a Connecticut woman who sought a three-month stay at a rehabilitation center while recovering from hip replacement surgery. She filed and won 10 appeals after the insurance company repeatedly tried to terminate her coverage and limit her stay.


The importance of having a “human in the loop”

Legal experts not involved in the cases said artificial intelligence has become a fertile target for people and organizations seeking to curb or shape the use of emerging technology.

An important consideration for health insurers and others deploying AI programs is ensuring that humans are part of the decision-making process, said Gary Marchant, faculty director of the Center for Law, Science and Innovation at Arizona State University’s Sandra Day O’Connor College of Law.

While AI systems can be efficient and complete rudimentary tasks quickly, the programs can also make mistakes when left on their own, Marchant said.

“Sometimes AI systems aren’t reasonable, they don’t have common sense,” Marchant said. “You have to have a human in the loop.”

In cases involving insurance companies using AI to guide claims decisions, Marchant said, a key legal factor may be the extent to which the company defers to the algorithm.

The lawsuit against UnitedHealth states that the company restricted employees’ “discretion to deviate” from the algorithm. Employees who deviated from the AI program’s projections faced discipline or termination, the lawsuit says.

One factor to watch in the UnitedHealth case and similar lawsuits is how strictly employees were expected to follow the AI model, Marchant said.

“Clearly there has to be an opportunity for the human decision maker to override the algorithm,” Marchant said. “This is just a big problem in AI and healthcare.”

He said it’s important to consider the consequences of how companies set up their AI systems: how much deference they give to an algorithm, knowing that AI can ingest massive amounts of data and be “incredibly powerful” and “incredibly accurate,” while keeping in mind that it can also sometimes be completely wrong.

Ken Alltucker is on X, formerly Twitter, at @kalltucker, or can be emailed at [email protected].
