
Slate: How the Pandemic Made Algorithms Go Haywire

From Slate:

Written by Drs. Amol Navathe and Ravi Parikh

Algorithms have always had some trouble getting things right, which is why ads often follow you around the internet for something you've already purchased.

But since COVID upended our lives, more of these algorithms have misfired, harming millions of Americans and widening existing financial and health disparities facing marginalized groups. At times, this was because we humans weren’t using the algorithms correctly. More often it was because COVID changed life in a way that made the algorithms malfunction.

Take, for instance, an algorithm used by dozens of hospitals in the U.S. to identify patients with sepsis, a life-threatening consequence of infection. It was supposed to help doctors speed up transfers to the intensive care unit. But starting in the spring of 2020, the patients who showed up to the hospital suddenly changed because of COVID. Many of the variables that went into the algorithm (oxygen levels, age, comorbid conditions) were completely different during the pandemic. So the algorithm couldn't effectively discern sicker from healthier patients, and consequently it flagged more than twice as many patients as "sick" even though hospital capacity was 35 percent lower than normal. The result was presumably more instances of doctors and nurses being summoned to the patient's bedside. It's possible all of these alerts were necessary; after all, more patients were sick. But it's also possible that many of them were false alarms because the types of patients showing up to the hospital were different. Either way, the flood of alerts threatened to overwhelm physicians and hospitals. This "alert overload" was discovered months into the pandemic and led the University of Michigan health system to shut down its use of the algorithm.
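The sepsis tool itself isn't public, but the failure mode described above, a fixed alert threshold applied to a patient population whose vital signs have shifted, can be sketched in a few lines. Everything in this toy model is invented for illustration: the score weights, the 0.6 threshold, and the oxygen-saturation distributions are assumptions, not the real algorithm.

```python
import random

random.seed(0)

def sepsis_risk_score(spo2, age, comorbidities):
    # Hypothetical linear risk score "calibrated" on pre-pandemic data:
    # lower oxygen saturation, higher age, and more comorbid conditions
    # all raise the score.
    return 0.04 * (95 - spo2) + 0.005 * age + 0.1 * comorbidities

def alert_rate(patients, threshold=0.6):
    # Fraction of patients whose score crosses the fixed alert threshold.
    alerts = sum(1 for p in patients if sepsis_risk_score(*p) >= threshold)
    return alerts / len(patients)

def sample(n, spo2_mean):
    # Each patient: (oxygen saturation %, age, comorbidity count).
    return [(random.gauss(spo2_mean, 3), random.gauss(65, 15), random.randrange(4))
            for _ in range(n)]

pre = sample(10_000, spo2_mean=96)    # pre-pandemic case mix
covid = sample(10_000, spo2_mean=91)  # pandemic case mix: lower oxygen overall

print(f"pre-pandemic alert rate: {alert_rate(pre):.1%}")
print(f"pandemic alert rate:     {alert_rate(covid):.1%}")
```

The threshold was never retrained, yet the alert rate more than doubles once the input distribution shifts, which is roughly the pattern the article describes.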

We saw a similar issue firsthand in the hospital where we both work: We recently published a study examining a machine-learning algorithm used to identify the sickest patients with cancer. Flagging them gives clinicians an opportunity to talk to them about their preferences for end-of-life care. Our data showed that, during the pandemic, this algorithm was 30 percent less likely to correctly identify a sick patient who needed such a timely conversation. Missed end-of-life conversations often translate into unnecessary treatments, hospitalizations, and worse quality of life for patients who would instead have benefited from early hospice care.
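The "30 percent less likely" finding is a drop in sensitivity: the share of truly high-risk patients the model flags. As a minimal sketch (the audit data below is entirely made up; the real study used actual patient records), comparing sensitivity across a pre-pandemic and a pandemic review period looks like this:

```python
def sensitivity(flags, outcomes):
    """Fraction of truly high-risk patients (outcome == 1) the model flagged."""
    flagged = [f for f, o in zip(flags, outcomes) if o == 1]
    return sum(flagged) / len(flagged)

# Hypothetical audit samples: 1 = model flagged / patient was high-risk.
pre_flags   = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
cov_flags   = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]
outcomes    = [1, 1, 1, 1, 1, 0, 1, 1, 0, 1]  # same ground truth both periods

pre_sens = sensitivity(pre_flags, outcomes)
cov_sens = sensitivity(cov_flags, outcomes)
drop = 1 - cov_sens / pre_sens

print(f"sensitivity before: {pre_sens:.1%}, during pandemic: {cov_sens:.1%}")
print(f"relative drop: {drop:.1%}")
```

With these invented numbers the relative drop comes out near 30 percent, mirroring the study's result: the model silently misses roughly a third more of the patients it was built to catch, and nothing in the model itself announces the failure.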

Read more at Slate.