Medicine has been accused of implementing new interventions without a strong underlying evidence base, and that impulse for quick implementation is understandable, says Scott Halpern, Deputy Director of the University of Pennsylvania’s Center for Health Incentives and Behavioral Economics. “We have sick people. We need to do something. We can’t sit idly by,” he says. “But I wonder if that instinct to not just stand there — to do something — might have some unintended consequences, crowding out the potential for greater innovation that has better evidence to support it.” What can we learn from other industries that have grappled with this tension?
A decade ago, there was almost no A/B testing in the corporate sector, notes Harvard economist David Laibson. “The CEO or someone else had a great idea and they implemented it, and then they had anecdotes about how great it was — and that was the way you made progress,” he says, adding, “a lot of that was a huge waste of money.”
This attitude first started changing in Silicon Valley, where companies became experts at treatment and control. Today, they’ll immediately start randomizing a new idea and maybe within a day have enough data to know whether to proceed with it. “We can’t do that in health care, of course, because we don’t have the control and the systems are complicated,” says Laibson. “It’s not that we should aspire to 24-hour experiments, but we should aspire to experiments, to actually randomizing and figuring out what works before we roll things out nationally and embrace them as best practice.”
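The randomize-then-measure approach Laibson describes can be illustrated with a minimal sketch. Everything below is hypothetical: the function name, the user count, and the conversion rates are invented for illustration, not drawn from any company’s actual experiment. The core idea is simply random assignment to two arms followed by a comparison of outcome rates.

```python
import math
import random

def ab_test(n_users, p_control, p_treatment, seed=0):
    """Randomly assign users to control/treatment arms, simulate a
    binary outcome for each, and return the observed rate in each arm
    plus a two-proportion z-statistic for the difference."""
    rng = random.Random(seed)
    counts = {"control": [0, 0], "treatment": [0, 0]}  # [successes, total]
    for _ in range(n_users):
        arm = rng.choice(["control", "treatment"])      # random assignment
        p = p_control if arm == "control" else p_treatment
        counts[arm][0] += rng.random() < p              # simulated outcome
        counts[arm][1] += 1
    (s1, n1), (s2, n2) = counts["control"], counts["treatment"]
    rate1, rate2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return rate1, rate2, (rate2 - rate1) / se

# Illustrative run: 10,000 users, a true 10% vs. 12% conversion rate.
control_rate, treatment_rate, z = ab_test(10_000, 0.10, 0.12)
# |z| > 1.96 corresponds roughly to p < 0.05 in a two-sided test.
```

The reason a high-traffic website can get an answer “maybe within a day” while a clinical trial takes a year falls out of the same arithmetic: the standard error shrinks with the square root of the sample size, and web platforms can accumulate tens of thousands of randomized observations in hours.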
“If we take the treatment-and-control approach, we’re going to make a lot more progress quickly despite the fact that it’s frustrating,” says Laibson, “because when you commit to running randomized controlled trials, it slows down the legislation, it slows down the national adoption, because you’ve got to wait a year for those results to come in. But that’s a year that’s worth waiting for.”
Those working to add more value to the health system may also need training in identifying the best new design, adds Dartmouth College marketing expert Punam Keller. She has worked with the Centers for Disease Control and Prevention on HealthCommWorks, which uses an empirical algorithm to help users tailor messages to their audience’s needs. The website also offers ProofWorks, which is “basically like PhD 101,” says Keller. It covers good design, controls, random assignment, sample sizes, metric validation, intervention and testing time, and outcome variables. “Those kinds of things are important if we are to improve the rigor of the data that we have in order to improve our collective understanding,” she says.
The post originally appeared with a video on the NEJM Catalyst blog.