
When Compared to Each Other, Doctors Pay Attention

Social comparisons can improve physician practice

In two recent JAMA Viewpoints, Amol Navathe and colleagues point to the potential of social comparisons to motivate physicians to improve the care they provide. Fulfilling this potential, however, will take careful attention to how these comparisons are designed and delivered.

When done right, social comparisons can improve physician practice without changing payment structures. For example, when ranked against their peers, physicians reduced inappropriate antibiotic prescriptions. But in another case, public reporting of percutaneous revascularization outcomes likely prompted physicians to avoid high-risk patients rather than improve care.

In the first viewpoint, Joshua Liao, Lee Fleisher, and Amol Navathe propose pairing social comparison feedback with professional norms to increase the value of comparisons while safeguarding against unintended consequences. In effect, such messaging tells physicians not only where they rank relative to their peers, but also what ought to be done.

They note that this strategy takes advantage of existing norms in medicine, especially when they reflect well-defined, consensus professional values. In implementing such a program, the authors stress that individual organizations should adapt these norms to local programs. Doing so allows institutional leadership to frame comparisons as a quality improvement initiative, rather than a way to pit physicians against each other.

Implementing Peer Comparisons

In the second viewpoint, Navathe and Zeke Emanuel offer advice for successfully implementing social comparisons to produce desired change. They recommend that decision makers consider eight dimensions of an intervention in designing effective peer comparisons:

  • Does it encourage high-value practices or discourage low-value practices? Inappropriate prescription of antibiotics is an example of a low-value practice that has improved with peer comparisons. More research is needed to determine how well or how consistently peer comparisons work for different types of practices.
  • What kind of comparative information does it provide? Do you compare an individual physician with the entire distribution of peer physicians, or only with top performers? Do you report outliers?
  • Does it give blinded or unblinded comparative information? Unblinded comparisons attach physicians’ names to their performance and can harness social pressure to drive behavior change. The New York State CABG and ProPublica surgeon report cards are examples of unblinded peer comparisons available to the public.
  • What’s the scope of the reference group? Comparisons can be made at the department, institutional, or national level. For example, Advocate Physician Partners provides comparative feedback across its entire network of 4,000 physicians.
  • Does it use norms to highlight what is considered “good” and what is considered “bad”? As discussed earlier, explicit professional standards can serve as norms to encourage best practice.
  • Does it make individual- or group-level comparisons? For example, Medicare’s Physician Compare program makes group-level comparisons. The incremental benefit of individual over group comparisons has yet to be shown.
  • How uncertain is the evidence? Clear practice standards may make social norming more effective, yet comparisons may prove most useful where evidence-based choices are not clear.
  • How does it report performance? Information could be presented as raw performance (for example, average patient satisfaction score) or deviation from a standard (for example, percentage of patient responses not highly satisfied). Comparisons can also be shown in text, tables, or graphs.

Although peer comparisons have great potential to improve care, the evidence of their effectiveness is limited. Peer comparisons seem most helpful when decision choices and performance metrics are clear and accepted by physicians. As Navathe and colleagues conclude, “If peer comparisons are to become a pervasive tool for high value care, then building an evidence base to guide implementation is paramount.”

This blog post originally appeared in LDI Health Policy$ense.