
The Doctor Won’t See You Now

Shouldn’t you be at a meeting?

If you were to have a major heart problem – acute myocardial infarction, heart failure, or cardiac arrest – which of the following scenarios would you prefer?

Scenario A – the problem occurs during the heavily attended annual meeting of the American Heart Association, when thousands of cardiologists are away from their offices; or

Scenario B – the problem occurs during a time when there are no national cardiology meetings and fewer cardiologists are away from their offices.

If you’re like me, you’ll probably pick Scenario B. If I go into cardiac arrest, I’d like to know that the best cardiologists are available nearby. If they’re off gallivanting at some meeting, they’re useless to me.

But we might be wrong. According to a study published in JAMA Internal Medicine (December 22, 2014), outcomes are generally better under Scenario A.

The study, led by Anupam B. Jena, looked at some 208,000 heart incidents that required hospitalization from 2002 to 2011. Of these, slightly more than 29,000 patients were hospitalized during national meetings. Almost 179,000 patients were hospitalized during times when no national meetings were in session.

And how did they fare? The study asked two key questions: 1) how many of these patients died within 30 days of the incident? and 2) were there differences between the two groups? Here are the results:

  • Heart failure – statistically significant differences – 17.5% of heart failure patients in Scenario A died within 30 days versus 24.8% in Scenario B. The probability of a difference this large arising by chance is less than 0.1% (a sketch after this list shows how such a probability is computed).
  • Cardiac arrest – statistically significant differences – 59.1% of cardiac arrest patients in Scenario A died within 30 days versus 69.4% in Scenario B. The probability of a difference this large arising by chance is less than 1.0%.
  • Acute myocardial infarction – no statistically significant differences between the two groups. (There were differences, but they may have been due to chance.)
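
To see what a claim like “the probability of this happening by chance is less than 0.1%” involves, here is a minimal sketch of an unadjusted two-proportion z-test in Python. The group sizes are hypothetical round numbers chosen only so the rates match the reported heart failure figures; they are not the study’s cohorts, and the published analysis adjusted for patient characteristics in ways this sketch does not.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(deaths_a, n_a, deaths_b, n_b):
    """Unadjusted two-proportion z-test: how likely is a mortality gap this
    large if meeting dates and non-meeting dates really made no difference?"""
    p_a = deaths_a / n_a                           # 30-day mortality, Scenario A
    p_b = deaths_b / n_b                           # 30-day mortality, Scenario B
    p_pool = (deaths_a + deaths_b) / (n_a + n_b)   # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * norm.sf(abs(z))                  # two-sided p-value
    return z, p_value

# Hypothetical group sizes (NOT the study's cohorts), chosen only so the
# rates match the reported heart failure figures: 17.5% vs. 24.8%.
z, p = two_proportion_z_test(deaths_a=175, n_a=1000, deaths_b=2480, n_b=10000)
print(f"z = {z:.2f}, p = {p:.2g}")   # p falls far below 0.001 at these sizes
```

At these made-up sizes the gap clears the 0.1% bar easily; with far fewer patients, the very same percentages would not.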

The general conclusion: “High-risk patients with heart failure and cardiac arrest hospitalized in teaching hospitals had lower 30-day mortality when admitted during dates of national cardiology meetings.”

It’s an interesting study, but how do we interpret it? Here are a few observations:

  • It’s not an experiment – we can only demonstrate cause and effect with an experimental method that uses random assignment. That’s impossible here; we can’t randomly assign patients to have their heart problems during meeting dates or non-meeting dates. The study certainly demonstrates a correlation but doesn’t tell us what caused what. We can make educated guesses, of course, but we have to remember that we’re guessing.
  • The differences are fairly small – we often misinterpret the meaning of “statistically significant”. It sounds as though we found big differences between A and B; the differences, after all, are “significant”. But the term refers to probability, not the degree of difference. In this case, the chance of seeing a heart failure gap this large when no real difference exists is less than 0.1%; for cardiac arrest, less than 1.0%. That tells us the differences are probably real, not that they are big. The differences themselves were fairly small (the sketch after this list separates the two ideas).
  • The best guess is overtreatment – what causes these differences? The leading explanation is that cardiologists, when they’re not off at some meeting, are “overly aggressive” in their treatments. The New York Times quotes Anupam Jena: “…we should not assume … that more is better. That may not be the case.” Remember, however, that this is just a guess. We haven’t proven that overtreatment is the culprit.
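
One way to keep “probability” and “degree of difference” separate is to compute the effect sizes directly from the rates quoted above. This is a minimal sketch using only those published percentages; the “patients per additional survivor” figure is simply 1 divided by the absolute risk difference and is offered as an illustration, not a number from the study.

```python
def effect_sizes(rate_a, rate_b):
    """Summarize how big a mortality difference is, which is a separate question
    from how likely it is to have arisen by chance (the p-value)."""
    abs_diff = rate_b - rate_a        # absolute risk difference
    rel_risk = rate_b / rate_a        # relative risk, Scenario B vs. Scenario A
    per_survivor = 1 / abs_diff       # patients per one additional 30-day survivor
    return abs_diff, rel_risk, per_survivor

for label, a, b in [("heart failure", 0.175, 0.248), ("cardiac arrest", 0.591, 0.694)]:
    diff, rr, n = effect_sizes(a, b)
    print(f"{label}: absolute difference {diff:.1%}, "
          f"relative risk {rr:.2f}, ~{n:.0f} patients per additional survivor")
```

The p-values speak to chance; these numbers speak to size. The two can point in different directions, which is why the distinction matters.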

It’s a good study with interesting findings. But what should we do about them? Should cardiologists change their behavior based on this study? Translating a study’s findings into policies and protocols is a big jump. We’re moving from the scientific to the political. We need a heavy dose of critical thinking. What would you do?
