Healing Architecture

I feel better already.

We were in Barcelona last month with our two favorite architects, Julia and Elliot. Of course, we wanted to see the many buildings created by another favorite architect, Antoni Gaudí. A friend also clued us in that, if we wanted to see some really good architecture, we shouldn’t miss the Hospital de Sant Pau.

I enjoy discovering cities but had never thought about visiting hospitals as part of a tourism agenda. Hospitals seem very functional and efficient and somewhat drab. They also look pretty much alike whether you’re in Denver or Paris or Bangkok. They seem to be built for the benefit of the medical staff rather than the patients.

So I was very surprised to find that the Hospital de Sant Pau contained some of the most beautiful buildings I’ve ever seen. The hospital dates to 1401, but the major complex that we visited consisted of about a dozen buildings constructed between 1901 and 1930. The Catalan architect Lluís Domènech i Montaner designed the entire campus, which today claims to be the largest art nouveau site in Europe. The campus is like a fairy tale – every which way you turn reveals something new and stimulating. (My photo above barely does it justice.)

(The art nouveau campus was a working hospital until 2009, when it was replaced by a newer hospital – also an architectural gem – just beside it. The old campus is now a museum and cultural heritage site.)

As I wandered about the campus, I thought if I were sick, this is the kind of place I would want to be. It’s beautiful and inspiring. That led me to a different question: Can the architecture of a hospital affect the health of its patients? The answer seems to be: Yes, it can.

The earliest paper I found on healing and architecture was a 1984 study by Roger Ulrich published in Science magazine. The title summarizes the findings nicely: “View Through a Window May Influence Recovery from Surgery.” Ulrich studied the records of patients who had gall bladder surgery in a suburban Pennsylvania hospital between 1972 and 1981.

Ulrich matched patients based on whether they had a view of trees out the window or a view of a brick wall. He studied only those patients who had had surgery between May and October “…because the trees have foliage during those months.” He also matched the pairs based on variables such as age, gender, smoking status, etc. As much as possible, everything was equal except the view.

And the results? Patients “…with the tree view had shorter postoperative hospital stays, had fewer negative evaluative comments from nurses, took fewer moderate and strong analgesic doses, and had slightly lower scores for minor postsurgical complications.”
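
If you like to see the logic spelled out, here is a toy version of a matched-pairs comparison in Python. Everything in it is invented for illustration: the records, the fields, and the lengths of stay. Only the design idea comes from Ulrich’s study: match on everything you can, then compare the one thing that differs.

```python
from statistics import mean

# Each record: (view, age_bracket, sex, smoker, days_in_hospital).
# All values are invented; Ulrich's real data came from hospital records.
patients = [
    ("trees", "60s", "F", False, 7.2),
    ("wall",  "60s", "F", False, 8.7),
    ("trees", "50s", "M", True,  7.8),
    ("wall",  "50s", "M", True,  9.1),
    ("trees", "70s", "F", True,  8.0),
    ("wall",  "70s", "F", True,  8.9),
]

# Match each tree-view patient to a wall-view patient with the same
# age bracket, sex, and smoking status, so that (as far as possible)
# the view is the only systematic difference within a pair.
def covariates(p):
    return p[1:4]

trees = {covariates(p): p[4] for p in patients if p[0] == "trees"}
walls = {covariates(p): p[4] for p in patients if p[0] == "wall"}

extra_days = [walls[k] - trees[k] for k in trees if k in walls]
print(f"Mean extra days with a wall view: {mean(extra_days):.1f}")
```

With real data you would want far more pairs and a proper significance test, but the shape of the analysis is the same.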

Ulrich’s study (and others like it) has led to a school of thought called evidence-based design. Amber Bauer, writing in Cancer.Net, notes, “Like its cousin, evidence-based medicine, evidence-based design relies on research and data to create physical spaces that will help achieve the best possible outcome.”

Bauer quotes Dr. Ellen Fisher, the Dean of the New York School of Interior Design: “An environment designed using the principles of evidence-based design can improve the patient experience and enable patients to heal faster, and better.” Among other things, Dr. Fisher suggests, “A view to the outdoors and nature is very important to healing.” It’s Ulrich redux.

I’ll write more about evidence-based design and the impact of architecture on healing in the coming weeks. In the meantime, put a vase full of fresh flowers beside your bed. You’ll feel better in the morning.

Business School And The Swimmer’s Body Fallacy

He’s tall because he plays basketball.

Michael Phelps is a swimmer. He has a great body. Ian Thorpe is a swimmer. He has a great body. Missy Franklin is a swimmer. She has a great body.

If you look at enough swimmers, you might conclude that swimming produces great bodies. If you want to develop a great body, you might decide to take up swimming. After all, great swimmers develop great bodies.

Swimming might help you tone up and trim down. But you would also be committing a logical fallacy known as the swimmer’s body fallacy: confusing selection criteria with results.

We may think that swimming produces great bodies. But, in fact, it’s more likely that great bodies produce top swimmers. People with great bodies for swimming – like Ian Thorpe’s size 17 feet – are selected for competitive swimming programs. Once again, we’re confusing cause and effect.

Here’s another way to look at it. We all know that basketball players are tall. But would you accept the proposition that playing basketball makes you tall? Probably not. Height is not malleable. People grow to a given height because of genetics and diet, not because of the sports they play.

When we discuss height and basketball, the relationship is obvious. Tallness is a selection criterion for entering basketball. It’s not the result of playing basketball. But in other areas, it’s more difficult to disentangle selection factors from results. Take business school, for instance.

In fact, let’s take Harvard Business School or HBS. We know that graduates of HBS are often highly successful in the worlds of business, commerce, and politics. Is that success due to selection criteria or to the added value of HBS’s educational program?

HBS is well known for pioneering the case study method of business education. Students look at successful (and unsuccessful) businesses and try to ferret out the causes. Yet we know that, in evidence-based medicine, case studies are considered to be very weak evidence.

According to medical researchers, a case study is Level 3 evidence on a scale of 1 to 4, where 4 is the weakest. Why is it so weak? Partially because it’s a sample of one.

It’s also because of the survivorship bias. Let’s say that Company A has implemented processes X, Y, and Z and been wildly successful. We might infer that processes X, Y, and Z caused the success. Yet there are probably dozens of other companies that also implemented processes X, Y, and Z and weren’t so successful. Those companies, however, didn’t “survive” the process of being selected for a B-school case study. We don’t account for them in our reasoning.

(The survivorship bias is sometimes known as the LeBron James fallacy. Just because you train like LeBron James doesn’t mean that you’ll play like him).
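
A quick simulation makes the survivorship point concrete. In this sketch, the numbers are made up and the processes do nothing at all: every adopter succeeds with the same 10% probability. Case studies sampled only from the winners would still make X, Y, and Z look like a formula for success.

```python
import random

random.seed(42)

ADOPTERS = 1000     # companies that implemented processes X, Y, and Z
P_SUCCESS = 0.10    # identical for everyone: the processes do nothing

outcomes = [random.random() < P_SUCCESS for _ in range(ADOPTERS)]
survivors = sum(outcomes)

print(f"{survivors} of {ADOPTERS} adopters succeeded.")
print("Case studies drawn from the survivors see X, Y, Z -> success every time,")
print(f"while the {ADOPTERS - survivors} adopters who failed never enter the sample.")
```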

So we have some reasons to suspect the logical underpinnings of a case-based education method. Let’s revisit the question: Is the success of HBS graduates due to selection criteria or to the results of the HBS educational program? HBS is filled with brilliant professors who conduct great research and write insightful papers and books. They should have some impact on students, even if they use weak evidence in their curriculum. Shouldn’t they? Being a teacher, I certainly hope so. If so, then the success of HBS graduates is at least partially a result of the educational program, not just the selection criteria.

But I wonder …

The Doctor Won’t See You Now

Shouldn’t you be at a meeting?

If you were to have a major heart problem – acute myocardial infarction, heart failure, or cardiac arrest – which of the following scenarios would you prefer?

Scenario A — the failure occurs during the heavily attended annual meeting of the American Heart Association, when thousands of cardiologists are away from their offices; or

Scenario B — the failure occurs during a time when there are no national cardiology meetings and fewer cardiologists are away from their offices.

If you’re like me, you’ll probably pick Scenario B. If I go into cardiac arrest, I’d like to know that the best cardiologists are available nearby. If they’re off gallivanting at some meeting, they’re useless to me.

But we might be wrong. According to a study published in JAMA Internal Medicine (December 22, 2014), outcomes are generally better under Scenario A.

The study, led by Anupam B. Jena, looked at some 208,000 heart incidents that required hospitalization from 2002 to 2011. Of these, slightly more than 29,000 patients were hospitalized during national meetings. Almost 179,000 patients were hospitalized during times when no national meetings were in session.

And how did they fare? The study asked two key questions: 1) how many of these patients died within 30 days of the incident, and 2) were there differences between the two groups? Here are the results:

  • Heart failure – statistically significant differences – 17.5% of heart failure patients in Scenario A died within 30 days versus 24.8% in Scenario B. The probability of this difference arising by chance is less than 0.1%.
  • Cardiac arrest – statistically significant differences – 59.1% of cardiac arrest patients in Scenario A died within 30 days versus 69.4% in Scenario B. The probability of this arising by chance is less than 1.0%.
  • Acute myocardial infarction – no statistically significant differences between the two groups. (There were differences, but they may have been caused by chance.) A rough sketch of how such comparisons are tested appears just after this list.
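
For the curious, here is roughly how such a comparison gets tested, as a Python sketch. The 17.5% and 24.8% mortality rates are the ones reported above; the group sizes are hypothetical stand-ins, since the per-condition counts aren’t given here, so the exact p-value is only illustrative.

```python
from math import sqrt
from statistics import NormalDist

# Mortality rates as reported in the study; the group sizes are
# HYPOTHETICAL stand-ins (the post doesn't give per-condition counts),
# so the exact p-value below is only illustrative.
p_a, n_a = 0.175, 10_000   # heart failure, admitted during meetings
p_b, n_b = 0.248, 60_000   # heart failure, admitted at other times

# Standard two-proportion z-test: pool the rates under the null
# hypothesis of "no difference", then ask how surprising the
# observed gap would be if that hypothesis were true.
p_pool = (p_a * n_a + p_b * n_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
p_value = 2 * NormalDist().cdf(-abs(z))

print(f"absolute difference: {p_b - p_a:.1%}")   # the effect size
print(f"z = {z:.1f}, two-sided p = {p_value:.2g}")
```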

The general conclusion: “High-risk patients with heart failure and cardiac arrest hospitalized in teaching hospitals had lower 30-day mortality when admitted during dates of national cardiology meetings.”

It’s an interesting study but how do we interpret it? Here are a few observations:

  • It’s not an experiment – we can only demonstrate cause-and-effect using an experimental method with random assignment. But that’s impossible in this case. The study certainly demonstrates a correlation but doesn’t tell us what caused what. We can make educated guesses, of course, but we have to remember that we’re guessing.
  • The differences are fairly small – we often misinterpret the meaning of “statistically significant.” It sounds like we found big differences between A and B; the differences, after all, are “significant.” But the term refers to probability, not the degree of difference. In this case, we’re 99.9% sure that the differences in the heart failure groups were not caused by chance. Similarly, we’re 99% sure that the differences in the cardiac arrest groups were not caused by chance. But the differences themselves were fairly small. (The sketch after this list illustrates the distinction.)
  • The best guess is overtreatment – what causes these differences? The best guess seems to be that cardiologists – when they’re not off at some meeting – are “overly aggressive” in their treatments. The New York Times quotes Anupam Jena: “…we should not assume … that more is better. That may not be the case.” Remember, however, that this is just a guess. We haven’t proven that overtreatment is the culprit.
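
To see why “significant” describes certainty rather than size (the second bullet above), hold the heart failure difference fixed at 7.3 percentage points and vary the sample size. The counts here are invented; the point is that the p-value moves while the difference doesn’t.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p(p_a, n_a, p_b, n_b):
    """Two-sided p-value for a difference between two proportions."""
    pool = (p_a * n_a + p_b * n_b) / (n_a + n_b)
    se = sqrt(pool * (1 - pool) * (1 / n_a + 1 / n_b))
    return 2 * NormalDist().cdf(-abs((p_a - p_b) / se))

# The gap never moves; only our certainty about it does.
for n in (50, 500, 5_000):
    p = two_proportion_p(0.175, n, 0.248, n)
    print(f"n = {n:>5} per group: difference = 7.3 points, p = {p:.3f}")
```

At small samples the same 7.3-point gap looks like chance; at large samples it becomes overwhelming evidence. The gap itself never changes.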

It’s a good study with interesting findings. But what should we do about them? Should cardiologists change their behavior based on this study? Translating a study’s findings into policies and protocols is a big jump. We’re moving from the scientific to the political. We need a heavy dose of critical thinking. What would you do?
