
Hey sexy.
Why are we creative? Other animals don’t create much and yet they’re often very successful. The horseshoe crab, for instance, has been around for 450 million years. That’s a pretty good success story – I hope we humans can stick around that long. Yet nobody accuses horseshoe crabs of being creative.
Some researchers argue that creativity derives from competitive, evolutionary pressures. If we can develop creative solutions to problems, we can out-compete other animals. We might even out-compete other humans, like the Neanderthals.
Other researchers suggest that creativity has more to do with mate selection. The basic argument: creativity is sexy. Geoffrey Miller, for instance, suggests that creativity is not so very different from a peacock’s tail. It’s an advertisement to lure a mate.
If that’s true, it raises a different question: what kinds of creativity are the sexiest? Fortunately, Scott Barry Kaufman, Gregory Feist, and their colleagues looked into that very question in a recent article (“Who Finds Bill Gates Sexy?”) in the Journal of Creative Behavior. (You can find less technical descriptions of the study here and here.)
Feist had previously proposed that there are three forms of creativity and that they might vary in their degree of sexiness. In the current paper, Kaufman, Feist, and their colleagues tested this hypothesis on 119 men and 696 women using a variety of cognitive and personality tests. Feist’s three general forms of creativity are:
Which of the three do you find sexiest? In the study, both men and women found “… ornamental/aesthetic forms of creativity … more sexually attractive than applied/technological forms of creativity.”
Further, the sexiest creative behaviors included playing sports, playing in a band, making a clever remark, writing music, dressing in a unique style, and writing poetry. The least sexy creative behaviors included interior design, writing a computer program, creating a website, growing and gardening, creating scientific experiments, and creating ad campaigns.
In an earlier post, we learned that men who do more chores around the house have less sex than men who do fewer chores. With the new research, we now have a more complete picture of what’s sexy and what’s not. What to do? I don’t know about you but I’m going to sell the vacuum cleaner and start taking guitar lessons.

Shouldn’t you be at a meeting?
If you were to have a major heart problem – acute myocardial infarction, heart failure, or cardiac arrest – which of the following scenarios would you prefer?
Scenario A — the problem occurs during the heavily attended annual meeting of the American Heart Association, when thousands of cardiologists are away from their offices; or
Scenario B — the problem occurs at a time when there are no national cardiology meetings and fewer cardiologists are away from their offices.
If you’re like me, you’ll probably pick Scenario B. If I go into cardiac arrest, I’d like to know that the best cardiologists are available nearby. If they’re off gallivanting at some meeting, they’re useless to me.
But we might be wrong. According to a study published in JAMA Internal Medicine (December 22, 2014), outcomes are generally better under Scenario A.
The study, led by Anupam B. Jena, looked at some 208,000 heart incidents that required hospitalization from 2002 to 2011. Of these, slightly more than 29,000 patients were hospitalized during national meetings. Almost 179,000 patients were hospitalized during times when no national meetings were in session.
And how did they fare? The study asked two key questions: 1) how many of these patients died within 30 days of the incident? and 2) were there differences between the two groups? Here are the results:
The general conclusion: “High-risk patients with heart failure and cardiac arrest hospitalized in teaching hospitals had lower 30-day mortality when admitted during dates of national cardiology meetings.”
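If you’re curious how a comparison like that is made, here’s a minimal sketch in Python (using SciPy). The death counts below are placeholders I invented purely to show the mechanics; they are not the study’s actual figures.

```python
# A minimal sketch of comparing 30-day mortality between two admission
# periods. The death counts below are invented placeholders -- they are
# NOT the figures reported in the JAMA Internal Medicine study.
from scipy.stats import chi2_contingency

#                       [died within 30 days, survived]
during_meetings = [5_200, 23_800]      # hypothetical counts, ~29,000 admissions
outside_meetings = [35_000, 144_000]   # hypothetical counts, ~179,000 admissions

chi2, p_value, dof, expected = chi2_contingency([during_meetings, outside_meetings])

rate_meetings = during_meetings[0] / sum(during_meetings)
rate_outside = outside_meetings[0] / sum(outside_meetings)
print(f"30-day mortality during meetings:  {rate_meetings:.1%}")
print(f"30-day mortality outside meetings: {rate_outside:.1%}")
print(f"p-value for the difference:        {p_value:.3g}")
```

The study itself did considerably more than this – among other things, the conclusion above applies to high-risk patients in teaching hospitals – but the underlying question is the same: do the two groups’ 30-day mortality rates differ by more than chance alone would suggest?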
It’s an interesting study but how do we interpret it? Here are a few observations:
It’s a good study with interesting findings. But what should we do about them? Should cardiologists change their behavior based on this study? Translating a study’s findings into policies and protocols is a big jump. We’re moving from the scientific to the political. We need a heavy dose of critical thinking. What would you do?

Did he or didn’t he?
What can Serial teach us? How to think.
Suellen and I just drove 1,100 miles to visit Grandma for the holidays. Along the way, we listened to the 12 episodes of the podcast Serial, created by journalist Sarah Koenig.
Serial focuses on the murder of Hae-Min Lee, a high school senior who disappeared in Baltimore in January 1999. Hae’s ex-boyfriend, Adnan Syed, was convicted of murder and is now serving a life sentence. Adnan’s trial revolved around two things: 1) testimony from Jay Wilds, a high school student and small-time dope dealer who admitted to helping dispose of Hae’s body; and 2) cell phone records that noted the time and rough location of numerous calls between various students.
Shortly before she disappeared, Hae broke up with Adnan and started dating Don, a slightly older boy who worked at a local mall. When Hae’s body was discovered (under unusual circumstances), both Don and Adnan came under suspicion. Don had a good alibi, however, and Adnan did not. Jay’s testimony pointed at Adnan. Some of the cell phone records supported Jay’s story but others contradicted it. Adnan has consistently maintained his innocence.
Throughout the 12 episodes, Koenig tries to chase down loose ends and unanswered questions. She interviews everyone, including Adnan, multiple times. The case seemed pretty solid at first. Indeed, the jury deliberated only a few hours before returning a guilty verdict. Under Koenig’s relentless scrutiny, however, doubts begin to emerge. Could Adnan really have done it? Maybe. Maybe not.
I won’t give away the ending, but Serial is a fascinating look at police investigations and our criminal justice system. For me, it’s also a fascinating way to teach critical thinking. Here are a few of the critical thinking concepts (and biases) that I noted.
Satisficing – we find a solution that suffices to satisfy our needs and stop searching for other solutions (even though better ones might be available). Adnan argues that the police were satisficing when they decided that he was the chief suspect. They stopped looking for other suspects and only gathered evidence that pointed to Adnan. This led to …
Confirmation bias – Adnan notes that the prosecution only presented the cell phone evidence that supported Jay’s version of events. They ignored any cell record that contradicted Jay. That’s a pretty good example of confirmation bias – selecting the evidence based on what you already believe.
Stereotyping – Adnan is an American citizen who is also a Muslim of Pakistani heritage. Many Americans have heard that Muslim men can take multiple wives and that “honor killings” are still practiced in Pakistan. This seems to give Adnan a motive for killing Hae. But is it true or is it just a stereotype?
Projection bias – as we listen to the program, we might say, “Well, if I were going to kill someone, I wouldn’t do it that way. It’s not a smart way to commit murder. So maybe he didn’t do it.” But, then, what is smart about murder? On the other hand, Adnan comes across as smart, articulate, and resigned to his fate. We might project these feelings: “If I were wrongly convicted of murder, I would be angry and bitter. Since Adnan is neither, perhaps he really did do it”. Our projections could go either way. But note that our projections are really about us, not about Adnan. They tell us nothing about what really happened. To get to the truth, we have to ignore how we would do it.
Are there other critical thinking lessons in Serial? Probably. Listen to it and see what you think. I’m not sure I agree with Koenig’s conclusions but I sure love the way she led us there.

Is it drunk?
How do you know if someone is dead? Or drunk? Or dead drunk? Or, for that matter, how do you know if the turkey is done?
Suellen loves to cook and often asks me to check up on things. She might ask, “Honey, is the turkey done?” or “Are the madeleines ready to serve?” My standard response is “How would I know?” That’s not as flippant as it might sound. I’m really just asking for the procedure I need to perform to answer the question accurately.
The question revolves around a definition: what does it mean to be “done”? It also involves an operation that I need to perform. To test a turkey, the standard operation (in our house) is to stick a sharp fork in it: if the juices run clear, it’s done; if they’re cloudy, cook it some more. It’s an operation that anyone can perform and, no matter who performs it, the result is the same. Note that this is not a judgment call. It’s clear to all observers and reliable no matter who does the observing.
The procedure that Suellen prescribes is usually known as an operational definition: you define something by the outcome of a standard, consistent operation. Such definitions are fundamental to critical thinking. If they’re solid, you can build a logical argument on top of them. If they’re wobbly, it doesn’t matter how good the rest of your logic is – the foundation won’t support it.
How would you define drunkenness? You may know what it feels like to be drunk. You may also know what a person looks like (or smells like) when he’s drunk. But your view and mine may be different. You may think he’s drunk; I may think he’s a dork. Though we make the same observation, our conclusions are different. We don’t have a reliable, observable, objective test of drunkenness.
So, let’s operationalize drunkenness. We’ll ask the person to breathe into a breathalyzer. We’ll also agree on a number that defines drunkenness. In Colorado, that number is 0.08 grams of alcohol per deciliter of blood. The person breathes into the device and the reading comes out 0.09. The reading is observable, objective, and reliable. In Colorado, the person is legally drunk and should not drive a car.
Notice also that we choose the number by agreement. There’s nothing magical about 0.08 – we’ve simply agreed on it. (The risk of an accident at 0.08 is certainly higher than at, say, 0.04.) In Sweden, which aims to eliminate all traffic fatalities, the cutoff is much lower: 0.02. So it’s possible to be legally drunk in Sweden while being perfectly sober in Colorado.
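To see how mechanical an operational definition really is, here’s a minimal sketch in Python. The thresholds are the ones mentioned above; the function and variable names are simply my own illustrations, not any official standard.

```python
# A minimal sketch of an operational definition of "legally drunk".
# Thresholds are those mentioned in the post (grams of alcohol per
# deciliter of blood); the names here are illustrative only.

LEGAL_LIMITS = {
    "Colorado": 0.08,
    "Sweden": 0.02,
}

def is_legally_drunk(bac_reading: float, jurisdiction: str) -> bool:
    """Return True if the reading meets or exceeds the agreed-upon limit."""
    return bac_reading >= LEGAL_LIMITS[jurisdiction]

# The same observable reading, two different agreements:
reading = 0.05
print(is_legally_drunk(reading, "Colorado"))  # False -- below 0.08
print(is_legally_drunk(reading, "Sweden"))    # True  -- at or above 0.02
```

The judgment lives entirely in the measured number and the agreed threshold; no observer’s impression enters into it.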
What about the definition of death? You wouldn’t want to get that wrong. It used to be simple: just take the person’s pulse. If there is no pulse, the person is dead. It’s an operation that’s observable, objective, and reliable. However, the definition has changed in the recent past. We now focus more on brain activity than on pulse. We have new operations to perform.
When building a logical argument, it’s always good to probe the definitions. They dictate how we perceive phenomena and gather data. Having good definitions doesn’t necessarily mean that you’ll have a good argument. On the other hand, bad definitions necessarily lead to failed arguments.
And, how about those madeleines? I just can’t remember. I’ll ask my friend, Marcel.

Baloney or bologna?
I like to tell wildly improbable stories with a very straight face. I don’t want to embarrass anyone but it’s fun to see how persuasive I can be. Friends know to look to Suellen for verification. With a subtle shift of her head, she lets them know what’s real and what’s not.
My little charades have taught me that many people will believe very unlikely stories. That includes me, even though I think I have a pretty good baloney detector. So how do you tell what’s true and what’s not? Here are some clues.
Provenance — one of the first steps is to assess the information’s source. Did it come from a reliable source? Is the source disinterested – that is, does he or she have no interest in how the issue is resolved? Was the information derived from a study or is it hearsay? Does the source have a hidden agenda?
Assess the information – next, you need to assess the information itself. What are the assumptions that underlie it? Are there any logical fallacies? What inferences can you draw? Always remember to ask about what’s left out. What are you not seeing? What’s not being presented? Here’s a good example.
Assess the facts – as part of that assessment, be sure to investigate the facts. Are they really facts? How do you know? Sometimes “facts” are not really factual. Here’s an example.
Definition – as you assess the information, you also need to think about definitions. Definitions are fundamental – if they’re wrong, everything else is called into question. A good definition is objective, observable, and reliable. Here’s a not-so-good definition: “He looked drunk.” Here’s a better one: “His blood alcohol reading was 0.09.” The best definitions are operational – you perform a consistent, observable operation to create the definition.
Interpretation – we now know something about the information – where it comes from, how it’s defined and so on. How much can we interpret from that? Are we building an inductive argument – from specific cases to general conclusions? Or is it a deductive argument – from general principles to specific conclusions?
Causality – causality is part of interpretation and it’s very slippery. If variables A and B behave similarly, we may conclude that A causes B. But it could be a random coincidence. Or perhaps variable C causes both A and B. Or maybe we’ve got it backwards and B causes A. The only way to prove cause-and-effect is through the experimental method. If someone tells you that A causes B but hasn’t run an experiment, you should be suspicious. (For more detail, click here and here).
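Here’s a small, hypothetical simulation in Python (using NumPy) of the “variable C causes both A and B” case. The variables and numbers are mine, invented purely for illustration: A and B end up strongly correlated even though neither causes the other, and the correlation largely vanishes once C is held roughly constant.

```python
# A hypothetical simulation: C drives both A and B, so A and B correlate
# even though neither causes the other. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

c = rng.normal(size=n)                   # the hidden common cause
a = 2.0 * c + rng.normal(size=n)         # A depends on C, not on B
b = -1.5 * c + rng.normal(size=n)        # B depends on C, not on A

print("corr(A, B):", round(np.corrcoef(a, b)[0, 1], 2))   # strongly negative

# Hold C (roughly) fixed by looking only at a thin slice of its values:
mask = np.abs(c) < 0.05
print("corr(A, B | C ~ 0):", round(np.corrcoef(a[mask], b[mask])[0, 1], 2))  # near zero
```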
Replicability – if a study is done once and points to a particular conclusion, that’s good (assuming the definitions are solid, the methodology is sound, and so on). If it’s done multiple times by multiple people in multiple locations, that’s way better. Here’s a scale that will help you sort things out.
Statistics and probability – you don’t need to be a stats wizard to think clearly. But you should understand what statistical significance means. When we study something, we have various ways to estimate how likely it is that a result is due to chance. These are reported as probabilities. For instance, we might say that we found a difference between treatments A and B and that there’s only a 1% probability of seeing a difference that large if chance alone were at work. That’s not bad. But notice that we’re not saying that there are big differences between A and B. Indeed, the differences might be quite small. The differences are “significant” not in terms of size but in terms of probability.
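To make that last point concrete, here’s a small, hypothetical demonstration in Python (using NumPy and SciPy). All of the numbers are invented for the demo: two “treatments” differ by a trivially small amount, yet with a large enough sample the difference is highly “significant.”

```python
# A hypothetical demo: with a big enough sample, a tiny difference is
# "statistically significant" even though it is practically negligible.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200_000

treatment_a = rng.normal(loc=100.0, scale=15.0, size=n)
treatment_b = rng.normal(loc=100.3, scale=15.0, size=n)   # only 0.3 higher on average

t_stat, p_value = stats.ttest_ind(treatment_a, treatment_b)
print(f"difference in means: {treatment_b.mean() - treatment_a.mean():.2f}")
print(f"p-value: {p_value:.2e}")   # tiny p-value: very unlikely to be chance
# The difference is real but small -- "significant" in probability, not in size.
```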
When you’re trying to discover whether something is true or not, keep these steps and processes in mind. It’s good to be skeptical. If something sounds too good to be true, … well, that’s a good reason to be doubtful. But don’t be cynical. Sometimes miracles do happen.