Let’s say that I’m a vocal and vehement advocate of a flat tax. I’m adamant that implementing a flat tax will solve all the world’s problems. I read widely on flat tax theories … and I only read articles that agree with me. Why bother reading the opposite side? I already know they’re wrong.
What could be wrong with this? Well, two things. First, it could contribute to dementia. The theory is that reading only things I agree with reinforces existing connections in my brain without creating new ones, and creating new connections seems to be a good way to forestall dementia. So reading contrary opinions may actually improve brain health.
Second, reading only things I agree with could lead to more extreme positions and greater animosity between people of different political convictions. The body politic polarizes. Does that sound familiar? Can we blame it all on the Internet? Maybe.
Now let’s extend the example. Let’s say that I use a search engine to find new articles about flat taxes. Let’s also assume that the search engine is smart enough to recognize that I only select articles that are positive about flat taxes. So, the search engine does me a “favor” and only presents positive articles. Not only do I not read contrary opinions, I’m not even aware that such opinions exist. Further proof that I must be right!
Several years ago, Eli Pariser coined the term “the filter bubble” to describe this phenomenon. Pariser argues that we live in bubbles that filter out important information, including information that would counter our opinions. (I typically use the term “echo chamber” for the same effect.)
To my way of thinking (which may be filtered), this is a serious problem. Fortunately, according to a recent article in Technology Review, there may be ways to build recommendation engines that expose people to contrary ideas in such a way that they receive them with reasonably open minds.
Researchers in Barcelona built an engine based on the “idea that, although people may have opposing views on sensitive topics, they may also share interests in other areas. [The] recommendation engine … points these kinds of people towards each other based on their own preferences.”
According to the researchers who designed the engine, “We nudge users to read content from people who may have opposite views … while still being relevant according to their preferences.”
The engine creates a “data portrait” for users and compares them. Users who have similar data portraits except for the “sensitive issue” (the flat tax, in our case) are connected to each other. They find that they share many interests even though they’re on opposite sides of the sensitive issue. However, because there is some common ground, the people are more willing to listen to each other.
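The article doesn’t spell out how the researchers’ engine is implemented, but the matching idea can be sketched in a few lines. In this hypothetical version, a “data portrait” is just a dictionary of interest weights, and we pair users whose portraits are similar (by cosine similarity) but whose stances on the sensitive issue differ. The names, threshold, and data here are all illustrative assumptions, not the actual system.

```python
# Illustrative sketch only -- NOT the researchers' actual implementation.
# A "data portrait" is modeled as a dict of interest weights; each user
# also has a stance on one sensitive topic (e.g. the flat tax).
from math import sqrt

def similarity(a, b):
    """Cosine similarity between two interest-weight dicts."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def recommend_pairs(portraits, stances, threshold=0.5):
    """Pair users with similar portraits but opposite stances."""
    users = list(portraits)
    pairs = []
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            if (stances[u] != stances[v]
                    and similarity(portraits[u], portraits[v]) >= threshold):
                pairs.append((u, v))
    return pairs

portraits = {
    "alice": {"gardening": 0.9, "cycling": 0.8, "tax policy": 0.7},
    "bob":   {"gardening": 0.8, "cycling": 0.9, "tax policy": 0.6},
    "carol": {"opera": 1.0},
}
stances = {"alice": "pro", "bob": "anti", "carol": "anti"}

# alice and bob share interests but disagree on the flat tax,
# so they get matched; carol shares no interests with either.
print(recommend_pairs(portraits, stances))  # → [('alice', 'bob')]
```

The design point is that the disagreement is not what drives the match; the shared interests are, which is why the matched users are more willing to listen to each other.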
It seems like a promising start, and I’m going to try the recommendation engine to see what my data portrait looks like. I’ll report results as soon as I can. In the meantime, you can read the original research article here.