

From AI to UBI

We’ll fill it when you’re born.

In Neuromancer, his 1984 novel that kicked off the cyberpunk wave, William Gibson wrote about a new type of police force. Dubbed the Turing Police, the force was composed of humans charged with controlling non-human intelligence.

Humans had concluded that artificial intelligence – A.I. – would always seek to make itself more intelligent. Starting from an already advanced base, an A.I. could add new intelligence with startling speed. The more intelligence it added, the faster the pace. The growth of pure intelligence could only accelerate. Humans were no match. A.I. was a mortal threat. The Turing Police had to keep it under control.

Alas, the Turing Police were no match for gangsters, drug runners, body parts dealers, and national militaries. The most threatening A.I. in the novel was “military-grade ice” developed by the Chinese Army. Was Gibson prescient?

If the Turing Police couldn’t control A.I., I wonder if we can. Three years ago, I wrote a brief essay expressing surprise that a computer could grade a college essay better than I could. I thought of grading papers as a messy, fuzzy, subtle task and assumed that no machine could match my superior wit. I was wrong.

But I’m a teacher at heart and I assumed that the future would still need people like me to teach the machines. Again, I was wrong. Here’s a recent article from MIT Technology Review that describes how robots are teaching robots. Indeed, they’re even pooling their knowledge in “robot wikipedias” so they can learn even more quickly. Soon, robots will be able to tune in, turn on, and take over.

So, is there any future for me … or any other knowledge worker? Well, I still think I’m funnier than a robot. But if my new career in standup comedy doesn’t work out, I’m not sure that there’s any real need for me. Or you, for that matter.

That raises an existential question: are humans needed? We’ve traditionally defined “need” based on our ability to produce something. We produced goods and services that made our lives better and, therefore, we were needed. But if machines can produce goods and services more effectively than we can, are we still needed? Perhaps it’s time to re-define why we’re here.

Existential questions are messy and difficult to resolve. (Indeed, maybe it will take A.I. to figure out why we’re here.) While we’re debating the issue, we have a narrower problem to solve: the issue of wealth distribution. Traditionally, we’ve used productivity as a rough guide for distributing wealth. The more you produce, the more wealth flows your way. But what if nobody produces anything? How will we parcel out the wealth?

This question has led to the development of a concept that’s now generally known as Universal Basic Income or U.B.I. The idea is simple – the government gives everybody money. It doesn’t depend on need or productivity or performance or fairness or justice. There’s no concept of receiving only what you deserve or what you’ve earned. The government just gives you money.

Is it fair? It depends on how you define fairness. Is it workable? It may be the only workable scheme in an age of abundance driven by intelligent machines. Could a worldwide government administer the scheme evenhandedly? If the government is composed of humans, then I doubt that the scheme would be fair and balanced. On the other hand, if the government were composed of A.I.s, then it might work just fine.

 

Aristotle, Cyberpunk, and Extended Minds

How far does it go?

Aristotle argued against teaching people to read. If we can store our memories externally, he argued, we won’t need to store them internally, and that would be a tragic loss. We’ll stop training our brains. We’ll forget how to remember.

Aristotle was right, of course. Except for a few “memory athletes”, we no longer train our brains to remember. And our plastic brains may well have changed because of it. The brain of a Greek orator, trained in advanced memory techniques, was probably structurally different from our modern brains. What we learn (or don’t learn) shapes our physical brains.

Becoming literate was one step in a long journey to externalize our minds. Today, we call it the “extended mind” based on a 1998 paper by the philosophers Andy Clark and David Chalmers. Clark and Chalmers ask the simple question: “Where does our mind stop and the rest of the world begin?” The answer, they suggest, is “… active externalism, based on the active role of the environment in driving cognitive processes.”

If our minds extend beyond our skulls, where do they stop? I see at least three answers.

First, the mind extends throughout the rest of the body. As we’ve seen with embodied cognition, we think with our bodies as much as with our brains. The physical brain is within our skulls, but the mind seems to encompass our entire body.

Second, our minds extend to other people. We know that the people around us affect our behavior. (My mother warned me against running with a fast crowd.) It turns out that other people affect our thoughts as well, in direct and physical ways.

The physical mechanism for “thought transfer” is the mirror neuron – “…a neuron that fires both when an animal acts and when the animal observes the same action performed by another.” When we see another person do something, our mirror neurons fire as if we were performing the action ourselves. Other people’s actions and moods affect our thoughts. We can – and do – read minds.

The impact of our mirror neurons varies from person to person. The radio show Invisibilia recently profiled a woman so affected by other people’s thoughts that she could barely leave her own home. (You can find the podcast, called Entanglement, here.) The woman was so entangled with others that it was nearly impossible to draw a line between one mind and another. Perhaps we’re all entangled – each brain is like a synapse in a much larger brain.

Third, we can extend our minds through our external devices. We now have many ways to externalize our memories and, perhaps, even our entire personas. In Neuromancer, the novel that launched the cyberpunk wave, people save their entire personalities and memories on cassette tapes. (How quaint). They extend their minds not only spatially but also into the future.

Neuromancer is about the future, of course. What about today’s devices … and, especially, the world’s most popular device, the smartphone? As we extend our minds through smartphones, do we reduce the “amount of mind” that remains within us? Do smartphones make us dumb? Or, conversely, do they increase the total intelligence available to humanity – some of it in our brains and bodies and some of it in our external devices?

Good questions. Let’s talk about them tomorrow.
