
From AI to UBI

We’ll fill it when you’re born.

In Neuromancer, the 1984 novel that kicked off the cyberpunk wave, William Gibson wrote about a new type of police force. Dubbed the Turing Police, it was composed of humans charged with keeping non-human intelligence under control.

Humans had concluded that artificial intelligence – A.I. – would always seek to make itself more intelligent. Starting from advanced intelligence, an A.I. could add new intelligence with startling speed. The more intelligence it added, the faster the pace. The growth of pure intelligence could only accelerate. Humans were no match. A.I. was a mortal threat. The Turing Police had to keep it under control.

Alas, the Turing Police were no match for gangsters, drug runners, body parts dealers, and national militaries. The most threatening A.I. in the novel was “military-grade ice” developed by the Chinese Army. Was Gibson prescient?

If the Turing Police couldn’t control A.I., I wonder if we can. Three years ago, I wrote a brief essay expressing surprise that a computer could grade a college essay better than I could. I thought of grading papers as a messy, fuzzy, subtle task and assumed that no machine could match my superior wit. I was wrong.

But I’m a teacher at heart and I assumed that the future would still need people like me to teach the machines. Again, I was wrong. Here’s a recent article from MIT Technology Review that describes how robots are teaching robots. Indeed, they’re even pooling their knowledge in “robot wikipedias” so they can learn even more quickly. Soon, robots will be able to tune in, turn on, and take over.

So, is there any future for me … or any other knowledge worker? Well, I still think I’m funnier than a robot. But if my new career in standup comedy doesn’t work out, I’m not sure that there’s any real need for me. Or you, for that matter.

That raises an existential question: are humans needed? We’ve traditionally defined “need” based on our ability to produce something. We produced goods and services that made our lives better and, therefore, we were needed. But if machines can produce goods and services more effectively than we can, are we still needed? Perhaps it’s time to re-define why we’re here.

Existential questions are messy and difficult to resolve. (Indeed, maybe it will take A.I. to figure out why we’re here.) While we’re debating the issue, we have a narrower problem to solve: the issue of wealth distribution. Traditionally, we’ve used productivity as a rough guide for distributing wealth. The more you produce, the more wealth flows your way. But what if nobody produces anything? How will we parcel out the wealth?

This question has led to the development of a concept that’s now generally known as Universal Basic Income or U.B.I. The idea is simple – the government gives everybody money. It doesn’t depend on need or productivity or performance or fairness or justice. There’s no concept of receiving only what you deserve or what you’ve earned. The government just gives you money.

Is it fair? It depends on how you define fairness. Is it workable? It may be the only workable scheme in an age of abundance driven by intelligent machines. Could a worldwide government administer the scheme evenhandedly? If the government is composed of humans, then I doubt that the scheme would be fair and balanced. On the other hand, if the government were composed of A.I.s, then it might work just fine.

 
