Do men and women think differently? If they do, who should develop artificial intelligence? As we develop AI, should we target “feminine” intelligence or “masculine” intelligence? Do we have enough imagination to create a non-gendered intelligence? What would that look like?
First of all, do the genders think differently? According to Scientific American, our brains are wired differently. As you know, our brains have two hemispheres. Male brains have more connections within each hemisphere as compared to female brains. By contrast, female brains have more connections between hemispheres.
Men, on average, are better at connecting the front of the brain with the back of the brain while women are better at connecting left and right hemispheres. How do these differences influence our behavior? According to the article, “…male brains may be optimized for motor skills, and female brains may be optimized for combining analytical and intuitive thinking.”
Women and men also have different proportions of white and gray matter in their brains. (Click here). Gray matter is “…primarily associated with processing and cognition…” while white matter handles connectivity. The two genders are the same (on average) in general intelligence, so the differences in the gray/white mix suggest that there are two different ways to get to the same result. (Click here). Women seem to do better at integrating information and with language skills in general. Men seem to do better with “local processing” tasks like mathematics.
Do differences in function drive the difference in structure or vice-versa? Hard to tell. Men have a higher percentage of white matter and also have somewhat larger brains compared to women. Perhaps men need more white matter to make connections over longer distances in their larger brains. Women have smaller heads and may need less white matter to make the necessary connections — just like a smaller house would need less electrical wire to connect everything. Thus, a larger proportion of the female brain can be given over to gray matter.
So men and women think differently. That’s not such a surprise. As we look ahead to artificial intelligence, which model should we choose? Should we emphasize language skills, similar to the female brain? Or local processing skills, similar to the male brain? Should we emphasize processing power or information integration?
Perhaps we could do both, but I wonder how realistic that is. I try to imagine what it would be like to think as a woman but I find it difficult to wrap my head around the concept. As a feminist might say, I just don’t get it. I have to imagine that a woman trying to think like a man would encounter similar difficulties.
Perhaps the best way to develop AI would involve mixed teams of men and women. Each gender could contribute what it does best. But that’s not what’s happening today. As Jack Clark points out, “Artificial Intelligence Has a ‘Sea of Dudes’ Problem.” Clark is mainly writing about data sets, which developers use to teach machines about the world. If men choose all the data sets, the resulting artificial intelligence will be biased in the same ways that men are. Yet male developers of AI outnumber females by a margin of about eight-to-one. Without more women, we run the risk of creating male chauvinist machines. I can just hear my women friends saying, “Oh my God, no!”
What’s the difference between an art and a craft?
A traditional definition focuses on differences in expectations and outcomes. With a craft, we know precisely what the outcome will be, even before we start. We have a set of instructions and, if followed faithfully, the outcome is guaranteed.
By contrast, an artist doesn’t know what the outcome will be. Creating an artwork involves exploration, doubt, questioning, trial-and-error, and no small amount of anxiety. An artist explores the unknown and aims to give us new insights. A good artwork may not be beautiful in a classic sense, but it is always imaginative. A craftsman, on the other hand, creates the expected and delivers beauty and pleasure through execution more than imagination.
I thought of these distinctions the other day when I toured a major new exhibition, Women of Abstract Expressionism*, at the Denver Art Museum. The exhibition highlights a dozen leaders of the abstract expressionist movement and their work from roughly 1945 to 1960. Here’s how two of the artists describe the creative process:
It occurs to me that the distinction between art and craft also applies to organizational development. Change management is a craft. Organizational transformation is an art.
We often invoke change management when we begin a concise and well-delineated project. We understand the boundaries and the players. We move through well-defined phases that we can measure objectively. We expect changes to occur between people – perhaps with new reporting structures and alignments. Change management is not easy to master but it seems to me that it is a craft. We often celebrate the end result. We can do that precisely because it is a craft – we know when the process ends.
Organizational transformation is much more like an art. When we seek to transform an organization’s culture, we have only a fuzzy idea of where we’re going. Milestones exist but they’re not well defined. Transformation requires changes within people rather than only between people. We can’t see those changes; nor can we measure them. If we try to measure the unmeasurable, we’ll go off course. Like any other art, transformation involves exploration, doubt, questioning, trial-and-error, and no small amount of anxiety. Paraphrasing Grace Hartigan, “Eventually, the organization tells you what it wants to be.” The secret to success is listening, not measuring.
I sometimes ask my artist friends how they know when a piece they’re working on is finished. None of them has very good answers. Some say they “just know”. Others say that they just get tired of it. Others say that it’s never finished. Whenever they see it again, they’re tempted to make “just a few minor changes.”
It occurs to me that I’ve never been to a party to celebrate an organizational transformation. Perhaps it’s because we just don’t know when the transformation is finished. It’s an art, not a craft.
*The Women of Abstract Expressionism exhibit is both superb and unexpected. The paintings are exciting and energetic. The painters are almost anonymous. This is the first major exhibition – anywhere in the world – of the women who energized the abstract expressionist movement. That it happened in my hometown makes me more than just a little bit proud. You can see it – and should see it – until September 25th.
The painting illustrating this article is Grace Hartigan’s The King Is Dead, from 1960.
We live in an individualistic culture and I wonder if that doesn’t bias our understanding of how we behave and think. For instance, we view humans as self-contained and self-sufficient units. There’s a clear boundary between one human and another. Similarly, there’s a clear boundary between each individual and the environment around us. We are separate from each other and from the world.
But what if that’s not the case? What if humans are entangled with each other in much the same way that quantum particles are entangled? Mirror neurons are still somewhat mysterious but what if they allow us to entangle our thoughts with those of other people? Similarly, we’ve learned in the recent past that we think with our bodies as much as our brains. What if our thinking actually extends beyond our bodies and interacts with other thoughts?
Similarly, what if the environment is not separate from us but part of us? What if the environment shapes us much like a river shapes a stone? In a sense, it would mean that we’re not entities but processes. We’re not things but actions. The Buddhists might be right: impermanence is the very essence of our being.
If these things are true, it may give us a key to understanding consciousness. Defining consciousness is known as the “hard problem”. Neuroscientists often phrase the question simply: “What is consciousness?” What if that’s the wrong question? The question implies that consciousness is a thing. It also suggests that consciousness exists somewhere, most likely in the brain. But what if consciousness is not a thing but an action? What if it’s something we do as we interact with the environment? What if we’re swimming in consciousness?
You may have guessed by now that I’ve been reading the works of the philosopher Alva Noë. (See here and here). Noë studies perception and consciousness and tries to understand how they are entangled. Noë states flatly that, “Consciousness is not something that happens in us. It is something we do.”
Noë goes on to compare consciousness to a dancer, who is influenced by myriad external factors, including the music, the dance floor, and her partner. Dancing is not within the dancer. Noë writes that, “The idea that the dance is a state of us, inside of us, or something that happens in us is crazy. Our ability to dance depends on all kinds of things going on inside of us, but that we are dancing is fundamentally an attunement to the world around us.” Similarly, Noë suggests, consciousness is not within us, rather it is “…a way of being part of a larger process.”
Noë similarly argues that consciousness is not located in a given place. The analogy is life itself. If we look at other people, we can tell that they’re alive. But where is life located in them? We quickly realize that we don’t think of life as a thing that is located in a certain place. Life is not a thing but a dynamic. Noë argues that the same is true of consciousness.
Noë also suggests that cognitive scientists are pursuing the wrong analogy – the computer. This “distinctively nonbiological approach” converts consciousness into a mere computational function that is “…very much divorced from the active life of the animal.” The active life – and engagement with the world around us – creates consciousness in a way that a “brain in a vat” could never do.
What’s it all mean? We’re looking for consciousness in all the wrong places. As Noë concludes, “…the idea that you are your brain or that the brain alone is sufficient for consciousness is really just a mantra, and … there is no reason to believe it.”
We didn’t really understand the human heart until the mid-17th century, when engineers developed vacuum pumps to move water out of mines. Anatomists realized that such pumps provided an excellent analogy for what the heart does and how it does it. As technology advanced, we used it to learn about our own biology.
In the 20th century, with the advent of the digital computer, we humans reached a similar conclusion-by-analogy: computers show us how our brains work. In the computer, we see elementary logic, various switches flipping on and off, and memory cells that hold information in its most elemental form – binary digits. Perhaps our brains work the same way.
The brain-as-computer analogy has never been perfect, however. The computer, for instance, has a central processing unit (CPU) that manages pretty much everything. The brain doesn’t appear to have an analogous organ. Rather, human thinking seems to be diffuse and decentralized. Indeed, much of our thinking seems to occur outside our brain; the mind is, apparently, much bigger than the brain. Similarly, we can precisely locate a “memory” in a computer. No such luck with a human brain. Memories are elusive and difficult to pinpoint.
Further, the brain is plastic in ways that computers are not. For instance, a good chunk of our brainpower is given over to visual processing. If I go blind, however, my brain can redeploy that processing power to other tasks. The brain can analyze its own limitations and change its functions in ways that computers can’t.
Given the shortcomings of the brain-as-computer analogy, perhaps it’s time to propose a new one. Having absorbed a healthy dose of Daniel Dennett (see here and here), I’d like to propose a simple alternative: the brain functions much like the United States of America.
That may sound bizarre but let’s go through the reasoning. First, Dennett points out that brain cells, as living organisms, can have their own agendas in ways that silicon cannot. Yes, brain cells may switch on and off as electricity pulses through them, but they could conceivably do other things as well. Perhaps they can plot and plan. Perhaps they can cooperate – or collude, depending on how you look at it. Perhaps they can aim to do things that are in their best interests, as opposed to the interests of the overall organism.
Second, Dennett notes that all biological creatures descended from single-celled organisms. Once upon a time, single-celled organisms were free to do as they pleased. Some chose to associate with similar organisms to form multi-celled organisms. In doing so, cells started to specialize and create communities with much greater potential. However, they also gave up some of their primordial freedom. They worked not just for themselves but also for the organism as a whole. Perhaps our cells have some “memory” of that primordial freedom and some desire to return to it. Perhaps some of our cells just want to go feral.
And how is this like the United States? The original colonies were free to do as they pleased. When they joined together, they gave up some freedom and created a community with much greater potential. We assume that each state works for the good of the union. But each state also has strong incentives to work for its own good, even if doing so undermines the union. Similarly, each state has a “memory” of its primordial freedom and an inchoate desire to return there. Indeed, states’ rights are jealously guarded.
Let’s assume, for a moment, that we have a microscope as big as the solar system. When we examine the United States, we see 50 cells. Each cell seems to be similar in function and process. We might assume that they always function for the good of the whole. But when we look closer, we see that each cell has its own agenda. Some cells (Texas?) may want to go feral to recapture their primordial freedom. Other cells are jockeying for position and advantage. Some are forming alliances and coalitions with like-minded cells to accomplish their aims. Red cells seem to have different values and processes than blue cells.
Could our brains really be as chaotic as the good old USA? It’s possible. If nothing else, such an analogy frees up our thinking. We’re no longer in a silicon straitjacket. We recognize the possibility that living cells may have complex agendas. We start to see possibilities that we were previously blind to. I would write more but I suspect that some of my neurons have just gone feral.
In his 1984 novel Neuromancer, which kicked off the cyberpunk wave, William Gibson wrote about a new type of police force. Dubbed the Turing Police, the force was composed of humans charged with the task of controlling non-human intelligence.
Humans had concluded that artificial intelligence – A.I. – would always seek to make itself more intelligent. Starting with advanced intelligence, an A.I. implementation could add new intelligence with startling speed. The more intelligence it added, the faster the pace. The growth of pure intelligence could only accelerate. Humans were no match. A.I. was a mortal threat. The Turing Police had to keep it under control.
Alas, the Turing Police were no match for gangsters, drug runners, body parts dealers, and national militaries. The most threatening A.I. in the novel was “military-grade ice” developed by the Chinese Army. Was Gibson prescient?
If the Turing Police couldn’t control A.I., I wonder if we can. Three years ago, I wrote a brief essay expressing surprise that a computer could grade a college essay better than I could. I thought of grading papers as a messy, fuzzy, subtle task and assumed that no machine could match my superior wit. I was wrong.
But I’m a teacher at heart and I assumed that the future would still need people like me to teach the machines. Again, I was wrong. Here’s a recent article from MIT Technology Review that describes how robots are teaching robots. Indeed, they’re even pooling their knowledge in “robot wikipedias” so they can learn even more quickly. Soon, robots will be able to tune in, turn on, and take over.
So, is there any future for me … or any other knowledge worker? Well, I still think I’m funnier than a robot. But if my new career in standup comedy doesn’t work out, I’m not sure that there’s any real need for me. Or you, for that matter.
That raises an existential question: are humans needed? We’ve traditionally defined “need” based on our ability to produce something. We produced goods and services that made our lives better and, therefore, we were needed. But if machines can produce goods and services more effectively than we can, are we still needed? Perhaps it’s time to re-define why we’re here.
Existential questions are messy and difficult to resolve. (Indeed, maybe it will take A.I. to figure out why we’re here). While we’re debating the issue, we have a narrower problem to solve: the issue of wealth distribution. Traditionally, we’ve used productivity as a rough guide for distributing wealth. The more you produce, the more wealth flows your way. But what if nobody produces anything? How will we parcel out the wealth?
This question has led to the development of a concept that’s now generally known as Universal Basic Income or U.B.I. The idea is simple – the government gives everybody money. It doesn’t depend on need or productivity or performance or fairness or justice. There’s no concept of receiving only what you deserve or what you’ve earned. The government just gives you money.
Is it fair? It depends on how you define fairness. Is it workable? It may be the only workable scheme in an age of abundance driven by intelligent machines. Could a worldwide government administer the scheme evenhandedly? If the government is composed of humans, then I doubt that the scheme would be fair and balanced. On the other hand, if the government were composed of A.I.s, then it might work just fine.