
Will AI Be The End Of Men?

Can you say “empathy”?

A little over two years ago, I wrote an article called Male Chauvinist Machines. At the time, men outnumbered women in artificial intelligence development roles by about eight to one. A more recent report suggests the ratio is now about three to one.

The problem is not just that men outnumber women. Data mining also presents an issue. If machines mine data from the past (what other data is there?), they may well learn to mimic biases from the past. Amazon, for instance, recently found that its AI recruiting system was biased against women. The system mined data from previous hires and learned that resumés with the word “woman” or “women” were less likely to be selected. Assuming that this was the “correct” decision, the system replicated it.
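To see the mechanics, here is a minimal sketch in Python. Everything in it is hypothetical – toy resumés, toy labels, and an off-the-shelf scikit-learn classifier standing in for whatever Amazon actually built – but it shows how a model trained on biased historical decisions learns to penalize a gendered word.

```python
# Illustrative sketch only -- not Amazon's system. Hypothetical resumé
# snippets and past hire/reject labels (1 = hired). In this toy history,
# resumés mentioning "women" were rarely selected.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club",             # hired
    "women's chess club captain",        # rejected
    "led university robotics team",      # hired
    "women's coding society president",  # rejected
]
hired = [1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on the token "women" comes out negative: the model has
# faithfully reproduced the bias baked into its training data.
weight = model.coef_[0][vec.vocabulary_["women"]]
print(f"weight for 'women': {weight:.2f}")
```

The model isn't malicious; it is simply optimizing against a history that was biased.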

Might men create artificial intelligence systems that encode and perpetuate male chauvinism? It’s possible. It’s also possible that the emergence of AI will mean the “end of men” in high-skill, cognitively demanding jobs.

That’s the upshot of a working paper recently published by the National Bureau of Economic Research (NBER) titled “The ‘End of Men’ and Rise of Women in the High-Skilled Labor Market”.

The paper documents a shift in hiring in the United States since 1980. During that time the probability that a college-educated man would be employed in a

“… cognitive/high wage occupation has fallen. This contrasts starkly with the experience for college-educated women: their probability of working in these occupations rose.”

The shift is not because all the newly created high-salary, cognitively demanding jobs are in traditionally female industries. Rather, the shift is “…accounted for by a disproportionate increase in the female share of employment in essentially all good jobs.” There seems to be a pronounced female bias in hiring for cognitive/high-wage positions, also known as “good jobs”.

Why would that be? The researchers posit that “…women have a comparative advantage in tasks requiring social and interpersonal skills….” So, if industry is hiring more women into cognitive/high-wage jobs, it may indicate that such jobs increasingly require social skills, not just technical skills. The researchers specifically state that:

“… our hypothesis is that the importance of social skills has become greater within high-wage/cognitive occupations relative to other occupations and that this … increase[s] the demand for women relative to men in good jobs.”

The authors then present 61 pages on hiring trends, shifting skills, job content requirements, and so on. Let’s just assume for a moment that the authors are correct – that there is indeed a fundamental shift in the good jobs market and an increasing demand for social and interpersonal skills. What does that bode for the future?

We might want to differentiate here between “hard skills” and “soft skills” – the difference, say, between physics and sociology. The job market perceives men to be better at hard skills and women to be better at soft skills. Whether these differences are real or merely perceived is a worthy debate – but the impact on industry hiring patterns is hard to miss.

How will artificial intelligence affect the content of high-wage/cognitive occupations? It’s a fair bet that AI systems will displace hard skills long before they touch soft skills. AI can consume data and detect patterns far more skillfully than humans can. Any process that is algorithmic – including disease diagnosis – is subject to AI displacement. On the other hand, AI is not so good at empathy and emotional support.

If AI is better at hard skills than soft skills, then it will disproportionately displace men in good jobs. Women, by comparison, should find increased demand (proportionately and absolutely) for their skills. This doesn’t prove that the future is female. But the future of good jobs may be.

Big Daddy/Big Data

Big data repository.

Are older people wiser? If so, why?

Some societies believe strongly that older people are wiser than younger people. In such societies, before a family or community makes a big decision, it consults its elders. The elders’ advice may not be the final word, but it is highly influential. Further, elders always have a say in important matters. Nobody would think of excluding them.

Why would elders be wiser than others? One theory suggests that older people have simply forgotten more than younger people. They tend to forget the cripcrap details and remember the big picture. They don’t sweat the small stuff. They can see the North Star, focus on it, and guide us toward it without being distracted. (Click here for more).

For similar reasons, you can often give better advice to friends than you can give to yourself. When you consider your friend’s challenges and issues, you see the forest. When you consider your own challenges and issues, you not only see the trees, you actually get tangled up in the underbrush. For both sets of advisors – elders and friends – seeing the bigger picture leads to better advice. The way you solve a problem depends on the way you frame it.

According to this theory, it’s the loss of data that makes older people wiser. Is that all there is to it? Not according to Seth Stephens-Davidowitz, the widely acclaimed master of big data and aggregated Google searches. Stephens-Davidowitz has written extensively on the value of big data in illuminating how we behave and what we believe. He notes that companies and government agencies are increasingly trawling big data sets to spot patterns and predict – and perhaps nudge – human behaviors.

What does big data have to do with the wisdom of the aged? Well … as Stephens-Davidowitz points out, what’s an older person but a walking, talking big data set? Our senior citizens have more experiences, data, information, stories, anecdotes, old wives’ tales, quotes, and fables than anybody else. And – perhaps because they’ve forgotten the cripcrap details – they can actually retrieve the important stuff. They provide a deep and useful data repository with a friendly, intuitive interface.

As many of my readers know, my wife and I recently became grandparents. One of the pleasures of grandparenting is choosing the name you’d like to be known by. I had thought of asking our grandson to call me Big Daddy. But I think I’ve just come up with a better name. I think I’ll ask him to call me Big Data.

Seth Stephens-Davidowitz is probably best known for writing the book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are. He’s also a regular contributor to the op-ed pages of the New York Times. I heard his idea about seniors-as-big-data on an episode of the podcast Hidden Brain. (Click here). I mentioned his work a few years ago in an article on baseball and brand loyalty. (Click here). He’s well worth a read.

Are Machines Better Than Judges?

No bail for you.

Police make about 10 million arrests every year in the United States. In many cases, a judge must then make a jail or bail decision. Should the person be jailed until the trial or can he or she be released on bail? The judge considers several factors and predicts how the person will behave. There are several relevant outcomes if the person is released:

  1. He or she will show up for trial and will not commit a crime in the interim.
  2. He or she will commit a crime prior to the trial.
  3. He or she will not show up for the trial.

A person in Category 1 should be released. People in Categories 2 and 3 should be jailed. Two possible error types exist:

Type 1 – a person who should be released is jailed.

Type 2 – a person who should be jailed is released.
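In code, the bookkeeping is simple. Here’s a minimal sketch, with entirely hypothetical defendants, of how the two error types could be tallied once each person’s true category and the decision made are known:

```python
# Hypothetical data: tallying the two error types from jail/bail decisions.
# category: 1 = would appear and stay crime-free; 2 = would commit a crime;
# 3 = would fail to appear. decision: "jail" or "bail".
defendants = [
    {"category": 1, "decision": "jail"},  # Type 1 error
    {"category": 1, "decision": "bail"},  # correct
    {"category": 2, "decision": "bail"},  # Type 2 error
    {"category": 3, "decision": "jail"},  # correct
]

type1 = sum(d["category"] == 1 and d["decision"] == "jail" for d in defendants)
type2 = sum(d["category"] in (2, 3) and d["decision"] == "bail" for d in defendants)
print(f"Type 1 errors: {type1}, Type 2 errors: {type2}")
```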

Jail, bail, and criminal records are public information, and researchers can aggregate them at massive scale. Jon Kleinberg, a professor of computer science at Cornell, and his colleagues did exactly that and produced a National Bureau of Economic Research working paper earlier this year.

Kleinberg and his colleagues asked an intriguing question: Could a machine-learning algorithm, using the same information available to judges, reach different decisions than the human judges and reduce either Type 1 or Type 2 errors or both?

The simple answer: yes, a machine can do better.

Kleinberg and his colleagues first studied 758,027 defendants arrested in New York City between 2008 and 2013. The researchers developed an algorithm and used it to decide which defendants should be jailed and which should be bailed. There are several different questions here:

  1. Would the algorithm make different decisions than the judges?
  2. Would the decisions provide societal benefits by either:
    1. Reducing crimes committed by people who were erroneously released?
    2. Reducing the number of people held in jails unnecessarily?

The answer to the first question is very clear: the algorithm produced decisions that varied in important ways from those that the judges actually made.

The algorithm also produced significant societal benefits. If we wanted to hold the crime rate the same, we need only have jailed 48.2% of the people who were actually jailed. In other words, 51.8% of those jailed could have been released without committing additional crimes. On the other hand, if we kept the number of people in jail the same – but changed the mix of who was jailed and who was bailed – the algorithm could reduce the number of crimes committed by those on bail by 75.8%.
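The logic of the “hold one quantity constant, improve the other” comparison can be sketched in a few lines. This toy version assumes we somehow know each defendant’s risk score and whether they would offend if released; the actual paper goes to great lengths to handle the fact that outcomes are never observed for jailed defendants.

```python
# Hedged sketch of the "hold the release rate constant" comparison.
# Risk scores and would_offend flags are hypothetical.
def crimes_if_released(defendants, n_release):
    """Release the n_release lowest-risk defendants; count resulting crimes."""
    ranked = sorted(defendants, key=lambda d: d["risk"])
    return sum(d["would_offend"] for d in ranked[:n_release])

population = [
    {"risk": 0.05, "would_offend": 0},
    {"risk": 0.10, "would_offend": 0},
    {"risk": 0.40, "would_offend": 1},
    {"risk": 0.80, "would_offend": 1},
]

judges_released = 3  # suppose judges released 3 of these 4 defendants
# Crime count if the algorithm picks *which* 3 to release instead:
print(crimes_if_released(population, judges_released))
```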

The researchers replicated the study using nationwide data on 151,461 felons arrested between 1990 and 2009 in 40 urban counties scattered around the country. For this dataset, “… the algorithm could reduce crime by 18.8% holding the release rate constant, or holding the crime rate constant, the algorithm could jail 24.5% fewer people.”

Given the variables examined, the algorithm appears to make better decisions, with better societal outcomes. But what if the judges are acting on other variables as well? What if, for instance, the judges are considering racial information and aiming to reduce racial inequality? The algorithm would not be as attractive if it reduced crime but also exacerbated racial inequality. The researchers studied this possibility and found that the algorithm actually produces better racial equity. Most observers would consider this an additional societal benefit.

Similarly, the judges may have aimed to reduce specific types of crime – like murder or rape – while de-emphasizing less violent crime. Perhaps the algorithm reduces overall crime but increases violent crime. The researchers probed this question and, again, found no such trade-off. The algorithm did a better job of reducing all crimes, including very violent crimes.

What’s it all mean? For very structured predictions with clearly defined outcomes, an algorithm produced by machine learning can produce decisions that reduce both Type 1 and Type 2 errors as compared to decisions made by human judges.

Does this mean that machine algorithms are better than human judges? At this point, all we can say is that algorithms produce better results only when judges make predictions in very bounded circumstances. As the researchers point out, most decisions that judges make do not fit this description. For instance, judges regularly make sentencing decisions, which are far less clear-cut than bail decisions. To date, machine-learning algorithms are not sufficient to improve on these kinds of decisions.


(This article is based on NBER Working Paper 23180, “Human Decisions and Machine Predictions”, published in February 2017. The working paper is available here and here. It is copyrighted by its authors, Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. The paper was also published, in somewhat modified form, as “Human Decisions and Machine Predictions” in The Quarterly Journal of Economics on 26 August 2017. The paper is behind a paywall but the abstract is available here).

Jevons Paradox and The Future of Employment

My new teaching assistant.

Four years ago, I wrote a somewhat pessimistic article about Jevons paradox. The 19th-century British economist William Stanley Jevons noted that, as energy-efficient innovations are developed and deployed, energy consumption goes up rather than down. The reason: as energy grows cheaper, we use more of it. We find more and more places to apply energy-consuming devices.

Three years ago, I wrote a somewhat pessimistic article about the future of employment. I argued that smart machines would either (1) augment knowledge workers, making them much more productive, or (2) replace knowledge workers altogether. Either way, we would need far fewer knowledge workers.

What if you combine these two rather pessimistic ideas? Oddly enough, the result is a rather optimistic idea.

Here’s an example drawn from a recent issue of The Economist. The process of discovery is often invoked in legal disputes between companies or between companies and government agencies. Each side has the right to inspect the other side’s documents, including e-mails, correspondence, web content, and so on. In complex cases, each side may need to inspect massive numbers of documents to decide which documents are germane and which are not. The actual inspecting and sorting has traditionally been done by highly trained paralegals – lots of them.

As you can imagine, the process is time-consuming and error-prone. It’s also fairly easy to automate through deep learning. Artificial neural networks (ANNs) can study examples of which documents are germane and which are not and learn to distinguish between the two. Just turn suitably trained ANNs loose on boxes and boxes of documents and you’ll have them sorted in no time, with fewer errors than humans would make.
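Here’s a minimal sketch of the idea, with hypothetical documents and labels, using scikit-learn’s small neural network as a stand-in for a production-grade ANN:

```python
# Hypothetical discovery documents labeled germane (1) or not (0).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

docs = [
    "Q3 pricing agreement with distributor",  # germane
    "email re: merger negotiation timeline",  # germane
    "office holiday party signup sheet",      # not germane
    "cafeteria menu for next week",           # not germane
]
germane = [1, 1, 0, 0]

vec = TfidfVectorizer()
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(vec.fit_transform(docs), germane)

# Sort a new box of documents into germane / not germane.
new_docs = ["amended pricing agreement draft", "parking lot repaving notice"]
print(net.predict(vec.transform(new_docs)))
```

A real system would train on thousands of reviewer-labeled documents, but the workflow is the same: label a sample, train, then let the network sort the rest.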

In other words, artificial neural networks can do a better job than humans at lower cost and in less time. So this should be bad news for paralegal employment, right? The number of paralegals must be plummeting, correct? Actually no. The Economist tells us that paralegal employment has actually risen since ANNs were first deployed for discovery processes.

Why would that be? Jevons paradox. The use of ANNs has dramatically lowered the obstacles to using the discovery process. Hence, the discovery process is used in many more situations. Each discovery process uses fewer paralegals but there are many more discovery processes. The net effect is greater – not lesser – demand for paralegals.
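The arithmetic is worth making explicit. With purely hypothetical numbers: if automation cuts the paralegals needed per case from 20 to 4, but cheaper discovery means seven times as many cases, total demand rises.

```python
# Back-of-envelope Jevons arithmetic (all numbers hypothetical).
per_case_before, cases_before = 20, 100
per_case_after, cases_after = 4, 700  # cheaper discovery -> far more cases

print(per_case_before * cases_before)  # 2000 paralegal positions before
print(per_case_after * cases_after)    # 2800 paralegal positions after
```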

I think of this as good news. As the cost of a useful process drops, the process itself – spam filtering, document editing, image identification, quality control, etc. – can be deployed to many more activities. That’s useful in and of itself. It also drives employment. As costs drop, demand rises. We deploy the process more widely. Each human is more productive, but more humans are ultimately required because the process is far more widespread.

As a teacher, this concept makes me rather optimistic. Artificial intelligence can augment my skills, make me more productive, and help me reach more students. But that doesn’t mean that we’ll need fewer teachers. Rather, it means that we can educate many, many more students. That’s a good thing – for both students and teachers.

Male Chauvinist Machines

Yanks win last night?

Do men and women think differently? If they do, who should develop artificial intelligence? As we develop AI, should we target “feminine” intelligence or “masculine” intelligence? Do we have enough imagination to create a non-gendered intelligence? What would that look like?

First of all, do the genders think differently? According to Scientific American, our brains are wired differently. As you know, our brains have two hemispheres. Male brains have more connections within each hemisphere as compared to female brains. By contrast, female brains have more connections between hemispheres.

Men, on average, are better at connecting the front of the brain with the back of the brain while women are better at connecting left and right hemispheres. How do these differences influence our behavior? According to the article, “…male brains may be optimized for motor skills, and female brains may be optimized for combining analytical and intuitive thinking.”

Women and men also have different proportions of white and gray matter in their brains. (Click here). Gray matter is “…primarily associated with processing and cognition…” while white matter handles connectivity. The two genders are the same (on average) in general intelligence, so the differences in the gray/white mix suggest that there are two different ways to get to the same result. (Click here). Women seem to do better at integrating information and with language skills in general. Men seem to do better with “local processing” tasks like mathematics.

Do differences in function drive the difference in structure or vice-versa? Hard to tell. Men have a higher percentage of white matter and also have somewhat larger brains compared to women. Perhaps men need more white matter to make connections over longer distances in their larger brains. Women have smaller heads and may need less white matter to make the necessary connections — just like a smaller house would need less electrical wire to connect everything. Thus, a larger proportion of the female brain can be given over to gray matter.

So men and women think differently. That’s not such a surprise. As we look ahead to artificial intelligence, which model should we choose? Should we emphasize language skills, similar to the female brain? Or local processing skills, similar to the male brain? Should we emphasize processing power or information integration?

Perhaps we could do both, but I wonder how realistic that is. I try to imagine what it would be like to think as a woman but I find it difficult to wrap my head around the concept. As a feminist might say, I just don’t get it. I have to imagine that a woman trying to think like a man would encounter similar difficulties.

Perhaps the best way to develop AI would involve mixed teams of men and women. Each gender could contribute what it does best. But that’s not what’s happening today. As Jack Clark points out, “Artificial Intelligence Has a ‘Sea of Dudes’ Problem”. Clark is mainly writing about data sets, which developers use to teach machines about the world. If men choose all the data sets, the resulting artificial intelligence will be biased in the same ways that men are. Yet male AI developers outnumber female developers by about eight to one. Without more women, we run the risk of creating male chauvinist machines. I can just hear my women friends saying, “Oh my God, no!”
