Questions for the AI revolution

There sure is a lot of fearmongering in the media these days. Watch any clip of mainstream news and you’ll get scared out of your pants about terrorism, nuclear war, biosecurity, climate change, and machine intelligence taking over the world. Yes – these are real and important global issues ripe for discussion – but jeez, must we talk about them in such a doomsday (and in fact, completely useless) way?

Let’s take the societal fear of machine super-intelligence taking over the world, a.k.a. Man versus Machine. The scenario we all fear is this: Man programs machines to learn how to learn. Machines become more intelligent than man. Man loses control over the machines. Machines overtake man. Man is exterminated or enslaved by machines.

Or another scenario: Man programs machines to learn how to learn. Machines augment man’s intelligence. Man employs machines as support to achieve man’s goals. Depending on what those goals are, the world may (or may not) become a better/fairer/more equal/ideal place.

The first step mentioned above – “Man programs machines to learn how to learn” – is inevitable. In fact, we’re already there. Anytime you use a spam filter, choose a movie from a list of personalized recommendations, or type a search query, you are using artificial intelligence (AI) and machine learning. And arguably, you are benefiting from it.
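To make that concrete, here is a minimal sketch (in Python, using scikit-learn, with a handful of toy messages I made up purely for illustration) of the kind of machine learning that sits behind an everyday spam filter: nobody writes down the rules for “spam”; the program learns them from labelled examples.

```python
# Toy spam filter: the classifier learns from labelled examples
# rather than being given hand-written rules.
# (Illustrative sketch only; real spam filters use far more data and features.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "Win a free prize now",         # spam
    "Claim your free money today",  # spam
    "Lunch at noon tomorrow?",      # not spam
    "Here are the meeting notes",   # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()           # turn text into word counts
X = vectorizer.fit_transform(messages)   # build a vocabulary from the examples

model = MultinomialNB()
model.fit(X, labels)                     # learn the pattern from the data

new_message = ["Free prize if you reply today"]
prediction = model.predict(vectorizer.transform(new_message))
print("spam" if prediction[0] == 1 else "not spam")
```

The same idea, scaled up enormously, is what powers the recommendation lists and search rankings we use every day.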

The next step – whether we choose to compete with machines (as in “Man vs. Machine”) or collaborate with machines (as in “Machine augments man’s intelligence”) – is up to us.

It all depends on what question we ask.

[Photo: The Calgary Supermoon in 2016. Photo taken by Mark X. He (a.k.a. my dad!)]

If we take the first scenario, we might begin by asking: How can we program machines to be as smart as us? First, we could duplicate whatever capabilities man has, so that the machine has them too – that is, programming machines to mimic human intelligence. We’d start with manual tasks that are simple for man but difficult for machines, such as turning a doorknob or vacuuming the living room. Later, we’d move on to more complex tasks, like distinguishing debris from people when rescuing victims after an earthquake, or responding to subtle emotions in a human face. Once we’ve achieved these tasks, our question might evolve to: Can we program machines to be smarter than us? For some tasks, the answer is already yes. Think of Garry Kasparov, the former World Chess Champion who competed against IBM’s supercomputer “Deep Blue” in 1997 and lost, or of Ke Jie, the world’s top Go player, who was beaten by Google DeepMind’s AlphaGo in May 2017.

In the second scenario of collaboration (“Machine augments man’s intelligence”), we would take a completely different approach. As Tom Gruber states in his TED talk, “instead of asking ‘How smart can we make our machines?’, maybe we’ll ask ‘How smart can our machines make us?’” To answer this question, maybe we could first identify what man’s strengths and weaknesses are, particularly when compared to machines. What are the things we humans do easily and intuitively that machines can’t? And what are the things we consistently fail at or come up short on? (For examples, check out our many cognitive biases.) How can machines augment, supplement or overcome our weaknesses to make us better/more rational/more empathetic people? On that note, instead of pursuing “smart” or “intelligent” machines, perhaps the question we should really be asking is: How can machines make us better people? How can machines help us live our best lives?

[Photos: The Calgary Supermoon. Photo credit: Mark X. He]

But wait – how would you define “better” or “best”? In Zeynep Tufekci’s TED talk, she states that “We’re asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden. We’re asking questions like, Who should the company hire? Which convict is more likely to re-offend? Which news item or movie should be recommended to people?”

Tufekci says: “These systems are often trained on data generated by our actions, human imprints. Well, […] these systems could be picking up on our biases, amplifying them and showing them back to us, while we’re telling ourselves, ‘We’re just doing objective, neutral computation’.” For example, “Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms that researchers uncover sometimes but sometimes we don’t know, can have life-altering consequences.” (More about inherent biases in our technology design here.)

She argues that “Artificial intelligence does not give us a ‘Get out of ethics free’ card. We need to cultivate algorithm suspicion, scrutiny and investigation. […] We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms”.
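Her point is easy to reproduce in miniature. The sketch below (Python again, with a tiny fabricated “hiring history” I invented purely for illustration – not real data) trains a model on past decisions that were biased against one group; the model faithfully learns that bias and serves it back as if it were an objective pattern.

```python
# Toy illustration of how a model "launders" bias: trained on biased
# historical decisions, it reproduces the bias while looking objective.
# The dataset is entirely fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [skill_score, group], where group is 0 or 1.
# In this made-up history, equally skilled people in group 1 were hired less often.
X = np.array([
    [0.9, 0], [0.8, 0], [0.7, 0], [0.6, 0],
    [0.9, 1], [0.8, 1], [0.7, 1], [0.6, 1],
])
y = np.array([1, 1, 1, 0,   # group 0: mostly hired
              1, 0, 0, 0])  # group 1: mostly rejected at the same skill levels

model = LogisticRegression().fit(X, y)

# Two candidates with identical skill, different group:
candidates = np.array([[0.8, 0], [0.8, 1]])
print(model.predict_proba(candidates)[:, 1])  # predicted hiring probability
# The group-1 candidate scores lower purely because the training history was biased.
```

Nothing in the code is malicious; the “neutral computation” simply inherits whatever values were baked into the data it was given.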

The AI revolution is already here. There is no point in denying it or trying to restrain progress. Human beings have an innate curiosity and a creative spirit; innovation may be the hallmark of our species. As machine learning and AI advance in the coming years, let’s make sure we are asking and focusing on the right questions – the most effective questions. Like: What does it mean to be human? What values and morals do we care about, across countries, across cultures, across the human species? What are the goals we want to achieve? What kind of world do we want to live in, and want our children to live in?

And how can AI get us there?

2 thoughts on “Questions for the AI revolution”

  1. Timely blog post! Did you see this Guardian article:

    https://www.theguardian.com/commentisfree/2018/feb/25/artificial-intelligence-going-bad-futuristic-nightmare-real-threat-more-current

    The headline really hit the nail on the head for me – don’t worry about AI, worry about the people. Kind of like any technology these days; social networks aren’t evil (or good), but they can be (ab-)used for either. Same with AI.

    Greetings from Denmark! 😉
