
Evolving our way to Artificial Intelligence

If the AI that controls other players evolved, it may go through the same steps that made our brain work.


Researcher David Silver and colleagues designed a computer program capable of beating a top-level Go player – a marvelous technological feat and an important threshold in the development of artificial intelligence, or AI. It underscores once more that humans aren’t at the center of the universe, and that human cognition isn’t the pinnacle of intelligence.

I remember well when IBM’s computer Deep Blue beat world chess champion Garry Kasparov. While I’d played – and lost to – chess-playing computers myself, the Kasparov defeat solidified my personal belief that artificial intelligence would become reality, probably even in my lifetime. I might one day be able to talk to things similar to my childhood heroes C-3PO and R2-D2. My future house could be controlled by a program like HAL from Kubrick’s “2001” movie.

Not the best automated-home controller: HAL.

As a researcher in artificial intelligence, I realize how impressive it is to have a computer beat a top Go player, a much tougher technical challenge than winning at chess. Yet it’s still not a big step toward the type of artificial intelligence used by the thinking machines we see in the movies. For that, we need new approaches to developing AI.

Intelligence is evolved, not engineered

To understand the limitations of the Go milestone, we need to think about what artificial intelligence is – and how the research community makes progress in the field.

Typically, AI is part of the domain of engineering and computer science, a field in which progress is measured not by how much we learn about nature or humans, but by achieving a well-defined goal: if the bridge can carry a 120-ton truck, it succeeds. Beating a human at Go falls into exactly that category.

I take a different approach. When I talk about AI, I’m typically not talking about a well-defined problem. Rather, I describe the AI that I would like to have as “a machine with cognitive abilities comparable to those of a human.”

Admittedly, that is a very fuzzy goal, but that is the whole point. We can’t engineer what we can’t define, which is why I think the engineering approach to “human-level cognition” – that is, writing smart algorithms to solve a particular, well-defined problem – isn’t going to get us where we want to go. But then what is?

We can’t wait for cognitive science and neuroscience, behavioral biology or psychology to figure out what the brain does and how it works. And even if we wait, these sciences will not come up with a simple algorithm that explains the human brain.

What we do know is that the brain wasn’t engineered with a simple modular building plan in mind. It was cobbled together by Darwinian evolution – an opportunistic mechanism governed by the simple rule that whoever makes more viable offspring wins the race.

This explains why I work on the evolution of artificial intelligence and try to understand the evolution of natural intelligence. I make a living out of evolving digital brains.
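To make that concrete, here is a minimal, hypothetical sketch of the kind of selection loop that digital evolution relies on. It is not the author’s research code; the genome, fitness function and parameters are all stand-ins, and the only real point is the Darwinian rule itself: genomes that score better leave more offspring in the next generation.

```python
import random

GENOME_LENGTH = 16      # stand-in for a digital "brain": a vector of weights
POPULATION_SIZE = 100
MUTATION_RATE = 0.05
GENERATIONS = 1000

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder task: closeness to an arbitrary target value.
    # In real digital-evolution work this would score the brain's
    # behavior in a simulated environment.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    # Copy the parent genome with occasional small random changes.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # "Whoever makes more viable offspring wins": only the fittest fifth
    # of the population gets to reproduce into the next generation.
    parents = sorted(population, key=fitness, reverse=True)[:POPULATION_SIZE // 5]
    population = [mutate(random.choice(parents)) for _ in range(POPULATION_SIZE)]

best = max(population, key=fitness)
print("best fitness after evolution:", fitness(best))
```

Real digital-evolution experiments replace the toy fitness function with a simulated world in which each evolved brain has to behave, but the outer loop of variation and selection looks much the same.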

Divergent evolution: These two figures show maps of connections between digital brain parts that evolved along different paths for 49,000 generations after starting from the same point.


Algorithms vs. improvisation

To return to the Go algorithm: in the context of computer games, improving skill is possible only by playing against a better competitor.

The Go victory shows that we can make better algorithms for more complex problems than before. That in turn suggests that, in the future, we could see more computer games with complex rules in which the AI offers human players a stronger opponent. Chess computers have changed how modern chess is played, and we can expect a similar effect for Go and its players.

This new algorithm provides a way to define optimal play, which is probably good if you want to learn Go or improve your skills. However, since it is arguably the strongest Go player on Earth, playing against it virtually guarantees you’ll lose. That’s no fun.

Fortunately, constant losing isn’t inevitable. The program’s operators can make the algorithm play less well either by reducing the number of moves it thinks ahead, or – and this is really new – by using a less-developed deep neural network to evaluate the Go board.
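As an illustration of those two knobs, here is a hedged sketch built on a deliberately tiny game (players alternately add 1, 2 or 3 to a running total; whoever reaches 21 wins) and a plain depth-limited negamax search. This is not how the Go program actually works – it combines tree search with deep neural networks – but it shows the same two ways of weakening a game AI: search fewer moves ahead, or hand it a worse board-evaluation function. All names here (strong_eval, weak_eval, best_move) are illustrative.

```python
import random

# Toy game: players alternately add 1, 2 or 3 to a total; whoever reaches 21 wins.
TARGET = 21

def moves(total):
    return [m for m in (1, 2, 3) if total + m <= TARGET]

def strong_eval(total):
    # Decent heuristic for this game: the player to move wins
    # unless the remaining distance to 21 is a multiple of 4.
    return 1.0 if (TARGET - total) % 4 != 0 else -1.0

def weak_eval(total):
    # "Less-developed" evaluator: it has no idea, so it guesses.
    return random.uniform(-1.0, 1.0)

def negamax(total, depth, evaluate):
    """Score the position for the player about to move."""
    if total == TARGET:
        return -1.0            # the previous player just won
    if depth == 0:
        return evaluate(total)  # out of lookahead: fall back on the evaluator
    return max(-negamax(total + m, depth - 1, evaluate) for m in moves(total))

def best_move(total, depth, evaluate):
    return max(moves(total), key=lambda m: -negamax(total + m, depth - 1, evaluate))

# Two difficulty knobs: shallower search and a weaker evaluation function.
print(best_move(0, depth=6, evaluate=strong_eval))   # strong settings
print(best_move(0, depth=2, evaluate=weak_eval))     # handicapped settings
```

With deep search and the good evaluator the program plays the optimal opening move; cut the depth or swap in the clueless evaluator and it starts making mistakes a human can exploit.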

But does this make the algorithm play more like a human, and is that what we want in a Go player? Let us turn to other games that have fewer fixed rules and instead require the player to improvise more.

Imagine a first-person shooter, a multiplayer battle game, or a typical role-playing adventure game. These games became popular not because people could play them against better AI, but because they could be played against, or together with, other human beings.

It seems that what we look for in the opponents we play is not necessarily strength and skill, but human characteristics: the ability to surprise us, to share our sense of humor, and maybe even to empathize with us.

For example, I recently played Journey, a game in which the only way online players can interact is by singing a particular tune that the other can hear and see. It is a creative and emotional way to take in the game’s beautiful art and to share the important moments of its story with someone else. What makes the experience remarkable is the emotional connection, not the skill of the other player.

If the AI that controls other players evolved, it may go through the same steps that made our brain work. That could include sensing emotional equivalents of fear that warn of ill-defined threats, and probably also empathy, the ability to understand other organisms and their needs.

It is this, along with the ability to do many different things rather than specialize in just one realm, that I am looking for in AI. We might, therefore, need to incorporate the process of how we became us into the process of how we make our digital counterparts.

Arend Hintze is an Assistant Professor of Integrative Biology & Computer Science and Engineering at Michigan State University.

This article was originally published on The Conversation. Read the original article.

