Good user-experience design is all about setting proper expectations, then meeting or exceeding them. When designing an interface that promises a taste of “artificial intelligence,” we’re basically screwed from the get-go. I’m convinced that a big reason the average person is uncomfortable with, or unsatisfied by, applications that tout themselves as artificially intelligent is that no one is quite sure what the phrase even means.

Lately, in the tech world, “artificial intelligence” or “A.I.” has become shorthand for any system that uses a neural network – a pattern-recognition system loosely inspired by the signal processing that goes on in the human brain. Simple neural networks don’t do much more than analyze things and sort them into categories. Others, like IBM’s Watson, use a lot of computing power to automatically detect patterns in mountains of seemingly unrelated data. This process, sometimes called “deep learning,” has a wide range of sophisticated applications such as natural language processing, facial recognition, and beating Ken Jennings at Jeopardy!.
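
To make “sort them into categories” a little more concrete, here is a minimal sketch of the simplest possible pattern-sorter: a single artificial neuron, the classic perceptron, written in plain Python. It is nothing like Watson or a modern deep-learning system, and the fruit “features” and numbers below are invented purely for illustration.

    import random

    def predict(weights, bias, inputs):
        # Weighted sum of the inputs, pushed through a step function: answers 1 or 0.
        total = bias + sum(w * x for w, x in zip(weights, inputs))
        return 1 if total > 0 else 0

    def train(samples, labels, epochs=100, lr=0.5):
        # Classic perceptron rule: whenever the neuron guesses wrong, nudge its
        # weights toward the correct answer, and repeat until it stops erring.
        random.seed(0)  # fixed seed so the toy example is repeatable
        weights = [random.uniform(-1, 1) for _ in samples[0]]
        bias = 0.0
        for _ in range(epochs):
            for inputs, label in zip(samples, labels):
                error = label - predict(weights, bias, inputs)
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Hypothetical features: [roundness, yellowness], each between 0 and 1.
    # Label 1 means "apple," label 0 means "banana."
    samples = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.3, 0.8]]
    labels = [1, 1, 0, 0]

    weights, bias = train(samples, labels)
    print(predict(weights, bias, [0.85, 0.15]))  # expected: 1 ("apple")
    print(predict(weights, bias, [0.25, 0.85]))  # expected: 0 ("banana")

Roughly speaking, that’s the trick the bigger systems scale up: piles of weighted sums, adjusted until the answers come out right.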

The colloquial definition of “artificial intelligence” refers to the general idea of a computerized system that exhibits abilities similar to those of the human mind, and usually involves someone pulling back their skin to reveal a hideous metallic endoskeleton. It’s no surprise that the phrase is surrounded by so many misconceptions since “artificial” and “intelligence” are two words that are notoriously difficult to define.

“Intelligence” is Dumb

Let’s start with “intelligence.” Intelligence is a lousy word, as far as words go. Determining whether or not something possesses intelligence usually involves some measurement of abstract reasoning, language use, learning, problem-solving, or some other poorly defined criterion. Tests like the IQ (Intelligence Quotient) test have been used for decades to sort people into categories such as “precocious” or “moron.” Schools use some measurement of intelligence to decide whether a student should be put on the career path towards “white collar office drone” or “prison inmate.” And if an animal exhibits intelligence, it should be featured three times a day in a live stage show at a wildlife attraction. If not, then it’s okay to eat it.

Deciding whether a computer is intelligent has been a very troublesome project, mostly because the standard for what constitutes intelligence keeps changing. Computers have been performing operations similar to those of the human brain since they were invented, yet no one is quite willing to call them intelligent.

Here are just a few computer capabilities that we once believed only a human could possess.

  • Solve a math problem
  • Play chess
  • Beat Garry Kasparov at chess
  • Tell you the recipe for Belgian waffles
  • Create a recipe for Belgian bacon pudding
  • Give you directions to the nearest subway stop
  • Know the difference between a subway stop and a Subway restaurant

And yet the headlines keep reading, “Will this be the year that computers finally become intelligent?” Most people would argue that such abilities don’t really make a computer “intelligent” because a computer would never know how to do these things if it weren’t for human programmers who basically typed in a clever system for figuring out the right answers. It wasn’t really “thinking.”

The criterion for true intelligence then shifts to the question of whether a machine is “thinking,” which, on the surface, seems like an interesting question but is actually just a semantic argument. As computer scientist Edsger Dijkstra said, “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” Or as Drew McDermott (another computer scientist) said when discussing the chess-playing computer Deep Blue, “Saying Deep Blue doesn’t really think about chess is like saying an airplane doesn’t really fly because it doesn’t flap its wings.”

So when using the word “intelligence” in the context of computing, all we’re left with is an ever-lowering limbo stick of criteria that become increasingly vague the more you try to meet them.

“Artificial” is Fabricated

Then there’s the word “artificial” – which implies that something is just a cheap imitation of the genuine article, like artificial turf or artificial banana flavoring. The word stems from “artifice,” which means a thing designed to trick or deceive others. Like lip-syncing or plastic surgery. Distrust and resentment are built into the word itself.

Deconstructing this word even further can lead one into some pretty interesting philosophical territory. The word marks a clear distinction between things that simply exist and things that exist as a direct result of intentional human tinkering. There are “natural” things, like the seed-bearing plants and the unsullied beasts of earth and sky that the Lord God created. Then there’s the “artificial” stuff – all the satanic gadgetry built by us sinners after getting kicked out of Eden.

Having a word that places humans in a special category comes in handy when we want to make sure Mother Nature doesn’t get the credit for something we worked really hard on. Say there was a dam-building contest: we could call it an “artificial dam-building contest” to make sure some beaver didn’t try to enter his pathetic mud-packed stick-pile against the Hoover Dam.

Invoking the powerful implications of the word “artificial” erodes our ability to conceptualize where the human race truly stands in the greater context of the planet. Though humans are indeed pretty amazing, we’re still animals. We’re still a product of nature’s complex machinery, and the things we build, no matter how metallic, square-edged, or electronic, are also by-products of the same “natural” processes. The notion of artificiality helps bolster the dangerous illusion that humans exist in a sovereign domain that’s cut off from the oceans, forests, wildlife, and all the other subjects of PBS documentaries narrated by David Attenborough.

The insistence that we are somehow separate from, or superior to, the rest of the natural world is an outdated artifact of pre-millennial Western thought, one that has resulted in some pretty disastrous consequences. If you were to ask a Hopi chief or a Maori elder if such a separation exists, they would shake their head solemnly and maybe shed a tear for the follies of mankind.
If you were to ask a polar bear sitting on a melting iceberg, he would probably just try to eat you.

So let’s not continue down this path by referring to these problem-solving, pattern-recognizing machines as “artificial intelligence.” We’re just building tools like we’ve always done, and acting as agents in the exciting process of cognitive evolution.

Also, “Artificial Intelligence” just makes me think of that movie and those weird blue robo-beings at the end.

There are a lot of other words for man-made, electronic systems that exhibit abilities similar to those of the human brain – ones without all the unrealistic expectations, threatening connotations, and old-school hubris.

  • Cognitive Computing
  • Expert Systems
  • Neural Networks

Or why not just computers?

This essay also appears on The Charming Device – my new blog about the emerging art of digital personality design.