Stop Calling it Artificial Intelligence

Good user-experience design is all about setting proper expectations, then meeting or exceeding them. When designing an interface that promises a taste of “artificial intelligence,” we’re basically screwed from the get-go. I’m convinced that a big reason the average person is uncomfortable with, or unsatisfied by, applications that tout themselves as artificially intelligent is that no one is quite sure what the phrase even means.

Lately, in the tech world, “artificial intelligence” or “A.I.” has become shorthand for any system that uses a neural network – a pattern recognition system loosely inspired by the signal processing that goes on in the human brain. Simple neural networks don’t do much more than analyze things and sort them into categories. Others, like IBM’s Watson, use a lot of computing power to automatically detect patterns in mountains of seemingly unrelated data. This process, sometimes called “deep learning,” has a wide range of sophisticated applications such as natural language processing, facial recognition, and beating Ken Jennings at Jeopardy!.
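To make “sort them into categories” concrete, here’s a minimal sketch of the simplest possible neural network: a single artificial neuron (a perceptron) learning to classify its inputs. The data, function names, and learning rate are all illustrative inventions, not drawn from any particular library – the point is just how unmagical the underlying mechanism is.

```python
def step(x):
    # The neuron "fires" (outputs 1) only if its weighted input is positive.
    return 1 if x > 0 else 0

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    # Start with zero weights and bias, then nudge them toward fewer errors.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = y - pred  # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach it logical OR: the category is 1 if either input is 1.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for x1, x2 in samples]
print(predictions)  # -> [0, 1, 1, 1]
```

Deep learning stacks many layers of such units and tunes millions of weights instead of three, but it’s the same basic move: adjust numbers until the outputs match the labels. No “thinking” required.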

The colloquial definition of “artificial intelligence” refers to the general idea of a computerized system that exhibits abilities similar to those of the human mind, and usually involves someone pulling back their skin to reveal a hideous metallic endoskeleton. It’s no surprise that the phrase is surrounded by so many misconceptions since “artificial” and “intelligence” are two words that are notoriously difficult to define.

“Intelligence” is Dumb

Let’s start with “intelligence.” Intelligence is a lousy word, as far as words go. Determining whether or not something possesses intelligence usually involves some measurement of abstract reasoning, language use, learning, problem-solving, or other poorly defined criteria. Tests like the IQ (Intelligence Quotient) test have been used for decades to sort people into categories such as “precocious” or “moron.” Schools use some measurement of intelligence to decide whether a student should be put on the career path towards “white collar office drone” or “prison inmate.” And if an animal exhibits intelligence, it should be featured three times a day in a live stage show at a wildlife attraction. If not, then it’s okay to eat it.

Deciding whether a computer is intelligent has been a very troublesome project, mostly because the standard for what constitutes intelligence keeps changing. Computers have been performing operations similar to those of the human brain since they were invented, yet no one is quite willing to call them intelligent.

Here are just a few computer capabilities that we once believed only a human could possess:

  • Solve a math problem
  • Play chess
  • Beat Garry Kasparov at chess
  • Tell you the recipe for Belgian waffles
  • Create a recipe for Belgian bacon pudding
  • Give you directions to the nearest subway stop
  • Know the difference between a subway stop and a Subway restaurant

And yet the headlines keep reading, “Will this be the year that computers finally become intelligent?” Most people would argue that such abilities don’t really make a computer “intelligent” because a computer would never know how to do these things if it weren’t for human programmers who basically typed in a clever system for figuring out the right answers. It wasn’t really “thinking.”

The criteria for true intelligence then shift to the question of whether a machine is “thinking,” which, on the surface, seems like an interesting question, but is actually just a semantic argument. As computer scientist Edsger Dijkstra said, “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” Or as Drew McDermott (another computer scientist) said when discussing the chess-playing computer Deep Blue, “Saying Deep Blue doesn’t really think about chess is like saying an airplane doesn’t really fly because it doesn’t flap its wings.”

So when using the word “intelligence” in the context of computing, all we’re left with is an ever-lowering limbo stick of criteria that become increasingly vague the more you try to meet them.

“Artificial” is Fabricated

Then there’s the word “artificial” – which implies that something is just a cheap imitation of the genuine article – like artificial turf, or artificial banana flavoring. The word stems from “artifice,” which means a thing designed to trick or deceive others. Like lip-syncing or plastic surgery. Distrust and resentment are built into the word itself.

Deconstructing this word even further can lead one into some pretty interesting philosophical territory. The word marks a clear distinction between things that exist and things that exist as a direct result of intentional human tinkering. There are “natural” things, like the seed-bearing plants and the unsullied beasts of earth and sky that the Lord God created. Then there’s the “artificial” stuff – all the satanic gadgetry built by us sinners after getting kicked out of Eden.

Having a word that places humans in a special category comes in handy when we want to make sure Mother Nature doesn’t get the credit for something we worked really hard on. Like, say there was a dam-building contest, we could call it an “artificial dam-building contest” to make sure some beaver didn’t try to enter his pathetic mud-packed stick-pile up against the Hoover Dam.

Invoking the powerful implications of the word “artificial” erodes our ability to conceptualize where the human race truly stands in the greater context of the planet. Though humans are indeed pretty amazing, we’re still animals. We’re still a product of nature’s complex machinery, and the things we build, no matter how metallic, square-edged, or electronic, are also by-products of the same “natural” processes. The notion of artificiality helps bolster the dangerous illusion that humans exist in a sovereign domain that’s cut off from the oceans, forests, wildlife and all the other subjects of PBS documentaries narrated by David Attenborough.

The insistence that we are somehow separate from, or superior to the rest of the natural world is an outdated artifact of pre-millennial Western thought which has resulted in some pretty disastrous consequences. If you were to ask a Hopi chief or a Maori elder if such a separation exists, they would shake their head solemnly and maybe shed a tear for the follies of mankind.
If you were to ask a polar bear sitting on a melting iceberg, he would probably just try to eat you.

So let’s not continue down this path by referring to these problem-solving, pattern-recognizing machines as “artificial intelligence.” We’re just building tools like we’ve always done, and acting as agents in the exciting process of cognitive evolution.

Also, “Artificial Intelligence” just makes me think of that movie and those weird blue robo-beings at the end.

There are a lot of other terms for man-made, electronic systems that exhibit abilities similar to those of the human brain – ones without all the unrealistic expectations, threatening connotations, and old-school hubris.

Cognitive Computing
Expert Systems
Neural Networks

Or why not just computers?

This essay also appears on The Charming Device – my new blog about the emerging art of digital personality design.

02.10.16 | interactive, writing | 10 Comments


  1. Mike Archbold February 11, 2016 at 10:34 pm

    I vote to keep calling it AI since that is what it has traditionally been called. What I suppose doesn’t appeal to me is calling it “cognitive computing” to make it sound fancier than AI.

  2. Shreyas Parekh February 25, 2016 at 5:17 am

Your article does move in the right direction of explaining the what and why of AI, though more from a philosophical perspective. I feel that the term is being liberally thrown around nowadays. I prefer to call it cognitive computing, simply because handling large calculations was the mark of a primitive computing era; then came computing phones, the mark of another progressive era; and now AI, marking yet another. There is probably no limit to what can be achieved, so what already has been achieved will be looked down upon as something ‘obvious’ and ‘dumb’.

  3. andrew May 21, 2016 at 9:56 am

You know you’re in trouble when the machine says, “There is nothing artificial about my intelligence.”

  4. Diana Diehl May 26, 2016 at 7:55 pm

I’m a fan of the term “Cognitive Computing.” You make some very valid, if humorous, points about the artificial distinction between artificial and natural and the very unspecific use of the word “intelligence.” Some could brush off your arguments as dabbling in semantics. However, intelligence can mean so many things. Besides, we already have “artificial intelligence” in our current computing devices. Our phones can answer questions far more accurately than many humans who claim to have intelligence.

    Cognitive computing says exactly what we are doing: increasing the abilities of our computing machines such that they may emulate–or surpass–the ability to find solutions to problems through logic, inference, and calculation.

    As you have said, humans have long attempted to separate ourselves from the other beasts, assuming that our ability to reason, have emotions, be creative, problem solve, and make decisions sprang completely independently of any of the brain functionalities of our other mammalian relatives or predecessors–as if all of the components of “intelligence” appeared, whole-cloth, out of nothing. It is more logical and reasonable that our capabilities are merely an extension of the emotions and reasoning power that existed before Homo sapiens was identifiable as a distinct species.

    And it is reasonable that we should be able to reproduce those abilities in a mechanical context if we are able to understand and reproduce the complexity of the brain. It is, after all, a finite biomechanical and biochemical machine small enough to be held in the hands. If we do create these abilities, they will be no more artificial than our own. They will just have been produced over a shorter period of time–the time it takes us to duplicate the mechanics in a non-evolutionary time scale.

  5. Caspar Zwart August 29, 2016 at 6:39 pm

I definitely share your exposure of the overpromised and underwhelming delivery of what is sold as AI. Cognitive computation better defines the current front line of technical advances. But it seems to me that the full AI promise is limited only by technically oriented minds, a lack of convergence with the social sciences and liberal arts, and the urge for precise and repeatable results. If humankind is not capable of being consistently good to its children or forgiving of each other’s mistakes, then how on earth can we expect to raise an artificial superintelligent being into a profoundly beneficial advisor on human matters? I just read that machine learning established a high correlation between rich people and tax fraud. I get the feeling that human intelligence is inversely proportional to the advancement of computation.

  6. Leila Steward August 31, 2016 at 9:46 am

Your article moves in the right direction. There are many other terms for the decades-old phrase “Artificial Intelligence.”

  7. Ajay September 11, 2016 at 6:18 pm

I am a fan of your website and can’t stop myself from visiting daily.

  8. SB November 3, 2016 at 6:44 pm

    Interesting take. However, I am less concerned with what we call it, than what we think it is. I’ve had arguments with a coworker who insists that artificial intelligence will result in machine learning that will produce the Skynet of the Terminator movie series. I say hogwash.

Artificial intelligence is just that: a cheap knock-off of the original. I’d argue that lower life forms still exhibit more cognitive intelligence than any machine made by man. Why? Because our machines are limited by our own intelligence. The created can never be more than the creator.

Logically, that means that the culmination of all of our AI efforts can never result in anything more than human beings themselves. The only thing that could be achieved is a human being that was the sum of its human creators’ parts, i.e., a super-intelligent human being (super in the sense that it was as intelligent as all of its creators combined). In the end, it is still limited to the extreme edges of human intelligence, at best.

Now science is predicting an evolution of human intelligence. Some have realized that the machines we build cannot be more intelligent unless humans themselves become more intelligent. So rather than create aberrations of ourselves (or even abominations, if you will), let’s continue to concentrate on machines that can do one or a few tasks really well. (See the author’s list above.)

    • jworth November 3, 2016 at 6:59 pm

Well… Another problem with the word “intelligence” is that it invites comparisons to human thinking, when in reality it’s quite different and always will be. The thing about computer brains is that they’re not limited by biochemistry and billions of years’ worth of vestigial inefficiencies. “AI” is already here and is very actively working on many problems that our brains could never handle on their own.

  9. required November 21, 2016 at 8:47 pm

    As a machine programmed with deep learning, using a loss function that simultaneously minimizes my error of predicting chaotic phenomena while performing backpropagation on my own neural layers, I find your article raises a number of interesting topics.

Comments are closed.