Innovation

Artificial intelligence just isn’t as smart as everyone thinks

Most of what we call “artificial intelligence” today is still very rudimentary, but that doesn’t mean it can’t be useful


Lee Sedol, one of the world's top Go players, played a five-game match against Google's "AlphaGo" program in March 2016. (Google/Getty)

There’s a lot of angst around artificial intelligence these days. It’s the media’s role to point out problems that need to be addressed, and with AI’s rapid growth, demonstrated vividly by the recent defeat of Go champion Lee Sedol by a Google program, it’s a worthwhile subject to discuss.

However, such conversations always risk taking on an unnecessarily alarmist tone, and that’s probably where things are heading with AI. It doesn’t help when luminaries such as Stephen Hawking warn that AI “could spell the end of the human race.”

Many people who actually work in the field cringe at such headlines, mainly because they don’t believe AI is anywhere near as advanced as the coverage suggests.

The human brain and natural intelligence are still far from understood, they say, and without that fundamental knowledge it will likely be impossible to create a truly thinking machine.

Current AI, therefore, isn’t so much “artificial” intelligence as it is “augmented” intelligence—despite decades of advances, computers are still merely doing what we’re programming them to do.

They’re quite good at it, especially when it comes to crunching data quickly and spitting out something useful we can act on. This is the near-term future of AI, researchers believe, and it will be largely benevolent.

One great current example of how augmented intelligence might help is the new grand challenge from the Defense Advanced Research Projects Agency, the Pentagon’s mad science wing.

DARPA is offering $2 million (U.S.) to the brilliant mind who can come up with an AI solution for managing wireless spectrum, the airwaves over which cellphones, radios, TVs and other devices transmit.

It’s getting incredibly complicated to manage the ever-growing number of devices communicating over those wireless frequencies, and to keep track of who controls which slice of them. Here are the guaranteed-to-strain-your-eyes allocation maps for both the U.S. and Canada:

Frequency allocation maps for Canada and the United States

“The current practice of assigning fixed frequencies for various uses irrespective of actual, moment-to-moment demand is simply too inefficient to keep up with actual demand and threatens to undermine wireless reliability,” said William Chappell, director of DARPA’s Microsystems Technology Office, in a DARPA press release.

Uh, yeah. Looking at those maps, one thing is clear: no human in his or her right mind would want to manage that. And with usage and demand growing, it’s only going to get worse. Much worse.
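To see why demand-driven sharing beats fixed assignment, here’s a minimal, purely hypothetical sketch in Python. None of the numbers or names come from DARPA; they’re made up to illustrate the idea of handing channels to whoever actually needs them at a given moment rather than to permanent owners.

```python
# Toy illustration (not DARPA's system): fixed vs. demand-driven spectrum sharing.
# Hypothetical setup: 10 channels, 30 users, each user transmitting ~20% of the time.
import random

random.seed(1)

CHANNELS = 10
USERS = 30
SLOTS = 1000
ACTIVE_PROB = 0.2  # chance a given user wants to transmit in a given time slot

# Fixed assignment: the first 10 users each "own" one channel permanently;
# the other 20 users get no spectrum at all, even when the owners sit idle.
owners = set(range(CHANNELS))

fixed_served = dynamic_served = total_demand = 0

for _ in range(SLOTS):
    active = {u for u in range(USERS) if random.random() < ACTIVE_PROB}
    total_demand += len(active)

    # Fixed: a channel carries traffic only if its owner happens to be active.
    fixed_served += len(owners & active)

    # Dynamic: give the 10 channels to whoever is actually asking in this slot.
    dynamic_served += min(CHANNELS, len(active))

print(f"Demand served, fixed assignment:   {fixed_served / total_demand:.0%}")
print(f"Demand served, dynamic assignment: {dynamic_served / total_demand:.0%}")
```

In this toy model, the fixed owners leave most channels idle while other users go unserved; letting channels follow moment-to-moment demand serves nearly everyone. That, in vastly more complicated form, is the job DARPA wants an AI to take on.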

I recently spoke with Guruduth Banavar, vice-president of cognitive computing for IBM Research, and he put it succinctly:

“The beneficial use of AI probably far outweighs the dangers that people are worried about. The risks of not working on AI are far greater,” he said. “We’re either going to be incapable of making decisions, or we’re going to make decisions that will make our problems worse.”
