Posted: January 25, 2016

Why the Resurgence in AI?

[Image: AI Spring, wallpaper from pixelstalk.net]

Artificial Intelligence is in Bloom; But it Was Not Always So

Anyone beyond a certain age may recall the waxing and waning of the idea of AI, artificial intelligence. In fact, the periodic dismal prospects and poor reputation of artificial intelligence have at times been severe enough to warrant their own label: the “AI winters.” Clearly, today, we are in a resurgence of AI. But why is this so? Is the newly re-found popularity of AI merely a change in fashion, or is it due to more fundamental factors? And if it is more fundamental, what are the factors that have led to this resurgence?

We only need to look at the world around us to see that the resurgence in AI is due to real developments, not a mere change in fashion. From virtual assistants that we can instruct or question by voice command to self-driving cars and face recognition, many mundane or tedious tasks of the past are being innovated away. The breakthroughs are real and seemingly arriving at an increasing pace.

As to the reasons behind this resurgence, more than a year ago, the technology futurist Kevin Kelly got it mostly right when he posited these three breakthroughs [1]:

1. Cheap parallel computation
Thinking is an inherently parallel process, billions of neurons firing simultaneously to create synchronous waves of cortical computation. To build a neural network—the primary architecture of AI software—also requires many different processes to take place simultaneously. Each node of a neural network loosely imitates a neuron in the brain—mutually interacting with its neighbors to make sense of the signals it receives. To recognize a spoken word, a program must be able to hear all the phonemes in relation to one another; to identify an image, it needs to see every pixel in the context of the pixels around it—both deeply parallel tasks. But until recently, the typical computer processor could only ping one thing at a time. . . . That began to change more than a decade ago, when a new kind of chip, called a graphics processing unit, or GPU, was devised for the intensely visual—and parallel—demands of videogames . . . .
2. Big Data
Every intelligence has to be taught. A human brain, which is genetically primed to categorize things, still needs to see a dozen examples before it can distinguish between cats and dogs. That’s even more true for artificial minds. Even the best-programmed computer has to play at least a thousand games of chess before it gets good. Part of the AI breakthrough lies in the incredible avalanche of collected data about our world, which provides the schooling that AIs need. Massive databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results, Wikipedia, and the entire digital universe became the teachers making AI smart.
3. Better algorithms
Digital neural nets were invented in the 1950s, but it took decades for computer scientists to learn how to tame the astronomically huge combinatorial relationships between a million—or 100 million—neurons. The key was to organize neural nets into stacked layers. Take the relatively simple task of recognizing that a face is a face. When a group of bits in a neural net are found to trigger a pattern—the image of an eye, for instance—that result is moved up to another level in the neural net for further parsing. The next level might group two eyes together and pass that meaningful chunk onto another level of hierarchical structure that associates it with the pattern of a nose. It can take many millions of these nodes (each one producing a calculation feeding others around it), stacked up to 15 levels high, to recognize a human face. . . .

To these factors I would add a fourth: 4. Distributed architectures (beginning with MapReduce) and new performant datastores (NoSQL, graph DBs, and triplestores). These new technologies, plus some rediscovered ones, gave us the confidence to tackle larger and larger reference datasets, while also helping us innovate high-performance data representation structures, such as graphs, lists, key-value pairs, feature vectors and finite state transducers. In any case, Kelly also notes the interconnection amongst these factors in the cloud, itself a more general enabling factor. I suppose, too, one could add open source to the mix as another factor.
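As a flavor of that fourth factor, here is a minimal, single-machine sketch of the MapReduce pattern (my own toy example; production systems run the same shape of computation across clusters using frameworks such as Hadoop or Spark): a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates each group.

```python
# A single-process sketch of MapReduce: map emits (key, value) pairs, shuffle groups
# them by key, reduce aggregates each group. Distributed frameworks parallelize the
# map and reduce phases across many machines; the structure of the code is the same.
from collections import defaultdict

docs = [
    "knowledge bases feed machine learners",
    "machine learners need training data",
    "training data comes from knowledge bases",
]

def map_phase(doc):
    """Emit a (word, 1) pair for every word in a document."""
    return [(word, 1) for word in doc.split()]

def reduce_phase(word, counts):
    """Aggregate all of the counts emitted for one word."""
    return word, sum(counts)

# Shuffle: group the emitted pairs by key (the word).
groups = defaultdict(list)
for doc in docs:
    for word, count in map_phase(doc):
        groups[word].append(count)

word_counts = dict(reduce_phase(w, c) for w, c in groups.items())
print(word_counts["knowledge"])  # 2
```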

Still, even though these factors have all contributed, I have argued in my series on knowledge-based artificial intelligence (KBAI) that electronic data sets (Big Data) are the most important enabling factor [2]. These reference datasets may range from images for image recognition (such as ImageNet) to statistical compilations from text (such as N-grams or co-occurrences) to more formal representations (such as ontologies or knowledge bases). Knowledge graphs and knowledge bases are the key enablers for AI in the realm of knowledge management and representation.
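As a concrete flavor of those statistical compilations from text, the sketch below (with an invented toy corpus) tallies adjacent-word bigram counts, the simplest case of the co-occurrence statistics that web-scale N-gram datasets compile over billions of tokens:

```python
# Counting bigrams (adjacent word pairs): the simplest form of the co-occurrence
# statistics that large N-gram reference datasets accumulate at web scale.
from collections import Counter

corpus = "the cat sat on the mat the cat slept".split()

bigrams = Counter(zip(corpus, corpus[1:]))

print(bigrams[("the", "cat")])   # 2
print(bigrams.most_common(3))    # the three most frequent adjacent pairs
```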

Some also tout algorithms as the most important source of AI innovation, but Alexander Wissner-Gross, writing in the online magazine Edge on the most interesting recent news in science, comes down squarely on the side of data [3]:

. . . perhaps many major AI breakthroughs have actually been constrained by the availability of high-quality training datasets, and not by algorithmic advances. For example, in 1994 the achievement of human-level spontaneous speech recognition relied on a variant of a hidden Markov model algorithm initially published ten years earlier, but used a dataset of spoken Wall Street Journal articles and other texts made available only three years earlier. In 1997, when IBM’s Deep Blue defeated Garry Kasparov to become the world’s top chess player, its core NegaScout planning algorithm was fourteen years old, whereas its key dataset of 700,000 Grandmaster chess games (known as “The Extended Book”) was only six years old. In 2005, Google software achieved breakthrough performance at Arabic- and Chinese-to-English translation based on a variant of a statistical machine translation algorithm published seventeen years earlier, but used a dataset with more than 1.8 trillion tokens from Google Web and News pages gathered the same year. In 2011, IBM’s Watson became the world Jeopardy! champion using a variant of the mixture-of-experts algorithm published twenty years earlier, but utilized a dataset of 8.6 million documents from Wikipedia, Wiktionary, Wikiquote, and Project Gutenberg updated one year prior. In 2014, Google’s GoogLeNet software achieved near-human performance at object classification using a variant of the convolutional neural network algorithm proposed twenty-five years earlier, but was trained on the ImageNet corpus of approximately 1.5 million labeled images and 1,000 object categories first made available only four years earlier. Finally, in 2015, Google DeepMind announced its software had achieved human parity in playing twenty-nine Atari games by learning general control from video using a variant of the Q-learning algorithm published twenty-three years earlier, but the variant was trained on the Arcade Learning Environment dataset of over fifty Atari games made available only two years earlier.
Examining these advances collectively, the average elapsed time between key algorithm proposals and corresponding advances was about eighteen years, whereas the average elapsed time between key dataset availabilities and corresponding advances was less than three years, or about six times faster, suggesting that datasets might have been limiting factors in the advances.
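The arithmetic behind those averages is easy to check against the six cases in the quote (counting the same-year translation dataset as a lag of zero):

```python
# Re-deriving the averages from the lags quoted above: years between the key
# algorithm or dataset and the corresponding breakthrough.
algorithm_lags = [10, 14, 17, 20, 25, 23]  # speech, Deep Blue, translation, Watson, GoogLeNet, Atari
dataset_lags = [3, 6, 0, 1, 4, 2]          # the matching dataset lags, in the same order

avg_algo = sum(algorithm_lags) / len(algorithm_lags)  # ~18.2 years
avg_data = sum(dataset_lags) / len(dataset_lags)      # ~2.7 years

print(round(avg_algo, 1), round(avg_data, 1), round(avg_algo / avg_data, 1))
# 18.2 2.7 6.8  -- "about eighteen years" vs. "less than three," roughly six times faster
```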

Seeing these correlations only affirms the importance of looking at knowledge bases through the specific lens of how they may best support training AI machine learners. We see the correlation; it is now time to optimize the expression of these KB potentials. We need to organize the KBs via coherent knowledge graphs and express the KBs in types, entities, attributes and relations representing their inherent, latent knowledge structure. Properly expressed KBs can support creating positive and negative training sets, promote feature set generation and expression, and create reference standards for testing AI learners and model parameters.
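As a tiny sketch of what that can look like (the entities, types and relations below are invented purely for illustration), a KB expressed as typed assertions hands a learner its positive examples directly, and plausible negative examples can be generated by corrupting those assertions:

```python
# A toy knowledge base of (subject, relation, object) assertions. Positive training
# examples fall straight out of the assertions; negative examples are generated by
# swapping in objects the KB does not assert for that subject and relation.
import random

random.seed(42)

triples = {
    ("Lassie", "instanceOf", "Dog"),
    ("Garfield", "instanceOf", "Cat"),
    ("Dog", "subClassOf", "Mammal"),
    ("Cat", "subClassOf", "Mammal"),
    ("Lassie", "livesWith", "Humans"),
}

entities = {s for s, _, _ in triples} | {o for _, _, o in triples}

def negatives(kb, n=3):
    """Corrupt asserted triples by replacing the object with another entity."""
    out = []
    while len(out) < n:
        s, r, o = random.choice(sorted(kb))
        fake_o = random.choice(sorted(entities - {o}))
        if (s, r, fake_o) not in kb:
            out.append((s, r, fake_o))
    return out

positives = sorted(triples)   # every assertion is a positive example
print(positives[0])
print(negatives(triples))     # e.g., ("Lassie", "instanceOf", "Humans")
```

A fuller treatment would also use the type structure to keep corrupted objects type-compatible, which is what makes such negatives informative for a learner rather than trivially wrong.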

Past AI winters arose from lofty claims that were not then realized. Perhaps today’s claims may meet a similar fate.

Yet somehow I don’t think so. The truth is, today, we are seeing rapid progress all around us on AI tasks of increasing usefulness and value. The benefits from what will increasingly be seen as ubiquitous AI should ensure an economic and innovation engine behind AI for many years to come. One way that engine will continue to be fueled is through a systematic understanding of how knowledge bases and their features can work hand in hand with machine learning to more effectively automate and meet our needs.

You can see Part II of this series here.

[1] Kevin Kelly, 2014. “The Three Breakthroughs That Have Finally Unleashed AI on the World,” Wired.com, October 27, 2014.
[2] For example, from the perspective of hardware, see Jen-Hsun Huang, 2016. “Accelerating AI with GPUs: A New Computing Model,” Nvidia blog, January 12, 2016.
[3] Alexander Wissner-Gross, 2016. “Datasets Over Algorithms,” Edge.org, 2016 Annual Question.

One thought on “Why the Resurgence in AI?”

  1. Why the resurgence in AI? Excellent question. But I think Kevin Kelly’s trifecta misses the mark. The crux of the biscuit wasn’t hardware or algorithms. It’s data.

    What have we seen in the last 5 years that makes us think AI is now a Big Deal? It’s IBM Watson winning at Jeopardy; Google’s self-driving car driving on public streets; Siri and Cortana understanding simple verbal commands and queries, responding in a human voice, and doing so on YOUR cell phone. Academically we’ve seen significant improvement in pattern matching and classification tasks using Deep Learning. And we’ve watched flexibly dextrous autonomous (toy) drone aircraft fly like bumblebees. All that seems to be revolutionary AI. But what enabled all this NOW?

    The game changer is not new AI algorithms. Honestly, these haven’t changed significantly in decades, ever since the current probabilistic emphases arose. Neural nets, even deep ones, have been around for 20 years. It’s not hardware. We’ve had supercomputers with gigaflops for 20+ years and GPGPUs have been around for over a decade. That’s strike two. So what’s left?

    It’s data of course. But not just data. It’s very large amounts of data that has just enough semantics added to make it useful, to make a meaningful impact. Sure enough, vast stores of very slightly curated (tagged) data became accessible to researchers only in the last 5-10 years. Could IBM Watson have won Jeopardy without Wikipedia or some other info repository rich with bigram and trigram key phrases? No. Could Deep Learning recognize cats or learn to identify hundreds of objects within images, without a vast training set of labeled images? No. The game changer here is big data with just enough semantics to let us 1) detect complex patterns, and 2) generalize those patterns meaningfully using labels.

    Yes, the presence of such data in the absence of GPU farms or multicore servers would not have been enough. And applying good old fashioned symbolic AI methods would not have found the patterns or made the connections, even on fast hardware. The failures of many past research initiatives that extended AI algorithms or parallelism or GPUs *without* tagged data should teach us something about what’s necessary for AI to thrive. Unless we can recognize patterns AND generalize them to a meaningful identity using labels, no amount of fancy footwork will be enough for AI to Get Smart.
