Cognitive computing is among the major trends in computing today and seems destined to change how business people think about the ways computers can be used in their organizations. “Cognitive computing,” however, is a vague term used in a myriad of ways. Given the confusion in the market as to its nature, our recent Executive Report (Part I in a two-part series) explains what we mean by cognitive computing by exploring five perspectives on the topic: (1) rules-based expert systems, (2) big data and data mining, (3) neural networks, (4) IBM’s Watson, and (5) Google’s AlphaGo. Here is a brief description of each.
- Rules-Based Expert Systems
There have been earlier attempts to commercialize artificial intelligence (AI) technology. The most notable was the effort to create rules-based expert systems in the 1980s. Although there were some successes, most expert systems efforts were abandoned by the end of that decade because the systems were expensive to build and nearly impossible to maintain. The format of a rules-based expert system required that any improvement be hand-crafted by human experts, and it simply proved too expensive to keep such systems up to date.
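To make the maintenance problem concrete, here is a minimal, hypothetical sketch (in Python) of what a rules-based approach looks like. The rules, thresholds, and field names are invented for illustration and are not taken from any actual expert system; the point is simply that every rule encodes a human expert’s judgment, so keeping the system current means having an expert hand-edit the rule base whenever the domain changes.

```python
# A minimal, hypothetical rules-based "expert system" for loan screening.
# Every rule below encodes a human expert's judgment; keeping the system
# current means an expert must hand-edit this list whenever policy changes.

def evaluate_applicant(applicant):
    """Apply the hand-written rules in order and return the first conclusion."""
    rules = [
        (lambda a: a["credit_score"] < 580,    "reject: credit score too low"),
        (lambda a: a["debt_to_income"] > 0.45, "reject: debt-to-income too high"),
        (lambda a: a["years_employed"] < 1,    "refer: insufficient employment history"),
        (lambda a: a["credit_score"] >= 720,   "approve: strong credit"),
    ]
    for condition, conclusion in rules:
        if condition(applicant):
            return conclusion
    return "refer: no rule matched, send to a human underwriter"

print(evaluate_applicant({"credit_score": 650, "debt_to_income": 0.30, "years_employed": 3}))
```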
- Big Data and Data Mining
AI techniques were also used in more modest applications, such as data mining tools that could examine large databases and generate suggestions based on what they found. Given the large databases that organizations created in the 1990s and 2000s, data mining tools have proved very valuable. These tools depend on machine learning techniques and provide the basis for many of today’s cognitive computing advances.
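As a rough, hypothetical illustration of that style of work (not drawn from any specific product), the sketch below uses an off-the-shelf machine learning library to surface patterns in a small customer table. The feature names and data are invented; the point is that the patterns are learned from the data rather than hand-written by an expert.

```python
# A toy example of "data mining": letting a machine learning model find
# patterns (here, which customers churn) instead of hand-writing rules.
# Assumes scikit-learn is installed; the features and data are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: monthly_spend, support_calls, tenure_months (hypothetical features)
X = np.array([
    [20, 5,  3], [80, 0, 36], [35, 4,  6], [90, 1, 48],
    [25, 6,  2], [70, 0, 24], [30, 3,  5], [85, 1, 40],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = churned, 0 = stayed

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Inspect the learned rules -- the "suggestions" the tool generates on its own.
print(export_text(model, feature_names=["monthly_spend", "support_calls", "tenure_months"]))
print(model.predict([[40, 2, 12]]))  # score a new customer
```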
- Neural Networks
The most important technologies being used in cognitive computing applications are called neural networks. The basic technology has been around for decades, but breakthroughs in the 1990s and 2000s allowed developers to create much more powerful and robust applications. The key new techniques are deep neural networks and reinforcement learning. These techniques have enabled major progress in natural language (NL) systems, in visual systems, and in decision-making systems that can examine information in databases and “learn” from it.
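The sketch below shows a neural network in miniature: a small multi-layer perceptron learning the XOR pattern, something no simple linear model can capture. It assumes scikit-learn is available; real deep networks are vastly larger and are usually built with dedicated frameworks, so treat this only as an illustration of learning from examples.

```python
# A miniature neural network: a multi-layer perceptron learning XOR.
# Assumes scikit-learn is installed; "deep" networks used in practice
# have many more layers and parameters than this toy model.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR: true when exactly one input is 1

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)

print(net.predict(X))   # should recover [0 1 1 0] once training converges
print(net.score(X, y))  # fraction of the four cases classified correctly
```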
- IBM’s Watson Plays Jeopardy!
In 2011, IBM’s Watson demonstrated its cognitive capabilities by playing two Jeopardy! champions and winning the game. It listened to questions, searched its huge database of information, and generated winning answers in under three seconds. People watched the performance on TV and were quite impressed. IBM proceeded to set up a new organization to commercialize Watson and suggested that cognitive computing was the future of computing, and of IBM.
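As a toy illustration of the underlying idea, answering a natural-language question by searching a document collection and ranking the results, the sketch below uses simple TF-IDF retrieval. It is nothing like Watson’s actual architecture; the documents and question are invented, and scikit-learn is assumed to be installed.

```python
# A toy retrieve-and-rank question answerer: turn the question and the
# documents into TF-IDF vectors and return the most similar document.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
    "The Great Wall of China stretches thousands of kilometers.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

question = "Which city is the Eiffel Tower in?"
q_vector = vectorizer.transform([question])

# Rank documents by similarity to the question and print the best match.
scores = cosine_similarity(q_vector, doc_vectors)[0]
print(documents[scores.argmax()])
```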
- Google’s AlphaGo
In March 2016, Google demonstrated that its AlphaGo application could beat the world champion of Go (the most complex strategic game humans play). AlphaGo began by beating the European Go champion in October 2015. Those who observed that match were confident that the world champion, who was considered much better than the European champion, would have no trouble beating AlphaGo. Between the October and March matches, however, AlphaGo was able to play millions of games against itself, constantly learning more about Go and getting better. It easily beat the world champion when they played head-to-head.
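The self-play idea can be shown with a toy example: the sketch below applies simple tabular reinforcement learning to a tiny stick-taking game, so the program improves purely by playing against itself. AlphaGo’s actual method (deep networks combined with Monte Carlo tree search) is far more sophisticated; the game, constants, and update rule here are chosen only to illustrate the principle.

```python
# Toy "learning by self-play": Monte Carlo value updates on a subtraction game
# (take 1-3 sticks per turn; whoever takes the last stick wins).
import random
from collections import defaultdict

Q = defaultdict(float)               # Q[(sticks_remaining, move)] -> estimated value
ALPHA, EPSILON, N_STICKS = 0.1, 0.2, 11

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def choose(sticks):
    if random.random() < EPSILON:                                   # explore
        return random.choice(legal_moves(sticks))
    return max(legal_moves(sticks), key=lambda m: Q[(sticks, m)])   # exploit

for _ in range(50_000):                                             # self-play episodes
    sticks, history = N_STICKS, []
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    # The player who made the last move wins; moves alternate, so walking the
    # history backward assigns +1 to the winner's moves and -1 to the loser's.
    for i, (state, move) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

# The learned policy should roughly match the known strategy: when possible,
# take (sticks % 4) so the opponent is left with a multiple of four.
for s in range(1, N_STICKS + 1):
    best = max(legal_moves(s), key=lambda m: Q[(s, m)])
    print(f"with {s} sticks left, take {best}")
```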
Putting It All Together
The commercial attention focused on the possibilities of AI technologies in the 1980s was stimulated by the success of two expert systems: Dendral and MYCIN. The current round of commercial interest in AI is driven by the popular successes of Watson and AlphaGo. These victories aren’t of much value in themselves, but the capabilities demonstrated in the course of these wins are hugely impressive. In the case of Watson, it’s now clear that applications can be provided with NL interfaces that can query and respond to users in more or less open-ended conversations. At the same time, Watson is capable of examining huge databases and organizing the knowledge it finds to answer complex, open-ended questions. In the case of AlphaGo, it’s equally clear that an application capable of expert performance can continue to learn, whether by examining huge online databases of journals and news stories or by playing against itself, and can rapidly improve at performing a task faster, better, or cheaper.
There have been no recent major technological breakthroughs; all the basic technologies used today have been around for at least two decades. There have, however, been incremental advances, and these, in turn, have prompted researchers to revisit older techniques and reevaluate their power. Consequently, deep neural networks, various backpropagation-based training techniques, and reinforcement learning have been combined with techniques for searching massive databases, and with the steady growth of computing power, to generate a powerful new generation of AI applications. The new applications are designed around architectures that combine many different techniques (sometimes the same technique used in multiple ways) running on multiple machines, bringing different problem-solving approaches to bear and leading to exciting new solutions.
Cognitive computing does not describe a specific technology or even a well-defined approach to computing. The term is now used to describe a broad approach to application development that combines a wide variety of different techniques. Cognitive applications combine AI and non-AI techniques in complex architectures that include not only knowledge capture and knowledge analysis capabilities, but also NL and visual front ends and large-scale database search capabilities. A significant feature of cognitive applications is their proven ability, in at least some circumstances, to learn rapidly and to improve on their own. In the future, cognitive applications will link to the Internet, constantly reading journals, newsfeeds, and conference proceedings and forever improving their problem-solving capabilities.