Jul 28, 2015
 

Brain Design Inspired Computing is Here

Computing inspired by the design of brains is rapidly progressing. Very rapidly.

Companies like IBM and Qualcomm are financing neurochip projects, and in the case of IBM’s Cognitive Computing push, the company may be betting its own future on neuromorphic technology. Europe is investing US $1.3 billion in the Human Brain Project, which sets out to simulate the human brain. Not to be left behind, the US announced in 2013 that it is investing $300 million in its own Brain Initiative with similar objectives. Researchers in the UK, in Canada, at Stanford University, and at DARPA are all working on various aspects of the neuromorphic computing puzzle and are now publishing their results.

Deep thinkers like Stephen Hawking and tech billionaires like Bill Gates and Elon Musk ominously warn about the impending perils of this technology, while proponents (including Paul Allen, also of Microsoft fame) fight back. Many of the world's scientists are dismayed over how the Human Brain Project is unfolding, fearing the project is quixotic and not transparent, and they are now raising a ruckus. Philosophers continue to rail against the whole matter of intelligent machines, but this time not so safely detached, since, with recent technical advances, the future is a lot closer now than it was in the last artificial intelligence (AI) go-around more than 25 years ago.

With so much global investment activity, wide-eyed optimism, and frightened hyperventilating, is there something happening here?

There is. It’s a combustive mix of two things. First, computers are nearing brain scale, and second, human beings have always been intrigued with themselves. Our interest in the mystery of our own minds is as old as philosophy itself and, from my perspective, that fascination fuels not only hyperbole but incredible persistence. Furthermore, in the case of whole brain emulation (uploading a brain/mind into a machine), immortality beckons, attracting many more people, especially the obsessive. The motivations here are all too well known.

On the topic of scale, computing is now getting very interesting. Just keep in mind two numbers: 10^10 and 10^14. The first represents the approximate number of neurons within the brain, accurate to within an order of magnitude. The second represents an estimate of the number of connections between neurons within the brain. Brains get their power not from their speed (they are awfully slow) but from the connections between neurons, which enable massive, low-energy parallelism. IBM’s neural chip program has this scale of neurochip computing as a target. On the way toward this ambitious goal, IBM conducted the first software simulation of a neural network at this scale in 2012.
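To make those two numbers concrete, here is a trivial sketch in Python; the figures are simply the order-of-magnitude estimates quoted above, not measurements:

```python
# Order-of-magnitude estimates quoted above; approximations, not measurements.
NEURONS = 10**10        # roughly 10 billion neurons in a human brain
CONNECTIONS = 10**14    # roughly 100 trillion connections (synapses)

# Each neuron connects, on average, to thousands of others. That fan-out, not
# raw speed, is where the brain's massive, low-energy parallelism comes from.
print(f"~{CONNECTIONS / NEURONS:,.0f} connections per neuron")   # ~10,000
```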

Using a smaller-scale neural network, researchers in Canada created the Spaun project, which carefully attempts to simulate human working memory (the part of our brain used for problem solving) and rudimentary parts of vision processing; they published their results in Science in 2012. How much time will pass before they or someone else attempts something similar at the full human brain scale of 10^10 and 10^14? How much more time will elapse before we leave behind the well-used term FLOP (floating point operation) and instead simply say that “this computer accommodates 20 million HBS” (human brain scale)?

Using simple math, taking into account both IBM’s direction for improved power efficiency in neural chips and AMD’s announced plans for significant reductions in power consumption for conventional chips, and assuming these trends continue, one could calculate at what point a computing architecture will accommodate 10^10 neurons and 10^14 connections within the human brain’s 20 W energy footprint (yes, our brains run on about 20 W of energy). Energy consumption might achieve human scale by 2028, but by that time computing architecture will have scaled well beyond 1 HBS. By 2028, we may be measuring processing by the mega human brain scale. Is a 20-MHBS system in our future?
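For the curious, here is a minimal sketch of that paper-napkin math. The 2015 baseline efficiency and the doubling period are purely illustrative assumptions, not IBM or AMD figures, chosen only to show how such a projection works; change them and the crossover year moves accordingly.

```python
# Paper-napkin projection: in what year might 1 HBS (10^10 neurons, 10^14
# connections) fit inside the brain's ~20 W budget? The baseline efficiency and
# doubling period below are illustrative assumptions, not vendor figures.
BRAIN_POWER_W = 20.0          # the human brain's energy footprint
HBS_CONNECTIONS = 10**14      # connections in one human brain scale (HBS)

year = 2015                   # starting point for the projection
efficiency = 5 * 10**9        # assumed connections simulated per watt in 2015
DOUBLING_PERIOD_YEARS = 1.3   # assumed efficiency doubling time

while efficiency * BRAIN_POWER_W < HBS_CONNECTIONS:
    year += DOUBLING_PERIOD_YEARS
    efficiency *= 2

print(f"Under these assumptions, 1 HBS fits in ~20 W around {year:.0f}")
```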

Twenty-five years ago, this scale of neural network design was unimaginable. Today, it is just around the corner. Twenty-five years ago we didn’t understand as much about human working memory. Today, functioning simulations, however crude, are here. Because the hardware and software approaches have advanced this far, nearly everyone of note is taking this seriously.

As impressive as these recent developments may sound, there is still a seemingly overwhelming amount that we don’t know about brains. Researchers still debate how much computing power will be needed to simulate a human brain; the estimates listed here may be several orders of magnitude too low. Nor do we yet fully understand how closely a computer simulation would need to follow the only partially understood and messy biological and chemical details of the brain in order to prove useful. Scientists are at the very earliest stages of mapping out the different processes at different levels: from molecules, to neurons, to neural networks, to functional groupings, and finally to inference and behavior. Profound arguments abound as to what constitutes satisfactory understanding at each level, how these levels actually map to each other, and what actually constitutes intelligence (or, for that matter, consciousness or recursive improvement).

Even the on-the-surface-silly concept of whole brain emulation is getting serious attention from scientists and philosophers. What would happen if humankind created a machine superintelligence? While Ray Kurzweil may have reignited interest in 2005 with his book The Singularity Is Near, in which he posited an upcoming technological singularity, plenty of others have since put pen to paper and are elucidating the many points of this very multifaceted topic. In the past, all this talk was pure conjecture. Today, it is only partially conjecture. We are stepping into the doorway of a new era of computing, though we are still uncertain whether it is even a doorway, whether we are even at the threshold, and, if it is a doorway, what lies on the other side.

But between these wilder points of conjecture about the ultimate fate of humankind in the face of a superintelligence singularity and the world today stands a stream of technology innovation, here and still to come, that has been inspired by brains. Google has been documenting its uses of a form of artificial neural network called deep learning networks. These networks take advantage of the large scale of computing available today to create much larger and more complex neural networks patterned after brain biology. Google can extract street numbers from Street View images for all of France within an hour. IBM’s TrueNorth chip, which was inspired by biological brains, isn’t focused on brain simulation at all but on more general-purpose pattern recognition in images, audio, and words across many sources of big and messy data.
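To give a flavor of the idea behind a deep learning network, the toy sketch below trains a tiny two-layer network on the XOR pattern using only NumPy. It is a deliberately minimal illustration of the core concept (layers of simple units whose connection weights are adjusted from examples); the production systems mentioned above use vastly more layers, units, and training data.

```python
# Toy sketch of a layered neural network: units whose connection weights are
# adjusted from examples. Real deep learning systems are far larger and deeper.
import numpy as np

rng = np.random.default_rng(0)

# Tiny dataset: learn XOR, a pattern no single layer of units can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights (input -> 8 hidden units -> output), plus biases.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    hidden = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(hidden @ W2 + b2)     # the network's predictions

    # Backpropagation: nudge every connection to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print(out.round(2))   # should approach [[0], [1], [1], [0]]
```

Even at this toy scale, the network learns a relationship that no single layer of units can capture, a hint of why stacking layers matters as these systems scale up.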

As this era of computing unfolds, it will be useful to break this down. How should one analyze the different methods being used in a way that makes business sense? What predictions could one make about specific industries that could be affected? How would one make sense of the wild and diverging predictions about the longer-term future? And finally, while predicting whether a sentient, computer-based superintelligence is in our future or not may be impossible, predicting human reaction to such a proposition is a little easier. We are humans, after all. Predicting our behavior ought to be easier than predicting the behavior of a machine superintelligence.

Early in my career, I was somewhat optimistic about the simpler utility of artificial neural networks, but for a long time I have been very skeptical about whether any kind of computing could simulate brains at all. Over the past few months, I have found myself doing simple paper-napkin math on when computing power just might be up to the task. Observing myself doing this, I have concluded that I am no longer so certain of my prior beliefs. If human brains do have machine analogs, the machine may soon be powerful enough.

Whatever your perspective is on this new age of neuromorphic computing, it is here. It has a lot of money and ambitious people surrounding it and it has computing scale underneath it. Now is the time for businesses to begin thinking more deeply about it, while they still can.

Objects in the future may be closer than they appear.


Vince Kellen, Ph.D.

Vince Kellen, Ph.D. is a Senior Consultant with Cutter's Business Technology & Digital Transformation Strategies and Data Analytics and Digital Technologies practices. Dr. Kellen's 25+ years of experience involves a rare combination of IT operations management, strategic consulting, and entrepreneurialism. He is currently CIO at the University of Kentucky, one of the top public research institutions and academic medical centers in the US.
