Novel Computers that Learn From Mistakes to Debut in 2014

Posted in Medical Computing by Brian Buntz on December 30, 2013

Computers with a brain-inspired “neuromorphic processor,” systems able to learn from their own experiences, could transform the face of computing, potentially doing everything from advancing facial and speech recognition technology to rendering computer crashes obsolete. The breakthrough could also greatly simplify the task of programming.

Neurosynaptic Cores
A network of neurosynaptic cores modeled on the monkey brain. Image courtesy of IBM.

Qualcomm, IBM, and Stanford University are among those pioneering the technology, which enables computers to actively learn from new data and dynamically adapt to it, much as organisms learn by interacting with the external world.
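The headline idea of learning from mistakes is easiest to see in one of the oldest learning algorithms there is. The Python sketch below is purely illustrative, not how a neuromorphic chip actually works (those use networks of spiking silicon neurons): it is an online perceptron that adjusts its weights only when it makes a wrong prediction, so every mistake becomes a lesson.

```python
# Illustrative sketch only: an online perceptron that "learns from mistakes."
# Real neuromorphic chips use spiking silicon neurons, not this textbook rule.

def predict(weights, bias, x):
    """Classify an input vector as +1 or -1."""
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation >= 0 else -1

def learn_online(stream, n_features, lr=0.1):
    """Adapt to each example as it arrives, updating weights only on errors."""
    weights, bias = [0.0] * n_features, 0.0
    for x, label in stream:
        if predict(weights, bias, x) != label:   # a mistake...
            weights = [w + lr * label * xi for w, xi in zip(weights, x)]
            bias += lr * label                   # ...triggers a weight update
    return weights, bias

# Toy usage: learn the logical AND function from a stream of labeled examples.
examples = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)] * 20
w, b = learn_online(examples, n_features=2)
print([predict(w, b, x) for x, _ in examples[:4]])  # -> [-1, -1, -1, 1]
```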

Advances will obviously have implications for the medical field. Think of how IBM’s Watson supercomputer is already being used to help diagnose cancer, or how San Diego-based Qualcomm is involved in pretty much everything when it comes to the digital health revolution.

A commercial version of Qualcomm’s neuromorphic processor could be out later in 2014, according to a recent New York Times story.

Speaking in a sponsored talk at MIT Technology Review’s EmTech conference in October, Qualcomm’s chief technology officer, Matt Grob, mentioned applications ranging from artificial vision sensors to robot controllers and even brain implants.

“What is new now is the ability to drop down large numbers of these structures on silicon. The tools we can create are very sophisticated. The promise of this is a kind of machine that can learn, and be programmed without software—be programmed the way you teach your kid,” Grob said at the conference, according to MIT Technology Review.

One of the scientists involved in neuromorphic processor research, Stanford’s Andrew Ng, has long said that the software driving current-generation robotics technology greatly limits the scope of its applications.

For instance, Ng has pointed out that researchers have long predicted domestic robots capable of cleaning people’s houses, doing everything from dishes and laundry to vacuuming. While some domestic robots have hit the market, such as iRobot’s Roomba vacuum cleaner, the intelligence of these devices has been extremely limited.

Advances in artificial intelligence will, as Ng points out, enable smarter robots to accurately perceive and understand the world around them, and then use that information to exert control over their environment.

This capacity is illustrated by Ng’s work on autonomous helicopters. Researchers had failed in their attempts to hand-write software that would let a remote-controlled helicopter fly on its own, so Ng instead developed software that enabled a computer to learn to fly the helicopter through experimentation. The technique proved successful, and Ng’s code even enabled the computer to learn stunt maneuvers comparable to those performed by the best human pilots.
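To make the trial-and-error idea concrete, here is a toy sketch. It is emphatically not Ng’s method, which relied on far more sophisticated reinforcement and apprenticeship learning; this version uses tabular Q-learning, a standard textbook technique, on an invented one-dimensional “hover at a target altitude” task. The controller is never told the rules of good flying: it only receives a reward signal and improves from experience.

```python
import random

# Toy illustration only: tabular Q-learning on an invented 1-D "hover at
# altitude 5" task. Ng's real helicopter work used far more sophisticated
# methods; this sketch just shows a controller improving by trial and error.

ACTIONS = [-1, 0, 1]                 # descend, hold, climb
ALTITUDES, TARGET = range(11), 5
Q = {(s, a): 0.0 for s in ALTITUDES for a in ACTIONS}

def step(altitude, action):
    """Apply a control action; reward grows the closer we hover to TARGET."""
    new_alt = min(max(altitude + action, 0), 10)
    return new_alt, -abs(new_alt - TARGET)

for episode in range(500):
    alt = random.choice(ALTITUDES)
    for _ in range(20):
        # Explore occasionally; otherwise exploit the best action found so far.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(alt, a)])
        new_alt, reward = step(alt, action)
        # Q-learning update: learn from the outcome of each attempt.
        best_next = max(Q[(new_alt, a)] for a in ACTIONS)
        Q[(alt, action)] += 0.5 * (reward + 0.9 * best_next - Q[(alt, action)])
        alt = new_alt

# The learned policy steers toward the target altitude from any starting point.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in ALTITUDES}
print(policy)  # e.g. {0: 1, 1: 1, ..., 5: 0, ..., 10: -1}
```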

The field of machine learning could be further bolstered by what Ng terms the “one-program hypothesis,” which assumes that a single algorithm could enable artificial intelligence to perceive visual, auditory, and tactile data. To support the hypothesis, he points to research on the brain’s neuroplasticity, which shows that neurons normally devoted to one sense, such as touch, can be rewired to process visual input.
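As a loose illustration of the hypothesis, the sketch below runs one generic learning routine, a simple nearest-centroid classifier, over two invented datasets standing in for visual and auditory features. Every number and label here is made up for illustration; the point is only that the algorithm never changes, only its inputs do.

```python
# A sketch of the "one-program hypothesis": the same generic learning routine,
# unchanged, handles data from different senses. The two toy datasets below
# (invented for illustration) stand in for visual and auditory features.

def train(examples):
    """Learn one centroid per class from (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = [s + xi for s, xi in zip(sums.get(label, [0.0] * len(x)), x)]
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    dist = lambda c: sum((ci - xi) ** 2 for ci, xi in zip(c, x))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# "Visual" features (say, edge intensities) and "auditory" features
# (say, frequency-band energies): one algorithm learns both.
visual = [([0.9, 0.1], "face"), ([0.8, 0.2], "face"), ([0.1, 0.9], "car")]
audio  = [([0.2, 0.7, 0.1], "speech"), ([0.8, 0.1, 0.1], "music")]

print(classify(train(visual), [0.85, 0.15]))     # -> face
print(classify(train(audio),  [0.3, 0.6, 0.1]))  # -> speech
```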