Neuromorphic computing is producing microchip architectures that mimic the way the human brain works.
It pulls from physics, biology, electronic engineering, computer science and mathematics to create computer systems that emulate the human nervous system.
Here is the video with an overview of the project from IBM…
This new field borrows its design principles and architecture from biological nervous systems to create auditory processors, vision systems and other sensory processors that enable modern autonomous robots to function. Neuromorphic engineering as a whole is straight out of a sci-fi movie.
You probably want to pay attention to this technology over the next two decades or so. In terms of how it will translate to consumers, we are at the equivalent of the late ’60s with home computers, or even the late ’40s with televisions.
Most people think of assistive robots for the elderly, but the technology could be used for much more. For example, the library of cognitive logic embedded in neuromorphic computers could power and improve how self-driving cars respond to collisions in real time.
An entire cottage industry of retail software would improve the accuracy of its in-store tracking systems.
Security systems would move beyond reacting to a rigid set of procedural commands, and be able to bring “brain power” to analyzing information across distributed networks of data much faster and more accurately. Neuromorphic computing isn’t just about AI; it’s about creating CPUs that can handle the demands of modern computing.
A Little History
A concept originally developed by Carver Mead in the late 1980s, neuromorphic computing is the use of systems of electronic circuits that essentially mimic neuro-biological architectures present in the human nervous system.
More recently, “neuromorphic” has been used to describe digital, analog and mixed-mode analog/digital VLSI (very-large-scale integration) systems, as well as software that implements models of neural systems.
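To make the idea of “software that implements models of neural systems” concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the simple spiking model that much neuromorphic hardware and software builds on. This is an illustrative toy, not IBM’s or Heidelberg’s design; the function name and parameters are my own.

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    input_current: sequence of input values, one per time step.
    Returns (voltages, spike_times).
    """
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:      # threshold crossed: the neuron fires
            spikes.append(t)
            v = v_rest         # reset after the spike
        voltages.append(v)
    return voltages, spikes

# A constant input strong enough to drive periodic spiking
volts, spike_times = simulate_lif([1.5] * 200)
```

Instead of computing with continuous values on every clock tick, neurons like this communicate only when they spike, which is part of why neuromorphic chips can be so power-efficient.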
One of the first neuromorphic chips, named “Spikey,” was developed by the Electronic Vision(s) group at the University of Heidelberg. The chip was produced as part of the EU’s Human Brain Project.
Last year, Dr. Dharmendra S. Modha announced that IBM has a chip with one million neurons, 256 million synapses, and 4096 cores. To give some context for how quickly the technology is progressing, he stated that…
“In 2011, we had a chip with one core,” Modha told me. “We have now scaled that to 4096 cores, while shrinking each core 15x by area and 100x by power.”
It is important to note that IBM is one of the biggest investors in this new technology. The company has poured billions of dollars into developing a brain chip over the last 5-6 years.
The idea of this hardware architecture meeting advanced AI in the hands of the company that profited from the Holocaust is, well…
a bit fucking creepy.
According to their website, one of the primary goals stated on their research page is…
We envision augmenting our neurosynaptic cores with synaptic plasticity to create a new generation of field-adaptable neurosynaptic computers capable of online learning.
“Online learning?!” So apparently they want to hook these things up to the internet. Great. Now we can argue with AI on Twitter.
I’ve had this weird feeling that’s what we’ve already been doing, given how low their latest traffic numbers are.
My key areas of interest in this new technology are how it might accelerate various aspects of robotics and artificial intelligence research.
There is also a lot of promise in how it might distribute computational resources across networks as the demands of parallel, real-time processing increase with the rise of the Internet of Things.
What do you think about this new technology and how it might change, improve or disrupt your industry? Leave a comment below.