Jeff Hawkins, the founder of Palm Inc., the company responsible for the first widely successful handheld computer and some of the first-generation smartphones, discussed how the human brain absorbs information in his 2004 book On Intelligence. Hawkins trained as an electrical engineer, but he took an interest in how the brain functions, and in 2002 he founded the Redwood Neuroscience Institute, a private nonprofit whose research focuses on understanding how the neocortex of the brain processes information. (The institute was later donated to UC Berkeley, where it became the Redwood Center for Theoretical Neuroscience.)
Speaking about his book today, Hawkins says, “There was very little I would change about that book. There’s a lot I would add. There’s a ton of stuff where I know exactly how it works, that I didn’t know when I wrote it.”
Nowadays, Hawkins leads research at Numenta, a company based in Redwood City, CA that builds computer memory systems modeled on the connections between neurons in the brain. Numenta, a privately funded business, was founded in 2005 and bases its research almost entirely on the concepts in Hawkins' book. What do they do? They build software that scans large data sets to detect trends and anomalies. So far, Numenta's algorithms and software have been applied to data-heavy domains ranging from GPS coordinates to performance metrics for cloud computing systems. According to recent reports, IBM has assembled a group of roughly 100 people at its Almaden research lab to test Hawkins' algorithms further and explore ways of building next-generation systems that can run them.
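Numenta's actual algorithms (Hierarchical Temporal Memory) are far more sophisticated, but the general task they tackle, flagging anomalies in a stream of metrics, can be sketched with a simple rolling statistic. The code below is an illustrative stand-in of my own, not Numenta's method:

```python
from collections import deque
import math

def rolling_anomalies(stream, window=20, threshold=3.0):
    """Flag values far outside the rolling mean of the
    preceding `window` values (measured in standard deviations)."""
    recent = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((v - mean) ** 2 for v in recent) / window
            std = max(math.sqrt(var), 1e-9)  # floor avoids zero-variance blowup
            flags.append(abs(x - mean) > threshold * std)
        else:
            flags.append(False)  # not enough history yet
        recent.append(x)
    return flags

# A flat metric stream with one spike: only the spike is flagged.
data = [10.0] * 30 + [50.0] + [10.0] * 5
print([i for i, f in enumerate(rolling_anomalies(data)) if f])  # [30]
```

Real systems (Numenta's included) model temporal structure rather than a single moving average, but the interface is the same: values stream in, anomaly flags stream out.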
Hawkins is not alone in believing that the structure of the human brain will be the future of computing and will allow us to understand the world much more than we already can with our eyes and ears. Below you can find explanations from Hawkins pertaining to why he believes that the largest possibilities in deep learning all center around the brain.
When asked how he defines intelligence, Hawkins starts by saying that the human brain is where intelligence lies; it is the only system everyone can universally agree is intelligent. He concedes that some other animals have intelligent brains, but humans have by far the most intelligence. Hawkins then asks, "What does the human brain do that makes intelligence?" His answer to that question is what he feels intelligence truly is. Having learned one thing isn't intelligence, he explains, and neither is knowing a whole bunch of things. In Hawkins' mind, intelligence is the path one takes to learn something: the ways, methods, and techniques people use to master a skill or to learn and process information. It's how our brains function and develop, and that, Hawkins believes, is what intelligence truly is.
Hawkins goes on to explain that everyone's brain is a blank slate at birth: we know nothing because we have not yet experienced any aspect of life to learn from. He compares the neocortex to software, since the neocortex largely acts as the brain's memory and is where intelligence resides. People learn nothing if they do nothing; you have to take action and explore the world for your brain to learn. New experiences, new people, new ways of life, new perspectives: all of these can expand one's intelligence. And according to Hawkins, there is no pinnacle of intelligence. You can't simply reach the top and proclaim yourself intelligent; it's a never-ending process of learning more and more based on what you've experienced. This is why he sees a huge difference between 'biological intelligence' and the artificial intelligence scientists are trying to master: they feed knowledge into these AIs instead of letting them acquire intelligence the way the human brain does.
Hawkins explains that this universal algorithm, in its most basic form, is the pattern of learning the brain follows to develop our basic skills like touch, hearing, and sight. These are obviously different kinds of behaviors that rely on different senses and body parts, yet the brain uses the exact same algorithm to master each of them. The brain does not need to distinguish between these skills while learning them, which is why a single algorithm suffices, and why he calls it the 'universal algorithm'. According to Hawkins, the whole process is ultimately just complex "mathematics".
He goes on to say that the same principle applies when developing artificial intelligence. Scientists don't have to construct separate systems for each type of behavior (e.g., touching, hearing, seeing); all they need is one system running the algorithm that allows the AI to develop those skills. Remember, intelligence is the method by which something is learned. The brain uses the same method to learn everything, and that same idea carries over to AIs.
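The "one algorithm, many modalities" idea can be loosely illustrated in code: a single generic learning rule that never mentions vision or hearing can classify both kinds of input, as long as each arrives as feature vectors. This is my own toy illustration (a nearest-neighbor rule), not cortical learning:

```python
import math

def nearest_neighbor_classify(train, query):
    """One generic learning rule: label a query by its closest
    training example, regardless of what the features represent."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# The same function handles "visual" features...
vision_train = [((0.9, 0.1), "bright"), ((0.1, 0.9), "dark")]
print(nearest_neighbor_classify(vision_train, (0.8, 0.2)))  # bright

# ...and "auditory" features, with no modality-specific code.
audio_train = [((440.0, 0.5), "tone_A"), ((880.0, 0.5), "tone_B")]
print(nearest_neighbor_classify(audio_train, (445.0, 0.4)))  # tone_A
```

The point is structural: nothing in the learner changes between modalities, only the data fed to it, which is the software analogue of Hawkins' claim about the neocortex.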
Brain vs Software & Hardware
The brain is an incredible machine. It has the uncanny ability to manage and overcome damage such as the loss of a body part: lose a finger, and the brain can rewire its circuits to cope with the loss and still operate properly. The question is: does modern software and hardware have this same kind of resilience?
Hawkins does not believe so. Computers easily fail when they lose one little part, no matter how minor, and unlike the brain they cannot reconfigure themselves to work without the missing piece. Hawkins explains that the data in our brain is distributed across millions of cells, so losing some of them makes little difference. Computers aren't designed this way: each individual part has a specific function, and the machine fails without it.
The brain also keeps forming new connections as it goes, so even if one connection is severed, it can use a newer one to keep running without missing a beat. It's truly amazing.
Hawkins is confident he can bring this same kind of resiliency to AIs built in silicon. He says his team will engineer memory banks in silicon that function much like the brain's neocortex. Also following the brain's model, the data in those memory banks will be distributed, so the whole system won't fail if one part stops working. Hawkins explains, "Parts of the memory system talk with other parts of the memory system so if some part craps out, not only can things keep working but it will route the information to new areas."
This gives the silicon brain the ability to keep running after traumatic injuries, much like our own brain. The best part: Hawkins states that he has already tested this with his software, and it works. His team deliberately destroys particular parts to see how the system reacts, and it recovers and keeps operating even without the destroyed part.
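Why does distributing information make a system damage-tolerant? Numenta's systems represent things as sparse distributed patterns of active cells, so no single cell is essential. A toy demonstration of that property (my own illustration, not Numenta's code): knock out a quarter of a pattern's active bits and it still matches itself far better than it matches anything else.

```python
import random

def overlap(a, b):
    """Similarity between two sparse patterns = number of shared active bits."""
    return len(a & b)

random.seed(0)
n_bits, n_active = 2048, 40

# Two unrelated sparse patterns over the same cell population.
pattern = set(random.sample(range(n_bits), n_active))
other = set(random.sample(range(n_bits), n_active))

# "Damage": destroy a quarter of the pattern's active cells, keeping 30 of 40.
damaged = set(random.sample(sorted(pattern), 30))

print(overlap(damaged, pattern))  # 30: still strongly matches itself
print(overlap(damaged, other))    # near 0: still distinct from other patterns
```

Because recognition depends on overlap rather than on any particular bit, losing individual cells degrades the match only slightly, which is the property Hawkins' fault-injection tests exercise.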
Hawkins believes that scientists are getting it wrong when they talk about "computer vision". What they are really talking about, he says, is image classification, which is only a "subset of vision". Image classification is important, but true vision is far harder: the brain devotes enormous resources to it, and replicating that for AIs would require far more brain-like hardware than current technology can provide.
Hawkins also believes that there are no real tangible limits to what can be done with artificial intelligence in the future. He says that humans can construct an AI brain far larger than our own, but the problem lies in actually teaching it. It takes a long time for humans to learn new skills, and even longer for AI brains. Hawkins says the focus shouldn't be on building the biggest AI brain to hold the most information; it should be on building a brain that learns the fastest. The system must be given a huge speed boost if scientists really want AI to catch on.
If Hawkins is right in his beliefs, don’t expect to see AI become commonplace in your lifetime. This is truly futuristic stuff, comparable to terraforming other planets.