Book Review: A Thousand Brains


The brain is a magnificent organ. Its complexity has made it hard to understand, and we are probably far away from a complete model of it. However, I still find it interesting to read about theories, models, and possible explanations of the brain's amazing abilities. In this post, I will review and explain the book A Thousand Brains by Jeff Hawkins, in which he describes his, and his team's, theory of the neocortex. I like the theory, and I think it has some interesting implications for creating intelligent agents. This is the first part of a sequence of posts on intelligence, the brain, and AI.

One popular view of the brain in neuroscience nowadays is that the brain is a prediction machine. Whatever we do, the brain is trying to predict the coming inputs based on previous sensory inputs. To do this, the brain has to have a model of the world. That the brain is a prediction machine is also a concept that Karl Friston explores in his theory of the free-energy principle. According to Friston, the brain acts to minimize free energy by predicting coming inputs. (This is a very simplified account of Friston's theory.)
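As a toy illustration of the prediction-machine idea (my own sketch, not a model from the book or from Friston's work), imagine an agent that predicts each new sensory input from the previous one and measures how surprised it is:

```python
# Toy sketch of "the brain as a prediction machine" (my illustration):
# predict each input from the last one and measure the prediction
# error, i.e. the "surprise".
def predict(history):
    # Naive internal model: the next input will look like the last one.
    return history[-1]

inputs = [1.0, 1.1, 1.2, 3.0]
errors = [abs(inputs[t] - predict(inputs[:t])) for t in range(1, len(inputs))]
print(errors)
```

The sudden jump to 3.0 produces a much larger error than the earlier inputs. Loosely speaking, a brain that minimizes such prediction errors over time is what Friston's free-energy framing describes, though his theory is far richer than this sketch.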

But how does the brain keep track of this model? And how does it make the predictions? These are the types of questions that Hawkins tries to answer with his theory. Neuroscientists have long known that our thoughts, ideas, and perceptions come from the activity of neurons, and that everything we know is stored in the connections between neurons. However, how this happens is not yet understood. Hawkins argues that the secret lies in the neocortex, the part of the brain responsible for most of the functions we associate with intelligence. In the neocortex, we have regions for language, perception, planning, and much more.

The neocortex is made up of hundreds of thousands of small columns, each about one square millimetre in base area and 2.5 millimetres in height. These columns are called cortical columns, and they are the main characters of Hawkins' theory. The theory is that the entire neocortex, no matter the special function of a given region, works using the same basic algorithm. This algorithm allows the cortical columns to learn models of thousands of objects and concepts, all of which come together in our model of the world. For the brain to function and make predictions, it must maintain such a model. A model of the world is very complex, which is why I will first describe Hawkins' account of how cortical columns model physical objects.

When looking at an object, we immediately know how far away from our body it is. Similarly, if it is a familiar object, such as a coffee cup, we know what it looks like even without seeing the entire object. Hawkins argues that both of these phenomena can be understood through reference frames. Reference frames are best understood as the gridlines on a map, or a timeline for historical events: they allow us to locate things relative to each other. For a physical object, the brain attaches a reference frame to the object, which lets us know the location of the cup's handle relative to its bottom, and so on. Similarly, the brain attaches a reference frame to the body, allowing it to gauge the distance to all objects.
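To make the idea concrete, here is a minimal sketch (my own illustration with made-up coordinates, not Hawkins' actual model) of an object-centric reference frame: feature locations are stored relative to the cup itself, not relative to the viewer:

```python
# Hypothetical coordinates (in cm) in the cup's own reference frame;
# the origin is placed at the centre of the cup's bottom.
cup_frame = {
    "bottom": (0.0, 0.0, 0.0),
    "handle": (4.0, 0.0, 5.0),
    "rim":    (0.0, 0.0, 9.5),
}

def relative_location(frame, a, b):
    """Location of feature b relative to feature a, within the object's frame."""
    return tuple(bb - aa for aa, bb in zip(frame[a], frame[b]))

# The handle's location relative to the bottom stays the same no matter
# where the cup sits in the room or how we look at it.
print(relative_location(cup_frame, "bottom", "handle"))  # (4.0, 0.0, 5.0)
```

Because the locations are object-relative, knowing where one feature is tells you where the rest should be, which is why we can predict the look of a partially hidden cup.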

But the human brain can learn and understand complex, non-physical things as well. The same basic algorithm works for learning the coffee cup and the events of World War II. The dimensions of the reference frame do not have to be the same, though: for physical objects we mostly use space to model and remember, but for abstract things the dimensions can be something else. For WWII, the brain might use time, movies, location, and many more dimensions. The important thing is that the brain can separate different pieces of knowledge from each other by their relative locations in a reference frame.

As the neocortex consists of hundreds of thousands of cortical columns, Hawkins thinks that each column can model hundreds of objects. He also thinks that many columns model an object simultaneously. The name of the theory, The Thousand Brains Theory, comes from the hypothesis that the brain is like a distributed database that reaches an understanding by having cortical columns vote on which model is appropriate. I find this part quite confusing; it is best explained by imagining being dropped into a random city.

Suppose that you are dropped into a random city centre. Before even dropping in, you have models of a number of cities that you have been to, all distributed over a number of cortical columns. When you land, you immediately see a cafe and a library. Now, all cortical columns that have a model of a city with a cafe and a library become active. To determine which city you are in, the columns vote, and the model most represented among the cortical columns wins: that is the city you will think you are in.
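The voting idea can be sketched roughly like this (my own toy illustration with made-up cities and features, not Hawkins' actual algorithm):

```python
from collections import Counter

# Each cortical column holds models of a few cities, represented here
# simply as sets of observable features.
columns = [
    {"Paris":  {"cafe", "library", "river"},
     "Oslo":   {"cafe", "harbour"}},
    {"Paris":  {"cafe", "library"},
     "London": {"cafe", "library", "tube"}},
    {"Oslo":   {"harbour", "museum"},
     "London": {"tube", "palace"}},
]

def vote(columns, observed):
    """Every column votes for each of its models consistent with the
    observed features; the city with the most votes wins."""
    votes = Counter()
    for column in columns:
        for city, features in column.items():
            if observed <= features:  # all observed features are in the model
                votes[city] += 1
    return votes.most_common(1)[0][0]

print(vote(columns, {"cafe", "library"}))  # Paris
```

No single column needs a complete picture; the consensus across many partial models settles the question, which is the sense in which the brain is "a thousand brains".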

This is what the first part of Hawkins' book is about: his theory of the neocortex. The rest of the book is about Hawkins' views on AGI and the future of society. I found that part less interesting, which is why I have focused only on his theory of the neocortex in this post. Personally, I like the theory, while also acknowledging that I am not in any way knowledgeable enough about neuroscience to judge it seriously. Reading this book has increased my belief in the possibility of building intelligent machines by mimicking parts of the human brain. (More on that in coming posts.) Hawkins' theory suggests that human intelligence arises from following a basic algorithm that is determined by the architectural structure of the brain. This suggests that it might be possible to create an intelligent machine by giving it a basic structure and having it learn by a simple algorithm.

In fact, in the coming posts I will argue that a reinforcement learning approach to AI might be the way to achieve general intelligence in machines. First, I will look at what reinforcement learning is and at DeepMind's paper 'Reward is Enough'. Then, I will take a look at how DeepMind has implemented 'Reward is Enough' in one of their latest projects. I will also update my prior on when AGI will be achieved.
