Could a neuroscientist understand a microprocessor? Eric Jonas and Konrad Paul Kording, Commentary
Neuroscience has recently developed tools to record from large numbers of neurons. Even with this rapid influx of data, however, understanding the principles of brain function has remained challenging and progress has been slow. This research article uses a microprocessor as an analogy for the brain to show why the current ways of thinking in neuroscience may not be fruitful. A complete transistor-level netlist of the MOS 6502 is available. This is like having the full connectome of an organism (think C. elegans). The activity of the microprocessor while three games (Donkey Kong, Space Invaders, and Pitfall) are running is analyzed. This is analogous to studying neural activity while an organism is behaving in a particular way.
In the future it may be possible to execute simpler, custom code on the processor to tease apart aspects of computation, but we currently lack such capabilities in biological organisms.
The paper lists the following dissimilarities between the microprocessor and the brain:
- The brain computes stochastically. (Is this the reason why it can compute with just 20 W of power and little data while computers need many orders of magnitude more power and data? Can we understand this energy-cost disparity between stochastic and deterministic computing using thermodynamic arguments?)
- The brain has a lot of redundancy and is robust to damage.
- The brain has hundreds of types of neurons, each individually very complex, while the microprocessor has just one type of transistor.
- The human brain has roughly 100 billion neurons while the model microprocessor has a few thousand transistors, making the processor far simpler than the brain. The best organism to keep in mind is therefore C. elegans, whose nervous system comes closest to the microprocessor in computational spirit.
- The authors missed the fact that the microprocessor is intelligently designed while the brain is a product of natural selection. The brain therefore carries many evolutionary relics that are of no use today, whereas the microprocessor is much cleaner in this sense.
- The authors also missed the fact that the brain changes over time, both in connectivity and in number of units, while the microprocessor remains largely fixed.
- The brain has a far more parallel architecture than the microprocessor.

The paper lists the following similarities between the microprocessor and the brain:

- Both consist of interconnected modules which perform specific tasks.
- Both operate on multiple timescales. (I disagree with this statement, as computers have a single clock which sets the timescale on which they operate. Yes, transistor-flipping operations may happen on multiple timescales, but the computation itself takes place at a rate determined by the clock.)
- Both process information hierarchically.
- Both can retain memory over time.

The paper gives two possible definitions of what it means to understand something. The first is to say that if we can fix a broken system, then we understand it. In other words, if we can replace a broken part of the system (think replacing the hippocampus), then we understand how that part of the system works. The second is to use David Marr's three levels of analysis: for any module of the system, we would "understand" it if we know its computational purpose, its algorithmic implementation, and its physical design.
The paper starts the analysis of the microprocessor by highlighting the fact that the netlist of the microprocessor is completely known, and that the state of every transistor at any particular time while it runs a computation can be recorded.
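As a minimal sketch of what such data might look like (the array names and shapes here are illustrative, not the paper's actual data format), the netlist can be held as an adjacency structure and the recording as a binary transistor-by-time matrix:

```python
import numpy as np

# Illustrative sizes: the 6502 has a few thousand transistors, recorded over
# some number of clock half-steps.
num_transistors = 3510
num_timesteps = 10_000

# Netlist as an adjacency list: transistor index -> indices of connected transistors.
netlist = {i: [] for i in range(num_transistors)}  # filled in from the wiring data

# Full "recording": binary state of every transistor at every time step.
states = np.zeros((num_transistors, num_timesteps), dtype=np.uint8)

# A transistor's "spike train" is then the set of times where its state rises 0 -> 1.
def rising_edges(trace):
    """Return time indices where a binary trace switches from 0 to 1."""
    return np.flatnonzero((trace[1:] == 1) & (trace[:-1] == 0)) + 1
```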
The first method used to analyze the microprocessor was graph analysis. They identified different "types" of transistors from their connectivity (circuit motifs) and temporal behaviour.
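A minimal sketch of this style of analysis (not the paper's actual pipeline) might cluster transistors by connectivity alone, here on a random stand-in graph:

```python
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering

# Stand-in for the real netlist: a random graph over a few hundred "transistors".
G = nx.gnp_random_graph(n=300, p=0.02, seed=0)

# Cluster transistors into a handful of putative "types" from connectivity alone.
adjacency = nx.to_numpy_array(G)
labels = SpectralClustering(
    n_clusters=5, affinity="precomputed", random_state=0
).fit_predict(adjacency)

# Simple connectivity summaries one might inspect per cluster.
degrees = np.array([d for _, d in G.degree()])
for c in range(5):
    print(f"cluster {c}: {np.sum(labels == c)} nodes, "
          f"mean degree {degrees[labels == c].mean():.1f}")
```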
The second method was to do lesion experiments. They found that removal of certain transistors is lethal, that is, all games stop. Removal of other subsets of transistors affected only one particular game, while removal of still other subsets had no effect at all.
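The lesioning idea can be sketched as follows; note that the simulator here is a toy stand-in (with made-up "essential" transistor sets), not the actual 6502 simulation used in the paper:

```python
num_transistors = 200                      # toy size for the sketch
games = ["Donkey Kong", "Space Invaders", "Pitfall"]

def simulate_boot(lesioned, game):
    """Placeholder for a real chip simulation: pretend a fixed subset of
    transistors is essential for every game and other subsets matter for
    one specific game each."""
    essential = set(range(0, num_transistors, 10))
    game_specific = {g: set(range(i, num_transistors, 17))
                     for i, g in enumerate(games, start=1)}
    return not (lesioned & essential) and not (lesioned & game_specific[game])

# Lesion one transistor at a time and record which games still boot.
results = {t: {g: simulate_boot({t}, g) for g in games}
           for t in range(num_transistors)}

lethal        = [t for t, r in results.items() if not any(r.values())]
one_game_only = [t for t, r in results.items() if 0 < sum(r.values()) < len(games)]
no_effect     = [t for t, r in results.items() if all(r.values())]
print(len(lethal), len(one_game_only), len(no_effect))
```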
The authors highlight the fact that even though we have isolated certain transistors, this brings us no closer to understanding how the processor works. They then reiterate that it would be ideal to study behavior which elicits minimal and stereotyped activity. An analogy would be studying a simple arithmetic operation that involves very few transistors.
The third method was to study the tuning properties of transistors: how individual transistors responded to the luminance of the screen. Even though this led to some interpretable plots, it did not lead to any real understanding.
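A tuning curve of this kind amounts to binning a transistor's spike rate by a stimulus variable. A minimal sketch on synthetic data (the luminance signal and "tuned" transistor here are invented, not the paper's recordings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: per-time-step screen luminance and one transistor's spike counts.
luminance = rng.uniform(0.0, 1.0, size=10_000)
spikes = rng.poisson(lam=2.0 + 3.0 * luminance)   # fake luminance-tuned transistor

# Tuning curve: mean spike count as a function of binned luminance.
bins = np.linspace(0.0, 1.0, 11)
bin_idx = np.digitize(luminance, bins) - 1
tuning = [spikes[bin_idx == b].mean() for b in range(len(bins) - 1)]
print(np.round(tuning, 2))
```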
The fourth method was to study the correlations between the spiking (rising-edge) activity of pairs of transistors. These correlations were found to be very weak.
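Pairwise spike-train correlations of this sort can be computed directly from the binary state matrix; a minimal sketch on toy data (random states standing in for the real recording):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary state matrix (transistors x time) standing in for the recording.
states = (rng.random((50, 5_000)) < 0.1).astype(np.uint8)

# Spike trains: 1 wherever a transistor's state rises from 0 to 1.
spikes = ((states[:, 1:] == 1) & (states[:, :-1] == 0)).astype(float)

# Pairwise Pearson correlations between spike trains.
corr = np.corrcoef(spikes)
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
print("mean pairwise correlation:", off_diag.mean())
```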
The fifth method was to study an analogue of the local field potential (see Fig 8 of the paper). They analyzed the averaged activity over extended regions of the processor and studied the power spectrum of its various frequency components.
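A power-spectrum analysis of such a region-averaged signal can be sketched with Welch's method; the "LFP" below is built from toy data, and the sampling rate is arbitrary rather than the paper's:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)

# Toy "LFP": average activity of a local patch of transistors over time.
patch_states = (rng.random((100, 20_000)) < 0.1).astype(float)
lfp = patch_states.mean(axis=0)

# Power spectrum of the region-averaged signal.
freqs, power = welch(lfp, fs=1000.0, nperseg=1024)
print(freqs[np.argmax(power)])   # frequency with the most power
```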
The sixth method was to use Granger causality. Suppose we have two time series, X(t) and Y(t). We ask how well we can predict future values of Y(t) given X(t). To do this, we compare two models: first, how well past values of Y(t) alone predict future values of Y(t); second, how well past values of both X(t) and Y(t) predict future values of Y(t). Comparing the two models tells us the predictive power of X(t). This analysis shows (correctly) which parts of the microprocessor interact with other parts, but the computational significance of the connectivity remains elusive.
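The two-model comparison described above can be written out directly as two least-squares autoregressions; this sketch uses synthetic signals (where X genuinely drives Y by construction), not the processor data:

```python
import numpy as np

rng = np.random.default_rng(0)
T, lag = 5_000, 3

# Toy signals: Y is partly driven by past values of X, so X should "Granger-cause" Y.
X = rng.normal(size=T)
Y = np.zeros(T)
for t in range(lag, T):
    Y[t] = 0.5 * Y[t - 1] + 0.8 * X[t - 2] + rng.normal(scale=0.5)

def ar_residual_variance(target, predictors, lag):
    """Least-squares fit of target[t] on the last `lag` values of each predictor;
    returns the residual variance of the fit."""
    rows = [np.concatenate([p[t - lag:t] for p in predictors])
            for t in range(lag, len(target))]
    A, b = np.array(rows), target[lag:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.var(b - A @ coef)

var_restricted = ar_residual_variance(Y, [Y], lag)     # past of Y only
var_full = ar_residual_variance(Y, [Y, X], lag)        # past of Y and X
print("Granger index (log variance ratio):", np.log(var_restricted / var_full))
```

A positive log variance ratio means adding X's past improves the prediction of Y, which is the sense in which X "Granger-causes" Y.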
The seventh method was to use an analogue of whole-animal recordings, analysing the activity of the entire device at once.
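Device-wide recordings of this kind are typically summarised with dimensionality reduction; a minimal sketch in that spirit uses scikit-learn's non-negative matrix factorization on toy data (which may not match the exact factorization used in the paper):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy whole-device recording: binary activity of every "transistor" over time.
states = (rng.random((500, 2_000)) < 0.1).astype(float)

# Factorise the recording into a few components: each component has a
# spatial loading over transistors and a time course.
model = NMF(n_components=6, init="nndsvda", max_iter=300, random_state=0)
loadings = model.fit_transform(states)      # (transistors x components)
time_courses = model.components_            # (components x time)
print(loadings.shape, time_courses.shape)
```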
Altogether, the authors conclude, there seems to be little reason to assume that any of the methods they used would be more meaningful on brains than on the processor.
The key takeaway from the paper is that the current methods of neuroscience are insufficient to understand the brain.