MATH SHOWS HOW BRAIN STAYS STABLE AMID INTERNAL NOISE AND A WIDELY VARYING WORLD
Whether you are playing Go in a park, amid chirping birds, a gentle breeze, and kids playing catch nearby, or in a den with a ticking clock on a bookcase and a purring cat on the sofa, if the game situation is identical and clear, your next move likely would be, too. You’ll still play the same move despite a wide range of internal feelings, or even if a few neurons here and there are behaving erratically. How does the brain overcome unpredictable and varying disturbances to produce reliable and stable computations? A new study by MIT neuroscientists provides a mathematical model showing how such stability arises inherently from several known biological mechanisms.
The model the team developed describes an inclination toward robust stability that is more fundamental than the willful exertion of cognitive control over attention: it is built into neural circuits by virtue of the connections, or “synapses,” that neurons make with each other. The equations they derived and published in PLOS Computational Biology show that networks of neurons involved in the same computation will repeatedly converge toward the same patterns of electrical activity, or “firing rates,” even when they are perturbed by the natural noisiness of individual neurons or by arbitrary sensory stimuli from the world.
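In the language of control theory, a minimal statement of that property (a generic contraction bound, not necessarily the paper’s exact formulation) is that any two trajectories of a contracting system draw together exponentially fast:

$$\|x_1(t) - x_2(t)\| \;\le\; e^{-\lambda t}\,\|x_1(0) - x_2(0)\|, \qquad \lambda > 0,$$

so a different starting condition, or a transient kick from noise, decays away rather than compounding over time.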
“How does the brain make sense of this highly dynamic, non-linear nature of neural activity?” said co-senior author Earl Miller, Picower Professor of Neuroscience in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences (BCS) at MIT. “The brain is noisy, there are different starting conditions – how does the brain achieve a stable representation of information in the face of all these factors that can knock it around?”
To find out, Miller’s lab, which studies how neural networks represent information, joined forces with BCS colleague and mechanical engineering Professor Jean-Jacques Slotine, who leads the Nonlinear Systems Laboratory at MIT. Slotine brought to the problem the mathematical method of “contraction analysis,” a concept developed in control theory, along with tools his lab developed to apply it. In a contracting network, trajectories that start from disparate points ultimately converge onto a single trajectory, like tributaries in a watershed, and they do so even when the inputs vary with time. Contracting networks are robust to noise and disturbance, and many of them can be combined without a loss of overall stability – much like the brain typically integrates information from many specialized regions.
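A minimal simulation sketch of that watershed picture, assuming standard firing-rate dynamics of the form dx/dt = −x + Wφ(x) + u(t). The weight matrix and input here are hypothetical, scaled only so that a simple sufficient condition for contraction (spectral norm ‖W‖ < 1) holds; this illustrates contraction in general, not the paper’s specific model:

```python
import numpy as np

# Firing-rate network: dx/dt = -x + W @ phi(x) + u(t).
# If ||W|| < 1 (spectral norm) and phi has slope at most 1 (true for tanh),
# the system is contracting: every trajectory converges to the same one.

rng = np.random.default_rng(0)
n = 20

W = rng.standard_normal((n, n))
W *= 0.5 / np.linalg.norm(W, 2)          # scale so ||W|| = 0.5 < 1

def simulate(x0, T=30.0, dt=1e-3):
    """Euler-integrate the dynamics under a shared time-varying input."""
    x = x0.copy()
    for step in range(int(T / dt)):
        u = np.sin(0.5 * step * dt) * np.ones(n)   # arbitrary common drive
        x = x + dt * (-x + W @ np.tanh(x) + u)
    return x

# Two runs from very different starting points ("tributaries")...
x_a = simulate(5.0 * rng.standard_normal(n))
x_b = simulate(5.0 * rng.standard_normal(n))

# ...end in essentially the same state.
print(np.linalg.norm(x_a - x_b))          # ~0: trajectories have converged
```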
“In a system like the brain where you have [hundreds of billions] of connections the questions of what will preserve stability and what kinds of constraints that imposes on the system’s architecture become very important,” Slotine said.
Math reflects natural mechanisms
Leo Kozachkov, a graduate student in both Miller’s and Slotine’s labs, led the study by applying contraction analysis to the problem of the stability of computations in the brain. He found that the variables and terms that enforce stability in the resulting equations directly mirror known properties and processes of synapses: inhibitory connections can strengthen, excitatory connections can weaken, the two kinds of connections are typically tightly balanced against each other, and neurons make far fewer connections than they could (each neuron, on average, could make roughly 10 million more connections than it does).
“These are all things that neuroscientists have found, but they haven’t linked them to this stability property,” Kozachkov said. “In a sense, we’re synthesizing some disparate findings in the field to explain this common phenomenon.”
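As a rough illustration of how such synaptic constraints connect to a stability guarantee, here is a hedged sketch with hypothetical weight matrices. It checks only the simplest sufficient condition from the sketch above (‖W‖ < 1); the paper’s analysis is more general:

```python
import numpy as np

# Sufficient-condition check: for dx/dt = -x + W @ phi(x), with the slope
# of phi between 0 and 1, spectral norm ||W|| < 1 guarantees contraction.
# "Margin" below is ||W|| - 1: negative means the guarantee holds.

rng = np.random.default_rng(1)
n = 200

def contraction_margin(W):
    """||W||_2 - 1; a negative value implies the network is contracting."""
    return np.linalg.norm(W, 2) - 1.0

# Dense, strong random connectivity: the guarantee fails.
dense = 0.2 * rng.standard_normal((n, n))
print(contraction_margin(dense))    # > 0: no stability guarantee

# Sparse, weak connectivity -- each neuron contacts only ~5% of the
# others, with smaller weights -- restores the guarantee.
mask = rng.random((n, n)) < 0.05
sparse = 0.05 * rng.standard_normal((n, n)) * mask
print(contraction_margin(sparse))   # < 0: contraction guaranteed
```

Weaker excitatory weights and sparser connectivity both shrink ‖W‖ and push this margin below zero; the stabilizing role of strong, balanced inhibition shows up in the paper’s more general conditions rather than in this crude norm bound.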
The new study, which also involved Miller lab postdoc Mikael Lundqvist, was hardly the first to grapple with stability in the brain, but the authors argue it has produced a more advanced model by accounting for the dynamics of synapses and by allowing for wide variations in starting conditions. It also offers mathematical proofs of stability, Kozachkov added.
Though focused on the factors that ensure stability, the authors noted, their model does not go so far as to doom the brain to inflexibility or determinism. The brain’s ability to change – to learn and remember – is just as fundamental to its function as its ability to consistently reason and formulate stable behaviors.
“We’re not asking how the brain changes,” Miller said. “We’re asking how the brain keeps from changing too much.”
Still, the team plans to keep iterating on the model, for instance by incorporating a richer account of how neurons produce individual spikes of electrical activity, not just the rates of that activity.
They are also working to compare the model’s predictions with data from experiments in which animals repeatedly performed tasks requiring the same neural computations, despite inevitable internal neural noise and at least small differences in sensory input.
Finally, the team is considering how the models may inform understanding of different disease states of the brain. Aberrations in the delicate balance of excitatory and inhibitory neural activity are considered crucial in epilepsy, Kozachkov notes. Parkinson’s disease, too, involves a neurally rooted loss of motor stability. Miller adds that some patients with autism spectrum disorders struggle to stably repeat actions (e.g., brushing teeth) when external conditions vary (e.g., brushing in a different room).
Source: https://neurosciencenews.com/math-internal-noise-16796/