New Research: Real Neurons Show Similarities to Neurons in AI Systems
UMD/NIMH Researchers Apply Large-Scale Neural Stimulation, Uncovering an Important Feature Shared by Neurons in Mouse Brains and AI Systems
A University of Maryland researcher and his collaborators have published a new study on the mouse brain, helping to shed light not only on brain function but also on the input/output processes of Artificial Intelligence (AI) and Machine Learning (ML) systems, including ChatGPT.
Paul LaFosse is lead author of the paper, published in Proceedings of the National Academy of Sciences. LaFosse is a graduate student in UMD’s Program in Neuroscience and Cognitive Science (NACS) and part of a graduate partnership program between UMD and the National Institute of Mental Health (NIMH).
With his colleagues, LaFosse developed a method of using light to control brain cell activity in an awake mouse. This method uses a holography device as a beam splitter to direct light at many neurons at the same time to make them fire. This population stimulation simulates the kind of inputs normal brain networks receive, and allows study of the central unit of brain operation: a pattern of activity across a population of tens to thousands of neurons.
LaFosse used this approach to measure the input-output function of neurons: the transformation neurons apply to their inputs to set their spike rate, or activity level. Past studies measured this quantity in anesthetized mice or in cells in vitro.
“We study the shape of neurons’ input-output functions, and we do so in the state that the real brain operates in, when subjects are actively using their sensory system. These input-output functions are key brain parameters. The input-output function, or activation function, influences neural computation and how the brain works,” LaFosse said.
The researchers said the new optical stimulation methods they used allowed them to measure the function for the first time for many neurons when the brain is in a normal awake state.
“These functions can change as brain state changes,” LaFosse said, underscoring why it is important to measure them during normal vision.
The researchers also found that neurons rest right at the transition point of the function’s shape, between a straight (linear) region and a curved (nonlinear) region.
“This means that when inputs arrive at the brain that make a cell less active, this decreases the effect of other inputs even more, resulting in a feedback loop that can dramatically suppress or filter those inputs,” LaFosse said. “This could be a new aspect of neural operation. And on top of these biological findings, we see that the function shape in the real mouse neurons we examined is similar to activation functions in the ML context.”
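As a rough numerical sketch of that gain effect (not the paper’s fitted model), consider a generic smooth activation such as softplus: pushing a cell’s operating point down into the curved part of the function shrinks the local slope, so the same additional input produces a smaller change in output.

```python
import numpy as np

def softplus(x):
    """Generic smooth activation: roughly linear for large x, curved near zero."""
    return np.log1p(np.exp(x))

def local_gain(x, eps=1e-4):
    """Numerical derivative: how strongly a small extra input changes the output."""
    return (softplus(x + eps) - softplus(x - eps)) / (2 * eps)

# Hypothetical operating points: near the bend vs. pushed into the curved region
for operating_point in [1.0, 0.0, -1.0, -2.0]:
    print(f"input {operating_point:+.1f} -> local gain {local_gain(operating_point):.2f}")

# The gain falls from about 0.73 at +1.0 to about 0.12 at -2.0, so suppressive
# inputs also reduce the cell's sensitivity to everything else it receives.
```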
Mark Histed, the study’s senior author, an adjunct professor in NACS and an investigator at NIMH who is also LaFosse’s thesis advisor, highlighted the significance of the study’s findings.
“Real neurons have activation functions, too. They’re hard to measure because they depend on ongoing brain activity and whether an organism’s brain is awake, attentive, or asleep. That is what makes the experimental approach so powerful,” Histed said. “The dual-laser stimulation and imaging approach we deployed allows these kinds of measurements to be made from large numbers of neurons.
“That’s what has made Paul’s work possible. Paul contributed to both optical and biological genetic delivery methods to enable this work,” Histed said.
The authors speculate that this could mean that both the brain and AI systems feel selection pressure to work in similar ways. According to Histed, “We found, somewhat to our surprise, that real neurons’ activation functions look a lot like the activation functions used in recent ML systems, including ChatGPT. They have a large straight or linear region and a smooth, curved transition region. AI systems seem to have settled on this shape because it seems to be good for learning or training under many conditions. Perhaps that means brains face similar computational demands during learning.”
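For readers who want to picture the resemblance, the short sketch below plots two standard ML activation functions with exactly this character, a long linear region and a smooth bend, next to a sharp-cornered ReLU for contrast. These are textbook functions (softplus and the GELU used in GPT-style transformers), shown only as illustrative stand-ins, not the measured neuronal curves from the study.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 400)

relu = np.maximum(x, 0)                 # sharp corner at zero
softplus = np.log1p(np.exp(x))          # smooth transition, linear for large inputs
# tanh approximation of GELU, the activation used in GPT-style transformer models
gelu = 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

plt.plot(x, relu, label="ReLU (sharp corner)")
plt.plot(x, softplus, label="softplus (smooth bend)")
plt.plot(x, gelu, label="GELU (smooth bend)")
plt.xlabel("summed input")
plt.ylabel("output activation")
plt.legend()
plt.show()
```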
The connections between brain function and AI/ML function have long intrigued neuroscientists.
“This research weighs in on questions about learning and the capabilities of both brain networks and AI networks. In fact, the input-output function we find in the brain has been implemented in mainstream machine learning packages,” LaFosse said.
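The article does not say which packages, but smooth activations of this general shape are already standard options in common frameworks. As a generic illustration (not the authors’ implementation), one could drop such a nonlinearity into a small PyTorch network like this:

```python
import torch
import torch.nn as nn

# A minimal two-layer network using a smooth, brain-like activation.
# nn.Softplus and nn.GELU are standard PyTorch modules; the layer sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(100, 50),
    nn.Softplus(),      # smooth cousin of ReLU: linear for large inputs, curved near zero
    nn.Linear(50, 10),
)

x = torch.randn(8, 100)   # batch of 8 made-up input vectors
print(model(x).shape)     # torch.Size([8, 10])
```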
The study’s authors look forward to building on this research.
“One new direction our research could take is trying to find what conditions actually cause these functions to change shape. There could be variability between individual cells, even while the general shape is consistent. There are slight differences in the slope or the gain of these functions between cells. And so they could be changing in some way to support learning over time. Ongoing work in machine learning is now seeking to understand how learning can change the activation function,” LaFosse said.
Illustration via iStock
Published on Fri, Nov 15, 2024 - 3:16PM