The synergy between neuroscience and technology is becoming increasingly evident in the rapidly evolving landscape of artificial intelligence (AI). As researchers delve into the intricate workings of the brain, they find inspiration for innovative AI architectures and validation for existing models. This blog explores how insights from neuroscience, particularly from the retina and the primary visual cortex (V1), can guide the next generation of AI, addressing current limitations and enhancing performance.
Why neuroscience?
While we can wax poetic about what a superintelligence may do, we have learned, and will continue to learn, from an abundant computational system that we can observe, test and understand: the human brain! Neuroscience offers a treasure trove of insights into how biological systems process information. By studying the brain’s mechanisms, researchers can develop AI architectures that mimic human cognition and improve adaptability and efficiency. The visual system provides a compelling model for revealing fundamental principles of computation in the brain. For example, research has shown that the brain can be rewired: when visual input is routed to the auditory cortex, that cortex learns to process visual information, and its connectivity comes to resemble that of the visual cortex. The field of NeuroAI is dedicated to leveraging these insights to inspire AI designs that address existing shortcomings, such as rigidity, excessive resource demands, and a lack of dynamic interaction with the environment.
Neuroscience-driven innovations in AI
Influencing AI architectures
The profound influence of neuroscience on AI architecture has a long history. For instance, the perceptron, one of the foundational elements of neural networks, is modeled after the computation of a single biological neuron. Similarly, convolutional neural networks (CNNs) draw inspiration from the brain’s visual system, incorporating hierarchical structures and pooling layers that reflect how biological systems process visual information. Going beyond vision, reinforcement learning was also largely inspired by neural activity observed during reward-driven learning in animals.
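To make that lineage concrete, here is a minimal sketch of a perceptron as a single artificial neuron: inputs are combined in a weighted sum and passed through a hard threshold, loosely mirroring a biological neuron integrating synaptic inputs and deciding whether to fire. The weights and the toy AND task are purely illustrative.

```python
import numpy as np

def perceptron(x, w, b):
    """A single artificial neuron: a weighted sum of inputs plus a bias,
    passed through a hard threshold (the 'firing' decision)."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Toy example: a neuron that fires only when both inputs are active (logical AND)
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), w, b))
```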
Validating AI mechanisms
Even modern architectures like transformers, which neither directly mimic specific brain processes nor were designed to, have been found (by this paper, for example) to exhibit mechanisms analogous to those in the brain. When studies identify the counterpart in the brain for a specific AI algorithm, it becomes more plausible to claim that the algorithm could be a stepping stone toward artificial general intelligence (AGI). This cross-validation of ideas between neuroscience and AI reinforces the significance of studying neural mechanisms in the quest for advanced AI systems.
The retina: A window into neural function
The retina stands out as one of the most accessible and well-studied neural networks. It contains five major classes of neurons, comprising more than 60 distinct cell types, and features complex connectivity that includes feedforward, feedback and lateral connections, as illustrated in Figure 1 of the Annual Review of Neuroscience article, “Information Processing in the Primate Retina: Circuitry and Coding.” Its primary role is to transduce light into action potentials, which the brain interprets as visual signals.
Building an accurate retina model is crucial for deepening our understanding of visual processing. Researchers have employed linear-nonlinear-Poisson (LNP) models and CNNs to estimate firing rates and generate spikes based on input stimuli.
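For readers unfamiliar with the LNP family, the sketch below walks through its three stages on a toy stimulus: linearly filter the input, pass the result through a pointwise nonlinearity to obtain a firing rate, and draw spikes from a Poisson process. The filter shape, softplus nonlinearity and rate scale are illustrative placeholders, not parameters fit to real recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stimulus: 200 time bins of a 1-D light-intensity signal
stimulus = rng.normal(size=200)

# Linear stage: convolve with a temporal receptive field (arbitrary biphasic filter)
t = np.arange(20)
temporal_filter = np.exp(-t / 5.0) - 0.5 * np.exp(-t / 10.0)
drive = np.convolve(stimulus, temporal_filter, mode="same")

# Nonlinear stage: map the filter output to a non-negative firing rate (Hz)
rate = 10.0 * np.log1p(np.exp(drive))  # softplus

# Poisson stage: draw spike counts per 10 ms time bin
dt = 0.01
spikes = rng.poisson(rate * dt)
print("total spikes:", spikes.sum())
```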
In my PhD research on retina modeling, I constructed a computational and biophysical retina model using leaky integrators and Hodgkin-Huxley dynamics. The model can generate synthetic training data tailored to various conditions for deep learning models, which is particularly valuable given the scarcity and high cost of real neuroscientific data. More broadly, this approach feeds into a key research objective of my PhD lab: optimizing visual prosthetics and brain-computer interfaces end-to-end with deep learning.
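A full Hodgkin-Huxley implementation is too long for a blog post, but the leaky-integrator building block is compact enough to sketch. Here is a leaky integrate-and-fire neuron stepped with forward Euler; all constants are illustrative rather than values from my actual model.

```python
# Leaky integrate-and-fire neuron, forward Euler integration.
# Illustrative parameters, not fitted to retinal data.
DT, TAU = 0.1, 10.0                              # time step, membrane time constant (ms)
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0  # potentials (mV)

def simulate_lif(input_current, steps=1000):
    v = V_REST
    spike_times = []
    for step in range(steps):
        # Membrane potential leaks toward rest while integrating the input
        dv = (-(v - V_REST) + input_current) / TAU
        v += dv * DT
        if v >= V_THRESH:            # threshold crossing: emit a spike
            spike_times.append(step * DT)
            v = V_RESET              # reset after the spike
    return spike_times

print(len(simulate_lif(input_current=20.0)), "spikes in 100 ms")
```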
Beyond the retina: Exploring V1
The primary visual cortex (V1) represents the next frontier in understanding visual processing. As the first brain region to receive input from the retina, V1 neurons are specialized for various functions. A representative example is orientation selectivity: Some neurons in V1 detect edges at specific angles, akin to the edge detectors found in the early layers of CNNs.
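This correspondence is easy to demonstrate by hand with a Gabor filter, the classic mathematical description of a V1 simple cell’s receptive field (and a shape that learned first-layer CNN kernels often come to resemble). In the illustrative sketch below, a filter tuned to vertical orientations responds far more strongly to a vertical edge than a horizontally tuned one does; all sizes and parameters are arbitrary.

```python
import numpy as np

def gabor_kernel(theta, size=15, sigma=3.0, wavelength=6.0):
    """An oriented Gabor filter: a sinusoidal grating under a Gaussian
    envelope, selective for edges at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_rot / wavelength)

# A vertical light/dark boundary drives the vertically tuned filter (theta=0)
# far more than the horizontally tuned one (theta=pi/2)
image = np.zeros((15, 15))
image[:, 8:] = 1.0
for theta in (0.0, np.pi / 2):
    response = abs((gabor_kernel(theta) * image).sum())
    print(f"theta={theta:.2f} rad -> response {response:.2f}")
```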
Recent research has leveraged CNNs to achieve state-of-the-art results in modeling the responses of V1 neurons in both macaques and mice. However, most studies have been conducted under stationary conditions, with the animal head-fixed and passively viewing a screen, failing to capture the dynamic nature of real-world vision.
In my PhD research on V1 modeling, I utilized data from freely moving mice, incorporating both visual and behavioral information through CNNs and recurrent neural networks (RNNs). This multimodal approach yielded a model that not only achieved state-of-the-art performance but was also explainable. The results highlighted the significant role that behavioral context plays in visual processing, suggesting that the ability to tightly couple visual and behavioral inputs may enhance the efficiency of resource-limited systems, like mice.
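To give a sense of the general shape of such a model (this is a hypothetical PyTorch sketch with made-up layer sizes, not the published architecture), a per-frame CNN can embed the visual stream, behavioral covariates such as running speed and pupil size can be concatenated to those embeddings, and a recurrent layer can integrate the fused sequence before a readout predicts each neuron’s firing rate:

```python
import torch
import torch.nn as nn

class VisualBehaviorEncoder(nn.Module):
    """Sketch of a multimodal encoding model: a small CNN embeds each video
    frame, behavioral covariates are concatenated per time step, and a GRU
    integrates the sequence before a linear readout predicts firing rates."""
    def __init__(self, n_neurons, n_behavior=2, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> 32 * 4 * 4 = 512 features
        )
        self.rnn = nn.GRU(512 + n_behavior, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_neurons)

    def forward(self, frames, behavior):
        # frames: (batch, time, 1, H, W); behavior: (batch, time, n_behavior)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        fused, _ = self.rnn(torch.cat([feats, behavior], dim=-1))
        return nn.functional.softplus(self.readout(fused))  # non-negative rates

model = VisualBehaviorEncoder(n_neurons=50)
rates = model(torch.randn(2, 10, 1, 64, 64), torch.randn(2, 10, 2))
print(rates.shape)  # torch.Size([2, 10, 50])
```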
Modeling the visual cortex in action reveals its remarkable ability to process information from multiple sensory modalities, making it a fascinating framework for understanding information compression and reasoning. The brain’s visual system continuously compresses vast amounts of sensory information into manageable, interpretable units. By drawing parallels between visual processing and other cognitive tasks, we can begin to unravel how information is represented and manipulated to support reasoning in real-world tasks. Just as the visual cortex extracts key features from a scene, we can investigate how the layers of artificial neural networks, including those used in text-to-SQL tasks, distill the essential elements of natural language queries to inform decision-making.
In the realm of text-to-SQL, one of the most important research objectives of Snowflake AI Research, we may not grasp how an LLM perceives words the way we understand how the visual cortex processes visual stimuli, but we can aim to emulate the visual system’s capacity for reasoning and contextual understanding. The hope is that by analyzing the patterns of activations within our models, we can uncover the choices made during the transformation of natural language into structured queries. Just as the visual cortex integrates environmental and behavioral information to enhance perception, combining natural language with contextual data and semantic information from databases may lead to more accurate interpretations of queries. Ultimately, aligning visual processing principles with language tasks may pave the way for AI systems capable of reasoning in complex environments, bridging visual and verbal understanding in a way that mirrors natural cognitive processes.
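As a toy illustration of that last point, the sketch below grounds a natural-language question in database context (tables, columns and semantic hints) before asking an LLM for SQL. The helper function and prompt format are hypothetical, not Snowflake’s actual text-to-SQL system.

```python
def build_text_to_sql_prompt(question, schema, hints=None):
    """Hypothetical helper: assemble schema and semantic hints into an LLM
    prompt, so the model interprets the question in database context
    (loosely analogous to behavioral context grounding visual processing)."""
    lines = ["You are a SQL assistant. Given the schema, write one SQL query.",
             "", "Schema:"]
    for table, columns in schema.items():
        lines.append(f"  {table}({', '.join(columns)})")
    if hints:
        lines.append("Semantic hints:")
        lines.extend(f"  - {hint}" for hint in hints)
    lines.extend(["", f"Question: {question}", "SQL:"])
    return "\n".join(lines)

print(build_text_to_sql_prompt(
    "Total revenue per region last quarter?",
    schema={"orders": ["id", "region", "amount", "order_date"]},
    hints=["revenue = SUM(orders.amount)"],
))
```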
Conclusion: The path forward
As we continue to explore the interplay between neuroscience and AI, the insights gained from studying the retina and V1 pave the way for future innovations. The integration of biological principles into AI design holds promise for more flexible and efficient systems. As we look ahead, the collaboration between neuroscientists and AI researchers will be vital in unlocking new possibilities and advancing our understanding of both biological and artificial systems. The journey is just beginning, and the potential is boundless.