
Dr. Terry Sejnowski: How to Improve at Learning Using Neuroscience & AI

By Scicomm Media

In this episode of the Huberman Lab podcast, Dr. Terry Sejnowski sheds light on how the brain operates and learns using algorithms similar to advanced AI systems. He delves into the mechanisms behind the brain's "reward prediction algorithm" and the interplay between the cortical system for knowledge acquisition and the subcortical system for skill development.

Sejnowski also explores strategies to support lifelong learning, highlighting the role of physical activity, sleep, and motivation. Additionally, the episode examines the potential of AI as an "idea pump" for scientific research, with Sejnowski and host Andrew Huberman discussing the merits of human-AI collaboration. Their insights provide a compelling perspective on leveraging neuroscience and AI to enhance learning and drive scientific breakthroughs.


This is a preview of the Shortform summary of the Nov 18, 2024 episode of the Huberman Lab


1-Page Summary

Computational neuroscience and the algorithmic level of brain function

Computational neuroscientists like Dr. Terry Sejnowski use mathematical models and computing methods to understand how neural circuits correspond to brain activities at the algorithmic level - the processes behind cognition and behavior. Sejnowski highlights how the brain operates via algorithms akin to recipes guiding neural function.

The basal ganglia's reward prediction algorithm

Sejnowski explains how the brain uses reinforcement learning algorithms like temporal difference learning to learn sequences that maximize future rewards. The basal ganglia employ this trial-and-error method to fine-tune behaviors, mirroring advanced AI like AlphaGo. His research aims to uncover the algorithms dictating human cognition.

The neuroscience of learning, motivation, and skill acquisition

Two interacting learning systems

According to Sejnowski, the brain has two main learning systems: a cortical cognitive system for knowledge, and a subcortical procedural system involving the basal ganglia for skill acquisition through practice. Active procedural learning combined with conceptual cognitive learning builds expertise.

Supporting learning across the lifespan

As we age, procedural learning becomes more effortful but can be boosted through methods like Sejnowski's "learning how to learn" course. Staying mentally and physically active preserves cognition. Proper sleep and motivation tied to dopamine also aid learning and memory consolidation.

The potential of AI to aid scientific research

AI as an "idea pump"

Sejnowski and Andrew Huberman discuss how large language models like ChatGPT can generate novel hypotheses and experiments by generalizing from existing knowledge, summarizing research papers, critiquing methods, and extrapolating new scenarios - serving as powerful "idea pumps" for scientists.

Human-AI collaboration

While AI excels at finding patterns humans miss, Sejnowski and Huberman emphasize integrating human judgment to evaluate AI outputs. Studies show combining AI and expert analysis can elevate accuracy, leveraging AI's broad data synthesis and human depth of understanding to enable breakthroughs.


Additional Materials

Clarifications

  • The algorithmic level of brain function in computational neuroscience involves understanding how neural circuits implement specific algorithms that govern cognitive processes and behaviors, similar to how a recipe guides a cooking process. Researchers like Dr. Terry Sejnowski use mathematical models and computational methods to explore these algorithmic principles underlying brain function. This approach helps uncover the computational strategies the brain employs to process information, make decisions, and learn from experiences. By studying the brain at the algorithmic level, scientists aim to decipher the fundamental principles that drive complex cognitive functions and behaviors.
  • Neural circuits in the brain are pathways formed by interconnected neurons that communicate with each other to process information and generate specific functions or behaviors. These circuits are responsible for coordinating various activities in the brain, such as sensory perception, motor control, and cognitive processes. Understanding how these circuits function and interact helps researchers like Dr. Terry Sejnowski decipher the underlying mechanisms of brain activities at a more detailed level. By studying these neural circuits, scientists aim to uncover how specific patterns of neural activity give rise to complex brain functions like learning, memory, and decision-making.
  • Reinforcement learning algorithms, such as temporal difference learning, are computational methods inspired by behavioral psychology. They involve an agent learning to make decisions by receiving feedback in the form of rewards or punishments. Temporal difference learning specifically focuses on updating predictions based on the discrepancy between expected and actual outcomes, aiding in the optimization of decision-making processes. These algorithms are widely used in artificial intelligence and neuroscience to model how organisms learn from interactions with their environment.
  • The basal ganglia are a group of subcortical nuclei in the brain that play a crucial role in regulating voluntary movements, procedural learning, habit formation, and various cognitive functions. They consist of several components like the striatum, globus pallidus, substantia nigra, and subthalamic nucleus, each with specific functions and connections within the brain. The basal ganglia receive input from different brain regions, process this information, and send output to motor-related areas to coordinate movement and behavior. Dysfunction in the basal ganglia can lead to movement disorders like Parkinson's disease and Huntington's disease.
  • Procedural learning involves acquiring skills through practice and repetition, often involving tasks that become automatic with continued training. It is a type of learning that is distinct from more explicit, knowledge-based learning processes. This type of learning is associated with the development of expertise in various domains, such as playing a musical instrument or mastering a sport.
  • Expertise building through active procedural learning and cognitive learning involves combining practical skill acquisition (procedural learning) with theoretical knowledge (cognitive learning) to develop proficiency in a particular domain. This dual learning approach allows individuals to not only understand concepts but also apply them effectively through practice, leading to the mastery of skills and the development of expertise over time. The cortical cognitive system supports the acquisition of knowledge, while the subcortical procedural system, involving structures like the basal ganglia, facilitates the learning of motor skills and habits through repetition and experience. By integrating these two learning systems, individuals can enhance their abilities and achieve a deeper understanding of complex tasks or subjects.
  • Large language models like ChatGPT are artificial intelligence systems designed to understand and generate human language. ChatGPT uses a vast amount of text data to learn patterns and generate coherent responses. These models have been used for various tasks like generating text, answering questions, and assisting in research by providing insights and generating new ideas. They work by processing input text and predicting the most probable next words or phrases based on the patterns they have learned from the data they were trained on.
  • When discussing the integration of human judgment with AI outputs, it involves combining the analytical capabilities of artificial intelligence with the nuanced understanding and critical thinking skills of human experts. This collaboration aims to enhance decision-making processes by leveraging AI's data processing power and human expertise to achieve more accurate and insightful results. By incorporating human judgment, AI outputs can be evaluated in context, ensuring that the conclusions drawn are not only data-driven but also aligned with human reasoning and values. This integration is crucial for maximizing the strengths of both AI and human intelligence, leading to more informed and reliable outcomes in various fields, including scientific research and problem-solving.

Counterarguments

  • The brain's operation may not be fully captured by the analogy of algorithms, as biological processes can be more adaptive, context-dependent, and non-linear than current algorithmic models suggest.
  • Reinforcement learning algorithms may not account for all aspects of the basal ganglia's functions, as the brain's learning mechanisms could be more complex and less understood than these models imply.
  • The dichotomy between cognitive and procedural systems may be an oversimplification, as recent research suggests that these systems are highly interconnected and may not operate as independently as once thought.
  • The effectiveness of "learning how to learn" courses may vary widely among individuals, and the scientific basis for their efficacy could be stronger.
  • The claim that staying mentally and physically active preserves cognition is generally supported, but the extent and mechanisms of this preservation are complex and not fully understood.
  • The role of dopamine in learning and motivation is well-established, but it is only one part of a complex neurochemical system influencing these processes.
  • AI-generated hypotheses and experiments are limited by the data and algorithms they are based on, which may not always reflect the complexity of scientific phenomena.
  • The idea that AI serves as an "idea pump" may overstate the current capabilities of AI, which can sometimes produce trivial or irrelevant ideas if not properly guided by human expertise.
  • The integration of human judgment with AI is crucial, but there is a risk of human biases affecting the interpretation of AI outputs, which could lead to incorrect conclusions.
  • The synergy between AI and expert analysis is promising, but it also raises concerns about the over-reliance on AI, potential job displacement, and the need for critical oversight to ensure ethical and responsible use of AI in research.


Computational neuroscience and the algorithmic level of brain function

Computational neuroscientists, like Dr. Terry Sejnowski, employ mathematical models and advanced computing methods to deepen our understanding of brain function, focusing on how neural circuits correspond to brain activities.

Computational neuroscientists use mathematical models and algorithms to understand how the brain works as a system.

Sejnowski highlights how the brain operates at multiple levels, stretching from the molecular to the entire nervous system. To unravel brain functionality, he emphasizes the importance of understanding the intermediate algorithmic level, which lies between the physical implementation within neural circuits and the resulting behaviors of the whole system.

Sejnowski compares algorithms to recipes that guide neural circuit function. With the advent of new methods like optical neuron recording, researchers can now observe interactions across various brain areas, gaining insight into the cognitive behaviors these complex algorithms direct.

Sejnowski also notes that past neuroscience often isolated functions to different parts of the cortex, akin to dividing it into countries with specific roles. This idea intersects with the current approach of segmenting brain functions at different levels to better understand the systemic operation.

The basal ganglia use a simple algorithm based on predicting future rewards to learn sequences of actions that achieve goals.

Sejnowski introduces the notion that our brains use specific algorithms, such as temporal difference learning, a form of reinforcement learning, for predicting and maximizing future rewards. This approach is fundamental to both fly brains and human brains and is employed by the basal ganglia to learn action sequences that lead to desired outcomes.
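
The temporal difference scheme Sejnowski describes can be sketched in a few lines of Python. This is a toy illustration of the algorithm, not a model of basal ganglia circuitry: a stored value estimate for each state is nudged by the reward-prediction error, the gap between what actually happened (reward plus discounted value of the next state) and what was expected.

```python
# Minimal TD(0) value learning on a 3-state chain: s0 -> s1 -> s2, with a
# reward of 1 for reaching the final state. A toy sketch, not a brain model.

def td0(episodes=500, alpha=0.1, gamma=0.9):
    V = [0.0, 0.0, 0.0]                       # value estimate per state
    for _ in range(episodes):
        for s in (0, 1):                      # walk the chain each episode
            s_next = s + 1
            r = 1.0 if s_next == 2 else 0.0   # reward only at the goal
            # Reward-prediction error: how much better or worse than expected.
            delta = r + gamma * V[s_next] - V[s]
            V[s] += alpha * delta             # nudge the estimate toward the target
    return V

values = td0()
```

After training, the state one step from the reward approaches a value of 1.0, and the earlier state approaches the discounted value 0.9, showing how the prediction of future reward propagates backward through the action sequence.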

This trial-and-error reinforcement learning algorithm, exem ...


Additional Materials

Clarifications

  • The intermediate algorithmic level in brain function refers to the level of brain activity that lies between the physical implementation in neural circuits and the observable behaviors of the entire system. It involves understanding the computational processes and algorithms that govern how neural circuits interact to produce cognitive functions. This level of analysis helps bridge the gap between the microscopic details of neural activity and the macroscopic behaviors and functions of the brain. Researchers study this intermediate level to uncover the computational principles that underlie brain function and behavior.
  • Temporal difference learning is a type of reinforcement learning method that updates predictions based on new information before the final outcome is known. It involves adjusting estimates by comparing current predictions with later, more accurate predictions about the future. This approach allows for learning from ongoing experiences and is a form of bootstrapping in the context of predicting future outcomes. Temporal difference methods are used in computational neuroscience to model how the brain learns and ad ...

Counterarguments

  • The assertion that algorithms can fully explain neural circuit function may be overly simplistic, as the brain's complexity and the influence of non-algorithmic factors like neuroplasticity, genetics, and the environment are also crucial.
  • While computational models are useful, they may not capture the full spectrum of brain activity, especially given the current limitations in understanding the brain's vast complexity and the potential for emergent properties that are not easily modeled.
  • The comparison of brain algorithms to recipes might be misleading, as it could imply a level of precision and replicability in neural processes that is not reflective of the more dynamic and adaptive nature of brain function.
  • The focus on the algorithmic level might overshadow the importance of the lower-level molecular and cellular processes that underpin higher-level functions.
  • The idea that past neuroscience isolated functions to different parts of the cortex is an oversimplification of historical neuroscience research, which has long recognized the interconnectedness of brain regions.
  • The emphasis on the basal ganglia's use of reinforcement learning algorithms may neglect the role of other brain structures and the potential for multiple learning mechanisms operating in parallel or in different contexts.
  • The comparison of the brain's learning algorithms to those used by advan ...


The neuroscience of learning, motivation, and skill acquisition

Exploring the neuroscience of learning, motivation, and skill acquisition reveals the complexity and adaptability of the human brain. Dr. Terry Sejnowski, alongside Andrew Huberman, delves into the ways our brains process and adapt to new information, and the implications for learning across our lifespan.

The brain has two main learning systems - a cortical cognitive system and a subcortical procedural system, which work together to build knowledge and skills.

Procedural learning through practice and repetition is essential for developing expertise, while cognitive learning provides the underlying concepts and theories.

Terry Sejnowski describes the basal ganglia as a crucial brain region for procedural learning: learning sequences of actions to achieve goals, such as serving in tennis. This form of learning takes over from the cortex to produce actions that improve progressively with practice, a pattern evident across fields from finance to neuroscience. The basal ganglia interact both with the motor regions of the cortex and with the prefrontal cortex involved in thinking. The brain uses an algorithm that predicts the next reward and updates synapses to increase the chances of attaining that reward next time, building what is known as a value function. This value function is refined through experience, such as dining at varied restaurants, as the cortex develops knowledge of what is beneficial and what is detrimental.
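
The value-function idea can be sketched as a toy update rule. The option names and reward numbers below are invented for illustration: each experience nudges the stored value of an option toward the reward it delivered, so options that reliably pay off accumulate higher values, echoing the restaurant analogy.

```python
# Toy value function built from experience. All names and reward
# numbers are invented for illustration.

def update(values, option, reward, alpha=0.2):
    """Nudge the stored value toward the observed reward (prediction-error update)."""
    values.setdefault(option, 0.0)
    values[option] += alpha * (reward - values[option])

values = {}
experiences = [("taqueria", 0.9), ("diner", 0.2), ("taqueria", 0.8),
               ("diner", 0.1), ("taqueria", 1.0)]
for option, reward in experiences:
    update(values, option, reward)

best = max(values, key=values.get)   # the option experience has taught us to prefer
```

Each update is driven by the same prediction-error signal as before: the difference between the reward received and the reward expected.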

Andrew Huberman and Terry Sejnowski highlight the importance of active learning, which allows the brain to solve problems by trial and error rather than simply memorizing facts. Sejnowski emphasizes the critical role of practice, especially for rapid and expert execution, strengthening his point by referring to his physics education where homework and problem-solving embody procedural learning.

As we age, procedural learning becomes more effortful, but it can be supported by techniques like Sejnowski's "learning how to learn" course.

Maintaining physical, cognitive, and social activity helps preserve brain health and cognitive function as we get older. As people age, procedural learning becomes more challenging due to a thinning cortex, even though older memories remain robust. The "learning how to learn" course, targeted at adults in the workforce, is designed to boost learning efficiency, highlighting the necessity of enhancing learning methods beyond traditional education.

Dr. Sejnowski and colleagues have developed an online portal that teaches individuals to learn better based on their learning styles. Techniques like regular physical exercise, which Sejnowski himself practices, are also crucial for enhancing cognition and supporting learning.

Sleep's role in memory consolidation is si ...


Additional Materials

Clarifications

  • The basal ganglia play a crucial role in procedural learning by helping us learn sequences of actions to achieve goals, like mastering a tennis serve. This brain region interacts with the cortex and prefrontal cortex to refine actions through practice and repetition. It uses reward prediction algorithms to adjust synapses, enhancing the likelihood of achieving rewards in the future. Essentially, the basal ganglia support the development of automatic and skilled behaviors through continuous practice and reinforcement.
  • In the context of neuroscience, the algorithm for predicting rewards and updating synapses involves the brain's ability to anticipate future rewards based on past experiences. When a reward is expected, synapses (connections between neurons) in the brain are strengthened to increase the likelihood of achieving that reward in the future. This process is crucial for learning and decision-making, as it helps the brain optimize behavior to maximize positive outcomes. The brain essentially learns from feedback to adjust its neural connections, reinforcing successful actions and behaviors.
  • Sleep spindles are bursts of brain activity during non-REM sleep that play a crucial role in transferring information from short-term to long-term memory. These brief oscillations help consolidate memories and enhance learning by strengthening neural connections. Research suggests that an increase in sleep spindles is associated with improved memory retention and cognitive performance. Essentially, sleep spindles act as a mechanism for the brain to solidify and store newly acquired information during sleep.
  • Dopamine is a neurotransmitter that plays a crucial role in motivation and reward processing in the brain. It is involved in reinforcing behaviors that lead to positive outcomes, such as learning new skills or acquiring knowledge. Dopamine release in response to rewarding experiences can enhance motivation and drive individuals to engage in activities associated with pleasure or satisfaction. In the context of learning, dopamine helps regulate the brain's reward system, influencing the desire to seek out and engage with information or tasks perceived as rewarding.
  • In aging, synaptic pruning is a natural process where weaker or unnecessary synaptic connections in the brain are eliminated to enhance efficiency. Mitochondria degeneration in ag ...

Counterarguments

  • While the text emphasizes the division of learning into cortical cognitive and subcortical procedural systems, some neuroscientists argue that this division is overly simplistic and that the two systems are more interconnected and overlapping than suggested.
  • The importance of procedural learning for expertise is highlighted, but it's also important to consider that cognitive learning can play a significant role in expertise, especially in fields that require critical thinking and problem-solving.
  • The role of the basal ganglia in procedural learning is well-established, but it's also worth noting that other brain regions, such as the cerebellum, are also involved in the coordination and refinement of motor skills and procedural learning.
  • The concept of the brain using an algorithm to predict rewards and update synapses is a model that simplifies complex neurobiological processes, and there may be alternative models that provide a different perspective on how learning and memory work.
  • The emphasis on active learning and problem-solving is valuable, but for some individuals and learning styles, structured and guided learning can be more effective.
  • The assertion that procedural learning becomes more effortful with age could be nuanced by considering that some forms of learning and expertise, such as those involving wisdom and experience, may actually improve with age.
  • The "learning how to learn" course is presented as a beneficial tool for older adults, but it's important to recognize that not all learning strategies are universally effective and that individual differences can greatly impact the effectiveness of such courses.
  • The claim that physical exercise is crucial for enhancing cognition and learning is generally supported, but the degree of its impact can vary among individuals, and it's not the only factor contributing to cognitive health.
  • The role of sleep spindles in memory consolidation is an area of active research, and while there is evidence supporting this role, the mechanisms are not fully understood, and there may be other sleep-related processes that are equally or more important for memory consolidation.
  • Dopamine's role in motivation for learning is well-documented, but ...


The potential of AI and large language models to aid scientific research

Andrew Huberman, Terry Sejnowski, and others discuss the impressive potential of artificial intelligence (AI) and large language models (LLMs) to assist in the advancement of scientific research, from computational neuroscience to uncovering medicinal breakthroughs.

Large language models like ChatGPT can be used as "idea pumps" to generate novel hypotheses and experimental designs based on existing knowledge.

AlphaGo’s victory and the applications of LLMs in research demonstrate AI’s prowess in complex problem-solving. Sejnowski describes AI's capacity to generalize knowledge to solve new problems, which is crucial for scientific advancement. Huberman talks about AI foraging for knowledge based on future scenarios, potentially leading to the generation of novel hypotheses and designs. Sejnowski's colleague at the Salk Institute, Rusty Gage, uses LLMs as "idea pumps," inputting data and asking for experimental ideas. Sejnowski and Huberman note LLMs can handle and extrapolate from existing information, perform in-context learning, and surprisingly get better at tasks without undergoing further learning.
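
As a purely hypothetical sketch of the "idea pump" workflow Gage uses, the text sent to an LLM might bundle known findings with a request for experiments. The template and helper name below are invented, and an actual run would require a real LLM client:

```python
# Hypothetical "idea pump" prompt builder. The template and field names are
# invented for illustration; sending the prompt would require a real LLM client.

def idea_pump_prompt(findings, n_ideas=3):
    bullet_list = "\n".join(f"- {f}" for f in findings)
    return (
        "You are assisting a neuroscience lab.\n"
        f"Known findings:\n{bullet_list}\n"
        f"Propose {n_ideas} novel, testable experiments that follow from these "
        "findings, and note the key control condition for each."
    )

prompt = idea_pump_prompt([
    "Dopamine neurons encode reward-prediction errors",
    "Sleep spindles correlate with memory consolidation",
])
```

The point is the workflow rather than the template: the model generalizes from the supplied findings, and a human expert filters and validates the suggestions.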

These models excel at tasks like summarizing research papers, critiquing statistical methods, and extrapolating new scenarios from available data.

Sejnowski recounts giving an LLM a neuroscience abstract and asking it to explain the content to a 10-year-old, showcasing the model's ability to simplify complex information, albeit with the loss of finer intricacies. Huberman mentions AI that can reference sources, analyze them critically, and predict outcomes. Sejnowski mentions Google's Gemini, which uses "chain of reasoning" to improve problem-solving accuracy by following logical steps.

Integrating the strengths of human experts and AI systems can lead to breakthroughs that neither could achieve alone.

AI can uncover patterns and connections that humans may miss. Sejnowski proposes that AI models could predict future events by integrating AI's predictive capabilities with human expertise. AI can quickly synthesize existing data, potentially hypothesizing outcomes faster than possible through traditional methods. Although AI can generate data and ideas, it doesn't necessar ...


Additional Materials

Clarifications

  • Large language models (LLMs) like ChatGPT are advanced artificial intelligence systems designed to understand and generate human language. They can process vast amounts of text data to generate human-like responses, summaries, and even creative content. LLMs excel at tasks like summarizing research papers, critiquing statistical methods, and extrapolating new scenarios from available data. These models can simplify complex information, generate experimental ideas, and aid in problem-solving by leveraging their ability to learn from and build upon existing knowledge.
  • AlphaGo is an artificial intelligence program developed by DeepMind that made headlines by defeating human Go champions. Its significance lies in demonstrating AI's ability to excel in complex, strategic games that were previously thought to be beyond the capabilities of machines. AlphaGo's victory showcased the power of AI in mastering intricate decision-making processes and strategic thinking, marking a significant milestone in the advancement of artificial intelligence technology. This achievement highlighted the potential of AI to tackle complex real-world problems that require high-level reasoning and decision-making skills.
  • Idea pumps, in relation to Large Language Models (LLMs), refer to the capability of these models to generate novel hypotheses and experimental designs based on existing knowledge. LLMs can be used to input data and prompt the generation of new ideas or concepts, acting as a source of inspiration for scientific research. Essentially, they serve as a tool to stimulate creative and innovative thinking by leveraging the vast amount of information they have been trained on. This term highlights the role of LLMs in spark ...

Counterarguments

  • While LLMs can generate hypotheses, they may also produce a high volume of irrelevant or impractical ideas, requiring significant human filtering and validation.
  • AI's prowess in complex problem-solving, as demonstrated by AlphaGo, may not directly translate to all scientific domains, which often require a deep understanding of context and nuance that AI may lack.
  • Summarizing research papers and critiquing statistical methods are valuable, but AI may miss the subtleties of human language and the context that experts bring to these tasks.
  • The integration of human expertise with AI systems is promising, but there can be challenges in communication, differing methodologies, and potential biases from both sides that could affect the outcomes.
  • AI's ability to uncover patterns and connections may lead to overreliance on data-driven insights, potentially overshadowing the importance of theory-driven scientific inquiry.
  • The increase in diagnostic accuracy in dermatology when combining AI with human expertise is significant, but it ...
