Computational Neuroscience

Session Date: Mar 23, 2019
Speakers: Terrence Sejnowski
Ten years ago we simulated small network models of the cortex and other parts of the brain and showed that they could account for basic aspects of perception, working memory, and decision making. These were toy models, however, compared with real brain circuits, and many aspects of cognition, such as language, remained a mystery. Two revolutions over the last decade are rapidly opening a new era. In neuroscience, new optogenetic tools have made it possible to record from thousands of neurons simultaneously across several brain regions and to selectively manipulate different types of neurons. Seeing the brain through the lens of neural populations has opened up a new dynamical perspective on cognitive function. The second revolution is deep learning in layered neural networks, loosely based on the architecture of the cortex, which can recognize speech and objects in images, translate between languages, and, with the addition of a basal ganglia model, play Go at superhuman levels. This may lead to a better understanding of how language could have evolved from the pre-existing cortical architecture of nonhuman primates.
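
As a rough illustration of what "deep learning in layered neural networks" refers to here, the sketch below builds a small feedforward stack in NumPy. The layer sizes, random weights, and fake input batch are placeholder assumptions for illustration only, not anything presented in the session.

```python
# Minimal sketch of a layered feedforward network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# A stack of fully connected layers: each layer applies a linear map
# followed by a nonlinearity, loosely analogous to successive stages
# of processing in a layered cortical hierarchy.
layer_sizes = [784, 256, 64, 10]   # e.g. image pixels -> class scores (assumed sizes)
weights = [rng.normal(0.0, 0.05, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate an input batch through the layer stack."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                        # hidden layers
    return softmax(h @ weights[-1] + biases[-1])   # output class probabilities

batch = rng.normal(size=(4, 784))   # four fake "images"
probs = forward(batch)
print(probs.shape)                  # (4, 10): one probability distribution per input
```

In a trained system the weights would be learned by gradient descent on labeled data rather than drawn at random as they are here.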

Attachment: 2019_03_23_15_Sejnowski-Web.mp4 (118.17 MB)