Bruno A. Olshausen's Abstract

Title: Getting the 'I' into AI: Some clues from biology and mathematics

Abstract: Although we have learned much from neuroscience over the past half century, our computational models seem to have advanced little in comparison. Rosenblatt's perceptron (ca. 1960) and Fukushima's neocognitron (1980) still dominate the modern intellectual landscape (though rebranded under new names). Here I shall argue for an approach to understanding the neural mechanisms of perception and cognition that takes as its starting point basic insights about the structure of the natural world and attempts to articulate the fundamental computational problems to be solved. Examples of this approach can be found in the Pattern Theory of Grenander and Mumford and the High-Dimensional Computing framework of Plate and Kanerva. Gaining insight in neuroscience - and making true advances in artificial intelligence - will require building models, grounded in theory, that embrace the complexity and rich computational structure of biological neural circuits - i.e., laminar structure, recurrence, dendritic nonlinearities, and hierarchical organization with bidirectional flow of information.
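
To give a concrete flavor of the High-Dimensional Computing framework of Plate and Kanerva mentioned above, the following minimal Python sketch (not part of the original abstract; the dimensionality, names, and role-filler example are illustrative assumptions) shows the basic idea: random vectors in very high dimensions are nearly orthogonal, so role-filler pairs can be bound by elementwise multiplication, superposed by summation, and later queried by similarity.

import numpy as np

# Minimal sketch of high-dimensional (hyperdimensional) computing in the
# spirit of Plate's Holographic Reduced Representations and Kanerva's
# hyperdimensional computing. All parameters here are illustrative.

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; random vectors of this size are nearly orthogonal

def hv():
    """Return a random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Bind a role to a filler (elementwise multiplication; self-inverse)."""
    return a * b

def bundle(*vs):
    """Superpose several hypervectors into one (elementwise sum, then sign)."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Cosine similarity: near 0 for unrelated vectors, large for related ones."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode the record {color: red, shape: circle} as a single hypervector.
color, shape = hv(), hv()             # role vectors
red, circle, blue = hv(), hv(), hv()  # filler vectors
record = bundle(bind(color, red), bind(shape, circle))

# Query: unbinding with the 'color' role recovers something close to 'red'.
query = bind(record, color)
print("sim(query, red)  ~", round(sim(query, red), 2))   # high (around 0.7 here)
print("sim(query, blue) ~", round(sim(query, blue), 2))  # near 0

The point of the sketch is only that structured, compositional information can live in a single fixed-size distributed vector and still be interrogated, one of the properties that makes this framework attractive as a computational account of neural representation.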