The Science of Deep Learning

Colloquium Agenda
  • Wednesday, March 13, 2019

    Lecture

    9:00 AM  -  9:20 AM
    Welcome and Opening Remarks, David Donoho, Stanford University
    NAS Building 2101 Constitution Ave NW
    9:20 AM  -  10:00 AM
    Session I: The State of Deep Learning: Overview Talk (I) Amnon Shashua
    NAS Building 2101 Constitution Ave NW
    Successes and Challenges in Modern Artificial Intelligence,  Amnon Shashua, Hebrew University / Mobileye

    Data science, machine learning, and deep networks are part and parcel of the “rise of algorithms”, also referred to colloquially as “Artificial Intelligence”: data-enabled processes that mimic human decision making across a wide spectrum of tasks, from pattern recognition and natural language processing to machine translation and autonomous driving. I will describe the successes and challenges in harnessing state-of-the-art AI to power autonomous driving and wearable computing, and chart the challenges that lie ahead in creating the “broad” machine intelligence needed to solve the really difficult problems facing humanity.

    10:00 AM  -  10:40 AM
    Session I: The State of Deep Learning: Overview Talk (II) Jitendra Malik, University of California, Berkeley
    NAS Building 2101 Constitution Ave NW
     

    Break

    10:40 AM  -  11:00 AM
    Break
    NAS Building 2101 Constitution Ave NW
     

    Lecture

    11:00 AM  -  11:30 AM
    Session I: The State of Deep Learning: Talk: Chris Manning
    NAS Building 2101 Constitution Ave NW

    Talk: The State of Deep Learning for Natural Language Processing, Chris Manning, Stanford University

    Human language is a distinctive and important part of human intelligence. This talk touches on both the key deep learning techniques that have opened up successful new approaches to interpreting language - word vectors, recurrent neural networks, attention, and transformer models - and key applications such as speech recognition and synthesis, machine translation, and question answering. The talk concludes with the suggestion that deep learning may be starting to show the path towards a discovery procedure for linguistic knowledge.
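
    To make these techniques concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside the transformer models mentioned above. The toy token embeddings are illustrative only, not material from the talk.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted
    average of the rows of V, with weights from a softmax over
    query-key similarities."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n_queries, n_keys)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                              # (n_queries, d_v)

# Toy self-attention over 4 "word vectors" of dimension 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)                                    # (4, 8)
```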

    11:30 AM  -  12:00 PM
    Talk: Oriol Vinyals
    NAS Building 2101 Constitution Ave NW

    The State of Deep Reinforcement Learning, Oriol Vinyals, Google AI

    Deep reinforcement learning has emerged as an extension of the capabilities of deep learning systems beyond supervised and unsupervised learning. In the last few years, we have witnessed advances in domains where complicated decisions must be carried out by an "agent" interacting with an "environment". In this talk, I will summarize the state of deep RL, highlighting successes from Atari to Go and StarCraft, as well as describing some of the challenges ahead.
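
    As a concrete picture of the agent/environment loop at the heart of deep RL, here is a minimal tabular Q-learning sketch on a toy one-dimensional gridworld. It is illustrative only; the systems discussed in the talk replace the table with deep networks.

```python
import numpy as np

# Tabular Q-learning: start at state 0, reward +1 for reaching state 4.
# Actions: 0 = left, 1 = right.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = (s2 == n_states - 1)
    return s2, float(done), done          # reward 1 only at the goal

def choose(s):
    # epsilon-greedy, with random tie-breaking for unvisited states
    if rng.random() < eps or np.all(Q[s] == Q[s][0]):
        return int(rng.integers(n_actions))
    return int(Q[s].argmax())

for episode in range(200):
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        # move Q(s, a) toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2

print(Q.argmax(axis=1))   # greedy policy: non-terminal states learn "right"
```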

    12:00 PM  -  12:30 PM
    Critical Perspective: Strengths and Fallacies in the Dominant DL Narrative
    NAS Building 2101 Constitution Ave NW
    Session I: The State of Deep Learning
    (Panel) Critical Perspective: Strengths and Fallacies in the Dominant DL Narrative
    Moderator:  David Donoho, Stanford University
    Terrence Sejnowski, Salk Institute for Biological Studies
    Tomaso Poggio, Massachusetts Institute of Technology
    Regina Barzilay, Massachusetts Institute of Technology
    Rodney Brooks, Massachusetts Institute of Technology
     

    Dining

    12:30 PM  -  1:30 PM
    Lunch
    NAS Building 2101 Constitution Ave NW
     

    Lecture

    1:30 PM  -  2:00 PM
    Session II: Deep Learning in Science: Talk: Regina Barzilay, Massachusetts Institute of Technology
    NAS Building 2101 Constitution Ave NW
    2:00 PM  -  2:30 PM
    Session II: Deep Learning in Science: Talk: Kyle Cranmer
    NAS Building 2101 Constitution Ave NW

    Experiences with deep learning in particle physics, Kyle Cranmer, New York University

    Particle physics is a field equipped with a high-fidelity simulation that spans a hierarchy of scales ranging from the quantum mechanical interaction of fundamental particles to the electronic response of enormous particle detectors. These simulators provide a causal, generative model for the data. Moreover, they are stochastic and non-differentiable. Most inference problems in particle physics can be framed as an inverse problem where the simulation represents the forward model. I will describe recent experiences and lessons learned from attacking these problems using deep learning, and present examples of incorporating domain knowledge into the models.
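
    One minimal recipe for inference with a stochastic, non-differentiable simulator is rejection approximate Bayesian computation (ABC), sketched below with a hypothetical Poisson "detector" as the forward model. It is a stand-in for the simulation-based inference methods the talk discusses, not the talk's own code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=20):
    # stand-in for an expensive stochastic detector simulation
    return rng.poisson(lam=theta, size=n)

theta_true = 3.0
obs_summary = simulator(theta_true).mean()    # observed summary statistic

# Sample parameters from the prior, run the simulator, and keep those
# whose simulated summaries land close to the observed one.
prior_draws = rng.uniform(0.0, 10.0, size=20_000)
accepted = np.array([th for th in prior_draws
                     if abs(simulator(th).mean() - obs_summary) < 0.2])

print(f"posterior mean ~ {accepted.mean():.2f} (true theta = {theta_true})")
```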

    2:30 PM  -  3:00 PM
    Session II: Deep Learning in Science: Talk: Olga Troyanskaya, Princeton University
    NAS Building 2101 Constitution Ave NW
     

    Break

    3:00 PM  -  3:30 PM
    Break
    NAS Building 2101 Constitution Ave NW
     

    Lecture

    3:30 PM  -  4:00 PM
    Session II: Deep Learning in Science: Talk: Eero Simoncelli, New York University
    NAS Building 2101 Constitution Ave NW
    4:00 PM  -  4:15 PM
    Counterpoint: Bruno Olshausen
    NAS Building 2101 Constitution Ave NW

    Counterpoint: Can deep learning provide deep insight in neuroscience?, Bruno Olshausen, University of California, Berkeley

    Despite the fact that we have learned much from neuroscience over the past half century, our computational models seem to have advanced little in comparison. Rosenblatt’s perceptron (ca. 1960) and Fukushima’s neocognitron (1980) still dominate the modern intellectual landscape (though rebranded under new names). Here I shall argue for an approach to understanding the neural mechanisms of perception that takes as its starting point basic insights about the structure of the natural visual world and attempts to articulate the basic computational problems to be solved, as in the Pattern Theory of Grenander and Mumford. Gaining insight in neuroscience will require building models, grounded in the theory, that embrace the complexity and rich computational structure of thalamocortical circuits - i.e., laminar structure, recurrence, dendritic nonlinearities, and hierarchical organization with bidirectional flow of information.
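
    For reference, the historical baseline the abstract invokes can be stated in a few lines: a Rosenblatt-style perceptron with its error-driven update rule. The toy data below is illustrative only.

```python
import numpy as np

# A minimal perceptron (ca. 1960): a thresholded linear unit whose
# weights are nudged whenever it misclassifies an example.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
scores = X @ np.array([1.5, -1.0])
keep = np.abs(scores) > 0.3             # keep a margin so training converges
X, y = X[keep], np.where(scores[keep] > 0, 1, -1)

w, b = np.zeros(2), 0.0
for _ in range(50):                     # passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:      # misclassified -> update
            w += yi * xi
            b += yi

pred = np.where(X @ w + b > 0, 1, -1)
print((pred == y).mean())               # 1.0 on this separable toy set
```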

    4:15 PM  -  4:30 PM
    Session II: Deep Learning in Science: Counterpoint: Antonio Torralba, Massachusetts Institute of Technology
    NAS Building 2101 Constitution Ave NW
    4:30 PM  -  5:00 PM
    Panel Discussion: Scientific Funding for Deep Learning
    NAS Building 2101 Constitution Ave NW

    Session II: Deep Learning in Science
    (Panel) Critical Perspective: Scientific Funding for Deep Learning

    Moderator: Juan Meza, NSF
    Robert Bonneau, DOD
    Hava Siegelmann, DARPA
    Henry Kautz, NSF
    Richard (Doug) Riecken, USAF Office of Scientific Research

     

    Dining

    5:00 PM  -  6:00 PM
    Reception
    NAS Building 2101 Constitution Ave NW
     

    Lecture

    6:00 PM  -  8:00 PM
    Annual Sackler Lecture
    NAS Building 2101 Constitution Ave NW

    19th Annual Sackler Lecture: Steps Toward Super Intelligence

    Introduction by Marcia McNutt, President, National Academy of Sciences

    Presented by Rodney Brooks, Massachusetts Institute of Technology

    Progress in machine learning over the last decade has led to lots and lots of applications, and to an understanding in business that getting control of large amounts of data is an important tactic. We will continue to see new ways to use large data sets and new application areas. But we should not confuse this burst of progress with being close to being able to build general human-level, or beyond, artificial intelligence systems. There are many other aspects of intelligence that we still have no viable way of emulating. There is much research yet to be done. This talk outlines how we got to where we are, why we may be mistaken about how far we have come, and highlights challenges in getting toward super intelligent machines. That prospect may yet be centuries away.

     

    Dining

    7:15 PM  -  8:45 PM
    Private Dinner hosted by NAS President McNutt
    NAS Members' Room
    Private Dinner hosted by NAS President Marcia McNutt for the Annual Sackler Lecturer Rodney Brooks and Sackler Colloquium program benefactor Dame Jillian Sackler. By invitation only.
  • Thursday, March 14, 2019

    Lecture

    8:30 AM  -  9:00 AM
    Session III: Theoretical Perspectives on Deep Learning: Talk: Tomaso Poggio
    NAS Building 2101 Constitution Ave NW
    Networks of neurons for learning and representing symbols in the brain, Tomaso Poggio, Massachusetts Institute of Technology
    9:00 AM  -  9:30 AM
    Session III: Theoretical Perspectives on Deep Learning: Talk: Nati Srebro, Toyota Technological Institute at Chicago
    NAS Building 2101 Constitution Ave NW
    9:30 AM  -  10:00 AM
    Session III: Theoretical Perspectives on Deep Learning: Talk: Peter Bartlett
    NAS Building 2101 Constitution Ave NW

    Accurate prediction from interpolation: A new challenge for statistical learning theory, Peter Bartlett, University of California, Berkeley

    Classical results that address the problem of generalization involve a tradeoff between the fit to the training data and the complexity of the prediction rule. Deep learning seems to operate outside the regime where these results are informative, since deep networks can perform well even with a perfect fit to the training data.  This raises the important challenge of understanding the performance of interpolating prediction rules.  We present some preliminary results for the simple case of linear regression, and highlight the questions raised for deep learning.
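
    A minimal sketch of the phenomenon in the linear setting: with more parameters than samples, the pseudoinverse gives the minimum-norm weights that fit noisy training data exactly, and the question the talk raises is when such interpolators can nevertheless predict well. Toy data, illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                           # n samples, d features, d >> n
X = rng.normal(size=(n, d))
w_star = np.zeros(d); w_star[:5] = 1.0   # sparse "true" signal
y = X @ w_star + 0.1 * rng.normal(size=n)

w_hat = np.linalg.pinv(X) @ y            # minimum-norm solution of X w = y

print("train error:", np.mean((X @ w_hat - y) ** 2))   # ~0: perfect fit
X_test = rng.normal(size=(1000, d))
print("test error: ", np.mean((X_test @ w_hat - X_test @ w_star) ** 2))
```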

     

    10:00 AM  -  10:15 AM
    Session III: Theoretical Perspectives on Deep Learning: Counterpoint: Konrad Kording
    NAS Building 2101 Constitution Ave NW

    Why neuroscience needs deep learning theory, Konrad Kording, University of Pennsylvania

    The realization that learning with good credit assignment is a hallmark feature of human behavior, and hence of brain computation, is slowly taking hold. I will discuss how theoretical insights are key to future progress in neuroscience. As part of this endeavor I will sketch the kinds of insights that we are looking for and present new deep learning theory work from my laboratory.
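
    As a concrete picture of credit assignment, here is a tiny two-layer network trained by backpropagation, which computes for every weight its share of the output error. This is a generic sketch, not the laboratory's own work.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)                   # a single input
W1 = 0.5 * rng.normal(size=(4, 3))       # hidden-layer weights
W2 = 0.5 * rng.normal(size=(1, 4))       # output weights
y_target = np.array([1.0])

for step in range(100):
    h = np.tanh(W1 @ x)                  # forward pass
    err = W2 @ h - y_target              # output error

    # backward pass: propagate credit/blame to each layer's weights
    dW2 = np.outer(err, h)
    dh = W2.T @ err                      # each hidden unit's contribution
    dW1 = np.outer(dh * (1 - h ** 2), x)

    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

print(float(W2 @ np.tanh(W1 @ x)))       # ~1.0: error driven to zero
```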

     

    Break

    10:15 AM  -  10:30 AM
    Break
    NAS Building 2101 Constitution Ave NW
     

    Lecture

    10:30 AM  -  10:45 AM
    Counterpoint: Anders Hansen
    NAS Building 2101 Constitution Ave NW

    Session III: Theoretical Perspectives on Deep Learning: Counterpoint: Anders Hansen, Cambridge University: On instabilities in deep learning - Does AI come at a cost?

    It is now well established that deep learning yields highly successful, yet incredibly unstable methods in image classification and in decision problems in general. Recently, it has been documented that the instability phenomenon in deep learning also occurs in inverse problems and image reconstruction. Indeed, tiny, almost undetectable perturbations, both in the image and sampling domain, may result in severe artefacts in the reconstruction. These results raise the following fundamental question: Does artificial intelligence (AI), as we know it, come at a cost? Is instability a necessary by-product of our current deep learning and AI techniques? We will discuss recent mathematical developments that shed light on this question. Moreover, as we will see, this question is actually a problem in the foundations of computational mathematics.  
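
    The mechanism behind such instabilities can be sketched in a few lines with a gradient-sign perturbation of a linear logistic model. The effect the talk documents for deep networks and inverse problems is far more dramatic; this toy only shows the principle.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000
w = rng.normal(size=d)                # weights of a "trained" linear classifier
x = rng.normal(size=d)                # a typical input

def sigmoid(z):                       # numerically stable logistic
    if z >= 0:
        return 1.0 / (1.0 + np.exp(-z))
    ez = np.exp(z)
    return ez / (1.0 + ez)

p = sigmoid(w @ x)                    # original prediction
toward = -1.0 if p > 0.5 else 1.0     # push toward the opposite class

# The gradient of the logit w.r.t. the input is just w, so a tiny step
# against it per coordinate (a "fast gradient sign" perturbation) suffices.
eps = 0.05                            # 5% of the input's unit scale
x_adv = x + toward * eps * np.sign(w)

print(f"before: p = {p:.3f}   after: p = {sigmoid(w @ x_adv):.3f}")
```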

    10:45 AM  -  11:00 AM
    Counterpoint: Ronald Coifman
    NAS Building 2101 Constitution Ave NW

    Counterpoint: Deeper Learning in Empirical Science, some requirements and needs, Ronald Coifman, Yale University

    In order to generate and learn scientific models from observational data, we confront the challenge of discovering intrinsic latent variables; i.e., variables or codes that are descriptors, independent of particular data modalities, and that can provide a common calibrated language for quantifying and modeling observational dynamics.

    At this point machine learning provides mostly “tabulation/encoders” and regression. We suggest that the quest for intrinsic variables should be a priority: it is often feasible, and it enables direct consistency and performance matching between algorithmic learners.
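
    One classical route to such intrinsic variables is a diffusion-map style spectral embedding: re-parameterize the data by eigenvectors of a normalized affinity matrix. The sketch below recovers the intrinsic angle of a noisy circle from pairwise distances alone; the data and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, size=300))      # hidden latent variable
X = np.c_[np.cos(t), np.sin(t)] + 0.02 * rng.normal(size=(300, 2))

# Gaussian affinities, row-normalized into a Markov transition matrix.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
P = np.exp(-D2 / 0.1)
P /= P.sum(axis=1, keepdims=True)

eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
psi = eigvecs.real[:, order]          # psi[:, 0] is the trivial constant vector

# The first two nontrivial coordinates embed the circle: their angle
# tracks the true latent t up to rotation and reflection.
embedding = psi[:, 1:3]
print(embedding.shape)                # (300, 2)
```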

     

    11:00 AM  -  11:15 AM
    Critical Perspective: Could a good DL theory change practice?
    NAS Building 2101 Constitution Ave NW

    Session III: Theoretical Perspectives on Deep Learning
    (Panel) Critical Perspective: Could a good DL theory change practice?

    Moderator: Ben Recht, UC Berkeley
    Eero Simoncelli, New York University
    Julia Kempe, New York University Center for Data Science

    11:15 AM  -  12:00 PM
    Policy and Science Funding Panel

    Panel Discussion: Drivers and considerations for federal and industry investment in fundamental academic AI research

    Moderator: Jim Kurose, National Science Foundation

    John Beieler, IARPA

    Juan Meza, National Science Foundation

    Tony Thrall, National Security Agency

     

    Dining

    12:15 PM  -  1:30 PM
    Lunch and Poster Session
    NAS Building 2101 Constitution Ave NW
    Posters will be presented by young researchers during lunch.
     

    Lecture

    1:30 PM  -  1:50 PM
    Session IV: Experimental Perspectives on Deep Learning: Short talk: Jonathon Phillips
    NAS Building 2101 Constitution Ave NW
    Data Sets for Analyzing Face Recognition Performance of Humans and Algorithms, Jonathon Phillips, National Institute of Standards and Technology
    1:50 PM  -  2:10 PM
    Session IV: Experimental Perspectives on Deep Learning: Short talk: Isabelle Guyon
    NAS Building 2101 Constitution Ave NW

    Neural Solvers for Power Transmission Problems

    B. Donnot, B. Donon, A. Marot, I. Guyon

    We emulate the behavior of physics solvers of electricity differential equations to compute load flows in power transmission networks. Load flow computation is a well studied and understood problem, but current methods (based on Newton-Raphson optimization) are slow. With increasing usage expectations of current infrastructures, it is important to find methods to accelerate computations. In this presentation we compare two neural network approaches that we developed to speed up load flow computations. The first one, the LEAP net (Latent Encoding of Atypical Perturbation), implements a form of transfer learning, permitting training on a few source domains (grid topology perturbations) and then generalizing to new target domains (combinations of perturbations) without learning on any example of that domain. We evaluate the viability of this technique to rapidly assess the curative actions that human operators take in emergency situations, using real historical data from the French high-voltage power grid. The second one, the Graph Neural Solver (GNS), overcomes the LEAP net's limitation of working only in the vicinity of a fixed grid topology by implementing an iterative approximation of the physics equations. Preliminary results for the GNS are presented.
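
    A heavily simplified sketch of the underlying recipe, using a DC (linearized) power-flow model on a hypothetical 4-bus grid: generate (injection, solution) pairs with a conventional solver, then fit a fast regressor that emulates it. This is not the LEAP net or the GNS; plain least squares stands in for the neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-bus line grid; B is the susceptance (graph Laplacian) matrix.
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
B = np.zeros((n, n))
for i, j in edges:
    B[i, i] += 1; B[j, j] += 1
    B[i, j] -= 1; B[j, i] -= 1
B_red = B[1:, 1:]                       # bus 0 is the slack, angle fixed at 0

def solve_dc(p):
    """Conventional solver: bus angles from injections, B_red @ theta = p[1:]."""
    return np.linalg.solve(B_red, p[1:])

# Build a dataset of solved cases (balanced injections).
P = rng.normal(size=(1000, n)); P -= P.mean(axis=1, keepdims=True)
Theta = np.array([solve_dc(p) for p in P])

# "Learned" emulator: least squares stands in for the neural network.
W, *_ = np.linalg.lstsq(P, Theta, rcond=None)

p_new = rng.normal(size=n); p_new -= p_new.mean()
print(solve_dc(p_new))                  # slow, exact solver
print(p_new @ W)                        # fast, learned emulator (matches)
```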

    2:10 PM  -  2:40 PM
    Session IV: Experimental Perspectives on Deep Learning: Talk: Doina Precup
    NAS Building 2101 Constitution Ave NW

    Talk: From deep reinforcement learning to AI, Doina Precup, McGill University

    Deep reinforcement learning has achieved great success in well-defined application domains, such as Go or chess, in which an agent has to learn how to act and there is a clear success criterion. In this talk, I will focus on the potential role of reinforcement learning as a tool for building knowledge representations in AI agents which need to face a continual learning task. I will examine a key concept in reinforcement learning, the value function, and discuss its generalization to support various forms of knowledge. I will also discuss the challenge of how to evaluate reinforcement learning agents whose goal is not just to control their environment, but also to build knowledge about the world.  I will argue that the traditional approach of simply using a single quantitative measure is no longer sufficient, and that we need to develop more nuanced ways of understanding the capability of our agents.
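
    The value function at the center of this talk can be shown in its simplest incarnation: TD(0) state-value estimation on the classic five-state random walk (a textbook example, not material from the talk itself).

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 5                          # non-terminal states 0..4
V = np.zeros(n_states)
alpha = 0.1

for episode in range(5000):
    s = 2                             # start in the middle
    while True:
        s2 = s + (1 if rng.random() < 0.5 else -1)
        if s2 < 0:                    # left terminal: reward 0
            V[s] += alpha * (0.0 - V[s]); break
        if s2 >= n_states:            # right terminal: reward +1
            V[s] += alpha * (1.0 - V[s]); break
        V[s] += alpha * (V[s2] - V[s])   # bootstrapped TD(0) update
        s = s2

print(V.round(2))     # approaches the true values [1/6, 2/6, 3/6, 4/6, 5/6]
```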

    2:40 PM  -  3:10 PM
    Session IV: Experimental Perspectives on Deep Learning: Talk: Haim Sompolinsky
    NAS Building 2101 Constitution Ave NW

    Talk: Theory-based measures of object representations in deep artificial and biological networks, Haim Sompolinsky, Hebrew University of Jerusalem

    An object perceived under different physical conditions (i.e., location, pose, size, orientation, background) creates different responses in sensory neurons, resulting in an “object manifold” in the response space of a neuronal population. Presumably, hierarchical sensory systems untangle these manifolds, allowing downstream systems to perform perceptually invariant tasks such as object recognition and classification. However, it is unclear how to quantify this process of untangling and, more generally, how to compare object representations between different layers, different architectures, or between artificial networks and biological ones. In my talk, I will describe recent theoretical advances that relate the ability to perform invariant object classification to the geometric properties of the object manifolds. I will show numerical results that apply this theory to evaluate the population-based changes of object representations in the successive layers of deep convolutional neural networks (DCNNs) as well as in neural data from visual cortex. This work proposes theory-based statistical measures that can be used to compare experimental data on neural responses to predictions of deep networks.
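
    A toy rendering of the manifold picture: each "object" is a small manifold of responses swept out by nuisance conditions, and a linear readout tests whether whole manifolds, rather than individual points, are separable. The geometry is illustrative only and does not implement the theory's actual measures.

```python
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_conditions, dim = 20, 30, 50

centers = rng.normal(size=(n_objects, dim))
directions = rng.normal(size=(n_objects, dim))
shifts = np.linspace(-1, 1, n_conditions)

# Each object manifold: its center plus points spread along one direction,
# one point per nuisance condition.
Xs = np.concatenate([c + 0.3 * np.outer(shifts, d)
                     for c, d in zip(centers, directions)])
y = np.where(np.repeat(np.arange(n_objects) % 2, n_conditions) == 1, 1, -1)

# Perceptron readout: are the manifolds linearly separable as wholes?
w = np.zeros(dim)
for _ in range(20_000):
    margins = y * (Xs @ w)
    if margins.min() > 0:
        break
    i = int(np.argmin(margins))       # update on the worst-classified point
    w += y[i] * Xs[i]

print("fraction separated:", (y * (Xs @ w) > 0).mean())   # typically 1.0
```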

     

    Break

    3:10 PM  -  3:30 PM
    Break
    NAS Building 2101 Constitution Ave NW
     

    Lecture

    3:30 PM  -  3:45 PM
    Session IV: Experimental Perspectives on Deep Learning: Counterpoint: Tara Sainath, Google AI
    NAS Building 2101 Constitution Ave NW
    3:45 PM  -  4:15 PM
    Critical Perspective Panel: What’s missing in today’s experimental analysis of DL?
    NAS Building 2101 Constitution Ave NW

    Session IV: Experimental Perspectives on Deep Learning
    (Panel) Critical Perspective: What’s missing in today’s experimental analysis of DL?

    Moderator: Jonathon Phillips, NIST
    Jitendra Malik, University of California, Berkeley
    Peter Bartlett, University of California, Berkeley
    Antonio Torralba, Massachusetts Institute of Technology
    Isabelle Guyon, Paris-Sud University & ClopiNet

    4:15 PM  -  4:30 PM
    Summary: Right ways forward?: Jon Kleinberg, Cornell University
    NAS Building 2101 Constitution Ave NW
    4:30 PM  -  4:45 PM
    Summary: Right ways forward?: Terrence Sejnowski, Salk Institute for Biological Studies
    NAS Building 2101 Constitution Ave NW
    4:45 PM  -  5:00 PM
    Summary: From Machine Learning to Artificial Intelligence - Leon Bottou
    NAS Building 2101 Constitution Ave NW

    Summary: From Machine Learning to Artificial Intelligence, Leon Bottou, Facebook AI Research

    There is a substantial gap between the tasks we intend to solve and the statistical proxy problems that machine learning algorithms actually solve. For instance, machine learning algorithms often capture the spurious correlations that are pervasive in the large training sets now commonplace in deep learning research. This gap has a rich conceptual structure, and yet it has rarely been the object of theoretical research. In this presentation, I argue that understanding how to reason about this gap is key to advancing artificial intelligence.
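
    The gap in miniature: the sketch below trains a classifier where a spurious feature tracks the label, so the model leans on it, then evaluates it after the correlation reverses. Toy data, illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, corr):
    y = rng.choice([-1, 1], size=n).astype(float)
    core = 0.5 * y + rng.normal(size=n)         # weakly but truly predictive
    agree = rng.random(n) < corr                # spurious feature agrees with
    spurious = 2.0 * np.where(agree, y, -y)     #   the label with prob `corr`
    return np.c_[core, spurious], y

X_tr, y_tr = make_data(5000, corr=0.95)         # spurious feature "helps"...
X_te, y_te = make_data(5000, corr=0.05)         # ...until it reverses

w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
acc = lambda X, y: (np.sign(X @ w) == y).mean()
print(f"train acc {acc(X_tr, y_tr):.2f}, test acc {acc(X_te, y_te):.2f}")
```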

     

    Adjourn

    5:00 PM  -  5:01 PM
    Adjourn
    NAS Building 2101 Constitution Ave NW