Lecture
Data science, machine learning, and deep networks are part and parcel of the "rise of algorithms", also referred to colloquially as "Artificial Intelligence": data-enabled processes that mimic human decision making over a wide spectrum of tasks, from pattern recognition and natural language processing to machine translation and autonomous driving. I will describe the successes and challenges of harnessing state-of-the-art AI to power autonomous driving and wearable computing, and chart the challenges ahead in creating the "broad" machine intelligence necessary for solving the really difficult problems facing humanity.
Break
Talk: The State of Deep Learning for Natural Language Processing, Chris Manning, Stanford University
Human language is a distinctive and important part of human intelligence. This talk touches on both the key deep learning techniques that have opened up successful new approaches to interpreting language (word vectors, recurrent neural networks, attention, and transformer models) and key applications such as speech recognition and synthesis, machine translation, and question answering. The talk concludes with the suggestion that deep learning may be starting to show the path towards a discovery procedure for linguistic knowledge.
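As a minimal illustration of one of these techniques (a hedged sketch, not code from the talk), scaled dot-product attention, the core operation of transformer models, can be written in a few lines of NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))   # numerically stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each query mixes the values,
    weighted by its similarity to every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)                # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)   # (4, 8)
```

Real transformers add learned projections, multiple heads, and masking on top of this single operation.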
The State of Deep Reinforcement Learning, Oriol Vinyals, Google AI
Deep reinforcement learning has emerged as an extension of the capabilities of deep learning systems beyond supervised and unsupervised learning. In the last few years, we have witnessed advances in domains in which complicated decisions must be carried out by an "agent" interacting with an "environment". In this talk, I will summarize the state of deep RL, highlighting successes from Atari to Go and StarCraft, as well as depicting some of the challenges ahead.
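The agent-environment loop described here can be sketched with tabular Q-learning on an invented five-state chain (a toy illustration, far simpler than the Atari, Go, or StarCraft systems in the talk):

```python
import numpy as np

# Toy 5-state chain: move left (-1) or right (+1); reward 1 only on
# reaching the rightmost state, which ends the episode.
N_STATES, ACTIONS = 5, (-1, +1)

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.3      # learning rate, discount, exploration

for episode in range(300):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update: bootstrap from the best action in the next state
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2

print(Q.argmax(axis=1))   # greedy action index per state (1 = move right)
```

Deep RL replaces the table `Q` with a neural network, but the interaction loop and update target are the same.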
Dining
Experiences with deep learning in particle physics, Kyle Cranmer, New York University
Particle physics is a field equipped with a high-fidelity simulation that spans a hierarchy of scales ranging from the quantum mechanical interaction of fundamental particles to the electronic response of enormous particle detectors. These simulators provide a causal, generative model for the data. Moreover, they are stochastic and non-differentiable. Most inference problems in particle physics can be framed as an inverse problem where the simulation represents the forward model. I will describe recent experiences and lessons learned from attacking these problems using deep learning and present examples of incorporating domain knowledge into the models.
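One classical route to such inverse problems (a toy stand-in, not the actual particle-physics analyses: the simulator, prior range, and tolerance below are invented for illustration) is rejection ABC, which uses the stochastic simulator only as a forward model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    """Stochastic, non-differentiable forward model (toy stand-in)."""
    return rng.normal(theta, 1.0, n)

observed = simulator(2.0)            # pretend this is the recorded data
obs_mean = observed.mean()

# Rejection ABC: draw parameters from the prior, run the simulator,
# and keep draws whose simulated summary statistic lands near the data.
prior_draws = rng.uniform(-5.0, 5.0, 20000)
posterior = np.array(
    [t for t in prior_draws if abs(simulator(t).mean() - obs_mean) < 0.1]
)
print(f"kept {posterior.size} draws, posterior mean ≈ {posterior.mean():.2f}")
```

The deep learning methods discussed in the talk aim to make this kind of likelihood-free inference far more sample-efficient than naive rejection.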
Counterpoint: Can deep learning provide deep insight in neuroscience?, Bruno Olshausen, University of California, Berkeley
Despite the fact that we have learned much from neuroscience over the past half century, our computational models seem to have advanced little in comparison. Rosenblatt’s perceptron (ca. 1960) and Fukushima’s neocognitron (1980) still dominate the modern intellectual landscape (though rebranded under new names). Here I shall argue for an approach to understanding the neural mechanisms of perception that takes as its starting point basic insights about the structure of the natural visual world and attempts to articulate the basic computational problems to be solved, as in the Pattern Theory of Grenander and Mumford. Gaining insight in neuroscience will require building models, grounded in the theory, which embrace the complexity and rich computational structure of thalamocortical circuits: laminar structure, recurrence, dendritic nonlinearities, and hierarchical organization with bidirectional flow of information.
Session II: Deep Learning in Science (Panel) Critical Perspective: Scientific Funding for Deep Learning
Moderator: Juan Meza, NSF
Robert Bonneau, DOD
Hava Siegelmann, DARPA
Henry Kautz, NSF
Richard (Doug) Riecken, USAF Office of Scientific Research
19th Annual Sackler Lecture: Steps Toward Super Intelligence
Introduction by Marcia McNutt, President, National Academy of Sciences
Presented by Rodney Brooks, Massachusetts Institute of Technology
Progress in machine learning over the last decade has led to a profusion of applications, and to an understanding in business that getting control of large amounts of data is an important tactic. We will continue to see new ways to use large data sets and new application areas. But we should not confuse this burst of progress with being close to building general human-level, or beyond, Artificial Intelligence systems. There are many other aspects of intelligence that we still have no viable way to emulate. There is much research yet to be done. This talk outlines how we got to where we are, why we may be mistaken about how far we have come, and highlights challenges in getting toward super-intelligent machines. That prospect may yet be centuries away.
Accurate prediction from interpolation: A new challenge for statistical learning theory, Peter Bartlett, University of California, Berkeley
Classical results that address the problem of generalization involve a tradeoff between the fit to the training data and the complexity of the prediction rule. Deep learning seems to operate outside the regime where these results are informative, since deep networks can perform well even with a perfect fit to the training data. This raises the important challenge of understanding the performance of interpolating prediction rules. We present some preliminary results for the simple case of linear regression, and highlight the questions raised for deep learning.
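A minimal sketch of this interpolating regime in the linear case (an illustrative toy, assuming the minimum-norm least-squares solution computed via the pseudoinverse):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 100                 # more parameters than samples: perfect fit possible
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Minimum-norm interpolator: among all w with Xw = y exactly, the
# pseudoinverse selects the one of smallest Euclidean norm.
w_hat = np.linalg.pinv(X) @ y

train_mse = np.mean((X @ w_hat - y) ** 2)            # essentially zero
X_test = rng.standard_normal((1000, d))
test_mse = np.mean((X_test @ w_hat - X_test @ w_true) ** 2)
print(f"train MSE {train_mse:.1e}, test MSE {test_mse:.3f}")
```

The fit to the noisy training data is exact, yet the test error remains finite, which is the behavior classical generalization bounds struggle to explain.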
Why neuroscience needs deep learning theory
The realization that learning with good credit assignment is a hallmark feature of human behavior, and hence of brain computation, is slowly taking hold. I will discuss why theoretical insights are key to future progress in neuroscience. As part of this endeavor, I will sketch the kinds of insights we are looking for and present new deep learning theory work from the laboratory.
Session III: Theoretical Perspectives on Deep Learning. Counterpoint: On instabilities in deep learning (Does AI come at a cost?), Anders Hansen, Cambridge University
It is now well established that deep learning yields highly successful, yet incredibly unstable methods in image classification and in decision problems in general. Recently, it has been documented that the instability phenomenon in deep learning also occurs in inverse problems and image reconstruction. Indeed, tiny, almost undetectable perturbations, both in the image and sampling domain, may result in severe artefacts in the reconstruction. These results raise the following fundamental question: Does artificial intelligence (AI), as we know it, come at a cost? Is instability a necessary by-product of our current deep learning and AI techniques? We will discuss recent mathematical developments that shed light on this question. Moreover, as we will see, this question is actually a problem in the foundations of computational mathematics.
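The flavor of such instabilities can be seen even in a linear toy model (a hypothetical sketch, not from the talk): a perturbation that is tiny in every coordinate can flip the decision of a classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w = rng.standard_normal(d) / np.sqrt(d)   # toy linear classifier: sign(w @ x)
x = rng.standard_normal(d)
margin = w @ x

# Perturb every coordinate by the same tiny amount, chosen against the
# sign structure of w (the linear analogue of a gradient-sign attack).
eps = 1.5 * abs(margin) / np.linalg.norm(w, ord=1)
x_adv = x - eps * np.sign(margin) * np.sign(w)

print("prediction:", np.sign(w @ x), "->", np.sign(w @ x_adv))
print("largest per-coordinate change:", np.abs(x_adv - x).max())
```

In high dimension the required per-coordinate change shrinks, which is one intuition for why deep networks on images are so easy to perturb imperceptibly.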
Counterpoint: Deeper Learning in Empirical Science: Some Requirements and Needs, Ronald Coifman, Yale University
In order to generate and learn scientific models from observational data, we confront the challenge of discovering intrinsic latent variables; i.e., variables or codes that are descriptors independent of particular data modalities and that can provide a common calibrated language for quantifying and modeling observational dynamics.
At this point machine learning provides mostly “tabulation/encoders” and regression. We suggest that the quest for intrinsic variables should be a priority: it is often feasible, and it enables direct consistency and performance matching between algorithmic learners.
Session III: Theoretical Perspectives on Deep Learning (Panel) Critical Perspective: Could a good DL theory change practice?
Moderator: Ben Recht, UC Berkeley
Eero Simoncelli, New York University
Julia Kempe, New York University Center for Data Science
Panel Discussion: Drivers and considerations for federal / industry space investment in fundamental academic AI research
Moderator: Jim Kurose, National Science Foundation
John Beieler, IARPA
Juan Meza, National Science Foundation
Tony Thrall, National Security Agency
Neural Solvers for Power Transmission Problems
B. Donnot, B. Donon, A. Marot, I. Guyon
We emulate the behavior of physics solvers of the differential equations of electricity to compute load flows in power transmission networks. Load flow computation is a well studied and understood problem, but current methods (based on Newton-Raphson optimization) are slow. With increasing usage expectations of current infrastructures, it is important to find methods to accelerate computations. In this presentation we compare two neural network approaches that we developed to speed up load flow computations. The first, the LEAP net (Latent Encoding of Atypical Perturbation), implements a form of transfer learning, permitting training on a few source domains (grid topology perturbations) and then generalizing to new target domains (combinations of perturbations) without training on any example from those domains. We evaluate the viability of this technique for rapidly assessing the curative actions that human operators take in emergency situations, using real historical data from the French high-voltage power grid. The second, the Graph Neural Solver (GNS), overcomes the LEAP net's limitation of working only in the vicinity of a fixed grid topology by implementing an iterative approximation of the physics equations. Preliminary results for the GNS are presented.
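For context, the classical baseline mentioned above iterates Newton-Raphson on the power-balance equations; a generic sketch on an invented two-variable system (not the actual grid equations):

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by iterating x <- x - J(x)^{-1} f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(jac(x), fx)
    return x

# Toy 2-variable nonlinear system standing in for the power-balance equations.
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

sol = newton_raphson(f, jac, x0=[1.0, 0.5])
print(sol)   # ≈ [sqrt(2), sqrt(2)]
```

Each iteration requires forming and solving a linear system, which is the cost the neural approaches amortize away.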
Talk: From deep reinforcement learning to AI, Doina Precup, McGill University
Deep reinforcement learning has achieved great success in well-defined application domains, such as Go or chess, in which an agent has to learn how to act and there is a clear success criterion. In this talk, I will focus on the potential role of reinforcement learning as a tool for building knowledge representations in AI agents which need to face a continual learning task. I will examine a key concept in reinforcement learning, the value function, and discuss its generalization to support various forms of knowledge. I will also discuss the challenge of how to evaluate reinforcement learning agents whose goal is not just to control their environment, but also to build knowledge about the world. I will argue that the traditional approach of simply using a single quantitative measure is no longer sufficient, and that we need to develop more nuanced ways of understanding the capability of our agents.
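The value function discussed here can be illustrated by Bellman value iteration on a small invented deterministic MDP (a toy sketch, not from the talk):

```python
import numpy as np

# Invented deterministic 4-state MDP: next_state[s, a] and reward[s, a].
# State 3 is absorbing; the only reward is earned from state 1, action 0.
next_state = np.array([[1, 2], [3, 0], [3, 1], [3, 3]])
reward     = np.array([[0., 0.], [1., 0.], [0., 0.], [0., 0.]])
gamma = 0.9

V = np.zeros(4)
for _ in range(100):
    # Bellman optimality backup: V(s) = max_a [ r(s,a) + gamma * V(s') ]
    V = (reward + gamma * V[next_state]).max(axis=1)

print(V)   # optimal values: 0.9, 1.0, 0.9, 0.0
```

The generalizations mentioned in the abstract replace the single scalar reward with other cumulants, yielding value functions that encode predictive knowledge about the world rather than just control performance.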
Talk: Theory-based measures of object representations in deep artificial and biological networks, Haim Sompolinsky, Hebrew University of Jerusalem
An object perceived under different physical conditions (i.e., location, pose, size, orientation, background) creates different responses in sensory neurons, resulting in an “object manifold” in the response space of a neuronal population. Presumably, hierarchical sensory systems untangle these manifolds, allowing downstream systems to perform perceptually invariant tasks such as object recognition and classification. However, it is unclear how to quantify the process of untangling and, more generally, how to compare object representations between different layers, different architectures, or between artificial networks and biological ones. In my talk, I will describe recent theoretical advances that relate the ability to perform invariant object classification to the geometric properties of the object manifolds. I will show numerical results that apply this theory to evaluate the population-based changes of object representations in the successive layers of deep convolutional neural networks (DCNNs) as well as in neural data from visual cortex. This work proposes theory-based statistical measures that can be used to compare experimental data on neural responses to predictions of deep networks.
Session IV: Experimental Perspectives on Deep Learning (Panel) Critical Perspective: What’s missing in today’s experimental analysis of DL?
Moderator: Jonathon Phillips, NIST
Jitendra Malik, University of California, Berkeley
Peter Bartlett, University of California, Berkeley
Antonio Torralba, Massachusetts Institute of Technology
Isabelle Guyon, Paris-Sud University & ClopiNet
Summary: From Machine Learning to Artificial Intelligence, Leon Bottou, Facebook AI Research
There is a substantial gap between the tasks we intend to solve and the statistical proxy problems that machine learning algorithms actually solve. For instance, machine learning algorithms often capture the spurious correlations that are pervasive in the large training sets now commonplace in deep learning research. This gap has a rich conceptual structure and yet has rarely been the object of theoretical research. In this presentation, I argue that understanding how to reason about this gap is key to advancing artificial intelligence.
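A toy sketch of such a spurious correlation (hypothetical data, not from the talk): a feature that tracks the label almost perfectly in training but is uninformative at test time dominates a least-squares classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, p_track):
    """Labels in {-1, +1}; one noisy 'real' feature and one spurious
    feature that tracks the label with probability p_track."""
    y = rng.integers(0, 2, n) * 2 - 1
    core = y + 0.8 * rng.standard_normal(n)          # weakly predictive
    sign = np.where(rng.random(n) < p_track, 1, -1)
    spur = sign * y + 0.1 * rng.standard_normal(n)   # low-noise, but unreliable
    return np.column_stack([core, spur]), y

# The spurious feature tracks the label 95% of the time in training,
# but only 50% of the time (i.e., not at all) at test time.
X_tr, y_tr = make_data(2000, 0.95)
X_te, y_te = make_data(2000, 0.50)

w = np.linalg.lstsq(X_tr, y_tr.astype(float), rcond=None)[0]
acc = lambda X, y: np.mean(np.sign(X @ w) == y)
print(f"train acc {acc(X_tr, y_tr):.2f}, test acc {acc(X_te, y_te):.2f}")
```

The learner minimizes its statistical proxy (training error) perfectly well; the failure lies in the gap between that proxy and the intended task.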
Adjourn