8:00 - 8:30 Welcome: LANL Theoretical Division, Analytics, Intelligence, and Technology Division, and CNLS Center Leader

8:30 – 9:30 Lecture 1/1 - Michael Chertkov, Los Alamos National Laboratory; Graphical Models for Inference and Optimization over Physical Network Flows

In these lectures we review the foundations and recent developments of the graphical models approach to statistical inference, which originated in information theory, computer science, artificial intelligence, and machine learning. We also describe applications of the approach to problems in optimization and control of network flows constrained by physics, illustrated with examples from electric power and natural gas networks.
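As a toy illustration of the inference machinery these lectures build on, here is a minimal sketch (not code from the lectures; the function name and potentials are invented for the example) of the sum-product algorithm computing exact marginals on a chain-structured MRF over binary variables:

```python
import numpy as np

def chain_marginals(unaries, pairwise):
    """Exact marginals on a chain MRF via two sweeps of message passing
    (the sum-product algorithm on a tree-structured graphical model).

    unaries:  list of length-2 arrays, node potentials for binary variables
    pairwise: list of 2x2 arrays, edge potentials between consecutive nodes
    """
    n = len(unaries)
    fwd = [np.ones(2) for _ in range(n)]   # messages flowing left -> right
    bwd = [np.ones(2) for _ in range(n)]   # messages flowing right -> left
    for i in range(1, n):
        fwd[i] = pairwise[i - 1].T @ (unaries[i - 1] * fwd[i - 1])
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise[i] @ (unaries[i + 1] * bwd[i + 1])
    marg = [unaries[i] * fwd[i] * bwd[i] for i in range(n)]
    return [m / m.sum() for m in marg]
```

On trees this two-sweep scheme is exact; on loopy networks such as power grids the same local updates are typically iterated as an approximation.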

9:30 - 9:45 Break

9:45 – 10:45 Lecture 1/2 - Michael Chertkov, Los Alamos National Laboratory; Graphical Models for Inference and Optimization over Physical Network Flows

10:45 - 11:00 Break

11:00 – 12:00 Lecture 2/1 - Jean B. Lasserre, LAAS-CNRS, France; The moment-SOS approach in and outside optimization

We first provide a brief description of the moment-SOS (sum-of-squares) approach to global polynomial optimization, which is based on powerful positivity certificates from real algebraic geometry. Combined with semidefinite programming (an efficient technique from convex optimization), it allows one to define a hierarchy of convex relaxations for polynomial optimization problems. Each relaxation in the hierarchy is a semidefinite program whose size increases along the hierarchy, and the associated monotone sequence of optimal values converges to the global optimum. Finite convergence is generic and fast in practice.
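In symbols, one standard formulation of the hierarchy just described, for f^* = min { f(x) : g_j(x) >= 0, j = 1..m } and assuming the Archimedean condition on the feasible set:

```latex
% Order-d SOS relaxation (the sigma_j are sum-of-squares polynomials):
\[
\rho_d \;=\; \sup_{\lambda,\,\sigma_0,\dots,\sigma_m}
\Big\{\, \lambda \;:\; f - \lambda \;=\; \sigma_0 + \sum_{j=1}^{m} \sigma_j\, g_j,\quad
\deg(\sigma_0),\ \deg(\sigma_j g_j) \le 2d \,\Big\}
\]
% Each rho_d is computable by a semidefinite program, and the
% monotone sequence rho_d increases to f^* as d grows.
```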

In fact, this methodology also applies to the Generalized Problem of Moments (GPM), of which global optimization is only a particular instance, and indeed the simplest. We then briefly describe several of its many other applications outside optimization, notably in applied mathematics, probability, statistics, computational geometry, and control and optimal control.

12:00 - 12:15 Break

12:15 - 1:15 Lecture 2/2 - Jean B. Lasserre, LAAS-CNRS, France; The moment-SOS approach in and outside optimization

1:15 - 3:00 Lunch

3:00 - 4:00 Lecture 3/1 - Laurent El Ghaoui, UC Berkeley; Some applications of data science in smart energy applications

I will review my recent work in the area of smart energy, all of it involving machine learning and optimization models. The first application involves a complex energy system that we seek to design or improve, but for which we only have a very complex simulation model at our disposal. I describe a method that builds "optimization-friendly" surrogate (i.e., approximate) models based on sparse machine learning, with an example involving a set of buildings to be renovated for better energy efficiency. The second application deals with the intra-day management of energy systems; here the challenge is to build an "uncertainty model" describing future demand and to apply it in a robust optimization context. Lastly, I will discuss how text analytics can play a key role in descriptive and predictive maintenance, based on technicians' reports.
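The sparse-machine-learning surrogate idea can be conveyed with a plain coordinate-descent lasso fit; this is a generic sketch, not the method from the talk, and the function name and data are invented for the illustration:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent lasso: fits a sparse linear surrogate y ~ X w by
    minimizing 0.5*||y - X w||^2 + lam*||w||_1. Sparse surrogates of this
    flavor can stand in for an expensive simulator inside an optimization."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]   # residual excluding feature j
            rho = X[:, j] @ r
            # soft-thresholding step: small correlations are zeroed out
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

Features with little predictive value get exactly zero weight, which is what makes the resulting surrogate both interpretable and "optimization-friendly".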

4:00 - 4:15 Break

4:15 - 5:15 Lecture 3/2 - Laurent El Ghaoui, UC Berkeley; Some applications of data science in smart energy applications

6:30 - 8:00 Dinner

Tuesday, January 10, 2017

7:00 - 8:30 Breakfast

8:30 - 9:30 Lecture 4/1- Konstantin Turitsyn, MIT;
Inner approximations of power system feasibility and
stability regions

Relaxations of the power flow equations provide a natural way to construct an outer approximation of the feasibility region. At the same time, there are no simple ways of constructing inner approximations of feasibility and stability regions. The need for inner approximations arises in a variety of applications, including security assessment, emergency control, and analysis of system robustness to uncertainty. In this lecture I will first introduce the applications and pose the formal problem of constructing inner approximations. Second, I will review the historical and more recent approaches, including new techniques proposed by our group and our colleagues. I will establish known connections to monotonicity theory and discuss the dynamic problem setting corresponding to the characterization of small-signal and transient stability regions. The talk will conclude with numerical demonstrations and an outlook on open problems.

9:30 - 9:45 Break

9:45 - 10:45 Lecture 4/2- Konstantin Turitsyn, MIT;
Inner approximations of power system feasibility and
stability regions

10:45 - 11:00 Break

11:00 - 12:00 Lecture 5/1 - Mihailo R. Jovanovic, University of Southern California; Controller Architectures: Tradeoffs between Performance and Complexity

This talk describes the design of controller architectures that achieve a desired tradeoff between performance of distributed systems and controller complexity.
Our methodology consists of two steps. First, we design controller architecture
by incorporating regularization functions into the optimal control problem and,
second, we optimize the controller over the identified architecture. For
large-scale networks of dynamical systems, the desired structural property is
captured by limited information exchange between physical and controller layers
and the regularization term penalizes the number of communication links. In the
first step, the controller architecture is designed using a customized proximal
augmented Lagrangian algorithm. This method exploits separability of the
sparsity-promoting regularization terms and transforms the augmented Lagrangian
into a form that is continuously differentiable and can be efficiently
minimized using a variety of methods. Although structured optimal control
problems are, in general, nonconvex, we identify classes of convex problems
that arise in the design of symmetric systems, undirected consensus and
synchronization networks, optimal selection of sensors and actuators, and
decentralized control of positive systems. Wide-area control of power systems
will be used as a case study to demonstrate the effectiveness of the framework.
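The separability that the proximal method exploits can be seen in the proximal operator of an l1 (link-count surrogate) penalty, which decouples across entries; a minimal sketch of just that step (the actual algorithm embeds it in a proximal augmented Lagrangian):

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1, i.e. the minimizer over x of
    0.5*||x - v||^2 + t*||x||_1. It acts entrywise (soft thresholding),
    which is the separability exploited when sparsity-promoting penalties
    are used to prune communication links in a controller architecture."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Entries of v smaller than the threshold t map to exactly zero, which is how candidate communication links get eliminated from the architecture.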

12:00 - 12:15 Break

12:15 – 1:15 Lecture 5/2 - Mihailo R. Jovanovic, University of Southern California; Controller Architectures: Tradeoffs between Performance and Complexity

1:15 - 3:00 Lunch

3:00 - 4:00 Lecture 6/1- Jeff Linderoth, University of Wisconsin; Mixed-Integer Nonlinear
Optimization: Applications, Algorithms, and Computation

Mixed-integer nonlinear programming problems (MINLPs) combine the combinatorial complexity of discrete decisions with the numerical challenges of nonlinear functions. In these talks, we will describe applications of MINLP in science and engineering, demonstrate the importance of building "good" MINLP models, discuss numerical techniques for solving MINLPs, and survey the forefront of ongoing research topics in this important and emerging area.
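A toy branch-and-bound on a separable integer quadratic conveys the basic mechanics behind MINLP solvers (bound from the continuous relaxation, branch on a fractional coordinate, prune against the incumbent); this sketch is invented for illustration and is far simpler than practical solvers:

```python
import math

def bb_int_quadratic(c, lo=-10, hi=10):
    """Tiny branch-and-bound for  min sum_i (x_i - c_i)^2  with x_i integer
    in [lo, hi]. The continuous relaxation (clip c to the box) gives a lower
    bound; branching splits a box on a fractional coordinate. Real MINLP
    solvers add presolve, cutting planes, and NLP subproblem solvers."""
    best_val, best_x = float('inf'), None
    stack = [tuple((lo, hi) for _ in c)]          # boxes still to explore
    while stack:
        box = stack.pop()
        relax = [min(max(ci, l), h) for ci, (l, h) in zip(c, box)]
        bound = sum((xi - ci) ** 2 for xi, ci in zip(relax, c))
        if bound >= best_val:
            continue                              # prune: cannot beat incumbent
        frac = next((i for i, xi in enumerate(relax)
                     if abs(xi - round(xi)) > 1e-9), None)
        if frac is None:                          # relaxation is integral
            best_val, best_x = bound, [int(round(xi)) for xi in relax]
        else:
            l, h = box[frac]
            f = math.floor(relax[frac])
            for nb in ((l, f), (f + 1, h)):       # branch: x_frac <= f | >= f+1
                if nb[0] <= nb[1]:
                    stack.append(box[:frac] + (nb,) + box[frac + 1:])
    return best_x, best_val
```

For this separable objective the answer is just rounding, but the search tree, bounding, and pruning are exactly the skeleton that general MINLP algorithms flesh out.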

4:00 - 4:15 Break

4:15 - 5:15 Lecture 6/2- Jeff Linderoth, University of Wisconsin; Mixed-Integer Nonlinear
Optimization: Applications, Algorithms, and Computation

6:30 - 9:00 Poster Session
and Dinner

Wednesday, January 11, 2017

7:00 - 8:30 Breakfast

8:30 – 9:30 Lecture 7/1 - Marc Vuffray & Sidhant Misra, Los Alamos National Laboratory; Learning Structured Probability Distributions from Data

In this lecture we introduce the concept of a Markov random field (MRF), a canonical language for representing network-structured distributions, and we present techniques for efficiently learning MRFs from data.

MRFs are multivariate probability distributions in which direct dependencies between random variables are captured by a network. MRFs are widely used for uncertainty management, inference, and model reduction. In many applications the network of an MRF is expected to be sparse, i.e., the number of edges is of the same order as the number of nodes. As MRFs are often not known a priori and cannot always be deduced from first principles, it is important to learn MRFs from data. In this lecture we focus on efficient methods for learning MRFs over discrete and Gaussian random variables when the data take the form of several independent observations of the random variables. Alongside the concepts related to MRFs, we present several techniques and ideas from the world of high-dimensional stochastic optimization.
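A crude sketch of the Gaussian case, for intuition only: conditional independencies of a Gaussian MRF correspond to zeros of the precision (inverse covariance) matrix, so a naive structure estimate inverts the sample covariance and thresholds partial correlations. The estimators in the lecture are designed for the high-dimensional regime where this naive approach breaks down; the function name and threshold are invented:

```python
import numpy as np

def gaussian_mrf_edges(samples, threshold=0.1):
    """Naive Gaussian MRF structure estimate: variables i and j share an
    edge iff entry (i, j) of the precision matrix is nonzero. We invert the
    sample covariance and threshold the partial correlations (a scale-free
    normalization of the precision matrix)."""
    X = np.asarray(samples, dtype=float)
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)            # partial correlation matrix
    n = pcorr.shape[0]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(pcorr[i, j]) > threshold}
```

For data generated by a chain x0 -> x1 -> x2, this recovers edges (0,1) and (1,2) but not (0,2), even though x0 and x2 are strongly correlated: the network encodes direct dependencies, not correlations.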

9:30 - 9:45 Break

9:45 – 10:45 Lecture 7/2 - Marc Vuffray & Sidhant Misra, Los Alamos National Laboratory; Learning Structured Probability Distributions from Data

10:45 - 11:00 Break

11:00 – 12:00 Lecture 8/1 - Art B. Owen, Stanford University; Adaptive Importance Sampling

Importance sampling is a method for reducing the variance of Monte Carlo computations. It is usually applied to problems where the expectation of interest is dominated by a contribution from a critical region of space that has small probability. Rare-event probabilities and integrals of spiky functions are typical examples. Such problems arise in high-energy physics, Bayesian computation, engineering reliability, insurance, and graphical rendering, among other areas.

The method replaces the nominal sampling distribution, p, by another one, q, that takes more samples from within the critical region. The resulting bias is then corrected by weighting the sample values by the ratio p/q. Importance sampling is often the only variance reduction method with any chance of yielding an effective Monte Carlo estimate. It is also by far the most difficult variance reduction method to use in practice, and it can backfire, yielding a large or even infinite sampling variance, especially when the ratio p/q has high variance.

In adaptive importance sampling, we use the data we have generated to improve upon an initial choice of q. This talk first reviews importance sampling and methods for choosing q (mixtures, exponential tilting, Gaussian/Laplace approximations). It then considers defensive and multiple importance sampling, and combinations with control variates. Finally, adaptive importance sampling methods are discussed, including the cross-entropy method, exponential convergence of Markov chain importance samplers, Vegas, Miser, Divonne, sampling from mixtures of products of beta distributions, adaptive multiple importance sampling (AMIS), and more recent methods.
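A minimal sketch of the basic (non-adaptive) scheme on a standard rare-event example: estimating P(X > 4) for X ~ N(0,1) by sampling from a shifted proposal q = N(4,1) and correcting with p/q weights. The function name and parameters are invented for the illustration:

```python
import math
import random

def rare_event_is(threshold=4.0, n=100_000, seed=0):
    """Estimate P(X > threshold) for X ~ N(0,1) by importance sampling:
    draw from the proposal q = N(threshold, 1), which concentrates samples
    in the critical region, and correct the bias with the likelihood
    ratio p(x)/q(x)."""
    rng = random.Random(seed)

    def logpdf(x, mu):                 # log density of N(mu, 1)
        return -0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi)

    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)  # sample from q, not from p
        if x > threshold:
            total += math.exp(logpdf(x, 0.0) - logpdf(x, threshold))  # p/q
    return total / n
```

Plain Monte Carlo would need on the order of tens of millions of draws to see even a handful of exceedances of 4 (the true probability is about 3.2e-5), while here roughly half the proposal's draws land in the critical region and are reweighted down.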

12:00 - 12:15 Break

12:15 – 1:15 Lecture 8/2 - Art B. Owen, Stanford University; Adaptive Importance Sampling

Multi-armed bandit (MAB) problems constitute the most fundamental sequential decision problems with an exploration vs. exploitation trade-off. In such problems, the decision maker selects an arm or action in each round and observes the corresponding reward. The objective is to maximize the expected cumulative reward over some time horizon by balancing exploitation (actions with the highest observed rewards so far should be selected often) and exploration (all actions should be explored). MAB problems have found applications in many disciplines, including medical treatment, communication systems, online services, economics, and physics. This lecture provides a survey of recent advances in bandit optimization and of the mathematical tools used to devise algorithms and analyze their performance. We also highlight important open problems.
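The exploration/exploitation balance can be illustrated with UCB1, a classic index policy for stochastic bandits; this is a textbook sketch, not an algorithm from the lecture, and the arm means are invented:

```python
import math
import random

def ucb1(means, horizon=5000, seed=0):
    """UCB1 on a Bernoulli bandit: each round, play the arm maximizing
    empirical mean (exploitation) plus a confidence bonus that shrinks as
    the arm is sampled (exploration). Returns pull counts per arm."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:                        # play each arm once to initialize
            a = t - 1
        else:
            a = max(range(k),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[a] else 0.0
        counts[a] += 1
        sums[a] += reward
    return counts
```

Over a horizon T, UCB1 pulls each suboptimal arm only O(log T) times, so the pull counts concentrate on the best arm while every arm keeps being probed occasionally.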