-
Miguel Anjos, University of Waterloo
Warm-starts and hip cuts for interior-point methods in
combinatorial optimization
I will describe some of the latest results in the use of
interior-point methods for the solution of linear and semidefinite
programming relaxations of combinatorial optimization problems. The first
part of the talk will present a new warm-starting technique for
re-optimizing successive linear programming relaxations. The idea is that
a previously optimal solution can be used as the initial point for
re-starting an interior-point method by suitably relaxing the
non-negativity constraints using additional slack variables. For some
classes of test problems, the iteration savings are around 50% on average.
The second part will show how the warm-starting can be integrated into a
hybrid interior-point cutting-plane method (HIPCUT) that achieves greater
efficiency by adding and removing cuts as the interior-point method
progresses. Preliminary computational results show that HIPCUT can find
optimal solutions in less time than solving the final relaxation with all
relevant cuts known in advance. This is joint work with Alexander Engau
(University of Colorado-Denver) and Anthony Vannelli (University of
Guelph).
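
The slack relaxation described above can be sketched as follows; this is a generic formulation for illustration, and the precise construction of Engau, Anjos and Vannelli may differ. If x~ was optimal for the previous relaxation but violates nonnegativity after new cuts or bounds are added, the infeasibility is shifted into penalized slack variables:

\[
\min_{x,\,s}\; c^{\top}x + M\,\mathbf{1}^{\top}s
\quad \text{s.t.} \quad Ax = b,\; x + s \ge 0,\; s \ge 0,
\]

so that (x, s) = (x~, max{-x~, 0} + eps*1) is strictly feasible and can serve as the interior-point starting point, while a large penalty M drives s back to zero during re-optimization.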
-
Aleksandr Aravkin, University of Washington
An L1-Laplace Robust Kalman Smoother
Robustness is a major problem in Kalman filtering and smoothing: it is
desirable to have an algorithm that works well with Gaussian noise but also
has reasonable performance when this assumption is violated. In this talk,
we consider the case of heavy-tailed observation noise, and show how
interior-point methods can be used to find a maximum a posteriori (MAP)
Kalman smoother for a nonlinear model with Gaussian process noise and
L1-Laplace observation noise. We test the smoother's performance on
simulated data with contaminated normal, L1-Laplace, and Student-t
observation noise, and apply it to actual data from an underwater tracking
experiment.
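
One way to write the MAP objective described above is the following sketch; the notation and the Laplace scaling constant are assumptions here, not quoted from the authors. For states x_k with process model g_k, measurement model h_k, and measurements z_k,

\[
\min_{x_1,\dots,x_N}\;
\sum_{k=1}^{N} \tfrac12\,(x_k - g_k(x_{k-1}))^{\top} Q_k^{-1} (x_k - g_k(x_{k-1}))
\;+\; \sum_{k=1}^{N} \sqrt{2}\,\big\| R_k^{-1/2}\,(z_k - h_k(x_k)) \big\|_1 ,
\]

where the l1 term replaces the quadratic measurement term of the classical Gaussian smoother; its nondifferentiability is what motivates the interior-point treatment.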
-
Sarah Moffat, UBC Okanagan
The Kernel Average of n functions
The kernel average is an average of two convex functions based on a
kernel function. Using a reformulation of the proximal average, we propose
a definition of the kernel average of n functions and examine the
conjugate of this new average.
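
For context, the proximal average that the construction above reformulates can be written as follows; this is the standard form from the convex-analysis literature, with j = (1/2)||.||^2 and * denoting the Fenchel conjugate:

\[
\mathcal{P}_{\lambda}(f_0, f_1) \;=\;
\Big( (1-\lambda)\,(f_0 + j)^{*} + \lambda\,(f_1 + j)^{*} \Big)^{*} - j,
\qquad \lambda \in (0,1).
\]

The kernel average replaces the quadratic kernel j by a more general kernel function, and the talk's definition extends the construction from two functions to n.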
-
Jiming Peng,
University of Illinois
Sparse solutions to standard quadratic optimization problems with random
matrices
The standard quadratic programming problem (StQP) is that of
minimizing a quadratic form over the standard simplex. Such problems
arise in numerous applications and are known to be NP-hard.
In this talk we focus on a special scenario of the StQP in which all the
elements of the matrix argument are independent and identically
distributed. We show that, with very high probability, the StQP with such
a random matrix has a very sparse solution, which further implies that the
StQP with random matrices can be solved in polynomial time with high
probability. Numerical evaluation of our theoretical findings and
extensions will be discussed as well.
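
A toy illustration of the phenomenon, not the authors' method: for small n one can solve the StQP exactly by enumerating supports and solving the stationarity system on each face of the simplex, and then inspect how sparse the minimizer of a random instance is. The instance construction below (uniform entries, symmetrized) is an assumption for demonstration only.

```python
# Minimize x^T Q x over the standard simplex by brute-force support
# enumeration (exponential in n, fine for tiny instances), then check
# the sparsity of the minimizer for a randomly generated Q.
import itertools
import numpy as np

def solve_stqp(Q):
    """Global minimizer of x'Qx over {x >= 0, sum x = 1} for small n,
    by solving the stationarity system 2 Q_S x_S = mu*1, 1'x_S = 1 on
    every support S and keeping the best feasible candidate."""
    n = Q.shape[0]
    best_val, best_x = np.inf, None
    for k in range(1, n + 1):
        for S in itertools.combinations(range(n), k):
            S = list(S)
            A = np.zeros((k + 1, k + 1))
            A[:k, :k] = 2 * Q[np.ix_(S, S)]
            A[:k, k] = -1.0      # column for the multiplier mu
            A[k, :k] = 1.0       # simplex constraint 1'x_S = 1
            b = np.zeros(k + 1)
            b[k] = 1.0
            try:
                sol = np.linalg.solve(A, b)
            except np.linalg.LinAlgError:
                continue         # singular face; skip
            xS = sol[:k]
            if np.any(xS < -1e-9):
                continue         # stationary point leaves the face
            x = np.zeros(n)
            x[S] = np.clip(xS, 0.0, None)
            val = x @ Q @ x
            if val < best_val:
                best_val, best_x = val, x
    return best_x, best_val

rng = np.random.default_rng(0)
n = 8
M = rng.uniform(size=(n, n))
Q = (M + M.T) / 2   # symmetric; entries generated i.i.d. before symmetrizing
x, val = solve_stqp(Q)
support = int(np.sum(x > 1e-8))
print(f"support size {support} out of n = {n}")
```

Random instances of this kind typically yield a minimizer supported on only a few coordinates, which is the sparsity behavior the talk quantifies.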
-
Joe Qranfal, Simon Fraser University
Beyond Projected Kalman
The projected Kalman method has been proved to be optimal in modeling and
solving the inverse problem of reconstructing a dynamic medical image. It
was tested in the case of image reconstruction in time-dependent single
photon emission computed tomography (SPECT). While this method offers
enhancements over previous ones, it nonetheless comes with some
shortcomings. We show in this talk how to go beyond the projected Kalman
algorithm, preserving its advantages while avoiding some of its
disadvantages.
-
Terry Rockafellar, University of Washington
Risk and Convexity in Engineering
An important concept in structural engineering is the probability of
failure, which can enter the objective or constraints in a problem of
optimal design. As a mathematical entity, however, the probability
of failure behaves poorly as a function of design variables, and it
is difficult to estimate. Recently, through advances in risk theory
utilizing convex analysis, a better entity, called the buffered
probability of failure, has been proposed. It is well suited to
optimization and provides additional safeguards to the outcome.
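
The buffered probability of failure mentioned above can be described informally via superquantiles (conditional value-at-risk); this paraphrases the Rockafellar-Royset definition and is not quoted from the talk. For a loss random variable X, with failure meaning X > 0 and superquantile q_a(X) = CVaR_a(X),

\[
\bar p(X) = 1 - \alpha^{*}, \qquad \text{where } \bar q_{\alpha^{*}}(X) = 0,
\]

so that bar-p(X) >= Prob(X > 0): it conservatively counts a buffer of near-failure outcomes and, unlike the plain failure probability, inherits the convexity properties of CVaR that make it well suited to optimization.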
-
Lin Xiao,
Microsoft Research
Dual Averaging Methods for Regularized Stochastic Learning and Online
Optimization
We consider regularized stochastic learning and online optimization
problems, where the objective function is the sum of two convex terms: one
is the loss function of the learning task, and the other is a simple
regularization term. We develop extensions of Nesterov's dual averaging
method that can explicitly exploit the regularization structure in an
online setting. In particular, at each iteration, the learning variables
are
adjusted by solving a simple optimization problem that involves the running
average of all past subgradients of the loss functions and the whole
regularization term, not just its subgradient. We show this method achieves
the optimal convergence rate and often enjoys the same low complexity per
iteration as the traditional stochastic gradient method. Computational
experiments demonstrate that it is very effective in obtaining sparse
solutions for problems with L1-regularization.
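
A minimal sketch of the l1-regularized variant of this scheme (l1-RDA): each step minimizes the running average of past loss subgradients plus the whole l1 term plus a strongly convex damping term, which has an entrywise closed form. The toy data stream, the parameter values, and the simple sqrt(t) damping schedule below are illustrative assumptions, not the paper's experimental setup.

```python
# l1-RDA sketch: w_{t+1} = argmin_w <g_bar_t, w> + lam*||w||_1
#                         + (gamma / (2*sqrt(t))) * ||w||^2,
# where g_bar_t is the average of the first t loss subgradients.
# The closed-form solution soft-thresholds g_bar, giving exact zeros.
import numpy as np

def soft_threshold(v, lam):
    """Entrywise shrinkage of v toward zero by lam."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def l1_rda(grad_fn, dim, T, lam, gamma):
    g_bar = np.zeros(dim)   # running average of loss subgradients
    w = np.zeros(dim)
    for t in range(1, T + 1):
        g_bar += (grad_fn(w) - g_bar) / t
        w = -(np.sqrt(t) / gamma) * soft_threshold(g_bar, lam)
    return w

# Toy stream: stochastic least-squares gradients, sparse ground truth.
rng = np.random.default_rng(1)
d = 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]

def grad_fn(w):
    a = rng.standard_normal(d)                    # one data point per step
    y = a @ w_true + 0.1 * rng.standard_normal()
    return (a @ w - y) * a                        # grad of 0.5*(a@w - y)^2

w = l1_rda(grad_fn, d, T=2000, lam=0.1, gamma=5.0)
nnz = int(np.sum(w != 0.0))
print("nonzero coordinates:", nnz, "of", d)
```

Because the whole l1 term (not just a subgradient of it) enters each per-step problem, the truncation produces exact zeros, which is the mechanism behind the sparse solutions mentioned in the abstract.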
-
Jane Ye,
University of Victoria
Optimizing Condition Numbers
In this paper we study the problem of minimizing condition numbers over a
compact convex subset of the cone of symmetric positive semi-definite
n by n matrices.
We show that the condition number is a Clarke regular strongly pseudoconvex
function. We prove that a global solution of the problem can be
approximated by an exact or
an inexact solution of a nonsmooth convex program. This asymptotic analysis
provides a valuable tool for designing an implementable algorithm for
solving the problem of minimizing condition numbers.
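
In notation assumed here for illustration (not taken from the paper), the problem reads

\[
\min_{A \in \Omega}\; \kappa(A) = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)},
\]

where Omega is a compact convex subset of the n x n symmetric positive semidefinite matrices and kappa(A) is taken to be +infinity when lambda_min(A) = 0. Although lambda_max is convex and lambda_min is concave on this cone, their ratio is not convex, which is why the pseudoconvexity and asymptotic results matter.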
-
Annie Zhang, Simon Fraser University
Spanning trees with edge-pair constraints
We consider two spanning tree problems containing edge-pair
constraints. One is the minimum spanning tree problem with forbidden pairs
and the other is the quadratic bottleneck spanning tree problem. For each
of them we discuss new complexity results and identify new polynomially
solvable cases. Also, lower bounds and efficient heuristic algorithms are
developed for both problems along with computational results. Finally,
some extensions and related algorithmic results are presented.
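
In notation assumed here for illustration: given a graph G = (V, E) with spanning trees T(G), edge costs c, and a set F of forbidden edge pairs, the first problem is

\[
\min_{T \in \mathcal{T}(G)} \sum_{e \in T} c(e)
\quad \text{s.t.} \quad (e, f) \notin F \ \text{for all } e, f \in T,
\]

while, given costs q(e, f) on edge pairs, the quadratic bottleneck spanning tree problem is

\[
\min_{T \in \mathcal{T}(G)} \ \max_{e, f \in T} q(e, f).
\]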
-
Yuriy Zinchenko, University of Calgary
Shrink-wrapping Trajectories for Linear Programming
Hyperbolic Programming (HP), the minimization of a linear objective over
an affine subspace intersected with a hyperbolicity cone, is a class of
convex optimization problems containing Linear Programming (LP) and its
natural relaxations. Based on these relaxations, a new Shrink-Wrapping
approach for LP has been proposed by Renegar. We analyze Shrink-Wrapping
trajectories near a certain invariant set containing the LP optimum and
contrast the trajectories with the central path on some pathological LP
instances.
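
For reference, the hyperbolicity cone can be defined as follows (standard definition, not restated in the abstract). A homogeneous polynomial p is hyperbolic in direction d if p(d) != 0 and, for every x, all roots of t -> p(x + t d) are real; the open hyperbolicity cone is

\[
\Lambda_{++}(p, d) = \{\, x : \text{all roots of } t \mapsto p(x + t d) \text{ are negative} \,\}.
\]

For p(x) = x_1 x_2 ... x_n and d = 1 the roots are -x_i, so the cone is the positive orthant and LP is recovered as a special case of HP.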