
Miguel Anjos, University of Waterloo
Warmstarts and hip cuts for interior-point methods in
combinatorial optimization
I will describe some of the latest results in the use of
interior-point methods for the solution of linear and semidefinite
programming relaxations of combinatorial optimization problems. The first
part of the talk will present a new warmstarting technique for
reoptimizing successive linear programming relaxations. The idea is that
a previously optimal solution can be used as the initial point for
restarting an interior-point method by suitably relaxing the
nonnegativity constraints using additional slack variables. For some
classes of test problems, the iteration savings are around 50% on average.
The second part will show how the warmstarting can be integrated into a
hybrid interior-point cutting-plane method (HIPCUT) that achieves greater
efficiency by adding and removing cuts as the interior-point method
progresses. Preliminary computational results show that HIPCUT can find
optimal solutions in less time than solving the final relaxation with all
relevant cuts known in advance. This is joint work with Alexander Engau
(University of Colorado Denver) and Anthony Vannelli (University of
Guelph).
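As an illustrative sketch (not the authors' implementation; the function name and the choice of eps are ours), the slack-relaxation idea for warmstarting can be written as:

```python
def warmstart_point(x_prev, eps=1e-3):
    # Shift components of a previously optimal LP solution off the
    # boundary so an interior-point method can restart from it.
    # Components at (or near) zero violate the strict positivity an
    # interior-point method needs, so nonnegativity is relaxed with
    # additional slack variables s_i = max(eps - x_i, 0).
    slacks = [max(eps - xi, 0.0) for xi in x_prev]
    x0 = [xi + si for xi, si in zip(x_prev, slacks)]
    return x0, slacks
```

Only the components that were active at the previous optimum receive a nonzero slack, so the warmstart stays close to the old solution while being strictly interior.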

Aleksandr Aravkin, University of Washington
An L1-Laplace Robust Kalman Smoother
Robustness is a major problem in Kalman filtering and smoothing: it is
desirable to have an algorithm that works well with Gaussian noise but also
has reasonable performance when this assumption is violated. In this talk,
we consider the case of heavy-tailed observation noise, and show how
interior-point methods can be used to find a maximum a posteriori (MAP)
Kalman smoother for a nonlinear model with Gaussian process noise and
L1-Laplace observation noise. We test the smoother's performance on
simulated data with contaminated normal observations, L1-Laplace noise,
and Student's t noise, and apply it to actual data from an underwater
tracking experiment.
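A hedged illustration of the kind of MAP objective such a smoother minimizes, written for a scalar linear model of our own devising (not the talk's nonlinear setting; the parameter names a, sigma, and b are assumptions):

```python
def map_objective(x, y, a=0.9, sigma=1.0, b=1.0):
    # Negative log-posterior for a toy scalar state-space model:
    #   x_t = a * x_{t-1} + Gaussian process noise (variance sigma^2)
    #   y_t = x_t + Laplace observation noise (scale b)
    # Gaussian noise yields quadratic penalties; Laplace noise yields
    # absolute values, which is what makes the smoother robust to outliers.
    process = sum((x[t] - a * x[t - 1]) ** 2
                  for t in range(1, len(x))) / (2.0 * sigma ** 2)
    observation = sum(abs(yt - xt) for yt, xt in zip(y, x)) / b
    return process + observation
```

The MAP smoother seeks the state sequence x minimizing this sum; the absolute-value terms are what an interior-point method handles via a smooth reformulation.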

Sarah Moffat, UBC Okanagan
The Kernel Average of n functions
The kernel average is an average of two convex functions based on a
kernel function. Using a reformulation of the proximal average, a
definition for the kernel average of n functions is suggested and the
conjugate of this new average is examined.
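For context, the proximal average that the talk reformulates can be written, for two convex functions $f_1, f_2$ and $\lambda \in (0,1)$, as

```latex
\mathcal{P}_\lambda(f_1,f_2)
  = \Bigl(\lambda\,(f_1+q)^{*} + (1-\lambda)\,(f_2+q)^{*}\Bigr)^{*} - q,
\qquad q = \tfrac{1}{2}\|\cdot\|^{2},
```

where ${}^{*}$ denotes the Fenchel conjugate; as we understand the abstract, the kernel average replaces the quadratic kernel $q$ by a more general kernel function.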

Jiming Peng, University of Illinois
Sparse solutions to standard quadratic optimization problems with random
matrices
The standard quadratic program (StQP) refers to the problem of
minimizing a quadratic form over the standard simplex. Such a
problem arises in numerous applications and is known to be NP-hard.
In this talk we focus on a special scenario of the StQP where all the
elements of the matrix argument are independently and identically
distributed. We show that with very high probability, the StQP with such a
random matrix has a very sparse solution, which further implies that the
StQP with random matrices can be solved in polynomial time with high
probability. Numerical evaluation of our theoretical findings and
extensions will be discussed as well.
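To make the sparsity claim concrete, here is an illustrative brute-force check (our own sketch, not the authors' algorithm) that searches simplex points supported on at most two coordinates:

```python
import itertools

def stqp_sparse_search(Q, grid=1000):
    # Search for the best point of the standard simplex supported on at
    # most two coordinates, for the StQP  min x^T Q x  s.t.  sum(x) = 1,
    # x >= 0.  If random instances tend to have very sparse optima, such
    # a restricted search already finds near-optimal solutions.
    n = len(Q)
    best_val, best_x = float("inf"), None
    # Support of size 1: the vertices of the simplex.
    for i in range(n):
        if Q[i][i] < best_val:
            best_val = Q[i][i]
            best_x = [1.0 if k == i else 0.0 for k in range(n)]
    # Support of size 2: grid search along each edge of the simplex.
    for i, j in itertools.combinations(range(n), 2):
        for g in range(grid + 1):
            t = g / grid
            val = (Q[i][i] * t * t + 2.0 * Q[i][j] * t * (1.0 - t)
                   + Q[j][j] * (1.0 - t) * (1.0 - t))
            if val < best_val:
                x = [0.0] * n
                x[i], x[j] = t, 1.0 - t
                best_val, best_x = val, x
    return best_val, best_x
```

For the identity matrix in two dimensions, the optimum sits at the midpoint of the edge with value 0.5, which the search recovers.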

Joe Qranfal, Simon Fraser University
Beyond Projected Kalman
The projected Kalman method has been proved to be optimal in modeling and
solving the inverse problem of reconstructing a dynamic medical image. It
was tested in the case of image reconstruction in time-dependent single
photon emission computed tomography (SPECT). While this method offers
enhancements over previous ones, it nonetheless comes with some
shortcomings. This talk shows how to go beyond the projected Kalman
algorithm, preserving its advantages while avoiding some of its
disadvantages.

Terry Rockafellar, University of Washington
Risk and Convexity in Engineering
An important concept in structural engineering is the probability of
failure, which can enter the objective or constraints in a problem of
optimal design. As a mathematical entity, however, the probability
of failure behaves poorly as a function of design variables, and it
is difficult to estimate. Recently, through advances in risk theory
utilizing convex analysis, a better entity, called the buffered
probability of failure, has been proposed. It is well suited to
optimization and provides additional safeguards to the outcome.
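A discrete-sample sketch of the contrast between the two quantities (our own illustration; the precise definition of the buffered probability of failure uses superquantiles with interpolation, and "failure" here means an outcome at or above zero):

```python
def pof(samples):
    # Plain probability of failure: fraction of outcomes at or above 0.
    return sum(1 for s in samples if s >= 0) / len(samples)

def buffered_pof(samples):
    # Empirical buffered probability of failure: the largest tail
    # fraction p such that the mean of the worst p-fraction of outcomes
    # (a superquantile) is still >= 0.  It is always at least pof(),
    # adding a conservative "buffer" around the failure event.
    xs = sorted(samples, reverse=True)  # worst (largest) outcomes first
    total, best_k = 0.0, 0
    for k, v in enumerate(xs, start=1):
        total += v
        if total / k >= 0:
            best_k = k
    return best_k / len(xs)
```

Unlike the plain probability of failure, this tail-mean quantity varies smoothly with the underlying design variables, which is what makes it better suited to optimization.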

Lin Xiao, Microsoft Research
Dual Averaging Methods for Regularized Stochastic Learning and Online
Optimization
We consider regularized stochastic learning and online optimization
problems, where the objective function is the sum of two convex terms: one
is the loss function of the learning task, and the other is a simple
regularization term. We develop extensions of Nesterov's dual averaging
method that can explicitly exploit the regularization structure in an
online setting. In particular, at each iteration, the learning variables are
adjusted by solving a simple optimization problem that involves the running
average of all past subgradients of the loss functions and the whole
regularization term, not just its subgradient. We show that this method
achieves the optimal convergence rate and often enjoys the same low
complexity per iteration as the traditional stochastic gradient method.
Computational experiments demonstrate that it is very effective in
obtaining sparse solutions for problems with L1-regularization.
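For L1 regularization, the per-iteration subproblem has a closed form. A sketch under the simple choice beta_t = gamma * sqrt(t) (the parameter names are our assumptions):

```python
import math

def rda_l1_update(g_avg, t, lam, gamma):
    # Closed-form l1 dual-averaging step: truncate the running average
    # subgradient g_avg at the regularization level lam, then rescale.
    # Coordinates whose average gradient magnitude stays below lam are
    # set exactly to zero, which is the source of the method's sparsity.
    scale = math.sqrt(t) / gamma
    w = []
    for gi in g_avg:
        if abs(gi) <= lam:
            w.append(0.0)
        else:
            sign = 1.0 if gi > 0 else -1.0
            w.append(-scale * (gi - lam * sign))
    return w
```

Because the whole regularization term (not just a subgradient of it) enters the subproblem, the truncation is exact rather than approximate, unlike plain stochastic subgradient descent.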

Jane Ye, University of Victoria
Optimizing Condition Numbers
In this paper we study the problem of minimizing condition numbers over a
compact convex subset of the cone of symmetric positive semidefinite
n by n matrices.
We show that the condition number is a Clarke regular strongly pseudoconvex
function. We prove that a global solution of the problem can be
approximated by an exact or
an inexact solution of a nonsmooth convex program. This asymptotic analysis
provides a valuable tool for designing an implementable algorithm for
solving the problem of minimizing condition numbers.
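A toy illustration (ours, not the paper's algorithm) of minimizing the condition number over a compact convex set, here a segment of 2x2 symmetric positive definite matrices:

```python
def cond_2x2(a, b, c):
    # Condition number lambda_max / lambda_min of the symmetric
    # positive definite 2x2 matrix [[a, b], [b, c]].
    tr, det = a + c, a * c - b * b
    disc = max(tr * tr - 4.0 * det, 0.0) ** 0.5
    return (tr + disc) / (tr - disc)

def min_cond_on_segment(A, B, grid=1000):
    # Grid search for the minimal condition number along the segment
    # (1-t)*A + t*B, with A and B given as triples (a, b, c).  Even
    # though kappa is nonsmooth and nonconvex, along this segment it
    # behaves well enough for a simple scalar search to find the minimum.
    best_t, best_k = 0.0, float("inf")
    for g in range(grid + 1):
        t = g / grid
        m = tuple((1.0 - t) * x + t * y for x, y in zip(A, B))
        k = cond_2x2(*m)
        if k < best_k:
            best_t, best_k = t, k
    return best_t, best_k
```

For A = diag(1, 4) and B = diag(4, 1), the midpoint of the segment is a multiple of the identity, so the minimal condition number is exactly 1.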

Annie Zhang, Simon Fraser University
Spanning trees with edgepair constraints
We consider two spanning tree problems containing edgepair
constraints. One is the minimum spanning tree problem with forbidden pairs
and the other is the quadratic bottleneck spanning tree problem. For each
of them we discuss new complexity results and identify new polynomially
solvable cases. Also, lower bounds and efficient heuristic algorithms are
developed for both problems along with computational results. Finally, some
extensions and related algorithmic results are presented.

Yuriy Zinchenko, University of Calgary
Shrink-Wrapping Trajectories for Linear Programming
Hyperbolic Programming (HP), the minimization of a linear objective over
an affine subspace intersected with a hyperbolicity cone, is a class of
convex optimization problems containing Linear Programming (LP) and its
natural relaxations. Based on these relaxations, a new Shrink-Wrapping
approach for LP has been proposed by Renegar. We analyze Shrink-Wrapping
trajectories near a certain invariant set containing the LP optimum and
contrast the trajectories with the central path on some pathological LP
instances.