RECENT PUBLICATION

Figure: Lower sets are a key structure used in the paper. Loosely speaking, a lower set is a set of multi-indices (shown in blue in the figure) that contains no holes or indents. The methods introduced in the paper use a weighting strategy to exploit the lower set structure of the polynomial coefficients of high-dimensional functions (Image courtesy of Simone Brugiapaglia).
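For readers who want a concrete picture, the short sketch below (not taken from the paper; the function name and examples are illustrative) checks whether a given collection of multi-indices forms a lower set, i.e. whether every componentwise-smaller multi-index is also included.

```python
from itertools import product

def is_lower_set(indices):
    """Return True if the given multi-indices form a lower (downward-closed) set:
    whenever nu is in the set, every mu with mu <= nu componentwise is also in it."""
    index_set = set(map(tuple, indices))
    for nu in index_set:
        # Enumerate every multi-index mu with mu <= nu componentwise.
        for mu in product(*(range(k + 1) for k in nu)):
            if mu not in index_set:
                return False
    return True

# Two-dimensional examples: the first set is lower; the second has a "hole",
# since (1, 1) is present but (1, 0) and (0, 1) are missing.
print(is_lower_set([(0, 0), (1, 0), (0, 1)]))  # True
print(is_lower_set([(0, 0), (1, 1)]))          # False
```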

Tackling the curse: Novel methods for high-dimensional approximation

The motivation – An important challenge in scientific computing – one driven by numerous applications ranging from engineering to seismology and biology – involves problems with many parameters. Mathematically, these problems can often be recast as approximating an unknown, high-dimensional function from a finite set of samples. While simple to state, this problem is rendered challenging by the famous curse of dimensionality, a term coined by Richard Bellman in the 1960s.

Despite over fifty years of research, the curse of dimensionality persists as a bane of scientific computing. As scientific problems become increasingly complex (i.e. higher dimensional), there is a pressing need for new approaches. Fortunately, new insights have come from a seemingly distant source: signal processing. Over the last five to ten years, novel approaches, known collectively as compressed sensing, have emerged in signal processing for identifying low-dimensional structure in high-dimensional objects. With this in mind, the purpose of this study was to modify, adapt and implement compressed sensing techniques as new methods for addressing the curse of dimensionality in high-dimensional approximation.

The discovery – Simon Fraser University’s Ben Adcock demonstrated that a suitable weighted l1-minimization procedure substantially mitigates the curse of dimensionality in high-dimensional approximation. Indeed, as he proved, the complexity of this new approach depends only logarithmically on the dimension, as opposed to exponentially for classical approaches – a significant improvement.
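To illustrate the optimization template at the heart of such approaches (and not the paper’s exact construction, which uses orthonormal polynomial bases and weights tied to lower sets), the sketch below recovers a sparse coefficient vector by weighted l1-minimization. The sampling matrix, weights, and problem sizes are illustrative assumptions, and the example relies on the cvxpy package.

```python
# Illustrative weighted l1-minimization for sparse recovery (assumed setup,
# not the paper's exact method): minimize the weighted l1 norm of the
# coefficients subject to matching the observed samples.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, m, s = 200, 60, 8                            # basis size, samples, sparsity (assumed)

A = rng.standard_normal((m, N)) / np.sqrt(m)    # illustrative sampling matrix
c_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
c_true[support] = rng.standard_normal(s)
b = A @ c_true                                  # noiseless samples

w = np.sqrt(1.0 + np.arange(N))                 # example weights, growing with the index

c = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, c))), [A @ c == b])
problem.solve()

print("relative recovery error:",
      np.linalg.norm(c.value - c_true) / np.linalg.norm(c_true))
```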

Its significance – While there was some previous work in this direction, Adcock’s discovery is the first to prove conclusively the extent to which such approaches can ameliorate the curse of dimensionality, and the first to provide a complete numerical recipe for doing so. The potential impact of this work lies in any field that deals with the approximation of smooth, high-dimensional functions; in particular, uncertainty quantification, optimization and control, and numerical partial differential equations.

Read the paper – “Infinite-Dimensional Compressed Sensing and Function Interpolation” by Ben Adcock, Foundations of Computational Mathematics 18(3):661–701 (2018). DOI: 10.1007/s10208-017-9350-3

Website article compiled by Jacqueline Watson with Theresa Kitos