RECENT PUBLICATION

Figure: Comparison of the partial l1 model and some popular nonconvex regularized models on random noiseless data.


February 11, 2019

Less-biased sparse approximation via a partial regularizer

The motivation – Over the last decade, seeking a sparse solution to a linear system has attracted a great deal of attention in science and engineering. It has found numerous applications in compressed sensing, image processing, signal processing, machine learning, and statistics. For example, in compressed sensing, a sparse signal is often reconstructed by finding a sparse solution to an underdetermined linear system.
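To make that reconstruction step concrete, here is a minimal sketch of our own (not taken from the paper): it recovers a sparse signal from a random underdetermined noiseless system by l1 minimization, posed as a linear program. The dimensions, the random data, and the use of scipy.optimize.linprog are illustrative choices.

    # Illustrative sketch (not from the paper): recover a sparse signal from an
    # underdetermined noiseless linear system A x = b by l1 minimization
    # (basis pursuit), written as a linear program with x = u - v, u, v >= 0.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n, k = 40, 100, 5                    # 40 equations, 100 unknowns, 5 nonzeros
    A = rng.standard_normal((m, n))         # random sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true                          # noiseless measurements

    # min ||x||_1  subject to  A x = b
    c = np.ones(2 * n)                      # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    x_hat = res.x[:n] - res.x[n:]

    print("recovery error:", np.linalg.norm(x_hat - x_true))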

The common approach to finding a sparse solution to a linear system is to solve a fully regularized least squares model. It is well known, however, that most existing regularizers, such as the l1-norm, penalize the leading entries (in magnitude) of a vector just as they penalize the rest, which biases the estimates of those entries; the fully regularized least squares model therefore inherits this bias. This paper proposes less-biased models with partial regularizers and studies their theoretical properties and empirical performance.
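As a rough sketch of the contrast (the notation is ours, and the exact formulations studied in the paper may differ): writing |x|_[1] >= |x|_[2] >= ... >= |x|_[n] for the entries of x sorted by magnitude, a fully regularized least squares model penalizes every entry, whereas a partially regularized model leaves the r leading entries unpenalized:

    \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \sum_{i=1}^{n} \phi(|x_i|)  \qquad \text{(full)}

    \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \sum_{i=r+1}^{n} \phi\big(|x|_{[i]}\big)  \qquad \text{(partial)}

Here lambda > 0 and the integer r are parameters and phi is a chosen regularizer; taking phi(t) = t gives the partial l1 regularizer referred to in the figure above, while other choices of phi give nonconvex variants.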

The discovery – Led by Dr. Zhaosong Lu, the authors found that a class of models with partial regularizers can substantially neutralize the bias and is generally more effective at recovering a sparse solution of a linear system than the widely used models with full regularizers. They showed that every local minimizer of the proposed models is either substantially sparse or has nonzero entries whose magnitudes all exceed a uniform constant depending only on the problem data. Moreover, for a class of partial regularizers, any global minimizer of these models is a sparsest solution to the linear system. They also established sufficient conditions for local or global recovery of the sparsest solution, one of which is weaker than the best known restricted isometry property (RIP) condition for sparse recovery via the l1-norm. Numerical results on compressed sensing and sparse logistic regression demonstrate that, in terms of solution quality, the proposed models substantially outperform those widely used in the literature.
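For intuition about how a partial regularizer neutralizes the bias, here is a toy illustration of our own (it is not the algorithmic machinery of the paper, and the threshold lam and the value of r are arbitrary choices): a proximal step for the full l1 regularizer shrinks every entry of a vector, whereas the corresponding step for a partial l1 regularizer, which penalizes only the n - r smallest-magnitude entries, leaves the r leading entries untouched.

    # Toy illustration (not the paper's algorithms): proximal steps for the full
    # l1 regularizer versus a partial l1 regularizer that penalizes only the
    # n - r smallest-magnitude entries (r >= 1 assumed here).
    import numpy as np

    def prox_l1(v, lam):
        """Soft-thresholding: shrinks every entry of v toward zero by lam."""
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def prox_partial_l1(v, lam, r):
        """Keep the r largest-magnitude entries of v unchanged and
        soft-threshold the remaining entries."""
        x = prox_l1(v, lam)
        keep = np.argsort(np.abs(v))[-r:]   # indices of the r leading entries
        x[keep] = v[keep]                   # no shrinkage, hence no bias, on them
        return x

    v = np.array([5.0, -3.0, 0.4, -0.2, 0.05])
    print(prox_l1(v, 0.5))              # [ 4.5 -2.5  0.   0.   0. ]  leaders shrunk
    print(prox_partial_l1(v, 0.5, 2))   # [ 5.  -3.   0.   0.   0. ]  leaders kept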

Its significance – This work provides less-biased models with partial regularizers, enabling users to find a sparse solution of good quality for a wider class of linear systems. It also stimulates further research into more effective models and regularizers for finding sparse solutions of linear systems.

Read the paper: “Sparse Recovery via Partial Regularization: Models, Theory, and Algorithms” by Z. Lu and X. Li, Mathematics of Operations Research 43(4): 1290-1316 (2018). DOI: 10.1287/moor.2017.0905.


Website article compiled by Jacqueline Watson with Theresa Kitos