A unified framework for high-dimensional analysis of $M$-estimators with decomposable regularizers

Report Number
797
Authors
Sahand Negahban, Pradeep Ravikumar, Martin J. Wainwright and Bin Yu
Abstract

High-dimensional statistical inference deals with models in which the number of parameters $p$ is comparable to or larger than the sample size $n$. Since it is usually impossible to obtain consistent procedures unless $p/n \rightarrow 0$, a line of recent work has studied models with various types of low-dimensional structure (e.g., sparse vectors; block-structured matrices; low-rank matrices; Markov assumptions). In such settings, a general approach to estimation is to solve a regularized convex program (known as a regularized $M$-estimator), which combines a loss function (measuring how well the model fits the data) with some regularization function that encourages the assumed structure. This paper provides a unified framework for establishing consistency and convergence rates for such regularized $M$-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive some existing results, and also to obtain a number of new results on consistency and convergence rates. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure the corresponding regularized $M$-estimators have fast convergence rates, and which are optimal in many well-studied cases.
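As a concrete illustration (the symbols $\mathcal{L}$ for the loss, $\mathcal{R}$ for the regularizer, and $\lambda_n$ for the regularization weight are standard notation for this setup, not fixed by the abstract itself), such an estimator solves

$$\hat{\theta}_{\lambda_n} \in \arg\min_{\theta \in \mathbb{R}^p} \bigl\{ \mathcal{L}(\theta; Z_1^n) + \lambda_n \mathcal{R}(\theta) \bigr\},$$

where $Z_1^n$ denotes the $n$ observations. The Lasso is the canonical instance, pairing a least-squares loss with the sparsity-encouraging $\ell_1$ norm:

$$\hat{\theta}_{\lambda_n} \in \arg\min_{\theta \in \mathbb{R}^p} \Bigl\{ \frac{1}{2n} \| y - X\theta \|_2^2 + \lambda_n \| \theta \|_1 \Bigr\}.$$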
