The Lasso is a popular method for variable selection in regression. Recently, much theoretical understanding has been gained of its model selection, or sparsity recovery, properties under sparse, homoscedastic linear regression models. Since these standard model assumptions are often not met in practice, it is important to understand how the Lasso behaves under nonstandard model assumptions.
In this paper, we study the sign consistency of the Lasso under one such model, in which the variance of the noise scales linearly with the expectation of the observation. This sparse Poisson-like model is motivated by medical imaging. In addition to studying sign consistency, we give sufficient conditions for $\ell_\infty$ consistency. Through theoretical and simulation studies, we identify conditions under which the Lasso should not be expected to be sign consistent. One interesting finding is that $\truebeta$ cannot be spread out: for both deterministic designs and random Gaussian designs, the sufficient conditions for the Lasso to be sign consistent require $\|\truebeta\|_2 / [\minbeta]^2$ to be not too large, where $\minbeta$ is the smallest nonzero element of $|\truebeta|$. Using specially constructed designs $\X$, we show that $\|\truebeta\|_2 / [\minbeta]^2=o(n)$ is almost necessary. For Positron Emission Tomography (PET), this suggests that when there are dense areas of the positron-emitting substance, less dense areas are not well detected by the Lasso. This is of particular concern when imaging tumors: the periphery of a tumor produces a much weaker signal than its center, leading to a large $\|\truebeta\|_2 / [\minbeta]^2$.
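The role of this quantity can be illustrated with a short computation (a sketch in Python; the coefficient vectors below are invented for illustration, not taken from the paper): a signal whose nonzero entries are of comparable size keeps $\|\truebeta\|_2 / [\minbeta]^2$ small, while a signal with a strong center and a weak periphery makes it blow up.

```python
import math

def sign_condition_quantity(beta):
    """||beta||_2 / beta_min^2, where beta_min is the smallest
    nonzero entry of |beta|; sign consistency requires this quantity
    to be not too large (o(n) is almost necessary)."""
    beta_min = min(abs(b) for b in beta if b != 0)
    l2_norm = math.sqrt(sum(b * b for b in beta))
    return l2_norm / beta_min ** 2

# Flat signal: all nonzero coefficients of comparable size.
flat = [1.0] * 10                # quantity = sqrt(10), roughly 3.2
# Spread-out signal: strong "center", weak "periphery"
# (e.g. a tumor core vs. its edge in PET).
spread = [10.0] * 5 + [0.1] * 5  # quantity is roughly 2236

print(sign_condition_quantity(flat))
print(sign_condition_quantity(spread))
```

Halving the smallest coefficient quadruples the quantity while barely changing $\|\truebeta\|_2$, which is why a weak periphery dominates the condition.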
We compare the sign consistency of the Lasso under the Poisson-like model to its sign consistency under the standard model, which assumes homoscedastic noise. The comparison shows that when $\truebeta$ is spread out, the Lasso performs worse on data from the Poisson-like model than on data from the standard model, confirming our theoretical findings.
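A comparison of this kind can be sketched numerically. The following is a minimal simulation, not the paper's actual simulation design: the coordinate-descent Lasso solver, the sparse binary design, the coefficient values, and the penalty level are all illustrative choices. The two noise regimes differ only in that the Poisson-like noise has variance proportional to the mean of the observation.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=300):
    """Lasso via cyclic coordinate descent:
    minimize (1/(2n)) ||y - X b||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    resid = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of column j with the partial residual.
            rho = X[:, j] @ (resid + X[:, j] * b[j]) / n
            new_bj = soft_threshold(rho, lam) / col_sq[j]
            resid += X[:, j] * (b[j] - new_bj)
            b[j] = new_bj
    return b

def sign_recovery_rate(noise, reps=50, n=300, sigma=0.5, lam=0.04, seed=0):
    """Fraction of runs with sign(beta_hat) == sign(beta) elementwise."""
    rng = np.random.default_rng(seed)
    # Spread-out signal: two strong and two weak nonzero coefficients.
    beta = np.array([20.0, 20.0, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0])
    hits = 0
    for _ in range(reps):
        # Sparse nonnegative design (illustrative stand-in for PET-like data).
        X = rng.binomial(1, 0.1, size=(n, beta.size)).astype(float)
        mu = X @ beta
        if noise == "poisson-like":
            # Variance of the noise scales linearly with the mean.
            eps = np.sqrt(sigma ** 2 * mu) * rng.normal(size=n)
        else:
            eps = sigma * rng.normal(size=n)
        b_hat = lasso_cd(X, mu + eps, lam)
        hits += np.array_equal(np.sign(b_hat), np.sign(beta))
    return hits / reps

print("homoscedastic :", sign_recovery_rate("homoscedastic"))
print("Poisson-like  :", sign_recovery_rate("poisson-like"))
```

Because the strong coefficients inflate the mean, the Poisson-like noise is largest exactly where the signal is dense, so the weak coefficients and the zeros are the first to be mis-signed; under homoscedastic noise of the same base level $\sigma$, recovery is noticeably more reliable.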