Deconvolution is usually regarded as one of the so-called ill-posed problems of applied mathematics when no constraints on the unknowns can be assumed. In this paper, we discuss the idea of well-defined statistical models as a counterpart of the notion of well-posedness. We show that constraints on the unknowns, such as non-negativity and sparsity, can go a long way toward overcoming the inherent ill-posedness of deconvolution. This is illustrated by a parametric deconvolution method based on the spike-convolution model. This knowledge, together with the choice of the measure of goodness of fit, not only shapes how one thinks about data (models); it also determines how one computes with data (algorithms). We illustrate this view by taking a fresh look at two familiar deconvolvers: the widely used Jansson method, and another that minimizes the Kullback-Leibler distance between the observations and the fitted values. In the latter case, we point out that a counterpart of the EM algorithm exists for minimizing the Kullback-Leibler distance in the context of deconvolution. We compare the performance of these deconvolvers using data simulated from a spike-convolution model and DNA sequencing data.
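As a hedged illustration of the second deconvolver mentioned above, the classical EM-type scheme for minimizing the Kullback-Leibler distance between observations and fitted values under non-negativity is the Richardson-Lucy multiplicative update. The sketch below is not the estimator developed in the paper; the function name, kernel, spike positions, and iteration count are all illustrative choices.

```python
import numpy as np

def rl_deconvolve(y, psf, n_iter=500):
    """Richardson-Lucy style multiplicative updates: an EM-type scheme that
    decreases the Kullback-Leibler distance between the observed signal y
    and the fitted convolution (x * psf). All quantities are assumed
    non-negative, one of the key constraints discussed in the abstract."""
    psf = psf / psf.sum()
    x = np.full_like(y, y.mean(), dtype=float)  # flat non-negative start
    psf_flip = psf[::-1]                        # adjoint of the blur operator
    for _ in range(n_iter):
        fit = np.convolve(x, psf, mode="same")          # current fitted values
        ratio = y / np.maximum(fit, 1e-12)              # observed / fitted
        x *= np.convolve(ratio, psf_flip, mode="same")  # multiplicative EM update
    return x

# Simulate data loosely in the spirit of a spike-convolution model:
# a sparse non-negative spike train blurred by a Gaussian point-spread function.
n = 200
truth = np.zeros(n)
truth[[40, 90, 95, 150]] = [3.0, 2.0, 1.5, 2.5]
t = np.arange(-15, 16)
psf = np.exp(-0.5 * (t / 3.0) ** 2)
y = np.convolve(truth, psf / psf.sum(), mode="same")
x_hat = rl_deconvolve(y, psf)
```

Because the updates are multiplicative and start from a non-negative initializer, the iterates stay non-negative automatically, which is one reason this family of algorithms pairs naturally with the non-negativity constraint discussed above.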