An expectation-maximization algorithm for learning sparse and overcomplete
data representations is presented. The proposed algorithm exploits a
variational approximation to a range of heavy-tailed distributions whose limit is
the Laplacian.
the Laplacian. A rigorous lower bound on the sparse prior distribution is
derived, which enables the analytic marginalization of a lower bound on the
data likelihood. This lower bound enables the development of an expectatio
n-maximization algorithm for learning the overcomplete basis vectors and in
ferring the most probable basis coefficients.
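
As an illustrative sketch (not quoted from the paper), the classical Gaussian variational bound on a Laplacian prior $p(s) = \tfrac{1}{2}e^{-|s|}$ is one instance of this kind of bound. By the AM-GM inequality, $|s| \le s^2/(2\xi) + \xi/2$ for any variational parameter $\xi > 0$, with equality at $\xi = |s|$, so

$$
p(s) \;=\; \tfrac{1}{2}\exp\bigl(-|s|\bigr)
\;\ge\; \tfrac{1}{2}\exp\!\left(-\frac{s^2}{2\xi} - \frac{\xi}{2}\right),
\qquad \xi > 0 .
$$

Because the right-hand side is Gaussian in $s$, the basis coefficients can be integrated out in closed form under the bound, which is what makes the resulting lower bound on the data likelihood analytically tractable; tightening the bound by choosing $\xi = |s|$ plays the role of the E-step.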