Let $(X, Y)$ be a random couple in $S \times T$ with unknown distribution $P$, and let $(X_1, Y_1), \dots, (X_n, Y_n)$ be i.i.d. copies of $(X, Y)$. Denote by $P_n$ the empirical distribution of $(X_1, Y_1), \dots, (X_n, Y_n)$. Let $h_1, \dots, h_N : S \mapsto [-1, 1]$ be a dictionary consisting of $N$ functions. For $\lambda \in \mathbb{R}^N$, denote $f_\lambda := \sum_{j=1}^N \lambda_j h_j$. Let $\ell : T \times \mathbb{R} \mapsto \mathbb{R}$ be a given loss function, and suppose it is convex with respect to the second variable. Let $(\ell \bullet f)(x, y) := \ell(y; f(x))$. Finally, let $\Lambda \subset \mathbb{R}^N$ be the simplex of all probability distributions on $\{1, \dots, N\}$. Consider the following penalized empirical risk minimization problem
$$\hat{\lambda}^{\varepsilon} := \operatorname*{argmin}_{\lambda \in \Lambda}\Bigl[P_n(\ell \bullet f_\lambda) + \varepsilon \sum_{j=1}^N \lambda_j \log \lambda_j\Bigr]$$
along with its distribution-dependent version
$$\lambda^{\varepsilon} := \operatorname*{argmin}_{\lambda \in \Lambda}\Bigl[P(\ell \bullet f_\lambda) + \varepsilon \sum_{j=1}^N \lambda_j \log \lambda_j\Bigr],$$
where $\varepsilon \geq 0$ is a regularization parameter. It is proved that the "approximate sparsity" of $\lambda^{\varepsilon}$ implies the "approximate sparsity" of $\hat{\lambda}^{\varepsilon}$, and the impact of "sparsity" on bounding the excess risk of the empirical solution is explored. Similar results are also discussed in the case of entropy-penalized density estimation.
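The optimization problem above can be sketched numerically. The following is a minimal illustration, not the paper's method: it assumes the squared loss $\ell(y; u) = (y - u)^2$, a dictionary given as a precomputed matrix $H$ with $H_{ij} = h_j(X_i)$, and uses exponentiated-gradient (mirror) descent, which keeps $\lambda$ on the simplex by construction; the function name and all parameters are illustrative choices.

```python
import numpy as np

def entropy_penalized_erm(H, y, eps=0.1, lr=0.5, n_iter=2000):
    """Approximately minimize, over the simplex of probability vectors lam,
        (1/n) * sum_i (y_i - (H @ lam)_i)**2  +  eps * sum_j lam_j * log(lam_j),
    i.e. the entropy-penalized empirical risk with squared loss.
    H has shape (n, N) with H[i, j] = h_j(X_i), values in [-1, 1]."""
    n, N = H.shape
    lam = np.full(N, 1.0 / N)              # start at the uniform distribution
    for _ in range(n_iter):
        f = H @ lam                        # f_lambda evaluated at the sample points
        grad_risk = (2.0 / n) * (H.T @ (f - y))       # gradient of the empirical risk
        grad = grad_risk + eps * (np.log(lam) + 1.0)  # gradient of the entropy penalty
        # multiplicative (exponentiated-gradient) update; renormalizing keeps
        # lam a probability vector, so the simplex constraint is never violated
        lam = lam * np.exp(-lr * grad)
        lam /= lam.sum()
    return lam
```

For small $\varepsilon$ the minimizer concentrates on the dictionary functions that fit the data, while larger $\varepsilon$ pulls it toward the uniform distribution; with $\varepsilon > 0$ the entropy gradient blows up near the boundary, so the iterates stay strictly positive and the solution is only "approximately" sparse, consistent with the abstract's phrasing.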