Standard methods for analyzing linear latent variable models rely on the assumption that the observed variables are normally distributed. Normality allows statistical inferences to be carried out based solely on the first- and second-order moments. In general, inferences for nonnormally distributed data require estimates of matrices of third- and fourth-order moments. In the present paper, we show that inferences based on normal theory retain validity and asymptotic efficiency under general assumptions that allow for considerable departure from normality. In particular, we obtain conditions under which correct asymptotic inferences are attained when a matrix of higher-order moments is replaced by a matrix that depends only on cross-product moments of the data.
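To fix ideas, the following display recalls the standard asymptotic behaviour of sample covariances; it is stated here only as generic background, in a notation that need not match the one used later in the paper. Let $s$ collect the sample second-order cross-products of a random vector $x$ with mean $\mu$ and covariance matrix $\Sigma$, and let $\sigma$ be its population counterpart. Under finite fourth moments, $\sqrt{n}\,(s-\sigma)$ is asymptotically normal with covariance matrix $\Gamma$ whose typical element
\[
  \gamma_{ij,kl} \;=\; \sigma_{ijkl} - \sigma_{ij}\,\sigma_{kl},
  \qquad
  \sigma_{ijkl} \;=\; E\{(x_i-\mu_i)(x_j-\mu_j)(x_k-\mu_k)(x_l-\mu_l)\},
\]
involves fourth-order moments of the data, whereas under normality $\Gamma$ reduces to
\[
  \Gamma_{N} \;=\; 2\,D_p^{+}\,(\Sigma \otimes \Sigma)\,D_p^{+\prime},
\]
where $D_p^{+}$ denotes the Moore--Penrose inverse of the duplication matrix, so that $\Gamma_{N}$ depends on the distribution of the data only through the cross-product moments collected in $\Sigma$. The conditions obtained in the paper concern when replacing $\Gamma$ by a matrix of this latter type still yields correct asymptotic inferences.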