In a large class of statistical inverse problems it is necessary to suppose that the transformation being inverted is known. Although in many applications it is unrealistic to make this assumption, the problem is often insoluble without it. However, if additional data are available, then the unknown error density can be estimated consistently. Data are seldom available directly on the transformation, but repeated, or replicated, measurements are increasingly becoming available. Such data consist of "intrinsic" values that are measured several times, with errors that are generally independent. Working in this setting, we treat the nonparametric deconvolution problems of density estimation with observation errors, and of regression with errors in variables. We show that, even if the number of repeated measurements is quite small, modified kernel estimators can achieve the same level of performance as they would if the error distribution were known. Indeed, density and regression estimators can be constructed from replicated data so that they have the same first-order properties as conventional estimators in the known-error case, without any replication but with sample size equal to the sum of the numbers of replicates. Practical methods for constructing estimators with these properties are suggested, involving empirical rules for smoothing-parameter choice.
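
To illustrate the kind of construction involved, the following is a minimal sketch (not the estimator analysed in the paper) of a deconvolution kernel density estimator in Python for the case of two replicates per subject, W_j1 = X_j + U_j1 and W_j2 = X_j + U_j2. The differences W_j1 - W_j2 contain only error, so |phi_U(t)|^2 can be estimated by the empirical mean of cos{t(W_j1 - W_j2)}; assuming a symmetric error density with nonnegative characteristic function, this estimate replaces the known phi_U in the standard deconvolution inversion formula. The kernel, the fixed bandwidth h, the ridge floor, and the simulation settings are all illustrative assumptions rather than choices made in the paper.

```python
import numpy as np

def deconvolve_density(W1, W2, x_grid, h, ridge=1e-3):
    """Deconvolution kernel density estimate of the intrinsic density f_X,
    with the error characteristic function estimated from replicate differences.

    W1, W2 : two replicated measurements per subject, W_jk = X_j + U_jk
    x_grid : points at which to evaluate the estimate
    h      : bandwidth (smoothing parameter); fixed by hand in this sketch
    """
    W = np.concatenate([W1, W2])   # pool replicates: effective sample size is the sum
    D = W1 - W2                    # differences contain only error: D_j = U_j1 - U_j2
    t = np.linspace(-1.0 / h, 1.0 / h, 512)
    dt = t[1] - t[0]

    # Empirical characteristic function of the pooled observations.
    phi_W = np.exp(1j * np.outer(t, W)).mean(axis=1)

    # |phi_U(t)|^2 = E cos(t D); floor it (a "ridge") to avoid division by ~0.
    # Taking the square root assumes a symmetric error with nonnegative phi_U.
    phi_U = np.sqrt(np.maximum(np.cos(np.outer(t, D)).mean(axis=1), ridge))

    # Fourier transform of a smooth kernel, compactly supported on [-1, 1],
    # so the inversion integral below runs over |t| <= 1/h.
    K_ft = np.where(np.abs(h * t) <= 1, (1 - (h * t) ** 2) ** 3, 0.0)

    # Inversion: f_hat(x) = (1/2pi) * integral of exp(-itx) phi_W(t) K_ft(ht) / phi_U(t) dt
    integrand = np.exp(-1j * np.outer(x_grid, t)) * (phi_W * K_ft / phi_U)
    f_hat = integrand.sum(axis=1).real * dt / (2 * np.pi)
    return np.maximum(f_hat, 0.0)  # trim small negative excursions

# Toy example: X ~ N(0, 1), each value observed twice with Laplace measurement error.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(0.0, 1.0, n)
W1 = X + rng.laplace(0.0, 0.4, n)
W2 = X + rng.laplace(0.0, 0.4, n)
f_hat = deconvolve_density(W1, W2, np.linspace(-4, 4, 81), h=0.25)
```

In this sketch the bandwidth is supplied by the user; the empirical smoothing-parameter rules referred to above would replace this hand-chosen h with a data-driven value adapted to the estimated error distribution.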