It is generally assumed when using Bayesian inference methods for neural networks that the input data is noise-free. For real-world (errors-in-variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural network framework that accounts for input noise, provided a model of the noise process exists. In the limit where the noise process is small and symmetric it is shown, using the Laplace approximation, that this method adds an extra term to the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable and sampling it jointly with the network's weights using a Markov chain Monte Carlo method, it is demonstrated that the regression over the noiseless input can be inferred. This opens the possibility of training an accurate model of a system using less accurate, or more uncertain, data. The approach is demonstrated both on a synthetic noisy sine-wave problem and on a real problem: inferring the forward model for a satellite radar backscatter system used to predict sea-surface wind vectors.
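The joint-sampling idea can be illustrated with a minimal sketch; this is not the paper's implementation, only a plain Metropolis sampler over the weights of a small network and the latent noiseless inputs for a synthetic noisy sine wave, assuming known Gaussian input- and output-noise standard deviations (the names `net`, `log_post`, and the noise levels are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic errors-in-variables data: the inputs themselves are observed with noise.
N = 40
x_true = rng.uniform(-3, 3, N)
y = np.sin(x_true) + 0.05 * rng.normal(size=N)   # output noise, sd 0.05 (assumed known)
x_obs = x_true + 0.3 * rng.normal(size=N)        # input noise, sd 0.3 (assumed known)

H = 8  # hidden units

def net(x, w):
    """Tiny one-hidden-layer tanh network; all weights packed into the vector w."""
    w1, b1, w2, b2 = w[:H], w[H:2*H], w[2*H:3*H], w[3*H]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def log_post(w, x_lat):
    """Joint log posterior over weights w and latent (noiseless) inputs x_lat."""
    lp  = -0.5 * np.sum((y - net(x_lat, w)) ** 2) / 0.05**2   # output likelihood
    lp += -0.5 * np.sum((x_obs - x_lat) ** 2) / 0.3**2        # input-noise model
    lp += -0.5 * np.sum(w ** 2)                               # Gaussian weight prior
    return lp

# Metropolis sampling of (w, x_lat) jointly, as in the MCMC scheme described above.
w = 0.1 * rng.normal(size=3 * H + 1)
x_lat = x_obs.copy()
lp = log_post(w, x_lat)
accept, n_steps = 0, 2000
for _ in range(n_steps):
    w_prop = w + 0.02 * rng.normal(size=w.size)
    x_prop = x_lat + 0.02 * rng.normal(size=N)
    lp_prop = log_post(w_prop, x_prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        w, x_lat, lp = w_prop, x_prop, lp_prop
        accept += 1

print(f"acceptance rate: {accept / n_steps:.2f}")
```

After burn-in, samples of `x_lat` approximate the posterior over the noiseless inputs, so the regression is learned against them rather than against the corrupted observations. A practical implementation would use a gradient-based sampler and tuned step sizes; this sketch only shows the structure of the joint state.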