Neuronal representations of external events are often distributed across large populations of cells. We study the effect of correlated noise on the accuracy of these neuronal population codes. Our main question is whether the inherent error in the population code can be suppressed by increasing the size of the population N in the presence of correlated noise. We address this issue using a model of a population of neurons that are broadly tuned to an angular variable in two dimensions. The fluctuations in the neuronal activities are modeled as Gaussian noises with pairwise correlations that decay exponentially with the difference between the preferred angles of the correlated cells. We assume that the system is broadly tuned, which means that both the correlation length and the width of the tuning curves of the mean responses span a substantial fraction of the entire system length. The performance of the system is measured by the Fisher information (FI), which bounds its estimation error.
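For Gaussian noise whose covariance does not depend on the stimulus, the FI takes the familiar quadratic form J(theta) = f'(theta)^T C^{-1} f'(theta), and the Cramér-Rao bound guarantees that the mean squared error of any unbiased estimator is at least 1/J(theta). The following sketch evaluates this expression numerically for a hypothetical population with cosine-like tuning curves and exponentially decaying pairwise correlations; the tuning-curve shape, the covariance parametrization, and all parameter values are illustrative assumptions rather than the exact model analyzed here.

import numpy as np

def fisher_information(N, theta=0.0, A=10.0, c=0.3, rho=np.pi / 3, sigma2=4.0):
    """Fisher information of an N-neuron Gaussian population coding an angle.

    Illustrative assumptions (not the exact parametrization of the paper):
      - mean responses f_i(theta) = A * (1 + cos(theta - phi_i)), with preferred
        angles phi_i spread uniformly over [0, 2*pi)
      - stimulus-independent covariance
        C_ij = sigma2 * [delta_ij + c * (1 - delta_ij) * exp(-d(phi_i, phi_j) / rho)],
        where d is the angular distance; c > 0 keeps C positive definite, while
        c < 0 requires |c| small enough for C to remain positive definite
      - Gaussian noise, hence J(theta) = f'(theta)^T C^{-1} f'(theta)
    """
    phi = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred angles
    df = A * np.sin(phi - theta)                             # derivatives f_i'(theta)
    d = np.abs(phi[:, None] - phi[None, :])
    d = np.minimum(d, 2.0 * np.pi - d)                       # distance on the circle
    C = sigma2 * (np.eye(N) + c * (1.0 - np.eye(N)) * np.exp(-d / rho))
    return float(df @ np.linalg.solve(C, df))

# Compare a positively correlated population with an independent one as N grows.
for N in (16, 64, 256, 1024):
    print(N, fisher_information(N, c=0.3), fisher_information(N, c=0.0))

The independent case (c = 0) grows linearly with N by construction, so the loop above can be used to inspect numerically whether the correlated curve instead levels off, which is the saturation effect described next.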
By calculating the FI in the limit of a large N, we show that positive correlations decrease the estimation capability of the network relative to the uncorrelated population: the information capacity saturates to a finite value as the number of cells in the population grows. In contrast, negative correlations substantially increase the information capacity of the neuronal population. These results are supplemented by an analysis of the effect of correlations on the mutual information of the system. Our analysis provides an estimate of the effective number of statistically independent degrees of freedom, denoted N_eff, that a large correlated system can have. According to our theory, N_eff remains finite in the limit of a large N.
Estimating the parameters of the correlations and tuning curves from experimental data in some cortical areas that code for angles, we predict that the number of effective degrees of freedom embedded in localized populations in these areas is less than or of the order of 10^2.