The finite length of natural symbol sequences, e.g., DNA strands or texts in natural languages, leads to deteriorating statistics as the substring length n increases, restricting the reliability of higher-order entropy calculations to small n. A new method for the calculation of higher-order entropies H(n), based upon a theorem of coding theory, is presented, allowing reliable estimations far beyond this limit. We tested the range of validity of this method by means of symbol sequences with known entropies: two stochastic processes (the underlying probability distributions being the equidistribution and the nucleotide distribution of the yeast chromosome III DNA sequence) and a fifth-order Markov process whose transition probabilities were also taken from this yeast DNA sequence.
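The higher-order entropies H(n) discussed here are the Shannon block entropies of length-n substrings. A minimal naive frequency-based estimator (not the coding-theory method proposed in this work; the function name and the test sequence are illustrative choices) can be sketched as follows. On a short random sequence it also exhibits the finite-size effect described above: for large n the estimate saturates because most n-grams occur at most once.

```python
import math
import random
from collections import Counter

def block_entropy(seq, n):
    """Naive estimate of the n-th order (block) entropy in bits:
    H(n) = -sum_w p(w) * log2 p(w), where p(w) is the relative
    frequency of each observed substring w of length n."""
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Equidistributed binary sequence: ideally H(n) = n bits, but the naive
# estimate flattens out once 2**n approaches the sequence length.
random.seed(0)
seq = "".join(random.choice("01") for _ in range(4096))
for n in (1, 4, 8, 12):
    print(n, round(block_entropy(seq, n), 2))
```

For a four-letter alphabet such as DNA nucleotides the same saturation sets in even earlier, since the number of possible n-grams grows as 4**n.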