This paper investigates how closely randomly generated binary source sequences can be matched by the codewords of a convolutional code. What distinguishes it from prior work is that a randomly chosen subsequence with density λ is to be matched as closely as possible. The so-called marked bits of the subsequence could indicate overload quantization points for a source sample generated from the tails of a probability distribution. They might also indicate bits where the initial estimate is considered reliable, as might happen in iterated decoding. The capacity of a convolutional code to interpolate the marked subsequence can be viewed as a measure of its ability to handle overload distortion. We analyze this capacity using a Markov chain whose states are sets of subsets of trellis vertices of the convolutional code. We investigate the effect of memory on the probability of perfect interpolation and calculate the residual rate on the unmarked bits of the binary source sequence. We relate our interpolation methodology to sequence-based methods of quantization and use it to analyze the performance of convolutional codes on the pure erasure channel.
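As a concrete illustration, the following minimal sketch estimates the probability of perfect interpolation by Monte Carlo simulation. It assumes a rate-1/2, memory-2 convolutional code with octal generators (7, 5), independent marking of each codeword position with density λ, and an encoder started in the all-zero state; these choices, and the function names `perfect_interpolation` and `estimate`, are illustrative assumptions rather than the specific code or procedure analyzed in the paper. The set of trellis states reachable by paths that agree with all marked bits seen so far plays the role of the subsets of trellis vertices underlying the Markov-chain analysis: interpolation fails exactly when that set becomes empty.

```python
import random

# Assumed example code: rate-1/2, memory-2, generator polynomials (7, 5) octal.
G = [0b111, 0b101]        # taps on (input, s1, s2) for each output stream
MEMORY = 2

def step(state, bit):
    """One trellis transition: return (next_state, output bits)."""
    reg = (bit << MEMORY) | state                      # shift the new input bit in
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return reg >> 1, out

def perfect_interpolation(source, marks):
    """True if some codeword agrees with `source` at every marked position.

    Tracks the set of trellis states reachable by paths matching all marked
    bits seen so far; interpolation fails when the set becomes empty.
    """
    states = {0}                                       # assume start in the all-zero state
    for t in range(0, len(source), len(G)):
        nxt = set()
        for s in states:
            for bit in (0, 1):
                s2, out = step(s, bit)
                if all(not marks[t + i] or out[i] == source[t + i]
                       for i in range(len(G))):
                    nxt.add(s2)
        if not nxt:
            return False
        states = nxt
    return True

def estimate(lam, n_branches=100, trials=500):
    """Monte Carlo estimate of the probability of perfect interpolation
    when each codeword position is marked independently with density lam."""
    length = n_branches * len(G)
    ok = 0
    for _ in range(trials):
        source = [random.getrandbits(1) for _ in range(length)]
        marks = [random.random() < lam for _ in range(length)]
        ok += perfect_interpolation(source, marks)
    return ok / trials

if __name__ == "__main__":
    for lam in (0.3, 0.5, 0.7):
        print(f"density {lam}: P(perfect interpolation) ≈ {estimate(lam):.3f}")
```

The effect of memory can be explored in the same sketch by substituting longer generator polynomials and adjusting MEMORY accordingly; setting λ = 1 marks every position and corresponds to asking whether the source sequence is itself a codeword.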