This correspondence reviews the super-exponential algorithm proposed by Shalvi and Weinstein for blind channel equalization. The principle of this algorithm (Hadamard exponentiation, projection onto the set of attainable combined channel-equalizer impulse responses, followed by a normalization) is shown to coincide with a gradient search for an extremum of a cost function. The cost function belongs to the family of functions given as the ratio of the standard l(2p) and l(2) sequence norms, where p > 1. This family is highly relevant to blind channel equalization, tracing back to Donoho's work on minimum entropy deconvolution and also underlying the Godard (or Constant Modulus) and the earlier Shalvi-Weinstein algorithms. Using this gradient-search interpretation, which is more tractable for analytical study, we give a simple proof of convergence for the super-exponential algorithm. Finally, we show that the gradient step-size choice giving rise to the super-exponential algorithm is optimal.
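The norm-ratio cost and the Hadamard-exponentiation-plus-normalization step can be illustrated with a minimal sketch (our own, not from the correspondence): for p > 1 the ratio ||s||_{2p} / ||s||_2 is at most 1, with equality exactly when the combined response s has a single nonzero tap, i.e. perfect equalization. The function names and the toy responses below are illustrative assumptions, and the projection onto attainable responses is omitted (an ideal channel is assumed, so the iteration reduces to elementwise exponentiation and renormalization for the real-valued case with p = 2):

```python
import numpy as np

def norm_ratio_cost(s, p):
    """J_p(s) = ||s||_{2p} / ||s||_2; maximized (value 1, for p > 1)
    when s has a single nonzero tap."""
    s = np.asarray(s, dtype=float)
    num = np.sum(np.abs(s) ** (2 * p)) ** (1.0 / (2 * p))
    den = np.sqrt(np.sum(s ** 2))
    return num / den

# A spread-out combined response scores below a single-spike response.
spread = np.array([0.8, 0.5, 0.2, -0.1])
spike = np.array([1.0, 0.0, 0.0, 0.0])
assert norm_ratio_cost(spread, p=2) < norm_ratio_cost(spike, p=2)

# Hadamard exponentiation + normalization (ideal-channel sketch, p = 2):
# cubing each tap and renormalizing drives s toward a unit spike at the
# location of its largest-magnitude tap, and the cost increases toward 1.
s = spread.copy()
for _ in range(5):
    s = s ** 3                        # Hadamard exponentiation
    s = s / np.sqrt(np.sum(s ** 2))   # l(2) normalization
print(np.round(s, 6))                 # close to [1, 0, 0, 0]
```

The super-exponential convergence rate is visible here: the ratio between the second-largest and largest taps is raised to the third power at every iteration.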