Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers x(1), ..., x(m) for the equation s = gcd(s(1), ..., s(m)) = x(1)s(1) + ... + x(m)s(m), where s(1), ..., s(m) are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
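As a minimal illustration of the multi-integer gcd equation above, the following Python sketch computes multipliers x(1), ..., x(m) by naively chaining the classical extended Euclidean algorithm. Note this chaining can produce large multipliers; the abstract's point is precisely that lattice basis reduction (not shown here) yields small ones. All function names are illustrative, not from the paper.

```python
def ext_gcd(a, b):
    # Classical extended Euclid: returns (g, x, y) with g = x*a + y*b.
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def multi_ext_gcd(s):
    # Iteratively combine pairs so that in the end
    # g = xs[0]*s[0] + ... + xs[m-1]*s[m-1].
    # WARNING: naive chaining; multipliers may grow large, unlike
    # the lattice-reduction approach of the paper.
    g, xs = s[0], [1]
    for si in s[1:]:
        g2, u, v = ext_gcd(g, si)
        xs = [u * x for x in xs] + [v]
        g = g2
    return g, xs

g, xs = multi_ext_gcd([4, 6, 9])
# g is gcd(4, 6, 9) and sum(x * si for x, si in zip(xs, [4, 6, 9])) == g
```

A lattice-based method would instead reduce a basis built from the input vector, keeping the multiplier vector (x(1), ..., x(m)) short in the Euclidean norm.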