Utilizing the notion of matching predictives as in Berger and Pericchi, we show that for the conjugate family of prior distributions in the normal linear model, the symmetric Kullback-Leibler divergence between two particular predictive densities is minimized when the prior hyperparameters are taken to be those corresponding to the predictive priors proposed in Ibrahim and Laud and in Laud and Ibrahim. The main application of this result is to Bayesian variable selection.
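For reference, the symmetric Kullback-Leibler divergence invoked above is the standard symmetrized form; a minimal sketch of the definition, with $p$ and $q$ denoting the two predictive densities (notation assumed here, not taken from the original):
\[
D_{\mathrm{sym}}(p, q) \;=\; \mathrm{KL}(p \,\|\, q) + \mathrm{KL}(q \,\|\, p)
\;=\; \int p(y) \log \frac{p(y)}{q(y)}\, dy \;+\; \int q(y) \log \frac{q(y)}{p(y)}\, dy .
\]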