In a Bayesian analysis of finite mixture models, parameter estimation and clustering are sometimes less straightforward than might be expected. In particular, the common practice of estimating parameters by their posterior mean, and summarizing joint posterior distributions by marginal distributions, often leads to nonsensical answers. This is due to the so-called 'label switching' problem, which is caused by symmetry in the likelihood of the model parameters. A frequent response to this problem is to remove the symmetry by using artificial identifiability constraints. We demonstrate that this fails in general to solve the problem, and we describe an alternative class of approaches, relabelling algorithms, which arise from attempting to minimize the posterior expected loss under a class of loss functions. We describe in detail one particularly simple and general relabelling algorithm and illustrate its success in dealing with the label switching problem on two examples.
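To make the idea concrete, the following is a minimal sketch of a relabelling step: each MCMC draw of the component parameters is permuted to minimize a loss against a reference. This sketch uses a simple squared-error loss on component means against a known reference, purely for illustration; it is not the paper's algorithm, which works with a different (classification-based) loss, and the simulated "posterior draws" here are artificial.

```python
import numpy as np
from itertools import permutations

def relabel(draws, reference):
    """Permute the components of each draw to minimize squared
    distance to a reference parameter vector.

    Illustrative squared-error loss only; brute-force search over
    all K! permutations, so suitable only for small K.
    """
    K = draws.shape[1]
    perms = [list(p) for p in permutations(range(K))]
    out = np.empty_like(draws)
    for t, theta in enumerate(draws):
        best = min(perms, key=lambda p: np.sum((theta[p] - reference) ** 2))
        out[t] = theta[best]
    return out

rng = np.random.default_rng(0)
true_means = np.array([-2.0, 0.0, 3.0])

# Fake "posterior draws" of three component means, with labels
# randomly permuted in each draw to mimic label switching.
draws = true_means + 0.1 * rng.standard_normal((200, 3))
for t in range(200):
    draws[t] = draws[t][rng.permutation(3)]

relabelled = relabel(draws, true_means)  # reference assumed known here
print(relabelled.mean(axis=0))  # componentwise means, close to true_means
```

Before relabelling, the componentwise posterior means are nonsensical (each is pulled toward the average of all components); after relabelling they recover the component-specific values, which is the failure mode and remedy the abstract describes.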