This paper is devoted to difference of convex functions (d.c.) optimization: d.c. duality, local and global optimality conditions in d.c. programming, the d.c. algorithm (DCA), and its application to solving the trust-region problem. The DCA is an iterative method that is quite different from well-known related algorithms. Thanks to the particular structure of the trust-region problem, the DCA is very simple (requiring only matrix-vector products) and, in practice, converges to the global solution. The inexpensive implicitly restarted Lanczos method of Sorensen is used to check the optimality of solutions provided by the DCA. When a nonglobal solution is found, a simple numerical procedure is introduced both to find a feasible point having a smaller objective value and to restart the DCA at this point. It is shown that in the nonconvex case, the DCA converges to the global solution of the trust-region problem, using only matrix-vector products and requiring at most 2m+2 restarts, where m is the number of distinct negative eigenvalues of the coefficient matrix that defines the problem. Numerical simulations establish the robustness and efficiency of the DCA compared to standard related methods, especially for large-scale problems.
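To make the "only matrix-vector products" claim concrete, the following is a minimal sketch, not the paper's implementation, of one standard DCA scheme for the trust-region problem min 0.5 x'Ax + b'x subject to ||x|| <= r. It assumes the DC split g(x) = (rho/2)||x||^2 plus the indicator of the ball and h(x) = (rho/2)||x||^2 - f(x), with rho any upper bound on the largest eigenvalue of A so that h is convex; the function name, stopping rule, and choice of rho are illustrative assumptions.

```python
import numpy as np

def dca_trust_region(A, b, r, x0, rho=None, iters=500, tol=1e-10):
    """Illustrative DCA sketch for min 0.5 x'Ax + b'x s.t. ||x|| <= r.

    Uses the DC decomposition f = g - h with
      g(x) = (rho/2)||x||^2 + indicator of the ball ||x|| <= r,
      h(x) = (rho/2)||x||^2 - f(x)   (convex when rho >= lambda_max(A)).
    Each iteration costs one matrix-vector product with A.
    """
    if rho is None:
        # Any upper bound on lambda_max(A) keeps h convex; the spectral
        # norm is used here for simplicity (an assumption, not the paper's choice).
        rho = np.linalg.norm(A, 2) + 1.0
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = rho * x - (A @ x + b)      # y_k = grad h(x_k): one mat-vec
        x_new = y / rho                # minimizer of g(x) - <y_k, x> without the ball
        nrm = np.linalg.norm(x_new)
        if nrm > r:                    # project back onto the trust region
            x_new *= r / nrm
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Each step is a projected gradient-like update, so the per-iteration cost is dominated by the single product A @ x, which is what makes the method attractive for large-scale problems.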