Optimal Control and Replacement with State-Dependent Failure Rate: Dynamic Programming

Citation
Arthur C. Heinricher and Richard H. Stockbridge, "Optimal Control and Replacement with State-Dependent Failure Rate: Dynamic Programming," Annals of Applied Probability, 3(2), 1993, pp. 364-379
ISSN journal
1050-5164
Volume
3
Issue
2
Year of publication
1993
Pages
364 - 379
Database
ACNP
Abstract
A class of stochastic control problems in which the payoff depends on the running maximum of a diffusion process is described. The controller must make two kinds of decisions: first, he must choose a work rate (this decision determines the rate of profit as well as the proximity of failure), and second, he must decide when to replace a deteriorated system with a new one. Preventive replacement is a realistic option when the cost of replacement after failure is larger than the cost of a preventive replacement. We focus on the profit and replacement cost for a single work cycle and solve the problem in two stages. First, the optimal feedback control (work rate) is determined by maximizing the payoff during a single excursion of a controlled diffusion away from the running maximum. This step involves the solution of the Hamilton-Jacobi-Bellman (HJB) partial differential equation. The second step is to determine the optimal replacement set. The assumption that failure occurs only on the set where the state is increasing implies that replacement is optimal only on this set. This leads to a simple formula for the optimal replacement level in terms of the value function.
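The single-cycle model described in the abstract can be illustrated with a Monte Carlo sketch. The code below is not the paper's solution method; it only simulates one work cycle under the modeling assumptions stated above: the state evolves as a controlled diffusion, failure can occur only while the state sits at its running maximum (the "increasing" set), and the controller replaces preventively once the running maximum reaches a chosen level. All parameter names and values (`base_fail`, `c_fail`, `c_prev`, the Euler step, etc.) are hypothetical choices for illustration.

```python
import random

def simulate_cycle(work_rate, replace_level, rng,
                   dt=0.01, base_fail=0.5, profit_rate=1.0,
                   sigma=0.2, c_fail=5.0, c_prev=1.0, t_max=100.0):
    """Simulate one work cycle of a controlled diffusion X with
    running maximum M.  Failure may occur only while X is at M,
    at an intensity increasing in the work rate.  Returns the net
    payoff (accumulated profit minus the replacement cost paid)."""
    x = m = 0.0   # state and its running maximum
    t = 0.0
    payoff = 0.0
    while t < t_max:
        if m >= replace_level:          # preventive replacement
            return payoff - c_prev
        # profit accrues at a rate proportional to the work rate
        payoff += profit_rate * work_rate * dt
        if x >= m:                      # on the running maximum
            if rng.random() < base_fail * work_rate * dt:
                return payoff - c_fail  # replacement after failure
        # Euler-Maruyama step: drift given by the work rate
        x += work_rate * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        m = max(m, x)
        t += dt
    return payoff                       # cycle truncated at t_max

# Comparing average payoffs over many cycles for different
# replacement levels gives a crude estimate of the trade-off the
# optimal replacement level resolves analytically in the paper.
rng = random.Random(42)
for level in (1.0, 3.0, 10.0):
    avg = sum(simulate_cycle(1.0, level, rng) for _ in range(500)) / 500
    print(f"replace at {level}: mean payoff {avg:.3f}")
```

Because the cost of a failure replacement exceeds the preventive cost, low replacement levels sacrifice profit while very high levels risk the larger failure charge; the paper characterizes the optimal level exactly in terms of the value function rather than by simulation.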