DocumentCode :
1081477
Title :
A Markov Decision Process for Economic Quality Control
Author :
Lave, Roy E., Jr.
Author_Institution :
Assistant Professor, Institute of Engineering-Economic Systems, Stanford University, Stanford, Calif.
Volume :
2
Issue :
1
fYear :
1966
Firstpage :
45
Lastpage :
54
Abstract :
A Markov Control Chain is developed that allows optimization of the timing of control activities and, for sample-based systems, selection of the length of the sampled history upon which to base the decision to exercise control. The optimization is performed by policy iteration or linear programming and minimizes the per-unit-time sum of 1) the cost of output quality, 2) the sampling cost, and 3) the cost of exercising control. The class of processes to be controlled is assumed to shift from higher to lower quality levels according to a discrete, or discretely approximated continuous, probability law. The shift is irreversible unless an outside influence, called corrective action, is exercised; it may be time-dependent, in which case the process is said to have an aging failure characteristic. The control system studied is a sampling plan that bases the decision of whether or not to take corrective action on a sampled history of fixed maximum duration. This plan yields an nth-order Markov chain, which is converted to a first-order chain by redefining the state. The transition probabilities are Bayesian estimates based on a geometric prior probability distribution and a multinomial sample probability distribution. The process and control system taken together represent what has been called Dynamic Inference [1].
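For illustration only, the sketch below shows average-cost policy iteration of the general kind the abstract refers to, applied to a toy quality-control model with two quality states and two actions (continue or take corrective action). The state space, transition probabilities, and cost figures are hypothetical and are not taken from the paper; the paper's own formulation (sampled histories, Bayesian transition estimates, linear-programming alternative) is not reproduced here.

import numpy as np

def policy_iteration_avg_cost(P, c, max_iter=100):
    """Average-cost (gain-minimizing) policy iteration, Howard-style.
    P[a][i, j] : transition probability from state i to j under action a.
    c[a][i]    : expected one-step cost in state i under action a.
    Returns (policy, gain g, relative values v)."""
    A, n = len(P), P[0].shape[0]
    policy = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # Value determination: solve v[i] + g = c[i] + sum_j P[i, j] v[j]
        # under the current policy, with the convention v[n-1] = 0.
        Pd = np.array([P[policy[i]][i] for i in range(n)])
        cd = np.array([c[policy[i]][i] for i in range(n)])
        M = np.eye(n) - Pd
        M[:, -1] = 1.0                 # last unknown is the gain g, not v[n-1]
        sol = np.linalg.solve(M, cd)
        v, g = np.append(sol[:-1], 0.0), sol[-1]
        # Policy improvement: pick the action minimizing c(i, a) + sum_j p_ij(a) v[j].
        q = np.array([c[a] + P[a] @ v for a in range(A)])   # shape (A, n)
        new_policy = q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, g, v

# Hypothetical two-state example: state 0 = in control, state 1 = shifted to lower
# quality (the shift is irreversible unless corrective action is taken).
P = [np.array([[0.9, 0.1],            # action 0: continue
               [0.0, 1.0]]),
     np.array([[1.0, 0.0],            # action 1: corrective action restores state 0
               [1.0, 0.0]])]
c = [np.array([0.0, 5.0]),            # quality cost while out of control
     np.array([2.0, 2.0])]            # cost of exercising control
policy, gain, values = policy_iteration_avg_cost(P, c)
print(policy, gain)                   # expected: continue in state 0, correct in state 1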
Keywords :
Control systems; Cost function; History; Linear programming; Minimization methods; Optimization methods; Probability distribution; Quality control; Sampling methods; Timing;
fLanguage :
English
Journal_Title :
IEEE Transactions on Systems Science and Cybernetics
Publisher :
IEEE
ISSN :
0536-1567
Type :
jour
DOI :
10.1109/TSSC.1966.300078
Filename :
4082068