DocumentCode
3313337
Title
Composite rules selection using reinforcement learning for dynamic job-shop scheduling
Author
Wei, Yingzi ; Zhao, Mingyang
Author_Institution
Shenyang Inst. of Autom., Chinese Acad. of Sci., Shenyang, China
Volume
2
fYear
2004
fDate
1-3 Dec. 2004
Firstpage
1083
Abstract
Dispatching rules are usually applied dynamically to schedule jobs in a dynamic job shop. Existing scheduling approaches seldom address machine selection in the scheduling process. Following the principles of traditional dispatching rules, this paper proposes composite rules that consider both machine selection and job selection. Reinforcement learning (RL) is an on-line actor-critic method; the dynamic system is trained with an RL algorithm to enhance its learning and adaptive capability. We define the concept of pressure to describe the system state and to determine the state sequence of the search space. The reward function is designed according to the scheduling goal: we introduce the jobs' estimated mean lateness (EMLT), which determines the amount of reward or penalty. The scheduling system is trained by a Q-learning algorithm during the learning stage and then successively schedules the operations. Competitive results of the RL-agent approach suggest that it can serve as a real-time optimal scheduling technology.
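The approach the abstract describes — a Q-learning agent that, per decision point, picks a composite dispatching rule based on a discretized "pressure" state and receives a lateness-based reward — can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the rule names, the queue-length proxy for pressure, and the synthetic lateness model are all assumptions.

```python
import random

# Hypothetical sketch of Q-learning over composite dispatching rules.
# Each "action" is a composite rule combining a job-selection heuristic
# with a machine-selection heuristic (names are illustrative only).
RULES = ["SPT+shortest_queue", "EDD+least_loaded"]

def pressure_state(queue_len, max_len=10):
    """Discretize shop 'pressure' (here approximated by queue length,
    an assumption) into three coarse states: 0 (low) .. 2 (high)."""
    return min(queue_len * 3 // max_len, 2)

def train(episodes=500, steps=20, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning; reward is the negative of a synthetic
    estimated lateness, standing in for the paper's EMLT-based reward."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(3) for a in range(len(RULES))}
    for _ in range(episodes):
        state = pressure_state(rng.randint(0, 10))
        for _ in range(steps):
            # epsilon-greedy action selection over the composite rules
            if rng.random() < epsilon:
                action = rng.randrange(len(RULES))
            else:
                action = max(range(len(RULES)), key=lambda a: q[(state, a)])
            # Toy lateness model: rule 0 is better under high pressure,
            # rule 1 otherwise (purely illustrative dynamics).
            lateness = rng.uniform(0, 5) + (0 if (action == 0) == (state == 2) else 2)
            reward = -lateness
            next_state = pressure_state(rng.randint(0, 10))
            best_next = max(q[(next_state, a)] for a in range(len(RULES)))
            # standard Q-learning update
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```

After training, the greedy policy reads off the best composite rule per pressure state, which is how such an agent would then "successively schedule the operations" on-line.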
Keywords
dynamic scheduling; job shop scheduling; learning (artificial intelligence); Q-learning algorithm; composite rules selection; dispatching rules; dynamic job shop scheduling; estimated mean lateness; real-time optimal scheduling technology; reinforcement learning; Automation; Computer science; Dispatching; Dynamic scheduling; Intelligent agent; Job shop scheduling; Learning; Manufacturing; Optimal scheduling; Scheduling algorithm;
fLanguage
English
Publisher
ieee
Conference_Titel
Robotics, Automation and Mechatronics, 2004 IEEE Conference on
Print_ISBN
0-7803-8645-0
Type
conf
DOI
10.1109/RAMECH.2004.1438070
Filename
1438070