Title :
Dynamic assembly sequence selection using reinforcement learning
Author :
Lowe, Gordon ; Shirinzadeh, Bijan
Author_Institution :
Sch. of Comput. Sci. & Software Eng., Monash Univ., Clayton, Vic., Australia
Date :
26 April-1 May 2004
Abstract :
Determining the most appropriate sequence for assembling products requires assessment of the process, the product, and the technology applied. Most production engineers apply constraint-based evaluation and experience to identify a solution sequence; but what if their solution is sub-optimal? This paper presents a self-learning technique for selecting an assembly sequence and changing it dynamically, with selection based on the history of previous assemblies. The evaluation depends on part properties rather than on parts and their relationships, so no prior knowledge of parts and their interactions is required in the decision-making process. The method assumes assembly without constraint, for example in a highly flexible robotic assembly cell, which maximises the algorithm's ability to select sequences for new products and to optimise them. The heart of the algorithm is a reinforcement-learning model that punishes failed assembly steps, enabling feedback-driven sequence selection, whereas current methods are merely feedforward. This feedback approach addresses the combinatorial explosion that can cripple assembly planners.
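The core idea of the abstract — a reinforcement-learning model that selects the next assembly step from part properties and is punished for failed steps — can be illustrated with a minimal tabular Q-learning sketch. This is not the authors' algorithm: the toy parts, the failure rule (`simulate_step`), and all hyperparameters below are invented for illustration; states are sets of properties already assembled, so the policy generalises over parts with the same properties rather than over part identities.

```python
import random

def simulate_step(assembled, part):
    """Toy failure model: a 'heavy' part fails if no 'base' part is in place.
    Returns (reward, success); failed steps are punished with -1."""
    if "heavy" in part and not any("base" in p for p in assembled):
        return -1.0, False
    return 1.0, True

def train(parts, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning over (state, part) pairs, where the state is the
    frozenset of properties of parts assembled so far."""
    rng = random.Random(seed)
    q = {}  # (frozenset of assembled properties, part) -> value
    for _ in range(episodes):
        remaining = list(parts)
        assembled = []
        while remaining:
            state = frozenset(p for pt in assembled for p in pt)
            if rng.random() < epsilon:  # explore
                part = rng.choice(remaining)
            else:                       # exploit current estimates
                part = max(remaining, key=lambda a: q.get((state, a), 0.0))
            reward, ok = simulate_step(assembled, part)
            remaining.remove(part)      # failed parts are set aside in this toy
            if ok:
                assembled.append(part)
            next_state = frozenset(p for pt in assembled for p in pt)
            best_next = max((q.get((next_state, a), 0.0) for a in remaining),
                            default=0.0)
            key = (state, part)
            q[key] = q.get(key, 0.0) + alpha * (
                reward + gamma * best_next - q.get(key, 0.0))
    return q

def greedy_sequence(q, parts):
    """Roll out the learned policy greedily to extract a full sequence."""
    remaining, assembled = list(parts), []
    while remaining:
        state = frozenset(p for pt in assembled for p in pt)
        part = max(remaining, key=lambda a: q.get((state, a), 0.0))
        remaining.remove(part)
        assembled.append(part)
    return assembled
```

After training on a toy product, the greedy policy learns to place the base part before the heavy part, because orderings that trigger the failure (and its negative reward) are avoided — the feedback loop the abstract contrasts with purely feedforward planners.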
Keywords :
assembly planning; decision making; knowledge based systems; learning (artificial intelligence); process control; robotic assembly; assembly planning; decision making process; dynamic assembly sequence selection; flexible robotic assembly cell; reinforcement learning; self-learning technique; Application specific processors; Computer science; Feedback; History; Humans; Learning; Process planning; Product design; Production; Robotic assembly;
Conference_Titel :
Proceedings of the 2004 IEEE International Conference on Robotics and Automation (ICRA '04)
Print_ISBN :
0-7803-8232-3
DOI :
10.1109/ROBOT.2004.1307458