Conference record number :
5470
Article title :
Maintenance planning for a continuous monitoring system using deep reinforcement learning
Authors :
Azizi F (Fa.azizi@alzahra.ac.ir), Department of Statistics, Faculty of Mathematical Sciences, Alzahra University, Tehran, Iran; Rasay H (H.rasay@kut.ac.ir), Kermanshah University of Technology, Kermanshah, Iran; Safari A (a.safari@ut.ac.ir), Department of Mathematics, Statistics and Computer Sciences, University of Tehran, Tehran, Iran
Number of pages :
7
Keywords :
Dynamic Maintenance , Manufacturing Systems , Deep Reinforcement Learning
Year of publication :
1402
Conference title :
The 9th Specialized Seminar on Reliability Theory and Its Applications
Document language :
English
Abstract :
This paper proposes a maintenance decision-making framework for multi-unit systems using Machine Learning (ML). Specifically, we propose to use Deep Reinforcement Learning (RL) for a dynamic maintenance model of a multi-unit parallel system that is subject to stochastic degradation and random failures. Each unit deteriorates independently according to a three-state homogeneous Markov process and is therefore in one of three states: healthy, unhealthy, or failed. We model the interaction among system states using the Birth/Birth-Death process, and the overall system state is defined by combining the states of the individual components. To minimize costs, we formulate the problem within the Markov Decision Process (MDP) framework and solve for the optimal maintenance policy. We apply the Double Deep Q-Network (DDQN) algorithm, which makes the proposed RL solution more practical and effective in terms of time and cost savings than traditional MDP approaches. A numerical example demonstrates how RL can be used to find the optimal maintenance policy for the system under study.
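To illustrate the approach described in the abstract, the following is a minimal sketch of a Double DQN agent for a hypothetical two-unit parallel system whose units are healthy, unhealthy, or failed. It is not the authors' implementation: the transition probabilities, cost figures, network sizes, and hyperparameters are all assumed for demonstration only.

```python
# Illustrative sketch (assumed dynamics and costs, not from the paper):
# a minimal Double DQN agent for a 2-unit parallel maintenance MDP.
import random
import numpy as np
import torch
import torch.nn as nn

N_UNITS = 2
N_STATES_PER_UNIT = 3          # healthy (0), unhealthy (1), failed (2)
N_ACTIONS = 2 ** N_UNITS       # per unit: do nothing or maintain (one bit each)

def step(state, action):
    """Assumed degradation/maintenance dynamics and cost structure."""
    next_state, cost = [], 0.0
    for unit, s in enumerate(state):
        if (action >> unit) & 1:
            s, c = 0, 5.0                      # maintenance restores unit, fixed cost
        else:
            if s < 2 and random.random() < 0.3:
                s += 1                         # assumed degradation probability 0.3
            c = 20.0 if s == 2 else 0.0        # downtime cost for a failed unit
        next_state.append(s)
        cost += c
    return tuple(next_state), -cost            # reward = negative cost

def encode(state):
    """One-hot encode the joint system state for the Q-network."""
    x = np.zeros(N_UNITS * N_STATES_PER_UNIT, dtype=np.float32)
    for unit, s in enumerate(state):
        x[unit * N_STATES_PER_UNIT + s] = 1.0
    return torch.from_numpy(x)

def make_net():
    return nn.Sequential(nn.Linear(N_UNITS * N_STATES_PER_UNIT, 32),
                         nn.ReLU(), nn.Linear(32, N_ACTIONS))

online, target = make_net(), make_net()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)
gamma, eps, buffer = 0.95, 0.1, []

state = (0,) * N_UNITS
for t in range(5000):
    # epsilon-greedy action selection using the online network
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(online(encode(state)).argmax())
    next_state, reward = step(state, action)
    buffer.append((state, action, reward, next_state))
    state = next_state

    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s  = torch.stack([encode(b[0]) for b in batch])
        a  = torch.tensor([b[1] for b in batch])
        r  = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        s2 = torch.stack([encode(b[3]) for b in batch])
        # Double DQN target: online net selects the action, target net evaluates it
        with torch.no_grad():
            a2 = online(s2).argmax(dim=1)
            q_target = r + gamma * target(s2).gather(1, a2.unsqueeze(1)).squeeze(1)
        q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(q, q_target)
        opt.zero_grad(); loss.backward(); opt.step()

    if t % 200 == 0:
        target.load_state_dict(online.state_dict())

# Greedy maintenance policy read-out over all joint system states
for s0 in range(N_STATES_PER_UNIT):
    for s1 in range(N_STATES_PER_UNIT):
        st = (s0, s1)
        print(st, int(online(encode(st)).argmax()))
```

The key Double DQN element is the target computation: the online network chooses the greedy next action while the periodically copied target network evaluates it, which reduces the overestimation bias of standard DQN.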
Country :
Iran