Title :
Incremental learning for robot control
Author :
Chiang, I-Jen ; Hsu, Jane Yung jen
Author_Institution :
Dept. of Comput. Sci. & Inf. Eng., Nat. Taiwan Univ., Taipei, Taiwan
Abstract :
A robot can learn to act by trial and error in the world. It continuously obtains information about its environment from its sensors and chooses a suitable action to take. Having executed an action, the robot receives a reinforcement signal from the world indicating how well the action performed in that situation. This evaluation is used to adjust the robot's action-selection policy for the given state. State clustering by least-square error or Hamming distance, hierarchical learning architectures, and prioritized swapping can reduce the number of states, but a large portion of the state space still has to be considered. This paper presents a new solution to this problem. A state is taken to be a combination of the robot's sensor statuses, with each sensor viewed as an independent component. The importance of each sensor status relative to each action is computed from the frequency of its occurrences. Not all sensors are needed for every action; for example, the forward sensors play the most important roles when the robot is moving forward.
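The idea in the abstract can be sketched in code: count how often each sensor status co-occurs with each action, use the relative frequency as an importance weight, and keep only the important sensors when forming the state for value updates. This is a minimal illustrative sketch, not the paper's exact algorithm; the class, method names, threshold, and the tabular reinforcement update are all assumptions.

```python
import random
from collections import defaultdict

class SensorImportanceLearner:
    """Illustrative sketch (hypothetical names): per-sensor importance
    from occurrence frequency, plus a simple reinforcement update on a
    reduced state made of only the important sensor statuses."""

    def __init__(self, actions, alpha=0.1):
        self.actions = actions
        self.alpha = alpha                     # learning rate (assumed)
        # (sensor index, sensor status, action) -> occurrence count
        self.counts = defaultdict(int)
        self.action_counts = defaultdict(int)  # action -> total count
        # value estimates keyed by (reduced state, action)
        self.q = defaultdict(float)

    def importance(self, sensor, status, action):
        """Relative frequency of this sensor status when the action is taken."""
        total = self.action_counts[action]
        return self.counts[(sensor, status, action)] / total if total else 0.0

    def reduced_state(self, readings, action, threshold=0.2):
        """Keep only sensor statuses deemed important for this action."""
        return tuple((i, s) for i, s in enumerate(readings)
                     if self.importance(i, s, action) >= threshold)

    def choose_action(self, readings, epsilon=0.1):
        """Epsilon-greedy selection over the per-action reduced states."""
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.q[(self.reduced_state(readings, a), a)])

    def update(self, readings, action, reinforcement):
        # Record occurrence frequencies for each sensor status.
        self.action_counts[action] += 1
        for i, s in enumerate(readings):
            self.counts[(i, s, action)] += 1
        # Adjust the value of the reduced state toward the reinforcement.
        key = (self.reduced_state(readings, action), action)
        self.q[key] += self.alpha * (reinforcement - self.q[key])
```

A sensor whose status rarely co-occurs with an action drops out of that action's state, so (as in the abstract's example) rear sensors would stop contributing to the state used for the forward action.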
Keywords :
intelligent control; learning (artificial intelligence); learning systems; robots; state-space methods; incremental learning; machine learning; robot control; sensor status; state clustering; state space; Convergence; Frequency; Hamming distance; Learning systems; Orbital robotics; Performance evaluation; Robot control; Robot sensing systems; State estimation; State-space methods;
Conference_Titel :
Systems, Man and Cybernetics, 1995. Intelligent Systems for the 21st Century., IEEE International Conference on
Conference_Location :
Vancouver, BC
Print_ISBN :
0-7803-2559-1
DOI :
10.1109/ICSMC.1995.538473