• DocumentCode
    1858079
  • Title
    Efficient Energy Management Using Adaptive Reinforcement Learning-Based Scheduling in Large-Scale Distributed Systems
  • Author
    Hussin, Masnida; Lee, Young Choon; Zomaya, Albert Y.
  • Author_Institution
    Centre for Distrib. & High Performance Comput., Univ. of Sydney, Sydney, NSW, Australia
  • fYear
    2011
  • fDate
    13-16 Sept. 2011
  • Firstpage
    385
  • Lastpage
    393
  • Abstract
    Energy consumption in large-scale distributed systems, such as computational grids and clouds, has recently gained considerable attention due to its significant performance, environmental, and economic implications. These systems consume a massive amount of energy not only to power them but also to cool them. More importantly, energy consumption does not grow linearly with resource utilization, as only a marginal fraction of the energy is spent on actual computational work. The problem becomes more challenging with the uncertainty and variability of workloads and the heterogeneity of resources in those systems. This paper presents a dynamic scheduling algorithm that incorporates reinforcement learning to achieve good performance and energy efficiency. This incorporation helps the scheduler observe and adapt to varying processing requirements (tasks) and different processing capacities (resources). The learning process of our scheduling algorithm develops an association between the best action (schedule) and the current state of the environment (parallel system). We have also devised a task-grouping technique to support the decision-making process of our algorithm. The grouping technique is adaptive in nature, since it accounts for the current workload and energy consumption when selecting the best action. Results from our extensive simulations with varying processing capacities and a diverse set of tasks demonstrate the effectiveness of this learning approach.
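
    The state-action association described in the abstract can be illustrated with a minimal tabular Q-learning sketch. This is an assumption-laden illustration, not the authors' actual algorithm: the state discretization, action set, and reward model (here, a generic scalar reward that could combine energy cost and task performance) are all hypothetical.

    ```python
    import random

    class RLScheduler:
        """Illustrative sketch: a tabular Q-learning scheduler that maps a
        discretized system state (e.g. current workload level) to a scheduling
        action (e.g. which resource group receives the next task group).
        All names and parameters are hypothetical, not from the paper."""

        def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            # Q-table: expected long-term value of each (state, action) pair
            self.q = [[0.0] * n_actions for _ in range(n_states)]
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.n_actions = n_actions

        def choose(self, state):
            # epsilon-greedy: explore occasionally, otherwise exploit best known action
            if random.random() < self.epsilon:
                return random.randrange(self.n_actions)
            row = self.q[state]
            return row.index(max(row))

        def update(self, state, action, reward, next_state):
            # Standard Q-learning update toward the observed reward plus the
            # discounted value of the best action in the next state
            best_next = max(self.q[next_state])
            self.q[state][action] += self.alpha * (
                reward + self.gamma * best_next - self.q[state][action]
            )
    ```

    In such a setup, the reward signal after dispatching a task group might be the negative of the energy consumed, adjusted for deadline satisfaction, so that the learned policy favors schedules that are both energy-efficient and performant.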
  • Keywords
    decision making; large-scale systems; learning (artificial intelligence); parallel processing; power aware computing; processor scheduling; resource allocation; adaptive reinforcement learning based scheduling; decision making process; dynamic scheduling algorithm; energy consumption; energy efficiency; energy management; large scale distributed system; parallel system; resource utilization; task grouping technique; Dynamic scheduling; Energy consumption; Energy efficiency; Power demand; Processor scheduling; Program processors; dynamic scheduling; energy efficiency; reinforcement learning; task grouping;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Parallel Processing (ICPP), 2011 International Conference on
  • Conference_Location
    Taipei City
  • ISSN
    0190-3918
  • Print_ISBN
    978-1-4577-1336-1
  • Electronic_ISBN
    0190-3918
  • Type
    conf
  • DOI
    10.1109/ICPP.2011.18
  • Filename
    6047206