A Multi-Agent Machine Learning Framework for Intelligent Energy Demand Management

Ying Guo, Rongxin Li
DOI: 10.4018/978-1-60960-171-3.ch013

Abstract

In order to cope with the unpredictability of the energy market and provide a rapid response when supply is strained by demand, an emerging technology called energy demand management enables appliances to manage and defer their electricity consumption when the price soars. Initial experiments with our multi-agent power load management simulator showed a marked reduction in energy consumption when price-based constraints were imposed on the system. However, these results also revealed an unforeseen negative effect: reducing consumption for a bounded time interval decreases system stability. The reason is that price-driven control synchronizes the energy consumption of individual agents. Hence price alone is an insufficient measure for defining global goals in a power load management system. In this chapter the authors explore the effectiveness of a multi-objective, system-level goal which combines both price and system stability. They apply the well-known reinforcement learning framework, enabling the energy distribution system to be both cost-saving and stable. They test the robustness of their algorithm by applying it to two separate systems, one with indirect feedback and one with direct feedback from local load agents. Results show that their method is not only adaptive to multiple systems, but is also able to find the optimal balance between system stability and energy cost.

Introduction

As technology advances, consumers are becoming increasingly power-hungry. As a result, many countries, including Australia, suffer from a widening gap between electricity supply and demand (Hou, 2007; US, 2002). The traditional way of tackling this problem is to increase supply by investing heavily in infrastructure and building more generators. Alternatively, power load management can reduce power consumption on the demand side, and hence reduce the energy required to run appliances, saving money and reducing the risk of inadequate supply (Sutton, et al. 1998; Wilson, et al. 2003).

Electricity distribution is a complex system, consisting of loads, generators, and transmission and distribution networks (Guo, et al. 2005). To control this physical system, generators and retailers bid into a market that balances supply and demand while ensuring safe network operation. Demand and price fluctuate quickly, and loads that are responsive in real time can be of high value to retailers and networks. This scenario is ideal for the adoption of multi-agent technology (Dimeas, et al. 2004; McArthur, et al. 2005). A network of autonomous agents can be overlaid on the physical distribution network, controlling customer loads and, where available, local generators (Borenstein, et al. 2002; Hagg, et al. 1995; Li, et al. 2008; Platt, 2009; Ygge, 1998).

Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) is developing an energy management and control system that consists of an agent network installed at multiple levels of the electricity distribution network (James, et al. 2006; Li, et al. 2007; Li, et al. 2007; Zeman, et al. 2008). This system has a three-level architecture consisting of the following types of agents: 1) the top-level broker agent; 2) the middle-level group agent; and 3) the bottom-level appliance agent.

Appliance agents are responsible for the low-level management of consumption for each end-use device. At the bottom level, appliances can be intelligently switched on or off based on customer preferences (Guo, et. al. 2005). A cluster of such agents is then managed by a group agent on the middle level (Li, et. al. 2007; Ogston, et. al. 2007). The group agents also receive an energy quota from an upper-level broker agent representing the needs of electricity market participants and network operators. This energy quota is a limit on the total energy consumption for a group of appliances.
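
To fix ideas, the hierarchy just described can be sketched as three cooperating classes, with the quota flowing down from the broker agent to group agents and actual consumption reported back up. This is a minimal, hypothetical sketch; the class names, interfaces, and the greedy switching rule are illustrative assumptions and are not taken from the CSIRO system.

```python
# Hypothetical sketch of the three-level agent hierarchy described above.
# Names and interfaces are illustrative only.

class ApplianceAgent:
    """Bottom level: switches one end-use device based on customer preferences."""
    def __init__(self, demand_kw, priority):
        self.demand_kw = demand_kw   # power drawn when switched on
        self.priority = priority     # customer preference (higher = keep on)
        self.on = False

class GroupAgent:
    """Middle level: manages a cluster of appliances within an energy quota."""
    def __init__(self, appliances):
        self.appliances = appliances

    def apply_quota(self, quota_kw):
        # Switch appliances on in order of preference until the quota
        # received from the broker agent is exhausted.
        used = 0.0
        for a in sorted(self.appliances, key=lambda x: -x.priority):
            a.on = used + a.demand_kw <= quota_kw
            if a.on:
                used += a.demand_kw
        return used  # actual consumption, reported back to the broker

class BrokerAgent:
    """Top level: turns market signals into per-group quotas."""
    def __init__(self, groups):
        self.groups = groups

    def dispatch(self, total_quota_kw):
        # Here the quota is split evenly for simplicity; the chapter's broker
        # instead learns its quota strategy online (see the RL sketch below).
        share = total_quota_kw / len(self.groups)
        return [g.apply_quota(share) for g in self.groups]
```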

In this chapter, we focus on the intelligent management of the broker agent: that is, how to choose the optimal strategy for providing a real-time quota to each group agent. As the top level of the system, the broker agent needs to incorporate information from the market (such as the current local price of energy) as well as input from the group agents (Guo, 2007). The broker agent is required to manage the risk of exposure to volatile wholesale pool prices and to reduce strain on the network.
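
Concretely, the broker's task can be framed as a mapping from observed market and network conditions to a quota decision. The sketch below is hypothetical: the state fields, discretisation, and candidate quota values are our own assumptions for illustration, not taken from the chapter.

```python
# Assumed framing of the broker agent's decision problem: discretised
# observations mapped onto a state, with a small set of candidate quotas.

from dataclasses import dataclass

@dataclass(frozen=True)
class BrokerState:
    price_band: int  # discretised market price (e.g. 0 = low .. 4 = spike)
    load_band: int   # discretised aggregate demand reported by group agents

# Candidate quota actions the broker can issue to a group (illustrative).
QUOTA_ACTIONS_KW = [200.0, 400.0, 600.0, 800.0]

def observe(market_price, group_consumption, price_bins, load_bins):
    """Map raw market and group observations onto a discrete BrokerState."""
    price_band = sum(market_price > b for b in price_bins)
    load_band = sum(group_consumption > b for b in load_bins)
    return BrokerState(price_band, load_band)
```

Casting the problem this way yields the discrete state and action spaces over which the learning method described next can operate.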

The energy price can change dramatically when demand is very high, such as on a hot summer afternoon. Because market energy prices and weather conditions are dynamic, we chose a machine learning approach, the reinforcement learning (RL) framework, as an online learning method. The two usual approaches to reinforcement learning are model-based and model-free. Model-based algorithms require a model of the environment to be specified in advance, but because the energy market is a dynamic system with unpredictable properties, such as the price of energy, we cannot easily generate such a model beforehand. Hence we use a model-free RL algorithm, Q-learning. To cope with the dynamic nature of the energy market, we enable the reward matrix to update towards the optimum while the agent learns the environment, which differs from conventional RL. We present this approach and its application to setting the system-level goal. The experimental results show that by using this method, the broker agent is not only adaptive to multiple systems, but is also able to find an optimal balance between system stability and energy cost.
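
To make this concrete, the following is a minimal sketch of a Q-learning loop for the broker agent, assuming a multi-objective reward that penalises both energy cost and instability. The objective weights, step sizes, and the running reward estimate are illustrative assumptions; the chapter's exact formulation is not reproduced here.

```python
# Minimal Q-learning sketch for the broker agent, under the assumption of a
# multi-objective reward combining energy cost and a stability penalty.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration
BETA = 0.05                            # step size for the reward estimate

Q = defaultdict(float)  # Q[(state, action)] -> action value
R = defaultdict(float)  # running reward estimate per (state, action)

def reward(cost, instability, w_cost=1.0, w_stab=1.0):
    # Multi-objective signal: penalise both energy cost and instability
    # (e.g. load swings caused by synchronised switching). Weights assumed.
    return -(w_cost * cost + w_stab * instability)

def step(state, actions, env):
    # Epsilon-greedy selection over candidate quota actions.
    if random.random() < EPSILON:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])

    # env is assumed to return the next state plus observed cost and
    # instability for the chosen quota.
    next_state, cost, instability = env(state, action)

    # Unlike conventional Q-learning with a fixed reward, keep a running
    # reward estimate that tracks the drifting market conditions.
    R[(state, action)] += BETA * (reward(cost, instability) - R[(state, action)])

    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (R[(state, action)] + GAMMA * best_next
                                   - Q[(state, action)])
    return next_state
```

The key departure from conventional Q-learning in this sketch is the running estimate R, which lets the reward signal itself adapt as the market changes rather than remaining fixed.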
