Deep Reinforcement Learning for Smart Energy Networks

Authors: Harrold, Daniel James Bruce
Contributors: Zhong Fan (Supervisor)
Abstract
To reduce global greenhouse gas emissions, the world must find intelligent solutions to maximise the utilisation of carbon-free renewable energy sources (RES). Energy storage systems (ESS) can store energy when RES generation exceeds demand and discharge it later at peak times, both to maximise utilisation of the RES and to profit from dynamic energy prices through energy arbitrage. Both RES and ESSs are difficult to implement at large scales but can be deployed in localised microgrids that trade with the main utility grid. However, these microgrids require an intelligent energy management system able to account for the intermittent RES, fluctuating demand, and volatile dynamic energy prices.
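The charge-on-surplus, discharge-at-peak principle can be summarised by a simple rule-based baseline. The sketch below is illustrative only, with hypothetical names and a one-hour timestep assumed; the thesis replaces such fixed heuristics with learned policies.

```python
# Minimal rule-based dispatch sketch: charge on renewable surplus,
# discharge on deficit. Names and limits are hypothetical; this is a
# baseline heuristic, not the controller developed in the thesis.
def dispatch(generation_kw, demand_kw, soc_kwh, capacity_kwh, max_rate_kw):
    """Return a battery power setpoint (positive = charge), assuming
    one-hour timesteps so kW and kWh are numerically interchangeable."""
    surplus_kw = generation_kw - demand_kw
    if surplus_kw > 0:
        # Store excess renewable energy, limited by rate and free capacity.
        return min(surplus_kw, max_rate_kw, capacity_kwh - soc_kwh)
    # Cover the deficit from storage, limited by rate and state of charge.
    return -min(-surplus_kw, max_rate_kw, soc_kwh)
```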
To this end, the use of reinforcement learning (RL), in which a control agent learns to interact with its environment to maximise a reward, is investigated. RL agents can learn to control ESSs with incomplete information about the environment, making them ideal for energy networks with complex and potentially unknown dynamics that are difficult to model and solve with heuristic optimisation methods. Although the use of RL for ESS control in smart energy networks has increased over the past decade, many of the state-of-the-art algorithms in RL have yet to be applied to smart energy network applications, meaning researchers may be missing considerable performance benefits.
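The interaction described here follows the standard agent-environment loop. The sketch below assumes a gym-style environment and a hypothetical agent API exposing act and learn methods.

```python
# Minimal sketch of the RL interaction loop, assuming a gym-style
# environment and a hypothetical agent with act() and learn() methods.
def train(agent, env, n_episodes):
    for _ in range(n_episodes):
        state, done = env.reset(), False
        while not done:
            action = agent.act(state)                    # select action from policy
            next_state, reward, done = env.step(action)  # apply it to the environment
            agent.learn(state, action, reward, next_state, done)  # improve policy
            state = next_state
```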
In this thesis, a microgrid environment is designed for RL agent training using demand and weather data collected from Keele University, as well as dynamic energy prices from real wholesale markets, to train agents for both RES integration and energy arbitrage. Variants of this environment are used to evaluate different RL algorithms for both aims, where sample efficiency is key because only a limited amount of data is available to train from.
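A minimal sketch of such a training environment is given below, under stated assumptions: one-hour timesteps, a single battery, and a per-step reward equal to the negative cost of energy imported from the utility grid. All names and parameters are hypothetical; the thesis environment is driven by Keele University demand and weather data and real wholesale prices.

```python
class MicrogridEnv:
    """Toy microgrid with one battery; demand, generation, and price are
    aligned hourly series (e.g. metered data and a wholesale market)."""

    def __init__(self, demand, generation, price, capacity_kwh=100.0):
        self.demand, self.generation, self.price = demand, generation, price
        self.capacity_kwh = capacity_kwh
        self.t, self.soc = 0, 0.0

    def reset(self):
        self.t, self.soc = 0, 0.0
        return (self.soc, self.t)

    def step(self, charge_kwh):
        # Clip the requested (dis)charge to the battery's feasible range.
        charge_kwh = max(-self.soc, min(charge_kwh, self.capacity_kwh - self.soc))
        self.soc += charge_kwh
        # Net energy drawn from the utility grid this hour.
        net_import = self.demand[self.t] - self.generation[self.t] + charge_kwh
        reward = -self.price[self.t] * net_import  # negative energy cost
        self.t += 1
        done = self.t >= len(self.demand)
        return (self.soc, self.t), reward, done
```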
The findings showed that RL is able to learn effective policies for ESS control. In particular, the off-policy methods Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG) achieved good performance, as reusing transitions from an experience replay buffer provided much better sample efficiency. By investigating different types of action space, it was found that using functional actions, which vary depending on RES output, allowed the discrete control of DQN to match and surpass the performance of the continuous control of DDPG.
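The functional actions mentioned here map each discrete action to a power setpoint that scales with the current RES surplus, rather than to a fixed power level. The fractions in the sketch below are illustrative assumptions, not values from the thesis.

```python
# Hypothetical functional action space: each discrete DQN action is a
# fraction of the current surplus/deficit magnitude, so the same action
# index yields a different power level as renewable output varies.
FRACTIONS = [-1.0, -0.5, 0.0, 0.5, 1.0]

def to_power(action_index, generation_kw, demand_kw):
    surplus_kw = generation_kw - demand_kw
    return FRACTIONS[action_index] * abs(surplus_kw)  # positive = charge
```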
The Rainbow algorithm, an extension of DQN, was then applied to an energy arbitrage problem. The method is notable for its sample efficiency, which is important for this work, in which agents have only a limited amount of data to learn from. The use of a distributional value function estimate was novel in the field of smart energy applications, where only scalar estimates had previously been used in the literature. The results found that Rainbow and its component C51 performed best thanks to this distributional value function, which allows the agent to capture the stochasticity of the environment.
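In C51, the value function is represented as a categorical distribution over a fixed support of "atoms" rather than a single scalar, and a scalar Q-value is recovered as the expectation over that support. The sketch below assumes an illustrative atom range.

```python
import numpy as np

# C51-style categorical value distribution: 51 fixed return atoms spanning
# an assumed [V_MIN, V_MAX] range (range values here are illustrative).
N_ATOMS, V_MIN, V_MAX = 51, -10.0, 10.0
atoms = np.linspace(V_MIN, V_MAX, N_ATOMS)  # support z_1, ..., z_N

def expected_q(logits):
    """Collapse a per-action return distribution to a scalar Q-value."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                    # softmax over atoms: p_i(s, a)
    return float((probs * atoms).sum())     # Q(s, a) = sum_i z_i * p_i(s, a)
```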
Finally, multi-agent RL is used to cooperatively control different types of electrical ESS within a hybrid ESS (HESS), as well as to trade with self-interested external microgrids looking to reduce their own energy bills. Different single-agent and multi-agent approaches were tested using variants of DDPG and Multi-Agent DDPG (MADDPG) to assess whether the energy network should be managed by a single centralised controller or by multiple distributed agents. The results found that the multi-agent approaches performed best because each component agent is given its own reward function based on its marginal contribution, allowing it to assess its individual performance within the wider system.
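A marginal-contribution reward credits each agent with the difference the global reward makes with and without its action, a common counterfactual credit-assignment scheme. The sketch below uses hypothetical names and a no-op baseline action, as a minimal illustration of the idea rather than the thesis implementation.

```python
# Sketch of a marginal-contribution reward: agent i receives the global
# reward minus a counterfactual in which its action is replaced by a
# no-op baseline (names and baseline choice are assumptions).
def marginal_reward(global_reward, joint_actions, agent_id, noop=0.0):
    with_agent = global_reward(joint_actions)
    counterfactual = {**joint_actions, agent_id: noop}
    without_agent = global_reward(counterfactual)
    return with_agent - without_agent
```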
Citation
Harrold, D. J. B. (2023). Deep Reinforcement Learning for Smart Energy Networks (Thesis). Keele University. https://keele-repository.worktribe.com/output/530054
| Field | Value |
|---|---|
| Thesis Type | Thesis |
| Deposit Date | Jul 31, 2023 |
| Publicly Available Date | Jul 31, 2023 |
| Public URL | https://keele-repository.worktribe.com/output/530054 |
| Award Date | Jul 2023 |
Files

HarroldPhD2023 (PDF, 6.7 MB)