A Novel Model-Free Deep Reinforcement Learning Framework for Energy Management of a PV Integrated Energy Hub


Abstract:

This paper develops a fully model-free, data-driven deep reinforcement learning (DRL) framework for an intelligent controller that exploits available information to optimally schedule the energy hub, with the aim of minimizing energy costs and emissions. By formulating the energy hub scheduling problem over multi-dimensional continuous state and action spaces, the proposed deep deterministic policy gradient (DDPG) method enables more cost-effective control strategies. The method can yield more efficient operation by capturing nonlinear physical characteristics of the energy hub components, such as the nonconvex feasible operating regions of combined heat and power (CHP) units, the valve-point effects of power-only units, and the dynamic efficiency of the fuel cell. Moreover, to enable the DDPG agent to learn an optimal policy efficiently, a hybrid forecasting model based on convolutional neural networks (CNNs) and bidirectional long short-term memory (BLSTM) networks is developed to mitigate the uncertainty of PV power generation, which can be highly intermittent, particularly on cloudy days. The effectiveness and applicability of the proposed scheduling framework in reducing energy costs and emissions while coping with uncertainty are demonstrated by comparing it against conventional robust optimization and stochastic programming approaches, as well as state-of-the-art DRL methods, across different case studies.
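The abstract does not specify the agent's network architecture, so the following is a minimal sketch of a DDPG-style actor-critic pair for a continuous-action scheduling task; the layer sizes, activation choices, and the soft-update rate tau are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps the hub state (e.g., loads, prices, PV forecast) to continuous
    dispatch set-points, squashed to [-1, 1] and rescaled by the caller."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class Critic(nn.Module):
    """Estimates Q(s, a) for a state-action pair."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def soft_update(target: nn.Module, source: nn.Module, tau: float = 5e-3) -> None:
    """Polyak-average the online weights into the target network."""
    with torch.no_grad():
        for t, s in zip(target.parameters(), source.parameters()):
            t.mul_(1.0 - tau).add_(tau * s)
```

In a full DDPG loop, target copies of both networks would be maintained via soft_update, with exploration noise added to the actor's output during training.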
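Similarly, a compact sketch of the hybrid CNN-BLSTM forecaster idea is given below; the input window length, feature count, forecast horizon, and layer widths are hypothetical placeholders, since the abstract only names the building blocks.

```python
import torch
import torch.nn as nn

class CNNBiLSTMForecaster(nn.Module):
    """Sketch of a hybrid PV-power forecaster: a 1-D convolution extracts
    local temporal features from the input window, and a bidirectional
    LSTM models longer-range dependencies in both directions."""
    def __init__(self, n_features: int = 4, hidden: int = 64, horizon: int = 24):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, horizon)  # 2x for both directions

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_length, n_features); Conv1d expects channels first
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.bilstm(z)
        return self.head(out[:, -1, :])  # forecast from the last time step

model = CNNBiLSTMForecaster()
window = torch.randn(8, 48, 4)   # 8 samples, 48 past steps, 4 weather/PV features
pv_forecast = model(window)      # shape: (8, 24) -> next 24 steps of PV power
```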