Article
Deep-Reinforcement-Learning-Based Two-Timescale Voltage
Control for Distribution Systems
Jing Zhang 1, Yiqi Li 1, Zhi Wu 1,*, Chunyan Rong 2, Tao Wang 2, Zhang Zhang 2 and Suyang Zhou 1
Academic Editor: Tek Tjing Lie
Received: 7 May 2021
Accepted: 11 June 2021
Published: 14 June 2021
1 School of Electrical Engineering, Southeast University, Nanjing 210096, China; jzhang@seu.edu.cn (J.Z.); 220192833@seu.edu.cn (Y.L.); suyang.zhou@seu.edu.cn (S.Z.)
2 Institute of State Grid Hebei Electric Power Company Economic and Technological Research, Shijiazhuang 050000, China; Rongchunyan@hotmail.com (C.R.); Wangtao037@hotmail.com (T.W.); zhzh019@hotmail.com (Z.Z.)
* Correspondence: zwu@seu.edu.cn; Tel.: +86-150-0518-4780
Abstract:
Because of the high penetration of renewable energy and the installation of new control devices, modern distribution networks face voltage regulation challenges. Recently, the rapid development of artificial intelligence technology has introduced new solutions for high-dimensional, dynamic optimal control problems. In this paper, a deep reinforcement learning method is proposed to solve the two-timescale optimal voltage control problem. All control variables are assigned to different agents: the discrete variables are handled by a deep Q network (DQN) agent, while the continuous variables are handled by a deep deterministic policy gradient (DDPG) agent. All agents are trained simultaneously with a specially designed reward aimed at minimizing the long-term average voltage deviation. A case study is conducted on a modified IEEE 123-bus system, and the results demonstrate that the proposed algorithm performs comparably to, or even better than, a model-based optimal control scheme, with high computational efficiency and competitive potential for online application.
Keywords: deep reinforcement learning; two timescales; voltage control; distribution network
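As an illustration of the agent design summarized in the abstract, the following is a minimal sketch only. The paper's exact reward formula is not given here; a reward of the form r = -mean|V - 1| is one assumption consistent with "minimizing long-term average voltage deviation", and all variable names (oltc_tap, q_pv_pu, etc.) are hypothetical.

```python
import numpy as np

def voltage_reward(v_pu: np.ndarray, v_ref: float = 1.0) -> float:
    """Negative mean absolute deviation of nodal voltages from v_ref (p.u.)."""
    return -float(np.mean(np.abs(v_pu - v_ref)))

# Hypothetical split of the control variables between the two agents:
# slow/discrete devices (OLTC taps, capacitor banks) -> DQN agent,
# fast/continuous devices (PV inverters, SVCs)       -> DDPG agent.
dqn_action = {"oltc_tap": 2, "cap_banks_on": [1, 0, 1]}        # discrete
ddpg_action = {"q_pv_pu": [0.05, -0.02], "q_svc_pu": [0.10]}   # continuous

v = np.array([1.03, 0.98, 0.95, 1.01])  # measured nodal voltages (p.u.)
print(voltage_reward(v))                 # -> -0.0275
```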
1. Introduction
1.1. Background and Motivation
The high penetration of distributed generation (DG) energy sources, such as photovoltaic (PV) systems, has confronted distribution networks with the problem of voltage regulation. Usually, the voltage profiles in distribution networks are regulated by slow regulation devices (e.g., on-load tap changers (OLTCs) and shunt capacitors) and fast regulation devices (e.g., PV inverters and static var compensators (SVCs)). While these regulators all act by adjusting the distribution of reactive power in the grid, the real power flow can also affect the nodal voltages in distribution networks [1,2]. Thus, the real and reactive power control of different devices should both be taken into account in order to mitigate possible voltage violations.
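A standard way to see why both real and reactive power matter is a linearized (LinDistFlow-style) sensitivity model, v ≈ v0 + Rp + Xq, where p and q are nodal real and reactive power injections and R, X are sensitivity matrices built from line resistances and reactances. The sketch below uses made-up two-bus numbers for illustration only, not data from this paper.

```python
import numpy as np

# Linearized voltage model for a radial feeder: v ≈ v0 + R @ p + X @ q.
# All numbers are illustrative assumptions, not taken from the paper.
v0 = 1.0                          # substation voltage (p.u.)
R = np.array([[0.02, 0.01],       # dv/dp sensitivities
              [0.01, 0.03]])
X = np.array([[0.04, 0.02],       # dv/dq sensitivities
              [0.02, 0.06]])
p = np.array([0.5, -0.3])         # PV export at bus 1, load at bus 2 (p.u.)
q = np.array([0.0, 0.1])          # reactive injections, e.g., from an SVC

v = v0 + R @ p + X @ q
print(v)  # both real and reactive injections shift the nodal voltages
```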
The lack of measurement systems (e.g., supervisory control and data acquisition (SCADA) systems and phasor measurement units (PMUs)) in traditional distribution networks leads to insufficient measurement of network information, so voltage control methods generally adopt model-based regulation, which relies heavily on a precise physical model. In essence, voltage control through real and reactive power optimization is a highly nonlinear programming problem with abundant variables and massive constraints. Solving such problems using mathematical programming methods (e.g., the second-order cone relaxation technique [3] and duality theory [4]) is often limited by the number of variables, and
may even fail when the scale of the distribution network is too large. Therefore, heuristic algorithms that are less dependent on the model are applied to solve these problems (e.g., particle swarm optimization (PSO) [5] and genetic algorithm (GA) [6]), as sketched below.
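As a hedged illustration of such a model-light heuristic, the sketch below applies a textbook PSO loop to choose reactive injections that minimize the mean voltage deviation under the toy sensitivity model from the earlier sketch. The swarm parameters, limits, and base voltages are illustrative assumptions, not taken from [5].

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0.04, 0.02], [0.02, 0.06]])   # toy dv/dq sensitivities (p.u.)
v_base = np.array([0.99, 1.01])              # voltages before control (p.u.)
q_max = 0.5                                   # reactive power limit (p.u.)

def cost(q: np.ndarray) -> float:
    """Mean absolute voltage deviation after injecting q."""
    return float(np.mean(np.abs(v_base + X @ q - 1.0)))

# Textbook PSO loop; w, c1, c2, and the swarm size are not tuned.
n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
pos = rng.uniform(-q_max, q_max, (n, dim))    # candidate dispatches
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_cost = np.array([cost(x) for x in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -q_max, q_max)   # enforce reactive limits
    costs = np.array([cost(x) for x in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)]

print(gbest, cost(gbest))  # dispatch that best flattens the voltage profile
```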
However, these