energies
Article
Deep-Reinforcement-Learning-Based Two-Timescale Voltage Control for Distribution Systems
Jing Zhang 1, Yiqi Li 1, Zhi Wu 1,*, Chunyan Rong 2, Tao Wang 2, Zhang Zhang 2 and Suyang Zhou 1

 
Citation: Zhang, J.; Li, Y.; Wu, Z.; Rong, C.; Wang, T.; Zhang, Z.; Zhou, S. Deep-Reinforcement-Learning-Based Two-Timescale Voltage Control for Distribution Systems. Energies 2021, 14, 3540. https://doi.org/10.3390/en14123540
Academic Editor: Tek Tjing Lie
Received: 7 May 2021
Accepted: 11 June 2021
Published: 14 June 2021
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 School of Electrical Engineering, Southeast University, Nanjing 210096, China; jzhang@seu.edu.cn (J.Z.); 220192833@seu.edu.cn (Y.L.); suyang.zhou@seu.edu.cn (S.Z.)
2 Institute of State Grid Hebei Electric Power Company Economic and Technological Research, Shijiazhuang 050000, China; Rongchunyan@hotmail.com (C.R.); Wangtao037@hotmail.com (T.W.); zhzh019@hotmail.com (Z.Z.)
* Correspondence: zwu@seu.edu.cn; Tel.: +86-150-0518-4780
Abstract: Because of the high penetration of renewable energies and the installation of new control devices, modern distribution networks are faced with voltage regulation challenges. Recently, the rapid development of artificial intelligence technology has introduced new solutions for optimal control problems with high dimensions and dynamics. In this paper, a deep reinforcement learning method is proposed to solve the two-timescale optimal voltage control problem. All control variables are assigned to different agents: the discrete variables are handled by a deep Q network (DQN) agent, while the continuous variables are handled by a deep deterministic policy gradient (DDPG) agent. All agents are trained simultaneously with a specially designed reward aimed at minimizing the long-term average voltage deviation. A case study is carried out on a modified IEEE 123-bus system, and the results demonstrate that the proposed algorithm performs comparably to, or even better than, the model-based optimal control scheme, while offering high computational efficiency and competitive potential for online application.
Keywords: deep reinforcement learning; two timescales; voltage control; distribution network
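The abstract outlines a two-agent, two-timescale architecture: a DQN agent sets the discrete devices (OLTC taps, shunt capacitors) on the slow timescale, a DDPG agent sets the continuous devices (PV inverters, SVCs) on the fast timescale, and both are trained against a voltage-deviation reward. The following Python sketch is not the authors' implementation; the environment stub, agent calls, and timestep counts are illustrative assumptions meant only to show how such a control loop and reward could be wired together.

import numpy as np

V_REF = 1.0  # per-unit reference voltage

def voltage_deviation_reward(v_pu):
    # Negative mean absolute deviation from 1.0 p.u.: maximizing the
    # return therefore minimizes the long-term average voltage deviation.
    return -float(np.mean(np.abs(v_pu - V_REF)))

class DistributionEnv:
    # Placeholder environment; a real implementation would wrap a power-flow
    # solver for the modified IEEE 123-bus feeder instead of this stub.
    def __init__(self, n_bus=123):
        self.n_bus = n_bus
        self.rng = np.random.default_rng(0)

    def run_power_flow(self, tap, cap_on, q_inj):
        # Stub: flat voltage profile perturbed by noise and the control inputs.
        return (1.0 + 0.01 * self.rng.standard_normal(self.n_bus)
                + 0.002 * tap + 0.001 * (np.sum(cap_on) + np.sum(q_inj)))

env = DistributionEnv()
FAST_STEPS_PER_SLOW = 4  # fast (inverter/SVC) decisions per slow (OLTC/capacitor) decision

for slow_step in range(24):
    # Slow timescale: a DQN agent would choose discrete actions here,
    # e.g. tap, cap_on = dqn_agent.act(slow_state).
    tap, cap_on = 0, np.zeros(4)
    for fast_step in range(FAST_STEPS_PER_SLOW):
        # Fast timescale: a DDPG agent would choose continuous reactive-power
        # setpoints here, e.g. q_inj = ddpg_agent.act(fast_state).
        q_inj = np.zeros(10)
        v = env.run_power_flow(tap, cap_on, q_inj)
        reward = voltage_deviation_reward(v)
        # Both agents would store transitions and update their networks here.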
1. Introduction
1.1. Background and Motivation
The high penetration of distributed generation (DG) energy sources, such as photovoltaics (PV), has left distribution networks facing the problem of voltage regulation. Usually, the voltage profiles in distribution networks are regulated by slow regulation devices (e.g., on-load tap changers (OLTCs) and shunt capacitors) and fast regulation devices (e.g., PV inverters and static var compensators (SVCs)). While these regulators are all applied to adjust the distribution of reactive power in the grid, the real power flow can also impact the nodal voltages in distribution networks [1,2]. Thus, the real and reactive power control of different devices should be taken into account in order to mitigate possible voltage violations.
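To make this coupling concrete, the widely used LinDistFlow approximation (a standard linearization, not taken from this paper) gives the voltage at the receiving end of a branch ij with resistance r_ij and reactance x_ij carrying real power P_ij and reactive power Q_ij as

\[ V_j \approx V_i - \frac{r_{ij} P_{ij} + x_{ij} Q_{ij}}{V_0}, \]

where V_0 is the nominal voltage magnitude. Since r_ij and x_ij are of comparable size in distribution feeders, real power flows shift the voltage profile nearly as strongly as reactive power flows, which is why both must be coordinated.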
The lack of measurement systems (e.g., supervisory control and data acquisition (SCADA) and phasor measurement units (PMUs)) in traditional distribution networks leads to insufficient measurement of network information, and voltage control methods generally adopt model-based regulation, which relies heavily on a precise physical model. In essence, voltage control through real and reactive power optimization is a highly nonlinear programming problem with abundant variables and massive constraints. Solving such problems using mathematical programming methods (e.g., the second-order cone relaxation technique [3] and duality theory [4]) is often limited by the number of variables, and may even fail when the scale of the distribution network is too large. Therefore, heuristic algorithms that are less dependent on the model are applied to solve these problems (e.g., particle swarm optimization (PSO) [5] and the genetic algorithm (GA) [6]). However, these