Deep Reinforcement Learning-Based Voltage Regulation Using Electric Springs in Active Distribution Networks
Abstract
1. Introduction
1.1. Deep Reinforcement Learning for Voltage Control
1.2. Voltage Regulation Devices: From FACTS to Electric Springs
1.3. Research Gap and Contributions
- A novel region-based penalized reward function that effectively guides the DRL agent to minimize voltage deviations while avoiding aggressive control actions.
- A coordinated control strategy for multiple ESs using the DDPG algorithm, representing the first application of DRL techniques to ES-based voltage regulation in ADNs.
- A systematic hyperparameter tuning methodology based on grid search that significantly improves the performance and robustness of the DDPG controller compared to default configurations.
- Comprehensive validation through simulation studies on the IEEE 33-bus and IEEE 69-bus test systems, demonstrating approximately a 40% reduction in mean voltage deviation compared to the no-control baseline.
2. Modeling and Theoretical Background of the Electric Spring and the DDPG Algorithm
2.1. Electric Spring Operation for Voltage Regulation in Active Distribution Networks
2.2. Reinforcement Learning DDPG Algorithm
2.2.1. DDPG Architecture and Key Components
- Actor Network: A neural network that maps states to continuous actions. The output layer uses a tanh activation function to bound actions within the valid range $[-1, 1]$, which in this application corresponds to the ES control variable.
- Critic Network: A neural network that estimates the action-value function. The critic takes both the state and action as inputs and outputs a scalar Q-value representing the expected cumulative reward.
- Experience Replay Buffer: A finite-sized buffer that stores transition tuples from agent-environment interactions. During training, mini-batches are uniformly sampled from this buffer to break temporal correlations and improve learning stability.
- Target Networks: Separate copies of the actor ($\mu'$) and critic ($Q'$) networks with parameters $\theta^{\mu'}$ and $\theta^{Q'}$, respectively. These target networks are updated slowly using soft updates, $\theta^{Q'} \leftarrow \tau \theta^{Q} + (1 - \tau)\theta^{Q'}$ and $\theta^{\mu'} \leftarrow \tau \theta^{\mu} + (1 - \tau)\theta^{\mu'}$, where $\tau \ll 1$ is the target smoothing factor. This mechanism stabilizes learning by providing consistent target values for the Bellman backup (a minimal sketch of the soft update and the exploration noise appears after this list).
- Exploration Noise: To encourage exploration, noise is added to the actor’s output during training. DDPG typically uses an Ornstein–Uhlenbeck (OU) process to generate temporally correlated noise suitable for physical control tasks.
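The soft target update and the OU exploration process can be summarized in a short sketch. This is a minimal NumPy illustration under assumed parameter values ($\tau$, $\theta$, $\sigma$ here are generic defaults, not the tuned values reported later), not the paper's implementation.

```python
import numpy as np

def soft_update(target_params, online_params, tau=0.005):
    """Soft (Polyak) update: theta' <- tau * theta + (1 - tau) * theta'.
    Networks are represented as name -> ndarray dictionaries."""
    for name in target_params:
        target_params[name] = tau * online_params[name] + (1.0 - tau) * target_params[name]

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise."""
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.1):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.state = np.full(size, mu)

    def sample(self):
        # Mean-reverting drift toward mu plus a Gaussian perturbation.
        dx = self.theta * (self.mu - self.state) + self.sigma * np.random.randn(len(self.state))
        self.state = self.state + dx
        return self.state
```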
2.2.2. Twin Delayed DDPG (TD3) Algorithm
- Clipped Double Q-Learning: TD3 maintains two critic networks, $Q_{\theta_1}$ and $Q_{\theta_2}$, each with its own target network. The target value uses the minimum of the two critics to reduce overestimation, $y = r + \gamma \min_{i=1,2} Q_{\theta'_i}(s', \tilde{a})$, where $\tilde{a} = \mu'(s') + \epsilon$ and $\epsilon \sim \mathrm{clip}(\mathcal{N}(0, \sigma), -c, c)$ is clipped noise added to the target action (a sketch of this target computation follows the list).
- Delayed Policy Updates: The actor network is updated less frequently than the critics (typically once every two critic updates). Updating the actor less frequently allows the critic estimates to stabilize before being used to update the actor, reducing the accumulation of errors.
- Target Policy Smoothing: Noise is added to the target policy’s actions (as shown in (20)) to smooth the Q-function estimates and prevent the policy from exploiting narrow peaks in the Q-landscape.
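The three mechanisms combine in the computation of the TD3 Bellman target. The following sketch assumes the target networks are supplied as plain callables; the function name and default noise parameters are illustrative, not the paper's settings.

```python
import numpy as np

def td3_target(r, s_next, done, actor_target, critic1_target, critic2_target,
               gamma=0.99, sigma=0.2, noise_clip=0.5, a_low=-1.0, a_high=1.0):
    """TD3 Bellman target with target-policy smoothing and clipped double Q."""
    a = actor_target(s_next)
    # Target policy smoothing: clipped Gaussian noise on the target action.
    eps = np.clip(sigma * np.random.randn(*np.shape(a)), -noise_clip, noise_clip)
    a_smoothed = np.clip(a + eps, a_low, a_high)
    # Clipped double Q-learning: take the minimum of the two target critics.
    q_min = np.minimum(critic1_target(s_next, a_smoothed),
                       critic2_target(s_next, a_smoothed))
    return r + gamma * (1.0 - done) * q_min
```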
2.2.3. Comparison with Other DRL Algorithms
3. Framework of the Proposed Approach
3.1. Reinforcement Learning Environment
- State: $s_t$ represents the information obtained after the agent applies an action in the environment at time t. In this work, the voltage deviation of each bus defines the per-bus state $s_t^b$, where b denotes the bus index. $S_t = \{s_t^1, s_t^2, \ldots, s_t^B\}$ depicts the state information of all B buses at time t.
- Action: $a_t^c$ denotes the control signal issued by the agent, which determines the reactive power injection or absorption of ES number c (where c identifies each ES in the system) at iteration t. The set $A_t = \{a_t^1, a_t^2, \ldots, a_t^C\}$ represents the actions of all C ESs at time t (a minimal sketch combining these definitions with a region-based reward is given below).
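A minimal sketch of how these state and action definitions might be assembled with a region-based penalized reward is given below. The deviation definition, the worst-case aggregation, the penalty weights, and the function names are assumptions for illustration; the paper's exact reward formulation and the normalization of its threshold are defined in the sensitivity analysis that follows.

```python
import numpy as np

V_NOM = 1.0  # nominal bus voltage in p.u. (assumed)

def make_state(bus_voltages):
    """State S_t: vector of per-bus voltage deviations (assumed |V_nom - V_b|)."""
    return np.abs(V_NOM - np.asarray(bus_voltages))

def region_penalized_reward(state, actions, threshold=0.7, action_weight=0.1):
    """Illustrative region-based penalized reward: small deviations are rewarded,
    deviations beyond the threshold incur an extra penalty, and large control
    actions are discouraged. The functional form here is hypothetical."""
    worst_dev = np.max(state)                                # assumed aggregation
    action_penalty = action_weight * np.sum(np.square(actions))
    region_penalty = 1.0 if worst_dev > threshold else 0.0   # outside the target region
    return -worst_dev - action_penalty - region_penalty
```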
Sensitivity Analysis of Reward Function Threshold
3.2. Training and Evaluation
3.3. Hyperparameter Tuning
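As a brief illustration of the grid-search tuning that produced the DDPG-B configuration, the sketch below exhaustively enumerates candidate configurations. The candidate values echo those contrasted in the hyperparameter table later in the paper, but the grid contents, the scoring criterion, and the train_and_evaluate routine are hypothetical placeholders rather than the actual search space.

```python
from itertools import product

# Hypothetical grid; values mirror the DDPG-A/DDPG-B settings compared later.
GRID = {
    "discount_factor": [0.9, 0.99],
    "mini_batch_size": [64, 128],
    "ou_noise_std": [0.1, 0.3],
    "actor_hidden": [(256, 256), (400, 300)],
}

def grid_search(train_and_evaluate, grid=GRID):
    """Train and evaluate every configuration; keep the best score
    (e.g., mean episode reward over validation scenarios)."""
    best_cfg, best_score = None, float("-inf")
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = train_and_evaluate(cfg)  # placeholder for one full training run
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```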
4. Case Study
Test Scenarios Description
5. Results
5.1. Training Results
5.2. Testing Results on the IEEE 33-Bus Network
5.3. Comparison with Metaheuristic-Based Centralized Voltage Control
5.4. Scalability of the Centralized Approach
6. Discussion
6.1. Performance Analysis and Comparison
6.2. Advantages of the Proposed Approach
- Model-free adaptation: Unlike optimization-based approaches that require accurate network models, the DDPG agent learns directly from interactions with the environment, enabling adaptation to model uncertainties and unmodeled dynamics.
- Real-time operation: Once trained, the DDPG agent can compute control actions in milliseconds, making it suitable for real-time voltage regulation under rapidly changing conditions.
- Coordinated multi-ES control: The centralized agent implicitly learns the interactions between multiple ESs, enabling coordinated operation without requiring explicit communication protocols.
- Cost-effective infrastructure: ESs require minimal energy storage compared to battery-based ESSs, reducing capital costs while achieving comparable voltage regulation performance.
- Flexibility: The trained agent can handle diverse operating scenarios without retraining, as demonstrated by consistent performance across 96 test cases with varying load and generation profiles.
6.3. Limitations and Challenges
- Centralized architecture: The current implementation relies on a centralized agent with full observability of all bus voltages. This architecture requires a reliable communication infrastructure and introduces a single point of failure. For larger networks, communication delays and bandwidth constraints may become significant concerns.
- Training data requirements: The DDPG agent requires extensive offline training with representative scenarios. If operational conditions deviate significantly from the training data (e.g., extreme weather events or major network reconfigurations), performance may degrade.
- Idealized ES model: The ES is modeled as an ideal controlled AC voltage source, neglecting converter losses, DC-link voltage dynamics, and harmonic distortion. While this simplification is common in planning studies, practical implementations may exhibit different behavior.
- Reactive power only: The ES configuration used in this work provides only reactive power compensation. For severe voltage deviations, active power injection from ESSs or curtailment of DG sources may be necessary.
- Scalability concerns: The state and action spaces grow linearly with the number of buses and ESs. For large-scale networks with hundreds of buses, the current architecture may face scalability challenges that require distributed or hierarchical approaches.
6.4. Practical Implementation Considerations
- Communication infrastructure: A supervisory control and data acquisition (SCADA) system or advanced metering infrastructure (AMI) is needed to collect real-time voltage measurements and dispatch control signals to ESs.
- Safety constraints: Additional constraints should be incorporated to prevent the ESs from operating beyond their physical limits and to ensure fail-safe behavior during communication failures.
- Online adaptation: Implementing online fine-tuning mechanisms would allow the agent to adapt to slow drifts in network characteristics without complete retraining.
- Regulatory compliance: The control strategy should be validated against utility standards and regulations governing voltage regulation and power quality.
6.5. Future Research Directions
- Developing distributed multi-agent DRL frameworks where each ES is controlled by a local agent with limited communication, thereby improving scalability and robustness.
- Integration of more detailed ES models, including converter dynamics, losses, and protection schemes.
- Combination of ES-based reactive compensation with battery ESSs for coordinated voltage-and-frequency regulation.
- Extension to unbalanced three-phase networks and investigation of phase-specific voltage regulation.
- Transfer learning approaches to enable rapid adaptation when deploying trained agents to new network configurations.
7. Conclusions
- The proposed DDPG-based controller successfully coordinates 12 electric springs to regulate voltage in the IEEE 33-bus test system under time-varying load and distributed generation conditions. The trained agent achieves an approximately 40% reduction in mean voltage deviation compared to the no-control baseline. In addition, a scalability study was conducted on the IEEE 69-bus test system using the same load and distributed generation conditions, with 5 electric springs installed, while keeping the reward design and the tuned DDPG-B hyperparameter configuration unchanged. Under this setting, the controller achieves a 38.49% improvement over the no-control case.
- The region-based penalized reward function effectively guides the agent toward maintaining voltage deviations below the target threshold while avoiding overly aggressive control actions. The sensitivity analysis confirms that the selected threshold of 0.7 provides an effective trade-off between voltage regulation performance and training stability.
- Systematic hyperparameter tuning of the DDPG algorithm (DDPG-B configuration) yields significant performance improvements over the default DDPG configuration (DDPG-A), TD3, and PPO, highlighting the importance of algorithm tuning for specific applications rather than relying on default parameters.
- The combination of DRL and electric springs represents a promising approach for cost-effective voltage regulation in distribution networks with high renewable penetration. Electric springs provide flexible reactive compensation without requiring large energy storage elements, while DRL enables adaptive, model-free control that can handle the uncertainty of renewable generation.
- The proposed approach keeps all bus voltages within ANSI C84.1 limits (0.95–1.05 p.u.) in 95 out of 96 test scenarios, with only minor violations occurring at remote buses during peak demand periods.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Nomenclature
| Abbreviations | |
| ADN | Active Distribution Network |
| BFSM | Backward–Forward Sweep Method |
| CL | Critical Load |
| D-FACTS | Distributed Flexible AC Transmission Systems |
| DDPG | Deep Deterministic Policy Gradient |
| DG | Distributed Generation |
| DPG | Deterministic Policy Gradient |
| DRL | Deep Reinforcement Learning |
| ES | Electric Spring |
| ESS | Energy Storage System |
| FACTS | Flexible AC Transmission Systems |
| GA | Genetic Algorithm |
| GWO | Grey Wolf Optimizer |
| NCL | Non-Critical Load |
| PCC | Point of Common Coupling |
| PPO | Proximal Policy Optimization |
| PSO | Particle Swarm Optimization |
| PV | Photovoltaic |
| RL | Reinforcement Learning |
| SAC | Soft Actor–Critic |
| SL | Smart Load |
| STATCOM | Static Synchronous Compensator |
| TD3 | Twin Delayed Deep Deterministic Policy Gradient |
| WT | Wind Turbine |
| ZIP | Constant-impedance, constant-current, constant-power load model |
| Symbols | |
| | Activation variable of ES at bus j |
| | Operating condition parameters at the PCC |
| E | Environment in the RL framework |
| | Episode number |
| | Weighting factors for ZIP load components |
| | Current phasor in distribution line from bus i to j |
| | Current phasor through the noncritical load at bus j |
| | Current phasor through the ZIP load |
| | Impedance magnitude factor |
| | ES control variable |
| $A_t$ | Set of all ES actions at time t |
| | Active and reactive power of ZIP load at bus j |
| | Rated active and reactive power of the ZIP load |
| | Reactive power output of the electric spring |
| $Q^{\pi}(s, a)$ | Action-value function for policy $\pi$ |
| $r_t$ | Reward at time step t |
| | Cumulative reward per episode |
| | Resistance and reactance of distribution line i–j |
| | Resistance and reactance of noncritical load at bus j |
| $s_t$ | State at time step t |
| $S_t$ | Set of all bus states at time t |
| T | Total number of time steps per episode |
| | Thevenin equivalent voltage phasor |
| | Nominal voltage magnitude of the critical load |
| | Voltage phasor at bus j |
| | Output voltage phasor of ES at bus j |
| | Mean voltage deviation of base case (no control) |
| | Mean dynamic voltage deviation with ES control |
| | Equivalent admittance of line segment j–k |
| | Overall admittance of the ZIP load |
| | Complex impedance of distribution line i–j |
| | Complex impedance of noncritical load at bus j |
| $\gamma$ | Discount factor in RL |
| $\delta$ | Angle parameter for ES operation |
| $\theta^{\mu}$, $\theta^{Q}$ | Parameters of actor and critic networks |
| | Phase angle of voltage |
| $\mu$ | Deterministic policy (actor) |
| | Angle parameters for ES model |
References
- Jiao, W.; Chen, J.; Wu, Q.; Li, C.; Zhou, B.; Huang, S. Distributed Coordinated Voltage Control for Distribution Networks with DG and OLTC Based on MPC and Gradient Projection. IEEE Trans. Power Syst. 2022, 37, 680–690. [Google Scholar] [CrossRef]
- Cao, D.; Zhao, J.; Hu, W.; Ding, F.; Huang, Q.; Chen, Z.; Blaabjerg, F. Data-Driven Multi-Agent Deep Reinforcement Learning for Distribution System Decentralized Voltage Control with High Penetration of PVs. IEEE Trans. Smart Grid 2021, 12, 4137–4150. [Google Scholar] [CrossRef]
- Zhang, X.; Wu, Z.; Sun, Q.; Gu, W.; Zheng, S.; Zhao, J. Application and progress of artificial intelligence technology in the field of distribution network voltage Control: A review. Renew. Sustain. Energy Rev. 2024, 192, 114282. [Google Scholar] [CrossRef]
- Naderi, E.; Pourakbari-Kasmaei, M.; Abdi, H. An efficient particle swarm optimization algorithm to solve optimal power flow problem integrated with FACTS devices. Appl. Soft Comput. 2019, 80, 243–262. [Google Scholar] [CrossRef]
- Khan, N.H.; Wang, Y.; Tian, D.; Raja, M.A.Z.; Jamal, R.; Muhammad, Y. Design of Fractional Particle Swarm Optimization Gravitational Search Algorithm for Optimal Reactive Power Dispatch Problems. IEEE Access 2020, 8, 146785–146806. [Google Scholar] [CrossRef]
- Hui, J. Adaptive sliding mode load-following control of a small modular reactor via reinforcement learning, nonlinear extended state observer, and neural network. Energy 2025, 333, 137317. [Google Scholar] [CrossRef]
- Hui, J. Nonlinear extended state observer-based model-free near-optimal sliding mode water level controller of an inverted U-tube steam generator. Eng. Appl. Artif. Intell. 2026, 163, 112755. [Google Scholar] [CrossRef]
- Toubeau, J.-F.; Bakhshideh Zad, B.; Hupez, M.; De Grève, Z.; Vallée, F. Deep Reinforcement Learning-Based Voltage Control to Deal with Model Uncertainties in Distribution Networks. Energies 2020, 13, 3928. [Google Scholar] [CrossRef]
- Lillicrap, T.P.; Hunt, J.J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D. Continuous control with deep reinforcement learning. arXiv 2015, arXiv:1509.02971. [Google Scholar]
- Fujimoto, S.; van Hoof, H.; Meger, D. Addressing Function Approximation Error in Actor-Critic Methods. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Volume 80, pp. 1587–1596. [Google Scholar]
- Sun, X.; Xu, Z.; Qiu, J.; Liu, H.; Wu, H.; Tao, Y. Optimal Volt/Var Control for Unbalanced Distribution Networks with Human-in-the-Loop Deep Reinforcement Learning. IEEE Trans. Smart Grid 2024, 15, 2639–2651. [Google Scholar] [CrossRef]
- Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal Policy Optimization Algorithms. arXiv 2017, arXiv:1707.06347. [Google Scholar] [CrossRef]
- Qian, T.; Liang, Z.; Shao, C.; Zhang, H.; Hu, Q.; Wu, Z. Offline DRL for Price-Based Demand Response: Learning From Suboptimal Data and Beyond. IEEE Trans. Smart Grid 2024, 15, 4618–4635. [Google Scholar] [CrossRef]
- Zheng, W.; Pi, R.; Zhong, X.; Yang, C. Demand response for home energy management systems: A novel dual-agent DRL approach. Energy Syst. 2025. [Google Scholar] [CrossRef]
- Xiong, K.; Hu, W.; Cao, D.; Zhang, G.; Chen, Z.; Blaabjerg, F. A novel two-level deep reinforcement learning enabled game approach for incentive-based distributed voltage regulation with participation of autonomous photovoltaic inverters. Energy 2025, 324, 135934. [Google Scholar] [CrossRef]
- Perera, A.T.D.; Kamalaruban, P. Applications of reinforcement learning in energy systems. Renew. Sustain. Energy Rev. 2021, 137, 110618. [Google Scholar] [CrossRef]
- Luo, X.; Akhtar, Z.; Lee, C.K.; Chaudhuri, B.; Tan, S.-C.; Hui, S.Y.R. Distributed Voltage Control with Electric Springs: Comparison with STATCOM. IEEE Trans. Smart Grid 2015, 6, 209–219. [Google Scholar] [CrossRef]
- Lee, C.-K.; Liu, H.; Tan, S.-C.; Chaudhuri, B.; Hui, S.Y.R. Electric Spring and Smart Load: Technology, System-level Impact and Opportunities. IEEE J. Emerg. Sel. Top. Power Electron. 2020, 9, 6524–6544. [Google Scholar] [CrossRef]
- Chen, T.; Liu, Y.; Tan, S.-C.; Hui, S.Y.R. Distributed Cooperative Control of Multiple DC Electric Springs for Voltage Regulation in DC Microgrids. IEEE Trans. Ind. Electron. 2018, 65, 5520–5530. [Google Scholar] [CrossRef]
- Saha, S.; Dutta, S. Electric springs for coordinated voltage and frequency regulation in multi-area interconnected power systems. Sci. Rep. 2025, 15, 2847. [Google Scholar] [CrossRef]
- Akhtar, Z.; Chaudhuri, B.; Hui, S.Y.R. Primary Frequency Control Contribution From Smart Loads Using Reactive Compensation. IEEE Trans. Smart Grid 2015, 6, 2356–2365. [Google Scholar] [CrossRef]
- Tapia-Tinoco, G.; Valencia-Rivera, G.H.; Valtierra-Rodriguez, M.; Garcia-Perez, A.; Granados-Lieberman, D. Optimal Placement of Electric Springs in Unbalanced Distribution Networks using Improved Backward/Forward Sweep Method Based Genetic Algorithm. J. Mod. Power Syst. Clean Energy 2025, 13, 940–952. [Google Scholar] [CrossRef]
- Tapia-Tinoco, G.; Granados-Lieberman, D.; Rodriguez-Alejandro, D.A.; Valtierra-Rodriguez, M.; Garcia-Perez, A. A Robust Electric Spring Model and Modified Backward Forward Solution Method for Microgrids with Distributed Generation. Mathematics 2020, 8, 1326. [Google Scholar] [CrossRef]
- Wang, Q.; Cheng, M.; Chen, Z.; Wang, Z. Steady-State Analysis of Electric Springs with a Novel δ Control. IEEE Trans. Power Electron. 2015, 30, 7159–7169. [Google Scholar] [CrossRef]
- Watkins, C.J.C.H.; Dayan, P. Q-learning. Mach. Learn. 1992, 8, 279–292. [Google Scholar] [CrossRef]
- Jaber, Y.; Dharmasena, P.; Nassif, A.; Nassif, N. Hyperparameter Optimization of Neural Networks Using Grid Search for Predicting HVAC Heating Coil Performance. Buildings 2025, 15, 2753. [Google Scholar] [CrossRef]
- Baran, M.; Wu, F. Network reconfiguration in distribution systems for loss reduction and load balancing. IEEE Trans. Power Deliv. 1989, 4, 1401–1407. [Google Scholar] [CrossRef]
- Ur Rehman, A.; Ali, M.; Iqbal, S.; Habib, S.; Shafiq, A.; Elbarbary, Z.M.; Barnawi, A.B. Transition towards a sustainable power system: MA-DA&DC framework based voltage control in high PV penetration networks. Energy Rep. 2023, 9, 5922–5936. [Google Scholar]
- Electric Reliability Council of Texas, Inc. ERCOT Load Profiling. 2025. Available online: https://www.ercot.com/mktinfo/loadprofile (accessed on 1 April 2025).
- The National Renewable Energy Laboratory. NREL Grid Modernization. 2025. Available online: https://www.nrel.gov/grid (accessed on 1 April 2025).
- ANSI C84.1-2020; American National Standard for Electric Power Systems and Equipment—Voltage Ratings (60 Hertz). American National Standards Institute: Washington, DC, USA, 2020.
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
- Yarpiz. YPEA: Yarpiz Evolutionary Algorithms. Available online: https://yarpiz.com/477/ypea-yarpiz-evolutionary-algorithms (accessed on 1 April 2025).
| Characteristic | STATCOM [17] | ESS [14] | Electric Spring [17] |
|---|---|---|---|
| Energy storage required | No | Yes (large) | No (minimal DC-link) |
| Capital cost | High | Very high | Moderate |
| Response time | Fast | Medium | Fast |
| Scalability | Limited | Limited | High |
| Active power capability | Limited | Yes | No (reactive only) |
| Installation complexity | High | Moderate | Low |
| Feature | DDPG | TD3 |
|---|---|---|
| Number of critics | 1 | 2 (twin critics) |
| Q-value estimation | Single critic | Minimum of two critics |
| Policy update frequency | Every step | Delayed (every d steps) |
| Target action noise | None | Clipped Gaussian |
| Overestimation bias | Prone to overestimation | Reduced via clipped double Q-learning |
| Training stability | Moderate | Improved |
| Threshold | Mean Voltage Deviation (p.u.) | Convergence (Episodes) | Action Std. Dev. | Training Time (s) |
|---|---|---|---|---|
| 0.5 | - | - | - | - |
| 0.6 | 0.0157 | 232 | 0.7386 | 913.7078 |
| 0.7 | 0.0159 | 100 | 0.5570 | 225.3243 |
| 0.8 | 0.0211 | 182 | 0.2182 | 152.0885 |
| 0.9 | 0.0222 | <5 | 0.1705 | 160.4302 |
| Hyperparameter | TD3 | DDPG-A | DDPG-B |
|---|---|---|---|
| Discount factor | 0.99 | 0.99 | 0.9 |
| Experience buffer length | | | |
| Mini-batch size | 64 | 64 | 128 |
| Target smooth factor | | | |
| Sample time | 1 | 1 | 1 |
| Exploration noise | Gaussian | Ornstein–Uhlenbeck (0.3) | Ornstein–Uhlenbeck (0.1) |
| Actor learning rate | | | |
| Critic learning rate | | | |
| Variance decay rate | 0 | 0 | |
| Hidden layer 1 (critic) | 256 | 256 | 256 |
| Hidden layer 2 (critic) | 256 | 256 | 128 |
| Hidden layer 1 (actor) | 256 | 256 | 400 |
| Hidden layer 2 (actor) | 256 | 256 | 300 |
| Number of episodes | 500 | 500 | 200 |
| Time steps per episode | 500 | 500 | 200 |
| Property | Value |
|---|---|
| Number of buses | 33 |
| Number of loads | 32 |
| Total active power demand (kW) | 3715 |
| Total reactive power demand (kVAr) | 2300 |
| Number of photovoltaic units | 4 |
| Number of wind turbine units | 3 |
| Total DG active power generation capacity (kW) | 2229 |
| Number of ES units | 12 |
| Algorithm | Max Voltage Deviation (p.u.) | Min Voltage Deviation (p.u.) | Execution Time (min) |
|---|---|---|---|
| DDPG-B | 0.0306 | 0.0013 | 0.0159 |
| GA | 0.0288 | 0.0006 | 2.3629 |
| GWO | 0.0290 | 0.0006 | 2.3898 |
| PSO | 0.0289 | 0.0006 | 3.5140 |
| Case | Device | No. | Location (Bus) | Max Power (kW) | Mean Voltage Deviation (p.u.) | Reduction (%) |
|---|---|---|---|---|---|---|
| Base case | PV gen. | 3 | 11, 49, 59 | 760.4, 760.4, 760.4 | 0.0076 | – |
| DDPG-B | ES | 5 | 12, 21, 50, 61, 64 | 116, 91.2, 307.7, 995.2, 181.6 | 0.0046 | 38.49 |
| | PV gen. | 3 | 11, 49, 59 | 760.4, 760.4, 760.4 | | |
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.