Search Results (6)

Search Parameters:
Authors = Diego Cerrai; ORCID = 0000-0001-5918-4885

14 pages, 27270 KiB  
Article
Non-Parametric Machine Learning Modeling of Tree-Caused Power Outage Risk to Overhead Distribution Powerlines
by Harshana Wedagedara, Chandi Witharana, Robert Fahey, Diego Cerrai, Jason Parent and Amal S. Perera
Appl. Sci. 2024, 14(12), 4991; https://doi.org/10.3390/app14124991 - 7 Jun 2024
Cited by 2 | Viewed by 1582
Abstract
Trees in proximity to power lines can cause significant damage to utility infrastructure during storms, leading to substantial economic and societal costs. This study investigated the effectiveness of non-parametric machine learning algorithms in modeling tree-related outage risks to distribution power lines at a finer spatial scale. We used a vegetation risk model (VRM) comprising 15 predictor variables derived from roadside tree data, landscape information, vegetation management records, and utility infrastructure data. We evaluated the VRM’s performance using decision tree (DT), random forest (RF), k-Nearest Neighbor (k-NN), extreme gradient boosting (XGBoost), and support vector machine (SVM) techniques. The RF algorithm demonstrated the highest performance with an accuracy of 0.753, an AUC-ROC of 0.746, a precision of 0.671, and an F1-score of 0.693. The SVM achieved the highest recall value of 0.727. Based on the overall performance, the RF emerged as the best machine learning algorithm, whereas the DT was the least suitable. The DT reported the lowest run times for both hyperparameter optimization (3.93 s) and model evaluation (0.41 s). XGBoost and the SVM exhibited the highest run times for hyperparameter tuning (9438.54 s) and model evaluation (112 s), respectively. The findings of this study are valuable for enhancing the resilience and reliability of the electric grid.
(This article belongs to the Special Issue New Insights into Power System Resilience)
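
As a rough illustration of the kind of model comparison the abstract describes, the sketch below trains several non-parametric classifiers on a synthetic binary outage-risk dataset and reports the same metrics (accuracy, ROC-AUC, precision, recall, F1). The data, the 15-feature setup, and the model settings are placeholders rather than the paper's VRM, and scikit-learn's GradientBoostingClassifier stands in for XGBoost to keep the example self-contained.

```python
# Hypothetical sketch: compare non-parametric classifiers on a binary
# outage-risk label, reporting accuracy, ROC-AUC, precision, recall, F1.
# The data here are synthetic placeholders, not the paper's VRM dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             precision_score, recall_score, f1_score)

# 15 predictors, mimicking the size of the VRM feature set (values are random).
X, y = make_classification(n_samples=5000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=15),
    "GBM (XGBoost stand-in)": GradientBoostingClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    prob = model.predict_proba(X_test)[:, 1]
    print(f"{name:>24}  acc={accuracy_score(y_test, pred):.3f}  "
          f"auc={roc_auc_score(y_test, prob):.3f}  "
          f"prec={precision_score(y_test, pred):.3f}  "
          f"rec={recall_score(y_test, pred):.3f}  "
          f"f1={f1_score(y_test, pred):.3f}")
```

A fuller comparison along the lines of the paper would also time hyperparameter tuning and evaluation for each model, for example by wrapping each fit in a timer or a grid search.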

18 pages, 8457 KiB  
Article
A Statistical Framework for Evaluating the Effectiveness of Vegetation Management in Reducing Power Outages Caused during Storms in Distribution Networks
by William O. Taylor, Peter L. Watson, Diego Cerrai and Emmanouil Anagnostou
Sustainability 2022, 14(2), 904; https://doi.org/10.3390/su14020904 - 13 Jan 2022
Cited by 14 | Viewed by 4847
Abstract
This paper develops a statistical framework to analyze the effectiveness of vegetation management at reducing power outages during storms of varying severity levels. The framework was applied to the Eversource Energy distribution grid in Connecticut, USA, based on 173 rain and wind events from 2005–2020, including Hurricane Irene, Hurricane Sandy, and Tropical Storm Isaias. The data were binned by storm severity (high/low) and vegetation management levels, where a maximum applicable length of vegetation management for each circuit was determined, and the data were divided into four bins based on the actual length of vegetation management performed divided by the maximum applicable value (0–25%, 25–50%, 50–75%, and 75–100%). Then, weather and overhead line length normalized outage statistics were taken for each group. The statistics were used to determine the effectiveness of vegetation management and its dependence on storm severity. The results demonstrate a higher reduction in damages for lower-severity storms, with a reduction in normalized outages between 45.8% and 63.8%. For high-severity events, there is a large increase in effectiveness between the highest level of vegetation management and the two lower levels, with 75–100% vegetation management leading to a 37.3% reduction in trouble spots. Yet, when evaluating system reliability, it is important to look at all storms combined, and the results of this study provide useful information on total annual trouble spots and allow for analysis of how various vegetation management scenarios would impact trouble spots in the electric grid. This framework can also be used to better understand how more rigorous vegetation management standards (applying ETT) help reduce outages at an individual event level. In future work, a similar framework may be used to evaluate other resilience improvements.
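
A minimal pandas sketch of the binning step described above, under the assumption that the data can be arranged as one row per circuit and storm; the column names and numbers are invented for illustration, and the weather and line-length normalization is reduced here to a simple per-kilometre outage rate.

```python
# Hypothetical sketch of the binning and normalization step, assuming a table
# with one row per (circuit, storm) and illustrative column names.
import pandas as pd

df = pd.DataFrame({
    "vm_length_done_km": [2.0, 9.5, 4.0, 11.0],    # vegetation management completed
    "vm_length_max_km":  [10.0, 10.0, 12.0, 12.0],  # maximum applicable length
    "overhead_line_km":  [25.0, 25.0, 40.0, 40.0],
    "outages":           [8, 3, 12, 4],
    "severity":          ["low", "high", "low", "high"],
})

# Fraction of the applicable vegetation management actually performed (0-1).
frac = df["vm_length_done_km"] / df["vm_length_max_km"]
df["vm_bin"] = pd.cut(frac, bins=[0, 0.25, 0.5, 0.75, 1.0],
                      labels=["0-25%", "25-50%", "50-75%", "75-100%"],
                      include_lowest=True)

# Outages normalized by exposed overhead line length, averaged per bin.
df["outages_per_km"] = df["outages"] / df["overhead_line_km"]
summary = (df.groupby(["severity", "vm_bin"], observed=True)["outages_per_km"]
             .mean())
print(summary)
```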

16 pages, 2783 KiB  
Article
The Effect of Lead-Time Weather Forecast Uncertainty on Outage Prediction Modeling
by Feifei Yang, Diego Cerrai and Emmanouil N. Anagnostou
Forecasting 2021, 3(3), 501-516; https://doi.org/10.3390/forecast3030031 - 5 Jul 2021
Cited by 20 | Viewed by 5351
Abstract
Weather-related power outages affect millions of utility customers every year. Predicting storm outages with lead times of up to five days could help utilities to allocate crews and resources and devise cost-effective restoration plans that meet the strict time and efficiency requirements imposed by regulatory authorities. In this study, we construct a numerical experiment to evaluate how weather parameter uncertainty, based on weather forecasts with one to five days of lead time, propagates into outage prediction error. We apply a machine-learning-based outage prediction model to storm-caused outage events that occurred between 2016 and 2019 in the northeastern United States. The model predictions, fed by weather analysis and other environmental parameters including land cover, tree canopy, vegetation characteristics, and utility infrastructure variables, exhibited a mean absolute percentage error of 38%, a Nash–Sutcliffe efficiency of 0.54, and a normalized centered root mean square error of 68%. Our numerical experiment demonstrated that uncertainties in the precipitation and wind-gust variables play a significant role in the outage prediction uncertainty, while sustained wind and temperature parameters play a less important role. We showed that, while the overall weather forecast uncertainty increases gradually with lead time, the corresponding outage prediction uncertainty exhibited a lower dependence on lead times up to three days and a stepwise increase at the four- and five-day lead times.
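
The three verification scores quoted above are standard and straightforward to compute; a possible numpy implementation is sketched below. The normalized centered root mean square error is assumed here to be the RMSE of the mean-removed errors divided by the standard deviation of the observations, which may differ in detail from the paper's exact formulation.

```python
# Sketch of the three verification metrics named in the abstract; the exact
# normalizations used in the paper may differ from the assumptions made here.
import numpy as np

def mape(obs, pred):
    """Mean absolute percentage error, in percent."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs((pred - obs) / obs))

def nse(obs, pred):
    """Nash-Sutcliffe efficiency (1 is perfect; 0 is no better than the observed mean)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def ncrmse(obs, pred):
    """Centered RMSE normalized by the standard deviation of the observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = (pred - pred.mean()) - (obs - obs.mean())
    return np.sqrt(np.mean(err ** 2)) / obs.std()

obs = np.array([120.0, 45.0, 300.0, 80.0, 15.0])   # e.g. observed outages per event
pred = np.array([100.0, 60.0, 240.0, 95.0, 20.0])  # model predictions
print(mape(obs, pred), nse(obs, pred), ncrmse(obs, pred))
```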

26 pages, 15465 KiB  
Article
Assimilating X- and S-Band Radar Data for a Heavy Precipitation Event in Italy
by Valerio Capecchi, Andrea Antonini, Riccardo Benedetti, Luca Fibbi, Samantha Melani, Luca Rovai, Antonio Ricchi and Diego Cerrai
Water 2021, 13(13), 1727; https://doi.org/10.3390/w13131727 - 22 Jun 2021
Cited by 4 | Viewed by 4416
Abstract
During the night between 9 and 10 September 2017, multiple flash floods associated with a heavy-precipitation event affected the town of Livorno, located in Tuscany, Italy. Accumulated precipitation exceeding 200 mm in two hours was recorded. This rainfall intensity is associated with a return period of more than 200 years. As a consequence, all the largest streams of the Livorno municipality flooded several areas of the town. We used the limited-area Weather Research and Forecasting (WRF) model, in a convection-permitting setup, to reconstruct the extreme event leading to the flash floods. We evaluated possible forecasting improvements emerging from the assimilation of local ground station and X- and S-band radar data into WRF, using the configuration operational at the meteorological center of the Tuscany region (LaMMA) at the time of the event. Simulations were verified against weather station observations, through an innovative method aimed at disentangling the positioning and intensity errors of precipitation forecasts. A more accurate description of the low-level flows and a better assessment of the atmospheric water vapor field showed how the assimilation of radar data can improve quantitative precipitation forecasts.

12 pages, 1447 KiB  
Article
Dynamic Modeling of Power Outages Caused by Thunderstorms
by Berk A. Alpay, David Wanik, Peter Watson, Diego Cerrai, Guannan Liang and Emmanouil Anagnostou
Forecasting 2020, 2(2), 151-162; https://doi.org/10.3390/forecast2020008 - 22 May 2020
Cited by 32 | Viewed by 7444
Abstract
Thunderstorms are complex weather phenomena that cause substantial power outages in a short period. This makes thunderstorm outage prediction challenging for eventwise outage prediction models (OPMs), which summarize the dynamics over the entire course of the storm into a limited number of parameters. We developed a new, temporally sensitive outage prediction framework designed for models to learn the hourly dynamics of thunderstorm-caused outages directly from weather forecasts. Validation of several models built on this hour-by-hour prediction framework, and comparison with a baseline model, shows that they can accurately capture temporal and storm-wide outage characteristics, which are vital for planning utility responses to storm-caused power grid damage.
(This article belongs to the Section Power and Energy Forecasting)
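
To make the contrast between eventwise and hour-by-hour modeling concrete, the sketch below arranges a single (invented) storm both as an hourly table, in which a dynamic model sees one row per forecast hour, and as an eventwise summary, in which the same storm is reduced to a few parameters. Column names and values are hypothetical, not the paper's data.

```python
# Hypothetical contrast between the hourly framing and an eventwise summary:
# the hourly table keeps one row per forecast hour, while the eventwise row
# collapses the same storm into a few summary statistics.
import pandas as pd

hourly = pd.DataFrame({
    "event_id":  [1] * 6,
    "hour":      pd.date_range("2019-07-10 14:00", periods=6, freq="h"),
    "gust_ms":   [9.0, 14.0, 22.0, 18.0, 11.0, 7.0],
    "rain_mmhr": [1.0, 6.0, 18.0, 9.0, 2.0, 0.5],
    "outages":   [0, 2, 15, 9, 3, 1],   # hourly target for a dynamic model
})

# Eventwise view: one row per storm, dynamics reduced to summary parameters.
eventwise = hourly.groupby("event_id").agg(
    max_gust_ms=("gust_ms", "max"),
    total_rain_mm=("rain_mmhr", "sum"),
    total_outages=("outages", "sum"),
)

print(hourly)
print(eventwise)
```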

19 pages, 3831 KiB  
Article
Quantifying Uncertainty in Machine Learning-Based Power Outage Prediction Model Training: A Tool for Sustainable Storm Restoration
by Feifei Yang, David W. Wanik, Diego Cerrai, Md Abul Ehsan Bhuiyan and Emmanouil N. Anagnostou
Sustainability 2020, 12(4), 1525; https://doi.org/10.3390/su12041525 - 18 Feb 2020
Cited by 53 | Viewed by 7400
Abstract
A growing number of electricity utilities use machine learning-based outage prediction models (OPMs) to predict the impact of storms on their networks for sustainable management. The accuracy of OPM predictions is sensitive to sample size and event severity representativeness in the training dataset, the extent of which has not yet been quantified. This study devised a randomized, out-of-sample validation experiment to quantify an OPM’s prediction uncertainty for different training sample sizes and levels of event severity representativeness. The study showed random error decreasing by more than 100% for sample sizes ranging from 10 to 80 extratropical events, and by 32% for sample sizes from 10 to 40 thunderstorms. This study quantified the minimum sample size required for the OPM to attain an acceptable prediction performance. The results demonstrated that conditioning the training of the OPM on a subset of events representative of the predicted event’s severity reduced the underestimation bias exhibited in high-impact events and the overestimation bias in low-impact ones. We used cross entropy (CE) to quantify the relatedness of the weather variable distribution between the training dataset and the forecasted event.
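
The cross-entropy comparison mentioned in the last sentence can be sketched with histogram-based distribution estimates, as below; the binning, the smoothing constant, and the synthetic wind-gust samples are assumptions made for illustration, not the authors' exact procedure.

```python
# Sketch: histogram-based cross entropy H(p, q) between the distribution of a
# weather variable in the forecasted event (p) and in the training set (q).
# Bin edges and the smoothing constant are illustrative assumptions.
import numpy as np

def cross_entropy(event_values, training_values, bins=20, eps=1e-9):
    """H(p, q) = -sum_i p_i * log(q_i), estimated from shared histogram bins."""
    edges = np.histogram_bin_edges(
        np.concatenate([event_values, training_values]), bins=bins)
    p, _ = np.histogram(event_values, bins=edges)
    q, _ = np.histogram(training_values, bins=edges)
    p = p / p.sum()
    q = (q + eps) / (q.sum() + eps * len(q))  # smooth to avoid log(0)
    return -np.sum(p * np.log(q))

rng = np.random.default_rng(0)
training_gusts = rng.gamma(shape=2.0, scale=6.0, size=5000)  # synthetic wind gusts
event_gusts = rng.gamma(shape=3.0, scale=8.0, size=200)      # a more severe event
print(f"cross entropy = {cross_entropy(event_gusts, training_gusts):.3f}")
```

A lower value indicates that the training events cover the forecasted event's weather conditions well, which is the intuition behind conditioning the training set on event severity.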
