Search Results (7)

Search Parameters:
Keywords = Deep Belief Network-Multilayer Perceptron

36 pages, 10457 KB  
Article
Landslide Susceptibility Mapping Considering Time-Varying Factors Based on Different Models
by Zhanfeng Wang, Chao Yin and Jingjing Li
Coatings 2026, 16(2), 207; https://doi.org/10.3390/coatings16020207 - 5 Feb 2026
Abstract
The selection of hazard factors strongly affects the accuracy of landslide susceptibility mapping (LSM). The systematic development of an integrated input framework incorporating both static and time-varying factors, as well as comparative studies of different input frameworks, remains at a preliminary stage. Because the degree of fit between a data-driven method and a landslide-prone environment cannot be known in advance, the best modeling method can only be determined through comparative studies. Therefore, the Pearson correlation coefficient method and collinearity diagnostics were used to screen the hazard factors, and three hazard-factor combinations considering both static and time-varying factors were established. A total of 4498 landslide grids and 4498 non-landslide grids were identified, of which 70% (3149 landslide grids and 3149 non-landslide grids) served as training samples and the remaining 30% (1349 landslide grids and 1349 non-landslide grids) as verification samples. The three combinations were input to five models (Support Vector Machine, Random Forest, Convolutional Neural Network-Random Forest, Convolutional Neural Network-Support Vector Machine, and Deep Belief Network-Multilayer Perceptron). The results show that the LSM results vary considerably across combinations and models, and combination No. 3 with the Deep Belief Network-Multilayer Perceptron performs best. The study area is divided into extremely low, low, medium, high, and extremely high susceptibility areas, with the extremely high susceptibility areas mainly distributed in the northwest, south, and east. The other models overestimate the influence of distance from faults and underestimate that of distance from roads. The LSM results of combinations No. 1 and No. 2 show a strong tendency toward extremes and readily produce misestimated areas, overestimating the influence of elevation and underestimating that of distance from rivers. The LSM results of the Convolutional Neural Network-Support Vector Machine are closest to those of the benchmark, underestimating the influence of distance from roads and overestimating that of distance from faults. By selecting the best combination and model through comparative studies and revealing the degree of influence of each hazard factor on landslide susceptibility, this study greatly improves LSM accuracy and can provide a scientific basis for land use planning.
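For readers who want to reproduce the kind of comparison described in this abstract, a minimal scikit-learn sketch of the 70/30 stratified split and a two-model comparison is given below; this is not the authors' code, and the feature matrix X is a hypothetical stand-in for the screened hazard-factor grids.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score

    # Hypothetical stand-in for the 4498 landslide + 4498 non-landslide grids,
    # each described by the screened static and time-varying hazard factors.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(8996, 12))
    y = np.repeat([1, 0], 4498)

    # 70% training / 30% verification, stratified to keep the classes balanced.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    for name, model in [("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                        ("SVM", SVC(probability=True, random_state=0))]:
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: test AUC = {auc:.3f}")

The deep hybrids used in the paper would slot in place of these two baseline estimators; the split and evaluation logic stays the same.
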
21 pages, 974 KB  
Review
Artificial Intelligence in Audiology: A Scoping Review of Current Applications and Future Directions
by Andrea Frosolini, Leonardo Franz, Valeria Caragli, Elisabetta Genovese, Cosimo de Filippis and Gino Marioni
Sensors 2024, 24(22), 7126; https://doi.org/10.3390/s24227126 - 6 Nov 2024
Cited by 25 | Viewed by 11384
Abstract
The integration of artificial intelligence (AI) into medical disciplines is rapidly transforming healthcare delivery, and audiology is no exception. By synthesizing the existing literature, this review seeks to inform clinicians, researchers, and policymakers about the potential and challenges of integrating AI into audiological practice. The PubMed, Cochrane, and Google Scholar databases were searched for articles published in English from 1990 to 2024 with the following query: “(audiology) AND (“artificial intelligence” OR “machine learning” OR “deep learning”)”. The PRISMA extension for scoping reviews (PRISMA-ScR) was followed. The database search yielded 1359 results, and the selection process led to the inclusion of 104 manuscripts. The integration of AI in audiology has evolved significantly over the intervening decades, with 87.5% of manuscripts published in the last 4 years. Most types of AI were consistently used for specific purposes: logistic regression and other statistical machine learning tools (e.g., support vector machine, multilayer perceptron, random forest, deep belief network, decision tree, k-nearest neighbor, or LASSO) for automated audiometry and clinical predictions; convolutional neural networks for radiological image analysis; and large language models for automatic generation of diagnostic reports. Despite the advances in AI technologies, ethical and professional challenges remain, underscoring the need for larger, more diverse data collection and bioethics studies in the field of audiology.
(This article belongs to the Section Biomedical Sensors)
28 pages, 15253 KB  
Article
Response of Sustainable Solar Photovoltaic Power Output to Summer Heatwave Events in Northern China
by Zifan Huang, Zexia Duan, Yichi Zhang and Tianbo Ji
Sustainability 2024, 16(12), 5254; https://doi.org/10.3390/su16125254 - 20 Jun 2024
Cited by 2 | Viewed by 2967
Abstract
Understanding the resilience of photovoltaic (PV) systems to extreme weather, such as heatwaves, is crucial for advancing sustainable energy solutions. Although previous studies have often focused on forecasting PV power output or assessing the impact of geographical variations, the dynamic response of PV power output to extreme climate events remains highly uncertain. Utilizing PV power data and meteorological parameters recorded at 15 min intervals from 1 July 2018 to 13 June 2019 in Hebei Province, this study investigates the spatiotemporal characteristics of the PV power output and its response to heatwaves. Solar radiation and air temperature are pivotal in enhancing PV power output by approximately 30% during heatwave episodes, highlighting the significant contribution of PV systems to energy supplies under extreme climate conditions. Furthermore, this study systematically evaluates the performance of Random Forest (RF), Decision Tree Regression (DTR), Support Vector Machine (SVM), Light Gradient Boosting Machine (LightGBM), Deep Belief Network (DBN), and Multilayer Perceptron (MLP) models under both summer heatwave and non-heatwave conditions. The findings indicate that the RF and LightGBM models exhibit higher predictive accuracy and relative stability under heatwave conditions, with an R2 exceeding 0.98 and an RMSE and MAE below 0.47 MW and 0.24 MW, respectively. This work not only reveals the potential of machine learning to enhance our understanding of the climate–energy interplay but also contributes valuable insights for the formulation of adaptive strategies, which are critical for advancing sustainable energy solutions in the face of climate change.
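A minimal sketch of the heatwave versus non-heatwave evaluation reported above, assuming a hypothetical table of 15 min records with irradiance, air temperature, PV output, and a heatwave flag (not the authors' data or models; a single random forest stands in for the full model set):

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

    # Hypothetical 15 min records: irradiance, air temperature, PV output, heatwave flag.
    rng = np.random.default_rng(1)
    n = 5000
    df = pd.DataFrame({"irradiance": rng.uniform(0, 1000, n),
                       "air_temp": rng.uniform(10, 40, n)})
    df["pv_mw"] = 0.01 * df["irradiance"] - 0.02 * (df["air_temp"] - 25) + rng.normal(0, 0.3, n)
    df["heatwave"] = df["air_temp"] > 35

    features = ["irradiance", "air_temp"]
    train, test = train_test_split(df, test_size=0.3, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(train[features], train["pv_mw"])

    # Report R2, RMSE, and MAE separately for heatwave and non-heatwave test records.
    for flag, sub in test.groupby("heatwave"):
        pred = model.predict(sub[features])
        rmse = np.sqrt(mean_squared_error(sub["pv_mw"], pred))
        print(f"heatwave={flag}: R2={r2_score(sub['pv_mw'], pred):.3f}, "
              f"RMSE={rmse:.3f} MW, MAE={mean_absolute_error(sub['pv_mw'], pred):.3f} MW")
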
19 pages, 5020 KB  
Article
Solar Irradiance Forecasting Using Dynamic Ensemble Selection
by Domingos S. de O. Santos, Paulo S. G. de Mattos Neto, João F. L. de Oliveira, Hugo Valadares Siqueira, Tathiana Mikamura Barchi, Aranildo R. Lima, Francisco Madeiro, Douglas A. P. Dantas, Attilio Converti, Alex C. Pereira, José Bione de Melo Filho and Manoel H. N. Marinho
Appl. Sci. 2022, 12(7), 3510; https://doi.org/10.3390/app12073510 - 30 Mar 2022
Cited by 34 | Viewed by 5489
Abstract
Solar irradiance forecasting has been an essential topic in renewable energy generation. Forecasting is an important task because it can improve the planning and operation of photovoltaic systems, resulting in economic advantages. Traditionally, single models are employed in this task. However, selecting an inappropriate model, misspecification, or the presence of random fluctuations in the solar irradiance series can cause this approach to underperform. This paper proposes a heterogeneous ensemble dynamic selection model, named HetDS, to forecast solar irradiance. For each unseen test pattern, HetDS chooses the most suitable forecasting model from a pool of seven well-known methods from the literature: ARIMA, support vector regression (SVR), multilayer perceptron neural network (MLP), extreme learning machine (ELM), deep belief network (DBN), random forest (RF), and gradient boosting (GB). The experimental evaluation was performed with four data sets of hourly solar irradiance measurements in Brazil. The proposed model attained an overall accuracy superior to that of the single models in terms of five well-known error metrics.
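The per-pattern selection idea can be sketched in a few lines (a simplified illustration, not the HetDS implementation: the pool is cut down to three scikit-learn regressors, the series is synthetic, and competence is judged on the k nearest validation patterns):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.svm import SVR
    from sklearn.neighbors import NearestNeighbors

    # Synthetic hourly irradiance-like series; each pattern is the 5 previous values.
    rng = np.random.default_rng(2)
    series = np.clip(np.sin(np.arange(2000) * 0.26) + rng.normal(0, 0.1, 2000), 0, None)
    X = np.lib.stride_tricks.sliding_window_view(series[:-1], 5)
    y = series[5:]
    X_tr, y_tr = X[:1200], y[:1200]              # fit the pool
    X_val, y_val = X[1200:1600], y[1200:1600]    # region-of-competence data
    X_te, y_te = X[1600:], y[1600:]

    pool = [RandomForestRegressor(n_estimators=50, random_state=0),
            GradientBoostingRegressor(random_state=0), SVR()]
    for m in pool:
        m.fit(X_tr, y_tr)
    val_err = np.abs(np.column_stack([m.predict(X_val) for m in pool]) - y_val[:, None])

    # For each test pattern, pick the model with the lowest error on its 7 nearest
    # validation patterns, then forecast with that model only.
    _, idx = NearestNeighbors(n_neighbors=7).fit(X_val).kneighbors(X_te)
    best = val_err[idx].mean(axis=1).argmin(axis=1)
    y_hat = np.array([pool[b].predict(x.reshape(1, -1))[0] for b, x in zip(best, X_te)])
    print("dynamic-selection MAE:", np.abs(y_hat - y_te).mean())

HetDS draws from a larger, heterogeneous pool (ARIMA, SVR, MLP, ELM, DBN, RF, GB) and applies its own competence criteria, but the selection mechanics follow this pattern.
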
16 pages, 1269 KB  
Article
A Deep Neural Network-Based Pain Classifier Using a Photoplethysmography Signal
by Hyunjun Lim, Byeongnam Kim, Gyu-Jeong Noh and Sun K. Yoo
Sensors 2019, 19(2), 384; https://doi.org/10.3390/s19020384 - 18 Jan 2019
Cited by 41 | Viewed by 4977
Abstract
Side effects occur when doses of analgesics are administered that are excessive or insufficient relative to the amount required to mediate the pain induced during surgery. It is therefore important to accurately assess the patient's pain level during surgery. We proposed a pain classifier based on a deep belief network (DBN) using photoplethysmography (PPG). Our DBN learned a complex nonlinear relationship between extracted PPG features and pain status based on the numeric rating scale (NRS). A bagging ensemble model was used to improve classification performance. The DBN classifier showed better classification results than multilayer perceptron neural network (MLPNN) and support vector machine (SVM) models. In addition, classification performance was improved when the selective bagging model was applied compared with the use of each single-model classifier. A pain classifier based on a DBN using a selective bagging model can be helpful in developing a pain classification system.
(This article belongs to the Section Biosensors)
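A rough scikit-learn analogue of the selective-bagging comparison is sketched below; scikit-learn ships no deep belief network, so an MLP stands in for the DBN, and the PPG feature matrix and NRS-based labels are simulated placeholders.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import BaggingClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Simulated PPG-derived features (e.g., amplitude, interval, area) and binary pain labels.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(600, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1.0, 600) > 0).astype(int)

    single = make_pipeline(StandardScaler(),
                           MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
    bagged = BaggingClassifier(single, n_estimators=10, random_state=0)
    svm = make_pipeline(StandardScaler(), SVC(random_state=0))

    # Compare a single network, its bagged ensemble, and an SVM by cross-validated accuracy.
    for name, clf in [("single MLP", single), ("bagged MLP", bagged), ("SVM", svm)]:
        print(f"{name}: accuracy = {cross_val_score(clf, X, y, cv=5).mean():.3f}")
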
12 pages, 840 KB  
Article
Prediction of Acute Kidney Injury after Liver Transplantation: Machine Learning Approaches vs. Logistic Regression Model
by Hyung-Chul Lee, Soo Bin Yoon, Seong-Mi Yang, Won Ho Kim, Ho-Geol Ryu, Chul-Woo Jung, Kyung-Suk Suh and Kook Hyun Lee
J. Clin. Med. 2018, 7(11), 428; https://doi.org/10.3390/jcm7110428 - 8 Nov 2018
Cited by 164 | Viewed by 11657
Abstract
Acute kidney injury (AKI) after liver transplantation has been reported to be associated with increased mortality. Recently, machine learning approaches were reported to have better predictive ability than classic statistical analysis. We compared the performance of machine learning approaches with that of logistic regression analysis for predicting AKI after liver transplantation. We reviewed 1211 patients, and preoperative and intraoperative anesthesia- and surgery-related variables were obtained. The primary outcome was postoperative AKI defined by Acute Kidney Injury Network criteria. The following machine learning techniques were used: decision tree, random forest, gradient boosting machine, support vector machine, naïve Bayes, multilayer perceptron, and deep belief networks. These techniques were compared with logistic regression analysis regarding the area under the receiver-operating characteristic curve (AUROC). AKI developed in 365 patients (30.1%). The gradient boosting machine showed the best AUROC among all analyses for predicting AKI of all stages (0.90, 95% confidence interval [CI] 0.86–0.93) or stage 2 or 3 AKI. The AUROC of logistic regression analysis was 0.61 (95% CI 0.56–0.66). Decision tree and random forest techniques showed moderate performance (AUROC 0.86 and 0.85, respectively). The AUROCs of the support vector machine, naïve Bayes, neural network, and deep belief network were smaller than those of the other models. In our comparison of seven machine learning approaches with logistic regression analysis, the gradient boosting machine showed the best performance with the highest AUROC. An internet-based risk estimator was developed based on our gradient boosting model. However, prospective studies are required to validate our results.
(This article belongs to the Special Issue The Future of Artificial Intelligence in Clinical Medicine)
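The AUROC comparison described above can be illustrated with a brief scikit-learn sketch (not the authors' code; the perioperative variables and AKI labels are simulated, and only two of the compared models are shown):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Simulated stand-in for 1211 patients: preoperative/intraoperative variables and
    # a binary postoperative-AKI label with roughly 30% positives.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(1211, 20))
    y = (rng.random(1211) < 1 / (1 + np.exp(-(X[:, :3].sum(axis=1) - 0.9)))).astype(int)

    models = {"gradient boosting": GradientBoostingClassifier(random_state=0),
              "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))}
    for name, clf in models.items():
        auroc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: cross-validated AUROC = {auroc:.3f}")
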
20 pages, 1542 KB  
Article
EMD-Based Predictive Deep Belief Network for Time Series Prediction: An Application to Drought Forecasting
by Norbert A. Agana and Abdollah Homaifar
Hydrology 2018, 5(1), 18; https://doi.org/10.3390/hydrology5010018 - 27 Feb 2018
Cited by 64 | Viewed by 7278
Abstract
Drought is a stochastic natural phenomenon that arises from an intense and persistent shortage of precipitation. Its impact is mostly manifested as agricultural and hydrological droughts following an initial meteorological event. Drought prediction is essential because it can aid in preparedness for, and management of, its impacts. This study considers the drought forecasting problem by developing a hybrid predictive model using denoised empirical mode decomposition (EMD) and a deep belief network (DBN). The proposed method first decomposes the data into several intrinsic mode functions (IMFs) using EMD, and a reconstruction of the original data is obtained by considering only the relevant IMFs. Detrended fluctuation analysis (DFA) was applied to each IMF to determine the threshold for robust denoising performance; based on their scaling exponents, irrelevant IMFs are identified and suppressed. The proposed method was applied to predict drought indices at different time scales across the Colorado River basin using a standardized streamflow index (SSI) as the drought index. The results obtained using the proposed method were compared with those of standard methods such as the multilayer perceptron (MLP) and support vector regression (SVR). The proposed hybrid model showed improved prediction accuracy, especially for multi-step-ahead predictions.
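A compressed sketch of the decompose-then-predict workflow is given below; the assumptions are that the PyEMD package (installed as EMD-signal) supplies the EMD step, an MLPRegressor stands in for the DBN, the IMF selection is a simple placeholder rather than the DFA thresholding used in the paper, and the SSI series is synthetic.

    import numpy as np
    from PyEMD import EMD                      # pip install EMD-signal
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in for a standardized streamflow index (SSI) series.
    rng = np.random.default_rng(5)
    t = np.arange(600)
    ssi = np.sin(2 * np.pi * t / 52) + 0.3 * rng.normal(size=600)

    # 1) Decompose into intrinsic mode functions (IMFs).
    imfs = EMD()(ssi)

    # 2) Placeholder denoising: drop the highest-frequency IMF and rebuild the series.
    #    (The paper instead selects IMFs by their DFA scaling exponents.)
    denoised = imfs[1:].sum(axis=0)

    # 3) One-step-ahead prediction from the previous 12 denoised values.
    window = 12
    X = np.lib.stride_tricks.sliding_window_view(denoised[:-1], window)
    y = denoised[window:]
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=3000, random_state=0)
    model.fit(X[:split], y[:split])
    print("test MAE:", np.abs(model.predict(X[split:]) - y[split:]).mean())

Multi-step-ahead forecasts, as evaluated in the paper, would either iterate this one-step predictor or train direct multi-horizon targets.
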