Search Results (55)

Search Parameters:
Keywords = machine committee

26 pages, 7423 KB  
Article
Exploring the Determinants of Rural Housing Vacancy in Mountainous Regions: Evidence from Jinshan Town, Fujian Province, China
by Wenkui Wang, Xue Ji, Chanjuan Xu, Haiping Zhou and Tao Luo
Land 2025, 14(11), 2187; https://doi.org/10.3390/land14112187 - 3 Nov 2025
Viewed by 401
Abstract
The rational management of vacant rural housing is critical for the optimization of territorial spatial patterns. Although the issue of rural housing vacancy (RHV) has attracted widespread attention, systematic investigations in mountainous regions remain limited. This study is based on census data covering 3039 rural houses across six villages in Jinshan Town, Nanjing County, Zhangzhou City, Fujian Province, China. Using binary logistic regression and the XGBoost machine learning model, it systematically identifies the dominant determinants of rural housing vacancy in mountainous areas and evaluates their relative importance. The results show that the relative importance of the influencing factors is ranked as follows: locational conditions, physical housing characteristics, and topographic features. Specifically, among locational factors, the distances to the national road, county government, township government, and village committee centers are the most critical determinants of housing vacancy. In terms of physical attributes, the number of stories, the structural type, the floor area per story, and the orientation of the house are key variables. Regarding topographic factors, slope and aspect have limited overall influence. The two models yielded consistent directions and magnitudes of the key predictors, confirming the robustness and reliability of the results. The findings of this study help address the existing gaps in research regions, influencing factors, and methodological approaches, thereby contributing to the promotion of sustainable rural development. Full article
(This article belongs to the Section Land Socio-Economic and Political Issues)
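The binary logistic regression step described above can be sketched in miniature. The single feature (normalized distance to the village committee center), the data values, and the gradient-descent fit below are illustrative assumptions, not the study's data or code.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=5000):
    """Plain batch-gradient-descent fit of a binary logistic regression."""
    d, n = len(X[0]), len(X)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Hypothetical single feature: normalized distance to the village committee center.
X = [[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]]
y = [0, 0, 0, 1, 1, 1]  # 1 = vacant
w, b = fit_logistic(X, y)
```

A positive fitted weight would indicate that vacancy probability rises with distance, which is the kind of directional reading the study compares against the XGBoost feature importances.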

28 pages, 6469 KB  
Article
Outlier Detection in Hydrological Data Using Machine Learning: A Case Study in Lao PDR
by Chung-Soo Kim, Cho-Rong Kim and Kah-Hoong Kok
Water 2025, 17(21), 3120; https://doi.org/10.3390/w17213120 - 30 Oct 2025
Viewed by 431
Abstract
Ensuring the quality of hydrological data is critical for effective flood forecasting, water resource management, and disaster risk reduction, especially in regions vulnerable to typhoons and extreme weather. This study presents a framework for quality control and outlier detection in rainfall and water level time series data using both supervised and unsupervised machine learning algorithms. The proposed approach is capable of detecting outliers arising from sensor malfunctions, missing values, and extreme measurements that may otherwise compromise the reliability of hydrological datasets. Supervised learning using XGBoost was trained on labeled historical data to detect known outlier patterns, while the unsupervised Isolation Forest algorithm was employed to identify unknown or rare outliers without the need for prior labels. This established framework was evaluated using hydrological datasets collected from Lao PDR, one of the member countries of the Typhoon Committee. The results demonstrate that the adopted machine learning algorithms effectively detected real-world outliers, thereby enhancing real-time monitoring and supporting data-driven decision-making. The Isolation Forest model yielded 1.21 and 12 times more false positives and false negatives, respectively, than the XGBoost model, demonstrating that XGBoost achieved superior outlier detection performance when labeled data were available. The proposed framework is designed to assist member countries in shifting from manual, human-dependent processes to AI-enabled, data-driven hydrological data management. Full article
(This article belongs to the Section Hydrology)
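The unsupervised side of the framework, Isolation Forest, isolates outliers with short random-split paths. A toy one-dimensional version is sketched below; the water-level values, the spike, and the parameter choices are illustrative assumptions, and a real application would use a library implementation with subsampling.

```python
import math
import random

EULER_GAMMA = 0.5772156649

def c(n):
    """Average path length of an unsuccessful search in a BST of n nodes."""
    if n <= 1:
        return 0.0
    return 2.0 * (math.log(n - 1) + EULER_GAMMA) - 2.0 * (n - 1) / n

def path_length(x, data, limit, depth=0):
    """Depth at which x is isolated by recursive random splits."""
    if depth >= limit or len(data) <= 1 or min(data) == max(data):
        return depth + c(len(data))
    split = random.uniform(min(data), max(data))
    side = [v for v in data if v < split] if x < split else [v for v in data if v >= split]
    return path_length(x, side, limit, depth + 1)

def anomaly_score(x, data, n_trees=200):
    """Score near 1.0 marks an outlier; around 0.5 marks a normal point."""
    limit = math.ceil(math.log2(len(data)))
    avg = sum(path_length(x, data, limit) for _ in range(n_trees)) / n_trees
    return 2.0 ** (-avg / c(len(data)))

random.seed(0)
levels = [4.8 + 0.1 * i for i in range(40)]   # hypothetical water levels (m)
levels.append(25.0)                            # simulated sensor spike
spike_score = anomaly_score(25.0, levels)
normal_score = anomaly_score(5.0, levels)
```

Because the spike sits far from the mass of readings, random splits cut it off within a step or two, so its score clearly exceeds that of an ordinary reading.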

19 pages, 4647 KB  
Article
Using Machine Learning to Create Prognostic Systems for Primary Prostate Cancer
by Kevin Guan, Andy Guan, Anwar E. Ahmed, Andrew J. Waters, Shyh-Han Tan and Dechang Chen
Diagnostics 2025, 15(19), 2462; https://doi.org/10.3390/diagnostics15192462 - 26 Sep 2025
Viewed by 603
Abstract
Background: Cancer staging, guided by anatomical and clinicopathologic factors, is essential for determining treatment strategies and patient prognosis. The current gold standard for prostate cancer is the American Joint Committee on Cancer (AJCC) Tumor, Lymph Node, and Metastasis (TNM) Staging System 9th Version (2024). This system incorporates five prognostic variables: tumor (T), spread to lymph nodes (N), metastasis (M), prostate-specific antigen (PSA) levels (P), and Grade Group/Gleason score (G). While effective, further refinement of prognostic systems may improve prediction of patient outcomes and support more individualized treatment. Methods: We applied the Ensemble Algorithm for Clustering Cancer Data (EACCD), an unsupervised machine learning approach. EACCD involves three steps: calculating initial dissimilarities, performing ensemble learning, and conducting hierarchical clustering. We first developed an EACCD model using the five AJCC variables (T, N, M, P, G). The model was then expanded to include two additional factors, age (A) and race (R). Prostate cancer patient data were obtained from the Surveillance, Epidemiology, and End Results (SEER) program from the National Cancer Institute. Results: The EACCD algorithm effectively stratified patients into distinct prognostic groups, each with well-separated survival curves. The five-variable model achieved a concordance index (C-index) of 0.8293 (95% CI: 0.8245–0.8341), while the seven-variable model, including age and race, improved performance to 0.8504 (95% CI: 0.8461–0.8547). Both outperformed the AJCC TNM system, which had a C-index of 0.7676 (95% CI: 0.7622–0.7731). Conclusions: EACCD provides a refined prognostic framework for primary localized prostate cancer, demonstrating superior accuracy over the AJCC staging system. With further validation in independent cohorts, EACCD could enhance risk stratification and support precision oncology. Full article
(This article belongs to the Special Issue AI and Big Data in Medical Diagnostics)
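The concordance index reported above is the fraction of comparable patient pairs whose predicted risks are ordered consistently with their survival times. A minimal sketch, with made-up follow-up times and risk scores rather than SEER data:

```python
def concordance_index(times, events, risks):
    """C-index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter follow-up
    time actually experienced the event; it is concordant when that subject
    also has the higher predicted risk.
    """
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Hypothetical cohort: months of follow-up, event indicator, model risk score.
cindex = concordance_index(
    times=[12, 20, 34, 50, 62],
    events=[1, 1, 0, 1, 0],
    risks=[0.9, 0.7, 0.5, 0.1, 0.2],
)
```

Here 7 of the 8 comparable pairs are concordant, giving 0.875; values like the paper's 0.85 mean the model orders risks correctly for about 85% of comparable pairs.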

22 pages, 4206 KB  
Article
Piezoelectric Hysteresis Modeling Under a Variable Frequency Based on a Committee Machine Approach
by Francesco Aggogeri and Nicola Pellegrini
Sensors 2025, 25(17), 5371; https://doi.org/10.3390/s25175371 - 31 Aug 2025
Viewed by 553
Abstract
Piezoelectric actuators, widely used in micro-positioning and active control systems, show important hysteresis characteristics. In particular, the hysteresis contribution is a complex phenomenon that is difficult to model when the input amplitude and frequency are time-dependent. Existing dynamic physical models poorly describe the hysteresis influence of industrial mechatronic devices. This paper proposes a novel hybrid data-driven model based on the Bouc–Wen and backlash hysteresis formulations to appraise and compensate for the nonlinear effects. The performance of the piezoelectric actuator was first simulated and tested across a complete representative domain, and then modeled using the committee machine approach. Experimental campaigns were conducted to develop an algorithm that incorporated Bouc–Wen and backlash hysteresis parameters identified via genetic algorithm (GA) and particle swarm optimization (PSO) approaches. These parameters were combined in a committee machine using a set of frequency clusters. The results demonstrated an error reduction of 23.54% for the committee machine approach compared with the complete approach. The root mean square error (RMSE) was 0.42 µm, and the maximum absolute error (MAE) was close to 0.86 µm in the 150–250 Hz domain via the Bouc–Wen sub-model tuned with the genetic algorithm (GA). Full article
(This article belongs to the Section Sensors and Robotics)
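The Bouc–Wen sub-model at the core of this committee can be illustrated with a forward Euler integration of the hysteresis state; the paper's committee machine then blends such sub-models per frequency cluster. The parameter values, drive frequency, and sampling rate below are illustrative assumptions, not the identified values from the paper.

```python
import math

def bouc_wen_response(u, A=1.0, beta=0.5, gamma=0.3, n=1):
    """Euler-integrate the Bouc-Wen hysteresis state z driven by input u(t):

        dz = A*du - beta*|du|*|z|^(n-1)*z - gamma*du*|z|^n
    """
    z, zs = 0.0, []
    for k in range(1, len(u)):
        du = u[k] - u[k - 1]
        dz = A * du - beta * abs(du) * abs(z) ** (n - 1) * z - gamma * du * abs(z) ** n
        z += dz
        zs.append(z)
    return zs

# One period of a 200 Hz sinusoidal drive, sampled at 100 kHz (illustrative).
dt, f = 1e-5, 200.0
u = [math.sin(2 * math.pi * f * k * dt) for k in range(int(1.0 / (f * dt)) + 1)]
zs = bouc_wen_response(u)
```

For these parameters the state stays bounded (roughly by A/(beta + gamma) under monotone loading) and lags the input, which is the loop-shaped hysteresis the model is meant to capture.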

22 pages, 594 KB  
Article
Information-Theoretic Cost–Benefit Analysis of Hybrid Decision Workflows in Finance
by Philip Beaucamp, Harvey Maylor and Min Chen
Entropy 2025, 27(8), 780; https://doi.org/10.3390/e27080780 - 23 Jul 2025
Viewed by 894
Abstract
Analyzing and leveraging data effectively has been an advantageous strategy in the management workflows of many contemporary organizations. In business and finance, data-informed decision workflows are nowadays essential for enabling development and growth. However, there is not yet a theoretical or quantitative approach for analyzing the cost–benefit of the processes in such workflows, e.g., for determining the trade-offs between machine- and human-centric processes and quantifying biases. The aim of this work is to translate an information-theoretic concept and measure for cost–benefit analysis into a methodology that is relevant to the analysis of hybrid decision workflows in business and finance. We propose to combine an information-theoretic approach (i.e., information-theoretic cost–benefit analysis) and an engineering approach (e.g., workflow decomposition), which enables us to utilize information-theoretic measures to estimate the cost–benefit of individual processes quantitatively. We provide three case studies to demonstrate the feasibility of the proposed methodology, including (i) the use of a statistical and computational algorithm, (ii) incomplete information and humans’ soft knowledge, and (iii) cognitive biases in a committee meeting. While this is an early application of information-theoretic cost–benefit analysis to business and financial workflows, it is a significant step towards the development of a systematic, quantitative, and computer-assisted approach for optimizing data-informed decision workflows. Full article
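The measure underlying this kind of analysis can be illustrated with Shannon entropy: a workflow process that maps a large input alphabet onto a small decision alphabet compresses entropy, at the price of possible distortion and a processing cost. The distributions, the distortion figure, and the cost below are invented stand-ins, not the paper's case-study values.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def cost_benefit_ratio(input_probs, output_probs, distortion_bits, cost):
    """(alphabet compression - potential distortion) / cost, entropies in bits."""
    compression = shannon_entropy(input_probs) - shannon_entropy(output_probs)
    return (compression - distortion_bits) / cost

# A triage step that maps 8 equally likely loan profiles onto approve/reject.
ratio = cost_benefit_ratio(
    input_probs=[1 / 8] * 8,      # 3 bits of input uncertainty
    output_probs=[0.5, 0.5],      # 1 bit left after the decision
    distortion_bits=0.5,          # assumed reconstruction error
    cost=1.0,                     # assumed process cost (e.g., analyst minutes)
)
```

Comparing such ratios across machine- and human-centric processes is the kind of quantitative trade-off the methodology targets.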

23 pages, 676 KB  
Article
The Role of Standards in Teaching How to Design Machine Elements
by Lorena Deleanu, Constantin Georgescu, George Ghiocel Ojoc, Cristina Popa and Alexandru Viorel Vasiliu
Standards 2025, 5(3), 18; https://doi.org/10.3390/standards5030018 - 16 Jul 2025
Viewed by 860
Abstract
This paper introduces arguments in favor of the intensive use of standards in both teaching the Machine Elements discipline and solving the first projects of mechanical design (gearboxes, jacks, pumps, tanks, etc.). The paper presents a SWOTT approach to the use of new in-force standards in teaching the design of machine elements. The use of information from standards in courses and design handbooks is regulated by various standardization associations at different levels: international, such as the ISO (International Organization for Standardization), IEC (International Electrotechnical Commission), and ITU (International Telecommunication Union); regional, such as the CEN (European Committee for Standardization), CENELEC (European Committee for Electrotechnical Standardization), and ETSI (European Telecommunications Standards Institute); and national, such as the ASRO (Association of Standardization of Romania). In general, the conditions for using partial information from standards vary, but the authors present common lines and recommendations for introducing information from standards into books and design handbooks for engineering students. The use of information from standards for terms, materials, calculation models, test methods, etc., is beneficial for students. It will provide them with a good professional education, preparing them to adapt to a specific job in the field of mechanical engineering, where conformity to norms and standards is required by the dynamics of production, product quality and, not least, the safety of machines and operators. Full article

14 pages, 1728 KB  
Article
Auto Machine Learning and Convolutional Neural Network in Diabetes Mellitus Research—The Role of Histopathological Images in Designing and Exploring Experimental Models
by Iulian Tătaru, Simona Moldovanu, Oana-Maria Dragostin, Carmen Lidia Chiţescu, Alexandra-Simona Zamfir, Ionut Dragostin, Liliana Strat and Carmen Lăcrămioara Zamfir
Biomedicines 2025, 13(6), 1494; https://doi.org/10.3390/biomedicines13061494 - 18 Jun 2025
Viewed by 713
Abstract
Histopathological images represent a valuable data source for pathologists, who can provide clinicians with essential landmarks for complex pathologies. The development of sophisticated computational models for histopathological images has received significant attention in recent years, but most of them rely on free datasets. Materials and Methods: Motivated by this drawback, the authors created an original histopathological image dataset from an animal experimental model, acquiring images from normal female rats, rats with experimentally induced diabetes mellitus (DM), and rats that received antidiabetic therapy with a synthetic compound (AD_SC). Images were acquired from vaginal, uterine, and ovarian samples from both DM and AD_SC specimens. The experiment received the approval of the Medical Ethics Committee of the “Gr. T. Popa” University of Medicine and Pharmacy, Iași, Romania (Approval No. 169/22.03.2022). The novelty of the study consists of the following aspects. The first is the use of a diabetes-induced animal model to evaluate the impact of antidiabetic therapy with a synthetic compound in female rats, focusing on three distinct organs of the reproductive system (vagina, ovary, and uterus) to provide a more comprehensive understanding of how diabetes affects female reproductive health as a whole. The second comprises image classification with a custom-built convolutional neural network (CB-CNN), the extraction of textural features (contrast, entropy, energy, and homogeneity), and their classification with PyCaret Auto Machine Learning (AutoML). Results: Experimental findings indicate that uterine tissue can be diagnosed with an accuracy of 94.5% for DM and 85.8% for AD_SC. The Linear Discriminant Analysis (LDA) classifier achieved a high accuracy of 86.3% when supplied with features extracted from vaginal tissue. Conclusions: Our research underscores the efficacy of classification with two AI approaches, a CNN and classical machine learning. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Cancer and Other Diseases)
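The four textural features named above (contrast, entropy, energy, and homogeneity) are conventionally computed from a gray-level co-occurrence matrix (GLCM). A minimal sketch follows; the 4-level patch is an invented stand-in for a histopathological tile, and production code would use an image-processing library.

```python
import math

def glcm_features(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy),
    plus the four texture features used in the study."""
    P = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    total = 0
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                P[img[i][j]][img[i2][j2]] += 1
                total += 1
    P = [[v / total for v in row] for row in P]
    return {
        "contrast": sum(P[a][b] * (a - b) ** 2 for a in range(levels) for b in range(levels)),
        "energy": sum(v * v for row in P for v in row),
        "homogeneity": sum(P[a][b] / (1 + abs(a - b)) for a in range(levels) for b in range(levels)),
        "entropy": -sum(v * math.log2(v) for row in P for v in row if v > 0),
    }

# Hypothetical 4-level patch standing in for a histopathological tile.
patch = [[0, 1, 0, 1],
         [2, 3, 2, 3],
         [0, 1, 0, 1],
         [2, 3, 2, 3]]
feats = glcm_features(patch, levels=4)
```

Feature vectors of this form are what the study feeds to the AutoML classifiers alongside the CNN pipeline.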

20 pages, 6637 KB  
Article
Kolmogorov–Arnold Networks for Reduced-Order Modeling in Unsteady Aerodynamics and Aeroelasticity
by Yuchen Zhang, Han Tang, Lianyi Wei, Guannan Zheng and Guowei Yang
Appl. Sci. 2025, 15(11), 5820; https://doi.org/10.3390/app15115820 - 22 May 2025
Cited by 1 | Viewed by 974
Abstract
Kolmogorov–Arnold Networks (KANs) are a recent development in machine learning, offering strong functional representation capabilities, enhanced interpretability, and reduced parameter complexity. Leveraging these advantages, this paper proposes a KAN-based reduced-order model (ROM) for unsteady aerodynamics and aeroelasticity. To effectively capture temporal dependencies inherent in nonlinear unsteady flow phenomena, an architecture termed Kolmogorov–Arnold Gated Recurrent Network (KAGRN) is introduced. By incorporating a recurrent structure and a gating mechanism, the proposed model effectively captures time-delay effects and enables the selective control and preservation of long-term temporal dependencies. This architecture provides high predictive accuracy, good generalization capability, and fast prediction speed. The performance of the model is evaluated using simulations of the NACA (National Advisory Committee for Aeronautics) 64A010 airfoil undergoing harmonic motion and limit cycle oscillations in transonic flow conditions. Results demonstrate that the proposed model can not only accurately and efficiently predict unsteady aerodynamic coefficients, but also effectively capture nonlinear aeroelastic responses. Full article
(This article belongs to the Special Issue Advances in Unsteady Aerodynamics and Aeroelasticity)

19 pages, 6912 KB  
Article
Committee Machine Learning for Electrofacies-Guided Well Placement and Oil Recovery Optimization
by Adewale Amosu, Dung Bui, Oluwapelumi Oke, Abdul-Muaizz Koray, Emmanuel Appiah Kubi, Najmudeen Sibaweihi and William Ampomah
Appl. Sci. 2025, 15(6), 3020; https://doi.org/10.3390/app15063020 - 11 Mar 2025
Viewed by 1124
Abstract
Electrofacies are log-related signatures that reflect specific physical and compositional characteristics of rock units. The concept was developed to encapsulate a collection of recorded well-log responses, enabling the characterization and differentiation of one rock unit from another. The analysis of the lateral and vertical distribution of electrofacies is crucial for understanding reservoir properties; however, well-log analysis can be labor-intensive, time-consuming, and prone to inaccuracies due to the subjective nature of the process. In addition, there is no unique way of reliably classifying logs or deriving electrofacies due to the varying accuracy of different methods. In this study, we develop a workflow that mitigates the variability in results produced by different clustering algorithms using a committee machine. Using several unsupervised machine learning methods, including k-means, k-median, hierarchical clustering, spectral clustering, and the Gaussian mixture model, we predict electrofacies from wireline well log data and generate their 3D vertical and lateral distributions and inferred geological properties. The results from the different methods are used to constitute a committee machine, which is then used to implement electrofacies-guided well placement. 3D distributed petrophysical properties are also computed from core-calibrated porosity and permeability data for reservoir simulation. The results indicate that wells producing from a specific electrofacies, as predicted by the committee machine, have significantly better production than wells producing from other electrofacies. This proposed detailed machine learning workflow allows for strategic decision-making in development and the practical application of these findings for improved oil recovery. Full article
(This article belongs to the Special Issue Novel Applications of Machine Learning and Bayesian Optimization)
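One standard way to fuse several clustering results into a committee, when the cluster labels of different algorithms are not directly comparable, is a co-association matrix: count how often each pair of samples lands in the same cluster, then group pairs that co-occur in a majority of the clusterings. The sketch below is a generic consensus-clustering illustration under that assumption, not the paper's exact fusion rule.

```python
from itertools import combinations

def co_association(labelings):
    """Fraction of clusterings in which each sample pair shares a cluster."""
    n = len(labelings[0])
    M = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                M[i][j] += 1
                M[j][i] += 1
    k = len(labelings)
    for i in range(n):
        M[i][i] = float(k)
        for j in range(n):
            M[i][j] /= k
    return M

def consensus_clusters(labelings, threshold=0.5):
    """Single-link grouping (union-find) on the co-association matrix."""
    M = co_association(labelings)
    n = len(M)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if M[i][j] >= threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Three hypothetical labelings of four wells from different clustering methods;
# note the first two agree up to a label permutation.
labels = consensus_clusters([[0, 0, 1, 1], [1, 1, 0, 0], [0, 0, 0, 1]])
```

Because the committee votes on pair memberships rather than raw labels, it is immune to the arbitrary label permutations that make k-means, spectral, and Gaussian-mixture outputs hard to compare directly.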

29 pages, 12614 KB  
Article
Characterization of a Fragmentation in a Highly Elliptical Orbit via an Optical Multi-Observatory Survey Strategy
by Matteo Rossetti, Lorenzo Cimino, Lorenzo Mariani, Simone Varanese, Gaetano Zarcone, Elisa Maria Alessi, Alessandro Rossi, Alessandro Nastasi, Carmelo Arcidiacono, Simone Zaggia, Matteo Simioni, Alfredo Biagini, Alessandra Di Cecco and Fabrizio Piergentili
Aerospace 2025, 12(3), 181; https://doi.org/10.3390/aerospace12030181 - 25 Feb 2025
Viewed by 1199
Abstract
Surveys of fragmentations, especially in the early stages of the given event, are fundamental for determining the number of fragments, identifying and cataloging them, and monitoring their future evolution. The development of a ground-based optical survey strategy, i.e., a suitable observation and detection method for the fragments generated by these events, is an important contribution to acquiring data and monitoring these catastrophic phenomena. An optical survey offers an interesting and cost-effective method that supports radar operations in the Low Earth Orbit regime and can monitor higher orbits where radar cannot be used. This paper presents a developed optical survey strategy for multi-observatory observations. The strategy was tested on the fragmentation event of FREGAT R/B CLUSTER 2, a rocket body with a “dummy” payload, which fragmented on 8 April 2024 on a Highly Elliptical Orbit. The observational campaign involved different observatory systems, and it represented a key collaboration within the Inter-Agency Space Debris Coordination Committee. The survey started from a simulation of the cloud of fragments and was implemented through the planning and coordination of different observatory systems with different schemes and methods to scan the sky vault. The acquired survey data were analyzed using machine learning methods to identify the unknown objects, i.e., the fragments. The data acquired were compared with the simulated cloud used for the survey, and a correlation of measurements belonging to the same object was performed. Also, the parent body was characterized in its tumbling motion by light curve acquisition. Full article
(This article belongs to the Section Astronautics & Space Science)

19 pages, 4336 KB  
Article
Machine Learning with Voting Committee for Frost Prediction
by Vinícius Albuquerque de Almeida, Juliana Aparecida Anochi, José Roberto Rozante and Haroldo Fraga de Campos Velho
Meteorology 2025, 4(1), 6; https://doi.org/10.3390/meteorology4010006 - 24 Feb 2025
Viewed by 1779
Abstract
A machine learning (ML)-based methodology for predicting frosts was applied to the southern and southeastern regions of Brazil, as well as to other countries including Uruguay, Paraguay, northern Argentina, and southeastern Bolivia. The machine learning model (using TensorFlow (TF)) was compared to the frost index (IG from the Portuguese: Índice de Geada) developed by the National Institute for Space Research (INPE, Brazil). The IG is estimated using meteorological variables from a regional weather numerical model (RWNM). After calculating the two indices using the ML model and the RWNM, a voting committee (VC) was trained to select between the computed outputs. The AdaBoostClassifier algorithm was employed to implement the voting committee. The study area was subdivided into three distinct subregions: R1 (outside Brazil), R2 (the south of Brazil), and R3 (southeastern Brazil). Two forecasting time scales were evaluated: 24 h and 72 h. The 24 h forecasts from both approaches (TF and RWNM) exhibited a similar performance in terms of the number of accurate predictions. However, in the region covering Uruguay and northern Argentina, the TensorFlow model demonstrated superior frost prediction accuracy. Additionally, the TensorFlow model outperformed the RWNM for the 72 h forecast horizon. Full article
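The AdaBoost-based voting committee can be illustrated from scratch with decision stumps: the boosted committee learns, from past outcomes, when to trust the TF forecast (+1) over the RWNM forecast (-1). The single feature, its values, and the decision rule below are illustrative assumptions standing in for the study's AdaBoostClassifier setup.

```python
import math

def stump(x, feat, thresh, sign):
    return sign if x[feat] > thresh else -sign

def train_adaboost(X, y, rounds=5):
    """From-scratch AdaBoost over decision stumps (labels are +1/-1)."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        for feat in range(d):
            for thresh in sorted({x[feat] for x in X}):
                for sign in (1, -1):
                    err = sum(w[i] for i in range(n)
                              if stump(X[i], feat, thresh, sign) != y[i])
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        err = max(err, 1e-10)                       # avoid log(0) on perfect stumps
        alpha = 0.5 * math.log((1.0 - err) / err)   # stump weight
        model.append((alpha, feat, thresh, sign))
        norm = 0.0
        for i in range(n):                          # re-weight toward mistakes
            w[i] *= math.exp(-alpha * y[i] * stump(X[i], feat, thresh, sign))
            norm += w[i]
        w = [wi / norm for wi in w]
    return model

def committee_vote(model, x):
    s = sum(a * stump(x, f, t, sg) for a, f, t, sg in model)
    return 1 if s >= 0 else -1

# Hypothetical gate: pick TF (+1) when the temperature anomaly exceeds ~1.5.
X = [[0.2], [0.9], [1.8], [2.6]]
y = [-1, -1, 1, 1]
gate = train_adaboost(X, y)
```

In the study the equivalent gate is fed meteorological predictors and trained on which index (TF or IG/RWNM) verified better historically.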

10 pages, 3495 KB  
Technical Note
Machine Learning for Predicting Neutron Effective Dose
by Ali A. A. Alghamdi
Appl. Sci. 2024, 14(13), 5740; https://doi.org/10.3390/app14135740 - 1 Jul 2024
Cited by 4 | Viewed by 1789
Abstract
The calculation of effective doses is crucial in many medical and radiation fields in order to ensure safety and compliance with regulatory limits. Traditionally, Monte Carlo codes using detailed human body computational phantoms have been used for such calculations. Monte Carlo dose calculations can be time-consuming and require expertise in different processes when building the computational phantom and dose calculations. This study employs various machine learning (ML) algorithms to predict the organ doses and effective dose conversion coefficients (DCCs) from different anthropomorphic phantoms. A comprehensive data set comprising neutron energy bins, organ labels, masses, and densities is compiled from Monte Carlo studies, and it is used to train and evaluate the supervised ML models. This study includes a broad range of phantoms, including those from the International Commission on Radiological Protection (ICRP-110, ICRP-116 phantom), the Visible Human Project (VIP-man phantom), and the Medical Internal Radiation Dose Committee (MIRD phantom), with raw data prepared as numerical data and categorically labeled organ data. Extreme gradient boosting (XGB), gradient boosting (GB), and the random forest-based Extra Trees regressor are employed to assess the performance of the ML models against published ICRP neutron DCC values using the mean square error, mean absolute error, and R2 metrics. The results demonstrate that the ML predictions vary significantly in lower energy ranges and vary less in higher neutron energy ranges, while showing good agreement with ICRP values at mid-range energies. Moreover, the categorical data models align closely with the reference doses, suggesting the potential of ML in predicting effective doses for custom phantoms based on regional populations, such as the Saudi voxel-based model. 
This study paves the way for efficient dose prediction using ML, particularly in scenarios requiring rapid results without extensive computational resources or expertise. The findings also indicate potential improvements in data representation and the inclusion of larger data sets to refine model accuracy and prevent overfitting. Thus, ML methods can serve as valuable techniques for the continued development of personalized dosimetry. Full article

17 pages, 4196 KB  
Article
The Prediction of Flow Stress in the Hot Compression of a Ni-Cr-Mo Steel Using Machine Learning Algorithms
by Tao Pan, Chengmin Song, Zhiyu Gao, Tian Xia and Tianqi Wang
Processes 2024, 12(3), 441; https://doi.org/10.3390/pr12030441 - 22 Feb 2024
Cited by 5 | Viewed by 2087
Abstract
The constitutive model refers to the mapping relationship between the stress and deformation conditions (such as strain, strain rate, and temperature) after being loaded. In this work, the hot deformation behavior of a Ni-Cr-Mo steel was investigated by conducting isothermal compression tests using a Gleeble-3800 thermal simulator with deformation temperatures ranging from 800 °C to 1200 °C, strain rates ranging from 0.01 s−1 to 10 s−1, and deformations of 55%. To analyze the constitutive relation of the Ni-Cr-Mo steel at high temperatures, five machine learning algorithms were employed to predict the flow stress, namely, back-propagation artificial neural network (BP-ANN), Random Committee, Bagging, k-nearest neighbor (k-NN), and a library for support vector machines (libSVM). A comparative study between the experimental and the predicted results was performed. The results show that correlation coefficient (R), root mean square error (RMSE), mean absolute value error (MAE), mean square error (MSE), and average absolute relative error (AARE) obtained from the Random Committee on the testing set are 0.98897, 8.00808 MPa, 5.54244 MPa, 64.12927 MPa2 and 5.67135%, respectively, whereas the metrics obtained via other algorithms are all inferior to the Random Committee. It suggests that the Random Committee can predict the flow stress of the steel more effectively. Full article
(This article belongs to the Special Issue Digital Research and Development of Materials and Processes)
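The five evaluation metrics quoted above are straightforward to compute once measured and predicted flow stresses are paired up. A minimal sketch with illustrative stress values (MPa), not the paper's data:

```python
import math

def regression_metrics(y_true, y_pred):
    """R, RMSE, MAE, MSE, and AARE (%) between measured and predicted values."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / n
    mae = sum(abs(e) for e in errs) / n
    aare = 100.0 * sum(abs(e / t) for e, t in zip(errs, y_true)) / n
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    var = math.sqrt(sum((t - mt) ** 2 for t in y_true) *
                    sum((p - mp) ** 2 for p in y_pred))
    return {"R": cov / var, "RMSE": math.sqrt(mse), "MAE": mae,
            "MSE": mse, "AARE%": aare}

# Illustrative measured vs. predicted flow stresses (MPa), not the paper's data.
metrics = regression_metrics([100.0, 200.0, 300.0], [110.0, 190.0, 300.0])
```

Note the unit pattern in the abstract: R and AARE are dimensionless/percent, RMSE and MAE carry MPa, and MSE carries MPa².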

31 pages, 4700 KB  
Article
Comparative Analysis between Intelligent Machine Committees and Hybrid Deep Learning with Genetic Algorithms in Energy Sector Forecasting: A Case Study on Electricity Price and Wind Speed in the Brazilian Market
by Thiago Conte and Roberto Oliveira
Energies 2024, 17(4), 829; https://doi.org/10.3390/en17040829 - 9 Feb 2024
Cited by 6 | Viewed by 1845
Abstract
Global environmental impacts such as climate change require behavior from society that aims to minimize greenhouse gas emissions. This includes the substitution of fossil fuels with other energy sources. An important aspect of the efficient and sustainable management of the electricity supply in Brazil is the prediction of certain variables of the national electric system (NES), such as the price of differences settlement (PLD) and the wind speed for wind energy. In this context, the present study investigated two distinct forecasting approaches. The first involved the combination of deep artificial neural network techniques, long short-term memory (LSTM) and multilayer perceptron (MLP), optimized through the canonical genetic algorithm (GA). The second approach focused on machine committees including MLP, decision tree, linear regression, and support vector machine (SVM) in one committee, and MLP, LSTM, SVM, and autoregressive integrated moving average (ARIMA) in another. The results indicate that the hybrid GA + LSTM algorithm demonstrated the best performance for PLD, with a mean squared error (MSE) of 4.68. For wind speed, the MSE was 1.26. These solutions aim to contribute to decision making in the Brazilian electricity market. Full article
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)
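A common way to combine the members of such a machine committee is a weighted average with weights inversely proportional to each member's validation error. The sketch below uses that generic rule with invented forecasts and validation MSEs; the paper's exact combination scheme may differ.

```python
def inverse_mse_weights(val_mses):
    """Committee weights inversely proportional to validation MSE."""
    inv = [1.0 / m for m in val_mses]
    s = sum(inv)
    return [v / s for v in inv]

def committee_forecast(member_preds, weights):
    """Weighted average of the members' forecasts at each time step."""
    return [sum(w * p for w, p in zip(weights, step))
            for step in zip(*member_preds)]

# Hypothetical PLD forecasts from an MLP, an LSTM, and an ARIMA member.
preds = [[310.0, 320.0], [300.0, 330.0], [290.0, 310.0]]
weights = inverse_mse_weights([4.0, 8.0, 8.0])   # assumed validation MSEs
blend = committee_forecast(preds, weights)
```

The member with the lowest validation MSE (here the MLP) gets half the vote, so the blended series leans toward the historically more accurate forecaster.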

13 pages, 389 KB  
Article
Deep Learning for Combating Misinformation in Multicategorical Text Contents
by Rafał Kozik, Wojciech Mazurczyk, Krzysztof Cabaj, Aleksandra Pawlicka, Marek Pawlicki and Michał Choraś
Sensors 2023, 23(24), 9666; https://doi.org/10.3390/s23249666 - 7 Dec 2023
Cited by 5 | Viewed by 2736
Abstract
Currently, one can observe the evolution of social media networks. In particular, humans are faced with the fact that, often, the opinion of an expert is as important and significant as the opinion of a non-expert. It is possible to observe changes and processes in traditional media that reduce the role of a conventional ‘editorial office’, placing gradual emphasis on the remote work of journalists and forcing increasingly frequent use of online sources rather than actual reporting work. As a result, social media has become an element of state security, as disinformation and fake news produced by malicious actors can manipulate readers, creating unnecessary debate on topics organically irrelevant to society. This causes a cascading effect, fear among citizens, and eventually threats to the state’s security. Advanced data sensors and deep machine learning methods have great potential to enable the creation of effective tools for combating the fake news problem. However, these solutions often generalize poorly in the real world due to data deficits. In this paper, we propose an innovative solution involving a committee of classifiers to tackle the fake news detection challenge. In that regard, we introduce a diverse set of base models, each independently trained on sub-corpora with unique characteristics. In particular, we use multi-label text category classification, which helps formulate an ensemble. The experiments were conducted on six different benchmark datasets. The results are promising and open the field for further research. Full article
(This article belongs to the Special Issue Deep Learning for Information Fusion and Pattern Recognition)
