Search Results (13)

Search Parameters:
Keywords = cost-sensitive learning optimisation

56 pages, 3273 KB  
Systematic Review
Artificial Intelligence and Machine Learning in Cold Spray Additive Manufacturing: A Systematic Literature Review
by Habib Afsharnia and Javaid Butt
J. Manuf. Mater. Process. 2025, 9(10), 334; https://doi.org/10.3390/jmmp9100334 - 13 Oct 2025
Cited by 1 | Viewed by 1526
Abstract
Due to their unique benefits over conventional subtractive manufacturing, additive manufacturing methods continue to attract interest in both academia and industry. One such method is Cold Spray Additive Manufacturing (CSAM), a solid-state coating deposition technology that manufactures and repairs metallic components using a gas jet and powder particles. CSAM offers low heat input, stable phases, suitability for heat-sensitive substrates, and high deposition rates. However, persistent challenges include porosity control, geometric accuracy near edges and concavities, anisotropy, and cost sensitivities linked to gas selection and nozzle wear. Interdisciplinary research across manufacturing science, materials characterisation, robotics, control, artificial intelligence (AI), and machine learning (ML) is being deployed to overcome these issues. ML supports quality prediction, inverse parameter design, in situ monitoring, and surrogate models that couple process physics with data. To demonstrate the impact of AI and ML on CSAM, this study presents a systematic literature review to identify, evaluate, and analyse published studies in this domain. The most relevant studies in the literature are analysed using keyword co-occurrence and clustering. Four themes were identified: design for CSAM, material analytics, real-time monitoring and defect analytics, and deposition and AI-enabled optimisation. Based on this synthesis, the core challenges are identified as small and varied datasets, transfer and identifiability limits, and fragmented sensing. The main opportunities are physics-based surrogates, active learning, uncertainty-aware inversion, and cloud-edge control for reliable and adaptable ML use in CSAM. By systematically mapping the current landscape, this work provides a critical roadmap for researchers to target the most significant challenges and opportunities in applying AI/ML to industrialise CSAM.
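As a rough illustration of the keyword co-occurrence and clustering step this review describes, here is a minimal sketch: author keywords per paper are binarised, the co-occurrence matrix is formed, and keywords are clustered by their co-occurrence profiles. All keyword lists below are invented, and the cluster count is arbitrary.

```python
# Minimal keyword co-occurrence clustering sketch; papers/keywords are
# hypothetical stand-ins, not the review's actual corpus.
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.cluster import KMeans

papers = [
    ["cold spray", "machine learning", "porosity"],
    ["cold spray", "surrogate model", "process optimisation"],
    ["machine learning", "defect detection", "in situ monitoring"],
    ["surrogate model", "process optimisation", "porosity"],
]

# Binary paper-by-keyword matrix; co-occurrence = X^T X (diagonal = frequency).
mlb = MultiLabelBinarizer()
X = mlb.fit_transform(papers)
cooc = X.T @ X

# Cluster keywords by their co-occurrence profiles (toy cluster count).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cooc)
for kw, lab in zip(mlb.classes_, labels):
    print(lab, kw)
```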

36 pages, 4953 KB  
Article
Can Proxy-Based Geospatial and Machine Learning Approaches Map Sewer Network Exposure to Groundwater Infiltration?
by Nejat Zeydalinejad, Akbar A. Javadi, Mark Jacob, David Baldock and James L. Webber
Smart Cities 2025, 8(5), 145; https://doi.org/10.3390/smartcities8050145 - 5 Sep 2025
Viewed by 2365
Abstract
Sewer systems are essential for sustainable infrastructure management, influencing environmental, social, and economic aspects. However, sewer network capacity is under significant pressure, with many systems overwhelmed by challenges such as climate change, ageing infrastructure, and increasing inflow and infiltration, particularly through groundwater infiltration (GWI). Current research in this area has primarily focused on general sewer performance, with limited attention to high-resolution, spatially explicit assessments of sewer exposure to GWI, highlighting a critical knowledge gap. This study responds to this gap by developing a high-resolution GWI assessment. This is achieved by integrating the fuzzy analytical hierarchy process (AHP) with geographic information systems (GISs) and machine learning (ML) to generate GWI probability maps across the Dawlish region, southwest United Kingdom, complemented by sensitivity analysis to identify the key drivers of sewer network vulnerability. To this end, 16 hydrological–hydrogeological thematic layers were incorporated: elevation, slope, topographic wetness index, rock, alluvium, soil, land cover, made ground, fault proximity, fault length, mass movement, river proximity, flood potential, drainage order, groundwater depth (GWD), and precipitation. A GWI probability index, ranging from 0 to 1, was developed for each 1 m × 1 m area per season. The model domain was then classified into high-, intermediate-, and low-GWI-risk zones using K-means clustering. A consistency ratio of 0.02 validated the AHP approach for pairwise comparisons, while locations of storm overflow (SO) discharges and model comparisons verified the final outputs. SOs predominantly coincided with areas of high GWI probability and high-risk zones. Comparison of the AHP-weighted GIS output clustered via K-means with direct K-means clustering of the AHP-weighted layers yielded a Kappa value of 0.70, with an 81.44% classification match. Sensitivity analysis identified five key factors influencing GWI scores: GWD, river proximity, flood potential, rock, and alluvium. The findings underscore that proxy-based geospatial and machine learning approaches offer an effective and scalable method for mapping sewer network exposure to GWI. By enabling high-resolution risk assessment, the proposed framework contributes a novel proxy and machine-learning-based screening tool for the management of smart cities. This supports predictive maintenance, optimised infrastructure investment, and proactive management of GWI in sewer networks, thereby reducing costs, mitigating environmental impacts, and protecting public health. In this way, the method contributes not only to improved sewer system performance but also to advancing the sustainability and resilience goals of smart cities.
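The three-zone classification step described above can be sketched in a few lines: a per-cell GWI probability index is clustered with K-means into three groups, which are then ordered by cluster centre so the labels read low/intermediate/high. The probability grid here is synthetic, not the Dawlish data.

```python
# Illustrative sketch only: K-means zoning of a GWI probability raster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
gwi_index = rng.random((100, 100))          # stand-in for the 1 m x 1 m grid

km = KMeans(n_clusters=3, n_init=10, random_state=0)
zones = km.fit_predict(gwi_index.reshape(-1, 1)).reshape(gwi_index.shape)

# Order cluster ids by centre so 0 = low, 1 = intermediate, 2 = high risk.
order = np.argsort(km.cluster_centers_.ravel())
risk = np.zeros_like(zones)
for rank, cid in enumerate(order):
    risk[zones == cid] = rank
print(np.bincount(risk.ravel()))            # cell counts per risk zone
```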

23 pages, 4887 KB  
Article
Occupancy-Based Predictive AI-Driven Ventilation Control for Energy Savings in Office Buildings
by Violeta Motuzienė, Jonas Bielskus, Rasa Džiugaitė-Tumėnienė and Vidas Raudonis
Sustainability 2025, 17(9), 4140; https://doi.org/10.3390/su17094140 - 3 May 2025
Cited by 4 | Viewed by 2932
Abstract
Despite stricter global energy codes, performance standards, and advanced renewable technologies, the building sector must accelerate its transition to zero carbon emissions. Many studies show that new buildings, especially non-residential ones, often fail to meet projected performance levels due to poor maintenance and management of HVAC systems. The application of predictive AI models offers a cost-effective solution to enhance the efficiency and sustainability of these systems, thereby contributing to more sustainable building operations. This study aims to enhance the control of a variable air volume (VAV) system using machine learning algorithms. A novel ventilation control model, AI-VAV, is developed using a hybrid extreme learning machine (ELM) algorithm combined with simulated annealing (SA) optimisation. The model is trained on long-term monitoring data from three office buildings, enhancing robustness and avoiding the data reliability issues seen in similar models. Sensitivity analysis reveals that accurate occupancy prediction is achieved with 8500 to 10,000 measurement steps, resulting in potential additional energy savings of up to 7.5% for the ventilation system compared to traditional VAV systems, while maintaining CO2 concentrations below 1000 ppm, and up to 12.5% if CO2 concentrations are slightly above 1000 ppm for 1.5% of the time.
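For readers unfamiliar with extreme learning machines: the core idea is that hidden-layer weights are drawn at random and only the output weights are fitted, by ridge-regularised least squares. The sketch below shows that core on synthetic data; the paper's AI-VAV model additionally tunes the ELM with simulated annealing, which is omitted here.

```python
# Bare-bones ELM regression sketch: random hidden weights, output weights by
# ridge least squares. Features/targets are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 4))                    # stand-in occupancy/CO2 features
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.standard_normal(500)

n_hidden, lam = 50, 1e-3
W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random input weights
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                            # hidden-layer activations

# Ridge-regularised least squares for the output weights.
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
pred = H @ beta
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```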

19 pages, 5346 KB  
Article
Metastable Substructure Embedding and Robust Classification of Multichannel EEG Data Using Spectral Graph Kernels
by Rashmi N. Muralinath, Vishwambhar Pathak and Prabhat K. Mahanti
Future Internet 2025, 17(3), 102; https://doi.org/10.3390/fi17030102 - 23 Feb 2025
Cited by 1 | Viewed by 1153
Abstract
Classification of neurocognitive states from Electroencephalography (EEG) data is complex due to inherent challenges such as noise, non-stationarity, non-linearity, and the high-dimensional and sparse nature of connectivity patterns. Graph-theoretical approaches provide a powerful framework for analysing the latent state dynamics using connectivity measures across spatio-temporal-spectral dimensions. This study applies the graph Koopman embedding kernels (GKKE) method to extract latent neuro-markers of seizures from epileptiform EEG activity. EEG-derived graphs were constructed using correlation and mean phase locking value (mPLV), with adjacency matrices generated via threshold-binarised connectivity. Graph kernels, including Random Walk, Weisfeiler–Lehman (WL), and spectral-decomposition (SD) kernels, were evaluated for latent space feature extraction by approximating Koopman spectral decomposition. The potential of graph Koopman embeddings in identifying latent metastable connectivity structures is demonstrated through empirical analyses. The robustness of these features was evaluated using classifiers such as Decision Trees, Support Vector Machines (SVM), and Random Forests on epilepsy EEG from the Children's Hospital Boston (CHB)-MIT dataset and cognitive-load EEG datasets from online repositories. The classification workflow combining the mPLV connectivity measure, the WL graph Koopman kernel, and a Decision Tree (DT) outperformed the alternative combinations, particularly in accuracy (91.7%) and F1-score (88.9%). The comparative investigation presented in the results section shows that employing cost-sensitive learning improved the F1-score of the mPLV-WL-DT workflow from 88.9% to 91%. This work advances EEG-based neuro-marker estimation, facilitating reliable assistive tools for prognosis and cognitive training protocols.
(This article belongs to the Special Issue eHealth and mHealth)
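The cost-sensitive step reported above typically amounts to weighting the rare class more heavily in the classifier's loss. A minimal sketch, assuming a generic imbalanced feature matrix in place of the graph-Koopman kernel features (the weights and data below are invented):

```python
# Cost-sensitive Decision Tree: class_weight penalises misclassifying the
# rare (e.g. seizure) class. Synthetic imbalanced data stand in for the
# paper's kernel features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

for weights in (None, {0: 1, 1: 9}):        # unweighted vs cost-sensitive
    clf = DecisionTreeClassifier(class_weight=weights, random_state=0)
    clf.fit(Xtr, ytr)
    print(weights, "F1:", round(f1_score(yte, clf.predict(Xte)), 3))
```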

25 pages, 4502 KB  
Article
Parsimonious Random-Forest-Based Land-Use Regression Model Using Particulate Matter Sensors in Berlin, Germany
by Janani Venkatraman Jagatha, Christoph Schneider and Tobias Sauter
Sensors 2024, 24(13), 4193; https://doi.org/10.3390/s24134193 - 27 Jun 2024
Cited by 1 | Viewed by 2156
Abstract
Machine learning (ML) methods are widely used in particulate matter prediction modelling, especially through the use of air quality sensor data. Despite their advantages, these methods' black-box nature obscures the understanding of how a prediction has been made. Major issues with these types of models include data quality and computational intensity. In this study, we employed feature selection methods using recursive feature elimination and global sensitivity analysis for a random-forest (RF)-based land-use regression model developed for the city of Berlin, Germany. Land-use-based predictors, including local climate zones, leaf area index, daily traffic volume, population density, building types, building heights, and street types were used to create a baseline RF model. Five additional models, three using the recursive feature elimination method and two using a Sobol-based global sensitivity analysis (GSA), were implemented, and their performance was compared against that of the baseline RF model. The predictors that had a large effect on the prediction, as determined using both methods, are discussed. Through feature elimination, the number of predictors was reduced from 220 in the baseline model to eight in the parsimonious models without sacrificing model performance. The model metrics were compared, which showed that the GSA_parsimonious model performs better than the baseline model, reducing the mean absolute error (MAE) from 8.69 µg/m3 to 3.6 µg/m3 and the root mean squared error (RMSE) from 9.86 µg/m3 to 4.23 µg/m3 when the trained model is applied to reference station data. The better performance of the GSA_parsimonious model is made possible by curtailing the uncertainties propagated through the model via the reduction of multicollinear and redundant predictors. The parsimonious model validated against reference stations was able to predict the PM2.5 concentrations with an MAE of less than 5 µg/m3 for 10 out of 12 locations. The GSA_parsimonious model performed best on all model metrics and improved the R2 from 3% in the baseline model to 17%. However, the predictions exhibited a degree of uncertainty, making the model unreliable for regional-scale modelling. The GSA_parsimonious model can nevertheless be adapted to local scales to highlight the land-use parameters that are indicative of PM2.5 concentrations in Berlin. Overall, population density, leaf area index, and traffic volume are the major predictors of PM2.5, while building type and local climate zones are the less significant predictors. Feature selection based on sensitivity analysis has a large impact on the model performance. Optimising models through sensitivity analysis can enhance the interpretability of the model dynamics and potentially reduce computational costs and time when modelling is performed for larger areas.
(This article belongs to the Section Environmental Sensing)
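The recursive feature elimination step above can be sketched directly with scikit-learn: a random-forest regressor ranks features by importance, and the weakest are dropped iteratively until a target count remains. Data and the feature count below are synthetic, echoing (not reproducing) the paper's 220-to-8 reduction.

```python
# RFE over a random-forest regressor, shrinking 40 synthetic land-use
# predictors down to eight.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

X, y = make_regression(n_samples=300, n_features=40, n_informative=8,
                       noise=5.0, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
selector = RFE(rf, n_features_to_select=8, step=4).fit(X, y)
print("kept feature indices:",
      [i for i, kept in enumerate(selector.support_) if kept])
```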

13 pages, 18698 KB  
Article
Leveraging AI in Postgraduate Medical Education for Rapid Skill Acquisition in Ultrasound-Guided Procedural Techniques
by Flora Wen Xin Xu, Amanda Min Hui Choo, Pamela Li Ming Ting, Shao Jin Ong and Deborah Khoo
J. Imaging 2023, 9(10), 225; https://doi.org/10.3390/jimaging9100225 - 16 Oct 2023
Cited by 5 | Viewed by 2612
Abstract
Ultrasound-guided techniques are increasingly prevalent and represent a gold standard of care. Skills such as needle visualisation, optimising the target image and directing the needle require deliberate practice. However, training opportunities remain limited by patient case load and safety considerations. Hence, there is a genuine and urgent need for trainees to attain accelerated skill acquisition in a time- and cost-efficient manner that minimises risk to patients. We propose a two-step solution: first, we have created an agar phantom model that simulates human tissue and structures like vessels and nerve bundles. Second, we have adopted deep learning techniques to provide trainees with live visualisation of target structures and automate assessment of their user speed and accuracy. Key structures, such as the needle tip, needle body, target blood vessels, and nerve bundles, are delineated in colour on the processed image, providing an opportunity for real-time guidance of needle positioning and target structure penetration. Quantitative feedback on user speed (time taken for target penetration), accuracy (penetration of correct target), and efficacy in needle positioning (percentage of frames where the full needle is visualised in a longitudinal plane) are also assessable using our model. Our program was able to demonstrate a sensitivity of 99.31%, specificity of 69.23%, accuracy of 91.33%, precision of 89.94%, recall of 99.31%, and F1 score of 0.94 in automated image labelling.
(This article belongs to the Special Issue Application of Machine Learning Using Ultrasound Images, 2nd Edition)
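For reference, all of the screening metrics reported in this abstract follow mechanically from a binary confusion matrix. A small sketch with invented counts (not the study's):

```python
# Sensitivity/specificity/precision/F1 from a binary confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [1] * 140 + [0] * 13 + [1] * 1 + [0] * 30   # toy frame labels
y_pred = [1] * 140 + [1] * 13 + [0] * 1 + [0] * 30   # toy model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                          # = recall
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(sensitivity, specificity, precision, f1)
```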

19 pages, 4088 KB  
Article
Threshold-Based BRISQUE-Assisted Deep Learning for Enhancing Crack Detection in Concrete Structures
by Sanjeetha Pennada, Marcus Perry, Jack McAlorum, Hamish Dow and Gordon Dobie
J. Imaging 2023, 9(10), 218; https://doi.org/10.3390/jimaging9100218 - 10 Oct 2023
Cited by 12 | Viewed by 3656
Abstract
Automated visual inspection has made significant advancements in the detection of cracks on the surfaces of concrete structures. However, low-quality images significantly affect the classification performance of convolutional neural networks (CNNs). Therefore, it is essential to evaluate the suitability of image datasets used in deep learning models, such as Visual Geometry Group 16 (VGG16), for accurate crack detection. This study explores the sensitivity of the BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) method to different types of image degradations, such as Gaussian noise and Gaussian blur. By evaluating the performance of the VGG16 model on these degraded datasets with varying levels of noise and blur, a correlation is established between image degradation and BRISQUE scores. The results demonstrate that images with lower BRISQUE scores achieve higher accuracy, F1 score, and Matthews correlation coefficient (MCC) in crack classification. The study proposes the implementation of a BRISQUE score threshold (BT) to optimise training and testing times, leading to reduced computational costs. These findings have significant implications for enhancing accuracy and reliability in automated visual inspection systems for crack detection and structural health monitoring (SHM).
(This article belongs to the Special Issue Feature Papers in Section AI in Imaging)
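The proposed BT gate reduces to a simple filter: compute a BRISQUE score per image and keep only images at or below the threshold before training. A minimal sketch, assuming scores have already been computed by some BRISQUE implementation (the paths, scores, and threshold below are hypothetical):

```python
# BRISQUE-threshold (BT) dataset gate; lower BRISQUE = better quality.
def filter_by_brisque(paths, scores, bt=40.0):
    """Return image paths whose precomputed BRISQUE score <= bt."""
    return [p for p, s in zip(paths, scores) if s <= bt]

paths = ["img0.png", "img1.png", "img2.png"]
scores = [22.5, 47.1, 39.8]                 # hypothetical BRISQUE scores
print(filter_by_brisque(paths, scores))     # -> ['img0.png', 'img2.png']
```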

16 pages, 575 KB  
Article
Cost-Sensitive Models to Predict Risk of Cardiovascular Events in Patients with Chronic Heart Failure
by Maria Carmela Groccia, Rosita Guido, Domenico Conforti, Corrado Pelaia, Giuseppe Armentaro, Alfredo Francesco Toscani, Sofia Miceli, Elena Succurro, Marta Letizia Hribal and Angela Sciacqua
Information 2023, 14(10), 542; https://doi.org/10.3390/info14100542 - 3 Oct 2023
Cited by 4 | Viewed by 1826
Abstract
Chronic heart failure (CHF) is a clinical syndrome characterised by symptoms and signs due to structural and/or functional abnormalities of the heart. CHF confers risk for cardiovascular deterioration events which cause recurrent hospitalisations and high mortality rates. The early prediction of these events is very important to limit serious consequences, improve the quality of care, and reduce its burden. CHF is a progressive condition in which patients may remain asymptomatic before the onset of symptoms, as observed in heart failure with a preserved ejection fraction. The early detection of underlying causes is critical for treatment optimisation and prognosis improvement. To develop models to predict cardiovascular deterioration events in patients with chronic heart failure, a real dataset was constructed and a knowledge discovery task was implemented in this study. The dataset is imbalanced, as is common in real-world applications. It thus posed a challenge because imbalanced datasets tend to be overwhelmed by the abundance of majority-class instances during the learning process. To address the issue, a pipeline was developed specifically to handle imbalanced data. Different predictive models were developed and compared. To enhance sensitivity and other performance metrics, we employed multiple approaches, including data resampling, cost-sensitive methods, and a hybrid method that combines both techniques. These methods were utilised to assess the predictive capabilities of the models and their effectiveness in handling imbalanced data. By using these metrics, we aimed to identify the most effective strategies for achieving improved model performance in real scenarios with imbalanced datasets. The best model for predicting cardiovascular events achieved a mean sensitivity of 65%, a mean specificity of 55%, and a mean area under the curve of 0.71. The results show that cost-sensitive models combined with over/under sampling approaches are effective for the meaningful prediction of cardiovascular events in CHF patients.
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)
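The hybrid strategy the abstract describes, resampling plus a cost-sensitive classifier, can be sketched with the imbalanced-learn library. This is a generic illustration on synthetic data, not the authors' pipeline or the CHF cohort; the sampling ratios are arbitrary.

```python
# Over/under-sampling chained with a cost-sensitive classifier.
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, weights=[0.85, 0.15], random_state=0)

pipe = Pipeline([
    ("over", RandomOverSampler(sampling_strategy=0.5, random_state=0)),
    ("under", RandomUnderSampler(sampling_strategy=0.8, random_state=0)),
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),
])
# Recall on the positive class = the sensitivity the paper optimises for.
print(cross_val_score(pipe, X, y, scoring="recall", cv=5).mean())
```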

20 pages, 1640 KB  
Article
Mitigation of Regional Disparities in Quality Education for Maintaining Sustainable Development at Local Study Centres: Diagnosis and Remedies for Open Universities in China
by Wen Tang, Xiangyang Zhang and Youyi Tian
Sustainability 2022, 14(22), 14834; https://doi.org/10.3390/su142214834 - 10 Nov 2022
Cited by 5 | Viewed by 2834
Abstract
Regional disparities in quality education remain a sensitive issue in developing and developed countries, and in both basic and higher education. The issue is especially crucial for open educational institutions regarding the stability of the open education ecosystem and the capacity for sustainable development. Our research focuses on the quality of teaching and learning and its enhancement. In the study, we systematically explored the regional disparities of teaching and learning quality in local study centres with a sample of 72 from Jiangsu Open University (JOU), China. With statistical toolkits and a typological research paradigm, we ranked the local study centres according to holistic performance. Using clustering methods, we categorised the local study centres into four types: potentially contradictory, urgently to be reformed, less cost-effective, and normatively autonomous, in terms of their basic attributes, learners' support services, and teaching commitment. The research findings showed that the region where the study centres are located did affect the quality of teaching and learning and the scalability of student enrolment. The authors conclude and suggest that mitigating the regional disparities in quality education will facilitate the optimisation of the local study centres in the regional education ecosystem and maintain sustainable development.
(This article belongs to the Section Sustainable Education and Approaches)

25 pages, 6025 KB  
Article
Strength Predictive Modelling of Soils Treated with Calcium-Based Additives Blended with Eco-Friendly Pozzolans—A Machine Learning Approach
by Eyo U. Eyo, Samuel J. Abbey and Colin A. Booth
Materials 2022, 15(13), 4575; https://doi.org/10.3390/ma15134575 - 29 Jun 2022
Cited by 15 | Viewed by 3492
Abstract
The unconfined compressive strength (UCS) of a stabilised soil is a major mechanical parameter in understanding and developing geomechanical models, and it can be estimated directly by lab testing of either retrieved core samples or remoulded samples. However, due to the effort, high cost and time associated with these methods, there is a need to develop a new technique for predicting UCS values in real time. An artificial intelligence paradigm of machine learning (ML) using the gradient boosting (GB) technique is applied in this study to model the unconfined compressive strength of soils stabilised by cementitious additive-enriched agro-based pozzolans. Both ML regression and multinomial classification of the UCS of the stabilised mix are investigated. Rigorous sensitivity-driven diagnostic testing is also performed to validate and provide an understanding of the intricacies of the decisions made by the algorithm. Results indicate that the well-tuned and optimised GB algorithm has a very high capacity to distinguish between positive and negative instances of the UCS categories ('firm', 'very stiff' and 'hard'). An overall accuracy of 0.920, and weighted recall rates and precision scores of 0.920 and 0.938, respectively, were produced by the GB model. Multiclass prediction in this regard achieved a misclassification rate of only 12.5%. When applied to a regression problem, a coefficient of determination of approximately 0.900 and a mean error of about 0.335 were obtained, thus lending further credence to the high performance of the GB algorithm used. Finally, among the eight input features utilised as independent variables, the additives seemed to exhibit the strongest influence on the ML predictive modelling.
(This article belongs to the Special Issue Functional Materials, Machine Learning, and Optimization)
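A minimal sketch of the multinomial-classification side of this setup: gradient boosting over an eight-feature input predicting one of three strength classes. The data, class labels, and hyperparameters below are synthetic placeholders, not the paper's stabilised-soil dataset.

```python
# Gradient boosting for three-class UCS-category prediction (toy data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=8, n_informative=6,
                           n_classes=3, random_state=0)  # firm/very stiff/hard
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

gb = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                max_depth=3, random_state=0).fit(Xtr, ytr)
print("accuracy:", round(accuracy_score(yte, gb.predict(Xte)), 3))
```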

29 pages, 646 KB  
Article
An Artificial-Immune-System-Based Algorithm Enhanced with Deep Reinforcement Learning for Solving Returnable Transport Item Problems
by Fatima Ezzahra Achamrah, Fouad Riane, Evren Sahin and Sabine Limbourg
Sustainability 2022, 14(10), 5805; https://doi.org/10.3390/su14105805 - 11 May 2022
Cited by 16 | Viewed by 4469
Abstract
This paper proposes a new approach, virtual pooling, for optimising returnable transport item (RTI) flows in a two-level closed-loop supply chain. The supply chain comprises a set of suppliers delivering their products loaded on RTIs to a set of customers. RTIs are of various types. The objective is to model a deterministic, multi-supplier, multi-customer inventory routing problem with pickup and delivery of multi-RTI. The model includes inventory-level constraints, the availability of empty RTIs to suppliers, and the minimisation of the total cost, including inventory holding, screening, maintenance, transportation, sharing, and purchasing costs for new RTIs. Furthermore, suppliers with common customers coordinate to virtually pool their inventory of empty RTIs held by customers so that, when loaded RTIs are delivered to customers, each supplier may benefit from the visit to pick up empty RTIs, regardless of ownership. To handle the combinatorial complexity of the model, a new artificial-immune-system-based algorithm coupled with deep reinforcement learning is proposed. The algorithm combines the strong global search ability and self-adaptability of artificial immune systems with goal-driven performance enhanced by deep reinforcement learning, all tailored to the proposed mathematical model. Computational experiments on randomly generated instances highlight the performance of the proposed approach. From a managerial point of view, the results stress that this new approach allows for economies of scale and cost reductions of about 40% for all involved parties. In addition, a sensitivity analysis on the unit cost of transportation and the procurement of new RTIs is conducted, highlighting the benefits and limits of the proposed model compared to dedicated and physical pooling modes.
(This article belongs to the Special Issue New Trends in Sustainable Supply Chain and Logistics Management)
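For intuition, the artificial-immune-system core of such an algorithm follows the generic clonal-selection loop: select the fittest candidate solutions, clone them, hypermutate the clones (weaker ones more strongly), and reselect. The sketch below minimises a toy cost function only; the paper's actual algorithm is far richer and adds deep-reinforcement-learning guidance, which is omitted here.

```python
# Generic clonal-selection (AIS) sketch on a stand-in cost function.
import numpy as np

rng = np.random.default_rng(0)
cost = lambda x: np.sum((x - 3.0) ** 2)     # stand-in for total RTI cost

pop = rng.uniform(0, 10, size=(20, 5))      # candidate solutions (antibodies)
for _ in range(100):
    fitness = np.array([cost(x) for x in pop])
    best = pop[np.argsort(fitness)[:5]]     # select the best antibodies
    clones = np.repeat(best, 4, axis=0)     # clone each selected antibody
    # Hypermutate: lower-ranked clones mutate more strongly.
    rates = np.repeat(np.linspace(0.1, 1.0, 5), 4)[:, None]
    clones += rates * rng.standard_normal(clones.shape)
    pool = np.vstack([pop, clones])
    pop = pool[np.argsort([cost(x) for x in pool])[:20]]
print("best cost:", cost(pop[0]))
```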

11 pages, 1100 KB  
Article
Forecasting Erroneous Neural Machine Translation of Disease Symptoms: Development of Bayesian Probabilistic Classifiers for Cross-Lingual Health Translation
by Meng Ji, Wenxiu Xie, Riliu Huang and Xiaobo Qian
Int. J. Environ. Res. Public Health 2021, 18(18), 9873; https://doi.org/10.3390/ijerph18189873 - 19 Sep 2021
Cited by 3 | Viewed by 2629
Abstract
Background: Machine translation (MT) technologies have increasing applications in healthcare. Despite their convenience, cost-effectiveness, and constantly improved accuracy, research shows that the use of MT tools in medical or healthcare settings poses risks to vulnerable populations. Objectives: We aimed to develop machine learning classifiers (a multinomial naive Bayes (MNB) classifier and a relevance vector machine (RVM)) to forecast nuanced yet significant MT errors of clinical symptoms in Chinese neural MT outputs. Methods: We screened human translations of MSD Manuals for information on self-diagnosis of infectious diseases and produced their matching neural MT outputs for subsequent pairwise quality assessment by trained bilingual health researchers. Different feature optimisation and normalisation techniques were used to identify the best feature set. Results: The RVM classifier using optimised, normalised (L2 normalisation) semantic features achieved the highest sensitivity, specificity, AUC, and accuracy. MNB achieved similarly high performance using the same optimised semantic feature set. The best probability threshold of the best-performing RVM classifier was found at 0.6, with a very high positive likelihood ratio (LR+) of 27.82 (95% CI: 3.99, 193.76) and a low negative likelihood ratio (LR−) of 0.19 (95% CI: 0.08, 0.46). This suggests the high diagnostic utility of our model in predicting the probability of erroneous MT of disease symptoms, which can help reverse potentially inaccurate self-diagnosis among vulnerable people without adequate medical knowledge or the ability to ascertain the reliability of MT outputs. Conclusion: Our study demonstrated the viability, flexibility, and efficiency of introducing machine learning models to help promote risk-aware use of MT technologies to achieve optimal, safer digital health outcomes for vulnerable people.
(This article belongs to the Special Issue Machine Learning Applications in Public Health)
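As a rough sketch of the MNB side of this setup (scikit-learn has no RVM implementation): L2-normalised text features feed a multinomial naive Bayes classifier that scores each MT output's probability of being erroneous. The sentences, labels, and probe below are invented placeholders, not the study's feature engineering.

```python
# MNB over L2-normalised text features for an error/no-error MT classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

mt_outputs = ["fever and chills for three days", "the liver feels happy",
              "persistent dry cough at night", "the stomach sings loudly"]
labels = [0, 1, 0, 1]                       # 1 = clinically misleading MT

clf = make_pipeline(TfidfVectorizer(norm="l2"), MultinomialNB())
clf.fit(mt_outputs, labels)
print(clf.predict_proba(["the kidney laughs"])[:, 1])   # P(erroneous MT)
```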

34 pages, 11943 KB  
Article
Energy Loss Impact in Electrical Smart Grid Systems in Australia
by Ashraf Zaghwan and Indra Gunawan
Sustainability 2021, 13(13), 7221; https://doi.org/10.3390/su13137221 - 28 Jun 2021
Cited by 5 | Viewed by 5263
Abstract
This research draws attention to the potential and contextual influences on energy loss in Australia's electricity market and smart grid systems. It further examines barriers in the transition toward optimising the benefit opportunities between electricity demand and electricity supply. The main contribution of this study highlights the impact of individual end-users by controlling and automating individual home electricity profiles within the objective function set (AV) of optimum demand ranges. Three stages of analysis were carried out to achieve this goal. Firstly, we focused on feasibility analysis using 'weight of evidence' (WOE) and 'information value' (IV) techniques to check sample data segmentation and possible variable reduction. In stage two, a sensitivity analysis (SA) used a generalised reduced gradient (GRG) algorithm to detect and compare nonlinear optimisation issues caused by end-user demand. Stage three used two methods adopted from the machine learning toolbox, piecewise linear distribution (PLD) and the empirical cumulative distribution function (ECDF), to test the normality of the time series data, measure the discrepancy between them, and derive a nonparametric representation of the overall cumulative distribution function (CDF). These analytical methods were all found to be relevant and provided a clue to the sustainability approach. This study provides insights into the design of sustainable homes, which must go beyond the concept of increasing the capacity of renewable energy. In addition, this study examines the interplay between the variance estimation of the problematic levels and the perception of energy loss to introduce a novel realistic model of cost–benefit incentives. This optimisation goal contrasted with uncertainties that remain as to what constitutes the demand impact and individual house effects in diverse clustering patterns in a specific grid system. While ongoing effort is still needed to find strategic solutions for this class of complex problems, this research shows significant contextual opportunities to manage the complexity of the problem according to the nature of the case, representing dense and significant changes in the situational complexity.
(This article belongs to the Special Issue Applications of Complex System Approach in Project Management)
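The WOE/IV screening step mentioned in stage one is a standard calculation: for each bin of a candidate variable, WOE = ln(%events in bin / %non-events in bin), and IV sums the WOE-weighted differences. A minimal sketch with invented counts (not the study's data):

```python
# Weight-of-evidence and information value for one binned variable.
# WOE_i = ln(%good_i / %bad_i); IV = sum((%good_i - %bad_i) * WOE_i).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "bin": ["low", "medium", "high"],
    "good": [300, 150, 50],                 # e.g. low-loss observations
    "bad":  [100, 120, 180],                # e.g. high-loss observations
})
pct_good = df["good"] / df["good"].sum()
pct_bad = df["bad"] / df["bad"].sum()
df["woe"] = np.log(pct_good / pct_bad)
iv = ((pct_good - pct_bad) * df["woe"]).sum()
print(df, "\nIV =", round(iv, 3))
```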
