Search Results (932)

Search Parameters:
Keywords = big data tool

41 pages, 4049 KB  
Review
Innovative Systems Biology in Baijiu Fermentation: Unveiling Omics Landscapes and Microbial Synergy
by Dandan Song, Lulu Song, Yangli Luo, Juan Chen, Chunlin Zhang and Liang Yang
Foods 2026, 15(5), 871; https://doi.org/10.3390/foods15050871 (registering DOI) - 4 Mar 2026
Abstract
The production of Chinese Baijiu relies on the synergistic metabolism of multi-species microbial communities in an open environment. Its intricate microbial succession and flavor formation mechanisms have long been considered complex systems that are difficult to fully deconstruct. Traditional culture-dependent techniques inherently fail to comprehensively capture the actual functional roles and dynamic regulation of “viable but non-culturable” (VBNC) microorganisms within this complex system. In recent years, the rapid advancement of multi-omics technologies has offered a novel perspective for elucidating the underlying fermentation mechanisms of Baijiu. This paper systematically reviews the recent progress in the application of metagenomics, metatranscriptomics, metaproteomics, and metabolomics in Baijiu research. Specific focus is placed on the unique contributions of these tools to resolving microbial community structural diversity, mining key functional genes and enzymes, uncovering microbial stress response mechanisms under environmental fluctuations, identifying phages and spoilage microorganisms, and tracing the metabolic pathways of flavor substances. Furthermore, the pivotal role of multi-omics integration strategies in constructing “microbe–metabolite” regulatory networks is highlighted. Finally, current challenges regarding standardization and data integration are discussed, with an outlook on leveraging omics big data to promote digital monitoring and intelligent brewing in the Baijiu industry. Full article

36 pages, 11602 KB  
Review
Fluorescent Labeling Methods for Brain Structure Research
by Chunguang Yin, Jiangcan Li, Keyu Meng, Jiade Zhang, Meihe Chen, Ruibing Chen, Yuyang Hu, Shuodong Wang and Sheng Xie
Molecules 2026, 31(5), 817; https://doi.org/10.3390/molecules31050817 - 28 Feb 2026
Viewed by 92
Abstract
The brain is a complex structural network. The employment of fluorescent labeling techniques in conjunction with advanced imaging methodologies facilitates comprehensive analysis of multiscale brain anatomy, thereby offering insights into fundamental principles of function and addressing neurological disorders. This review summarizes technological advances in fluorescent labeling methods in the field of neuroscience, and their applications in neural circuit analysis, cerebrovascular imaging, neuronal activity monitoring, and fluorescence-guided treatment of brain tumors. A challenging trend in integrating smart fluorescent labeling with tissue clearing, wide-field 3D imaging, artificial intelligence-assisted data processing/reconstruction, and multimodal information fusion is highlighted and discussed. The future direction of combining high-resolution, low-damage, dynamic imaging with big data analysis is envisioned, providing tools for understanding brain structure and function and their roles in disease. Full article
(This article belongs to the Special Issue Fluorescent Molecular Tools for Neuroscience Research)

13 pages, 870 KB  
Article
Natural Language Processing-Assisted Incidental Pulmonary Nodule Evaluation Program: Impact on Lung Cancer Outcomes
by Noa Tamam Shenholz, Keren Hod, Liat Toderis, Noam Fink, Arnon Makori, Michael Peer, Evgeni Gershman, Merav A. Ben-David and Elizabeth Dudnik
Med. Sci. 2026, 14(1), 104; https://doi.org/10.3390/medsci14010104 - 21 Feb 2026
Viewed by 155
Abstract
Introduction: Early detection and timely treatment (Tx) initiation are critical to improving lung cancer (LC) outcomes. This study assessed the natural language processing (NLP)-assisted incidental pulmonary nodule (IPN) evaluation program, which employs chest computed tomography (CT) report analysis as an LC diagnostic screening (LCS) tool to identify suspicious lung findings (SLF) necessitating further investigation, and evaluated its impact on prognosis and diagnostic work-up and Tx timelines for patients with LC. Materials and Methods: Consecutive LC patients (n = 200) diagnosed at Assuta Medical Centers (AMC) between January 2019 and December 2022 were retrieved from the AMC electronic database using the MDClone big data platform, and divided into two groups: group A (NLP-assisted IPN evaluation, n = 100) and group B (traditional referral for evaluation of SLF by the community physician, n = 100). Stage at diagnosis, different diagnostic work-up and Tx timelines, and overall survival (OS) were assessed. Results: The NLP-assisted IPN evaluation program led to a significant stage shift (stage I disease: 48% vs. 27% in groups A and B, respectively, p = 0.013). Although the time from imaging to Tx initiation was similar (2.1 ± 5.3 months vs. 2.6 ± 5.9 months in groups A and B, respectively, p = 0.654), the time to systemic Tx (p = 0.035) and the time to radiotherapy (p = 0.044) were significantly shorter in group A. Conclusions: Implementing an NLP-assisted IPN evaluation program may enable earlier LC detection, driving a stage shift towards earlier diagnosis, improved diagnostic efficiency, and expedited time-critical interventions. Full article
(This article belongs to the Special Issue Feature Papers in Section “Cancer and Cancer-Related Research”)
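The abstract does not describe the NLP engine itself; as a loose, hypothetical illustration of report-level screening, a minimal keyword/regex pass over free-text CT reports might look like the following sketch (the trigger patterns and sample report are invented, not the study's rules):

```python
import re

# Hypothetical trigger patterns for suspicious lung findings (SLF) in
# free-text chest CT reports. The real program's NLP rules are not
# described in the abstract; these terms are illustrative only.
SLF_PATTERNS = [
    r"\bpulmonary nodule\b",
    r"\bnodules?\b.{0,40}\b\d+(\.\d+)?\s*mm\b",
    r"\bground[- ]glass opacit(y|ies)\b",
    r"\bspiculated\b",
]

def flag_report(report_text):
    """Return the patterns that match a CT report, case-insensitively."""
    text = report_text.lower()
    return [p for p in SLF_PATTERNS if re.search(p, text)]

report = "Incidental 8 mm spiculated pulmonary nodule in the right upper lobe."
print(len(flag_report(report)) > 0)  # a non-empty hit list would route the report for follow-up
```

In a real deployment the matching would feed a work-queue rather than a print statement, and the rule set (or statistical model) would be far richer than this toy list.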

18 pages, 502 KB  
Article
Construction of an Evaluation System for Big Food Concept Education and Its Behavioral Impact Mechanism Among College Students—An Empirical Study Based on a Survey of Students
by Yong He, Ruirui Tang, Minlun Hu, Fang Chen, Xiaoqian Gao, Dandan Li and Yaowen Liu
Foods 2026, 15(4), 776; https://doi.org/10.3390/foods15040776 - 21 Feb 2026
Viewed by 293
Abstract
Education on the Big Food Concept, as a strategic framework for ensuring national food security and promoting high-quality agricultural development, represents a key nexus between ideological and political education and quality-oriented education for college students. Based on survey data from 1268 students across six provinces in China, this study utilized the Delphi method, the analytic hierarchy process (AHP), and structural equation modeling (SEM) to develop a four-dimensional evaluation system encompassing cognitive, affective, value, and behavioral dimensions. It examined the relationship and underlying mechanism through which Big Food Concept education influences student behavior. The results indicate that college students’ overall understanding of the Big Food Concept remains at a moderate level, with particularly limited awareness of diversified food supply systems. The weights of the dimensions in the educational evaluation system were as follows: behavioral dimension (0.342) > cognitive dimension (0.287) > value dimension (0.221) > affective dimension (0.150). Big Food Concept education shapes student behavior through the sequential pathway of cognitive enlightenment, affective resonance, and value internalization, with value internalization demonstrating the strongest mediating effect (β = 0.413, p < 0.001). The evaluation system developed in this study is a practical tool for assessing the effectiveness of Big Food Concept education in higher institutions, while the identified mechanism provides a theoretical basis for implementing targeted educational practices. Full article
(This article belongs to the Section Sensory and Consumer Sciences)
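As a quick arithmetic check of the AHP result quoted above, the reported dimension weights should form a normalized vector and reproduce the stated ordering:

```python
# Dimension weights reported in the abstract (AHP-derived):
# behavioral 0.342, cognitive 0.287, value 0.221, affective 0.150.
weights = {"behavioral": 0.342, "cognitive": 0.287,
           "value": 0.221, "affective": 0.150}

print(round(sum(weights.values()), 3))  # AHP weights should sum to 1.0
ranking = sorted(weights, key=weights.get, reverse=True)
print(ranking)  # behavioral > cognitive > value > affective, as stated
```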

21 pages, 422 KB  
Article
Making It Look Green: Big Data Analytics, External Pressure, and Corporate Greenwashing
by Huiwen Su and Sitong Li
Sustainability 2026, 18(4), 2121; https://doi.org/10.3390/su18042121 - 21 Feb 2026
Viewed by 240
Abstract
Digital technologies are widely viewed as important tools for enhancing corporate environmental performance. However, there is growing recognition that their environmental impacts are not uniformly positive and may even generate unintended negative consequences. Drawing on institutional theory and impression management theory, we argue that big data analytics (BDA) provides firms with powerful capabilities to strategically manage environmental impressions in response to external pressures. Using panel data of Chinese listed firms from 2012 to 2023, we provide empirical evidence that BDA significantly promotes corporate greenwashing. Specifically, BDA facilitates greenwashing through the reinforcement of three core dimensions of impression management: self-serving bias, symbolic management, and accounting rhetoric. Moreover, by distinguishing between different types of external pressures, our results show that constraint-based non-market pressures weaken the relationship between BDA and greenwashing, whereas opportunity-based market pressures strengthen it. Our study enriches the digitalization and corporate environmental performance literature by revealing the dark side of digital technologies and offering a more nuanced understanding of how specific technologies shape corporate environmental misconduct. Full article

26 pages, 506 KB  
Review
Alzheimer’s 2030: From Precision Genomics to Artificial Intelligence
by Valeria D’Argenio, Rossella Tomaiuolo, Silvia Bargeri and Giulia Sancesario
Genes 2026, 17(2), 233; https://doi.org/10.3390/genes17020233 - 12 Feb 2026
Viewed by 497
Abstract
Alzheimer’s disease (AD) represents a critical global health challenge, with its prevalence and associated costs expected to double significantly by 2030 and 2050. While lifestyle interventions are crucial, sporadic late-onset AD has a substantial genetic component (40–80% heritability), though known variants limit the scope of traditional precision medicine. Crucially, sex and gender are significant risk determinants, with women accounting for two-thirds of cases due to a complex interplay of biological and sociocultural factors. This review focuses on the influence of genetic and gender-related factors, examining large-scale genome-wide association studies (GWASs) and their role in developing advanced genetic risk scores (GRS) for precision genomics. We also explore the potential of Artificial Intelligence (AI) for multimodal big data analysis and digital health tools to promote personalized prevention and emerging concerns about ethics, privacy and data treatment. The convergence of these findings underscores the urgent need for a genetic-, sex- and gender-informed precision-medicine approach to AD. Full article
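Among the tools the review surveys, a genetic risk score is conceptually simple: a weighted sum of risk-allele counts using GWAS effect sizes. A toy sketch, in which the SNP identifiers, effect sizes, and genotype are all invented for illustration:

```python
# Toy genetic risk score (GRS): sum of (GWAS effect size x risk-allele
# count) over loci. SNP names and betas below are invented, not from
# any AD GWAS.
snp_betas = {"rs0001": 0.25, "rs0002": 0.10, "rs0003": -0.05}
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}  # allele counts, 0-2

grs = sum(beta * genotype[snp] for snp, beta in snp_betas.items())
print(round(grs, 2))  # weighted sum over the three loci
```

Real scores aggregate thousands to millions of variants and are typically standardized against a reference population before risk stratification.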

16 pages, 2666 KB  
Article
Urban Heat Exposure and Demographic Susceptibility Assessment Under Extreme Heat Conditions: The Case of Milan
by Maddalena Buffoli, Roxana Maria Sala, Stefano Arruzzoli and Stefano Capolongo
Climate 2026, 14(2), 44; https://doi.org/10.3390/cli14020044 - 2 Feb 2026
Viewed by 365
Abstract
Rapid urbanization and global warming are amplifying heat-related health risks, particularly for vulnerable age groups. This study develops an open-source risk assessment framework that uses big data from remote sensing, land use, and population datasets to evaluate heat-related health risks. The framework integrates indicators of green infrastructure, Land Surface Temperature (LST), and demographic vulnerability to identify areas of increased health risk. Milan (Italy) was used as the case study for the application to test the methodology and validate its capacity to detect spatial correlations between Surface Urban Heat Island (Surface UHI) intensity and concentrations of sensitive population groups (children aged 0–5 and elderly aged 65+). The results highlight distinct spatial inequalities in heat exposure and health vulnerability, confirming the method’s potential to support climate adaptation and public health planning. By relying entirely on open-access data and tools, this approach offers a replicable and scalable model for assessing climate-related health risks and informing evidence-based strategies that can support public administrations to visualize risk, prioritize interventions, and enhance urban resilience. Full article
(This article belongs to the Section Climate Adaptation and Mitigation)

21 pages, 536 KB  
Review
Applications of AI for the Optimal Operations of Power Systems Under Extreme Weather Events: A Task-Driven and Methodological Review
by Zehua Zhao, Jiajia Yang, Xiangjing Su, Yang Du and Mohan Jacob
Energies 2026, 19(2), 506; https://doi.org/10.3390/en19020506 - 20 Jan 2026
Viewed by 268
Abstract
The increasingly frequent and severe natural disasters have posed significant challenges to the resilience of power systems worldwide, creating an urgent need to investigate the security issues associated with these extreme events and to develop effective risk mitigation strategies. Meanwhile, as one of the leading topics in current research, artificial intelligence (AI) has demonstrated outstanding performance across various domains, such as AI-driven smart grids and smart cities. In particular, its efficiency in processing big data and solving complex computational problems has made AI a powerful tool for supporting decision-making in complex scenarios. This article presents a focused overview of power system resilience against natural disasters, highlighting recent advancements in AI-based approaches aimed at enhancing system security and response capabilities. It begins by introducing various types of natural disasters and their corresponding impacts on power systems. Then, a systematic overview of AI applications in power systems under disaster scenarios is provided, with a classification based on the task categories, i.e., predictive, descriptive and prescriptive tasks. Following this, this article analyzes current research trends and finds a growing shift from knowledge-based models towards data-driven models. Furthermore, this paper discusses the major challenges in this research field, including data processing, data management, and data analytics; the challenges introduced by large language models in power systems; and the limitations related to AI model interpretability and generalization capability. Finally, this article outlines several potential future research directions. Full article

22 pages, 2001 KB  
Article
A Hybrid CNN-LSTM Architecture for Seismic Event Detection Using High-Rate GNSS Velocity Time Series
by Deniz Başar and Rahmi Nurhan Çelik
Sensors 2026, 26(2), 519; https://doi.org/10.3390/s26020519 - 13 Jan 2026
Viewed by 349
Abstract
Global Navigation Satellite Systems (GNSS) have become essential tools in geomatics engineering for precise positioning, cadastral surveys, topographic mapping, and deformation monitoring. Recent advances integrate GNSS with emerging technologies such as artificial intelligence (AI), machine learning (ML), cloud computing, and unmanned aerial systems (UAS), which have greatly improved accuracy, efficiency, and analytical capabilities in managing geospatial big data. In this study, we propose a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) architecture for seismic detection using high-rate (5 Hz) GNSS velocity time series. The model is trained on a large synthetic dataset together with real high-rate GNSS non-event data. Model performance was evaluated using real event and non-event data through an event-based approach. The results demonstrate that a hybrid deep-learning architecture can provide a reliable framework for seismic detection with high-rate GNSS velocity time series. Full article
(This article belongs to the Section Navigation and Positioning)
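The paper's CNN-LSTM detector is too heavy to reproduce here; as a classical point of contrast, seismic onsets in a velocity trace are commonly flagged with an STA/LTA (short-term/long-term average) trigger. A toy version on an invented 5 Hz series (window lengths and threshold are arbitrary choices for the sketch):

```python
# Classical STA/LTA event trigger on a velocity time series; this is a
# standard baseline detector, NOT the paper's CNN-LSTM model. The trace
# below is synthetic.
def sta_lta(trace, sta_n=5, lta_n=20, threshold=3.0):
    """Return indices where the STA/LTA ratio of |v| exceeds the threshold."""
    hits = []
    for i in range(lta_n, len(trace)):
        sta = sum(abs(v) for v in trace[i - sta_n:i]) / sta_n
        lta = sum(abs(v) for v in trace[i - lta_n:i]) / lta_n
        if lta > 0 and sta / lta > threshold:
            hits.append(i)
    return hits

quiet = [0.01] * 40                      # background noise only
event = quiet[:30] + [0.5] * 10          # sudden velocity step at sample 30
print(sta_lta(quiet))                    # no triggers on background noise
print(len(sta_lta(event)) > 0)           # the step trips the trigger
```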

44 pages, 4883 KB  
Article
Mapping the Role of Artificial Intelligence and Machine Learning in Advancing Sustainable Banking
by Alina Georgiana Manta, Claudia Gherțescu, Roxana Maria Bădîrcea, Liviu Florin Manta, Jenica Popescu and Mihail Olaru
Sustainability 2026, 18(2), 618; https://doi.org/10.3390/su18020618 - 7 Jan 2026
Viewed by 521
Abstract
The convergence of artificial intelligence (AI), machine learning (ML), blockchain, and big data analytics is transforming the governance, sustainability, and resilience of modern banking ecosystems. This study provides a multivariate bibliometric analysis using Principal Component Analysis (PCA) of research indexed in Scopus and Web of Science to explore how decentralized digital infrastructures and AI-driven analytical capabilities contribute to sustainable financial development, transparent governance, and climate-resilient digital societies. Findings indicate a rapid increase in interdisciplinary work integrating Distributed Ledger Technology (DLT) with large-scale data processing, federated learning, privacy-preserving computation, and intelligent automation—tools that can enhance financial inclusion, regulatory integrity, and environmental risk management. Keyword network analyses reveal blockchain’s growing role in improving data provenance, security, and trust—key governance dimensions for sustainable and resilient financial systems—while AI/ML and big data analytics dominate research on predictive intelligence, ESG-related risk modeling, customer well-being analytics, and real-time decision support for sustainable finance. Comparative analyses show distinct emphases: Web of Science highlights decentralized architectures, consensus mechanisms, and smart contracts relevant to transparent financial governance, whereas Scopus emphasizes customer-centered analytics, natural language processing, and high-throughput data environments supporting inclusive and equitable financial services. Patterns of global collaboration demonstrate strong internationalization, with Europe, China, and the United States emerging as key hubs in shaping sustainable and digitally resilient banking infrastructures. 
By mapping intellectual, technological, and collaborative structures, this study clarifies how decentralized intelligence—enabled by the fusion of AI/ML, blockchain, and big data—supports secure, scalable, and sustainability-driven financial ecosystems. The results identify critical research pathways for strengthening financial governance, enhancing climate and social resilience, and advancing digital transformation, which contributes to more inclusive, equitable, and sustainable societies. Full article

19 pages, 857 KB  
Article
Data-Driven Insights: Leveraging Sentiment Analysis and Latent Profile Analysis for Financial Market Forecasting
by Eyal Eckhaus
Big Data Cogn. Comput. 2026, 10(1), 24; https://doi.org/10.3390/bdcc10010024 - 7 Jan 2026
Viewed by 822
Abstract
Background: This study explores an innovative integration of big data analytics techniques aimed at enhancing predictive modeling in financial markets. It investigates how combining sentiment analysis with latent profile analysis (LPA) can accurately forecast stock prices. This research aligns with big data methodologies by leveraging automated content analysis and segmentation algorithms to address real-world challenges in data-driven decision-making. This study leverages advanced computational methods to process and segment large-scale unstructured data, demonstrating scalability in data-rich environments. Methods: We compiled a corpus of 3843 financial news articles on Teva Pharmaceuticals from Bloomberg and Reuters. Sentiment scores were generated using the VADER tool, and LPA was applied to identify eight distinct sentiment profiles. These profiles were then used in segmented regression models and Structural Equation Modeling (SEM) to assess their predictive value for stock price fluctuations. Results: Six of the eight latent profiles demonstrated significantly higher predictive accuracy compared to traditional sentiment-based models. The combined profile-based regression model explained 47% of the stock price variance (R2 = 0.47), compared to 10% (R2 = 0.10) in the baseline model using sentiment analysis alone. Conclusion: This study pioneers the use of latent profile analysis (LPA) in sentiment analysis for stock price prediction, offering a novel integration of clustering and financial forecasting. By uncovering complex, non-linear links between market sentiment and stock movements, it addresses a key gap in the literature and establishes a powerful foundation for advancing sentiment-based financial models. Full article
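The study scores headlines with VADER; as a rough stand-in for how a lexicon-based scorer turns text into a compound score in [-1, 1], consider this toy version (the lexicon entries are invented; the squashing function mimics VADER-style normalization):

```python
import math
import re

# Toy stand-in for VADER-style lexicon sentiment scoring. The study used
# the actual VADER tool; this only illustrates the idea of mapping
# headline text to a compound score in [-1, 1]. Lexicon values invented.
LEXICON = {"surge": 2.0, "beat": 1.5, "gain": 1.0,
           "miss": -1.5, "plunge": -2.5, "lawsuit": -1.8}

def compound_score(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    raw = sum(LEXICON.get(t, 0.0) for t in tokens)
    # Squash the raw valence sum into [-1, 1].
    return raw / math.sqrt(raw * raw + 15.0)

print(compound_score("Teva shares surge after earnings beat") > 0)
print(compound_score("Teva shares plunge on lawsuit news") < 0)
```

In the study's pipeline, scores like these would then be clustered by latent profile analysis before entering the regression/SEM stage.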

16 pages, 1390 KB  
Review
Advancing a Hybrid Decision-Making Model in Anesthesiology: Applications of Artificial Intelligence in the Perioperative Setting
by Gilberto Duarte-Medrano, Natalia Nuño-Lámbarri, Daniele Salvatore Paternò, Luigi La Via, Simona Tutino, Guillermo Dominguez-Cherit and Massimiliano Sorbello
Healthcare 2026, 14(1), 97; https://doi.org/10.3390/healthcare14010097 - 31 Dec 2025
Viewed by 828
Abstract
Artificial intelligence (AI) is rapidly transforming anesthesiology practice across perioperative settings. This review explores the evolution and implementation of hybrid decision-making models that integrate AI capabilities with human clinical expertise. From historical foundations to current applications, we examine how machine learning algorithms, deep learning networks, and big data analytics are enhancing anesthetic care. Key applications include perioperative risk prediction, AI-assisted patient education, automated analysis of clinical records, airway management support, predictive hemodynamic monitoring, closed-loop anesthetic delivery systems, and pain management optimization. In procedural contexts, AI demonstrates promising utility in regional anesthesia through anatomical structure identification and needle navigation, monitoring anesthetic depth via EEG analysis, and improving quality control in endoscopic sedation. Educational applications include intelligent simulators for procedural training and academic productivity tools. Despite significant advances, implementation challenges persist, including algorithmic bias, data security concerns, clinical validation requirements, and ethical considerations regarding AI-generated content. The optimal integration model emphasizes a complementary approach where AI augments rather than replaces clinical judgment—combining computational efficiency with the irreplaceable contextual understanding and ethical reasoning of the anesthesiologist. This hybrid paradigm reinforces the anesthesiologist’s leadership role in perioperative care while enhancing safety, precision, and efficiency through technological innovation. As AI integration advances, continued emphasis on algorithmic transparency, rigorous clinical validation, and human oversight remains essential to ensure that these technologies enhance rather than compromise patient-centered anesthetic care. Full article
(This article belongs to the Special Issue Smart and Digital Health)

15 pages, 3785 KB  
Article
A Sustainable Manufacturing Approach: Experimental and Machine Learning-Based Surface Roughness Modelling in PMEDM
by Vaibhav Ganachari, Aleksandar Ašonja, Shailesh Shirguppikar, Ruturaj U. Kakade, Mladen Radojković, Blaža Stojanović and Aleksandar Vencl
J. Manuf. Mater. Process. 2026, 10(1), 10; https://doi.org/10.3390/jmmp10010010 - 29 Dec 2025
Cited by 1 | Viewed by 469
Abstract
The powder-mixed electric-discharge machining (PMEDM) process has been the focus of researchers for quite some time. This method overcomes the constraints of conventional machining, viz., low material removal rate (MRR) and high surface roughness (SR) in hard-cut materials, tool failure, and a high tool wear ratio (TWR). However, to determine the optimal machining parameter levels for improving MRR, surface finish must be measured during actual experimentation using various parameter levels across different materials. It is a very costly and time-consuming process for industries. However, in the age of Industry 4.0 and artificial intelligence machine learning (AI-ML), it provides an efficient solution to real manufacturing problems when big data is available. In this study, experimentation was conducted on AISI D2 steel using the PMEDM process for SR analysis with different parameters, viz. current, voltage, cycle time (TOn), powder concentration (PC), and duty factor (DF). Moreover, machine learning models were used to predict SR values for selected parameter levels in the PMEDM process. In this research, Gaussian process regression (GPR) with a squared exponential kernel, support vector machines, and ensemble regression models were used for computational analysis. The results of this work showed that Gaussian regression, support vector machine, and ensemble regression achieved 95%, 92%, and 83% accuracy, respectively. The GPR model achieved the best predictive performance among these three models. Full article
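Of the three models compared, Gaussian process regression with a squared exponential kernel is the easiest to sketch. A minimal pure-Python illustration with two invented (parameter, roughness) training pairs, not the study's data or hyperparameters:

```python
import math

# Minimal 1-D Gaussian process regression with a squared exponential
# (RBF) kernel, in the spirit of the GPR model in the abstract. Data and
# hyperparameters are invented for illustration.
def k(x1, x2, length=1.0):
    """Squared exponential kernel."""
    return math.exp(-((x1 - x2) ** 2) / (2.0 * length ** 2))

def gp_predict(xs, ys, x_star, noise=1e-6):
    """Posterior mean at x_star for exactly two training points."""
    a = k(xs[0], xs[0]) + noise
    b = k(xs[0], xs[1])
    d = k(xs[1], xs[1]) + noise
    det = a * d - b * b
    # alpha = K^{-1} y via the closed-form 2x2 inverse
    alpha0 = (d * ys[0] - b * ys[1]) / det
    alpha1 = (-b * ys[0] + a * ys[1]) / det
    return k(x_star, xs[0]) * alpha0 + k(x_star, xs[1]) * alpha1

xs, ys = [1.0, 3.0], [0.8, 1.4]   # e.g. current (A) -> Ra (um), made up
print(round(gp_predict(xs, ys, 1.0), 3))  # near 0.8 at a training point
print(round(gp_predict(xs, ys, 2.0), 3))  # interpolates between samples
```

With more training points the 2x2 inverse would be replaced by a Cholesky solve, and the kernel hyperparameters would be fitted by maximizing the marginal likelihood, as GPR toolkits do.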

16 pages, 2601 KB  
Article
Diagnostic Accuracy of an Offline CNN Framework Utilizing Multi-View Chest X-Rays for Screening 14 Co-Occurring Communicable and Non-Communicable Diseases
by Latika Giri, Pradeep Raj Regmi, Ghanshyam Gurung, Grusha Gurung, Shova Aryal, Sagar Mandal, Samyam Giri, Sahadev Chaulagain, Sandip Acharya and Muhammad Umair
Diagnostics 2026, 16(1), 66; https://doi.org/10.3390/diagnostics16010066 - 24 Dec 2025
Viewed by 749
Abstract
Background: Chest radiography is the most widely used diagnostic imaging modality globally, yet its interpretation is hindered by a critical shortage of radiologists, especially in low- and middle-income countries (LMICs). The interpretation is both time-consuming and error-prone in high-volume settings. Artificial Intelligence (AI) systems trained on public data may lack generalizability to multi-view, real-world, local images. Deep learning tools have the potential to overcome these limitations and augment radiologists with real-time decision support. Objective: We evaluated the diagnostic accuracy of a deep learning-based convolutional neural network (CNN) trained on multi-view, hybrid (public and local) datasets for detecting thoracic abnormalities in chest radiographs of adults presenting to a tertiary hospital, operating in offline mode. Methodology: A CNN was pretrained on public datasets (Vin Big, NIH) and fine-tuned on a local dataset from a Nepalese tertiary hospital, comprising frontal (PA/AP) and lateral views from emergency, ICU, and outpatient settings. The dataset was annotated by three radiologists for 14 pathologies. Data augmentation simulated poor-quality images and artifacts. Performance was evaluated on a held-out test set (N = 522) against radiologists’ consensus, measuring AUC, sensitivity, specificity, mean average precision (mAP), and reporting time. Deployment feasibility was tested via PACS integration and standalone offline mode. Results: The CNN achieved an overall AUC of 0.86 across 14 abnormalities, with 68% sensitivity, 99% specificity, and 0.93 mAP. Colored bounding boxes improved clarity when multiple pathologies co-occurred (e.g., cardiomegaly with effusion). The system performed effectively on PA, AP, and lateral views, including poor-quality ER/ICU images. Deployment testing confirmed seamless PACS integration and offline functionality. 
Conclusions: The CNN trained on adult CXRs performed reliably in detecting key thoracic findings across varied clinical settings. Its robustness to image quality, integration of multiple views and visualization capabilities suggest it could serve as a useful aid for triage and diagnosis. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
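The reported 68% sensitivity and 99% specificity are plain confusion-matrix quantities; a small worked example with invented counts (not the study's data) shows how each is computed:

```python
# Sensitivity and specificity from a confusion matrix. The counts below
# are invented to make the arithmetic obvious; they are NOT the study's.
tp, fn = 68, 32     # diseased cases: detected / missed
tn, fp = 99, 1      # healthy cases: cleared / false alarms

sensitivity = tp / (tp + fn)   # fraction of diseased cases caught
specificity = tn / (tn + fp)   # fraction of healthy cases cleared
print(sensitivity, specificity)  # 0.68 0.99
```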

8 pages, 3205 KB  
Proceeding Paper
A New Testing Procedure to Quantify and Assess Fatigue Properties of High-Performance Leaf Springs
by Efstratios Giannakis, Paschalis Adamidis, Christos Gakias and Georgios Savaidis
Eng. Proc. 2025, 119(1), 11; https://doi.org/10.3390/engproc2025119011 - 11 Dec 2025
Viewed by 408
Abstract
This paper focuses on the study of the material and component fatigue parameters of high-performance leaf springs used in the suspension systems of heavy-duty commercial trucks. Currently, there is limited information on the fatigue performance, material characteristics and surface properties of leaf spring components, as manufacturers do not disclose this data. Therefore, production engineers need to conduct extensive experimental testing throughout various phases of product development, consuming significant resources and time. The paper presents well-documented experimental procedures on a wide variety of testing samples and prototypes with a set methodology, providing valuable data such as (a) understanding of the advanced mechanical properties of new leaf-spring production lines, (b) establishment of a well-founded basis for the development of new theoretical tools, and (c) reduction of the existing development and testing effort. Full article
(This article belongs to the Proceedings of The 8th International Conference of Engineering Against Failure)
