Search Results (667)

Search Parameters:
Keywords = automated analysis platforms

32 pages, 5671 KB  
Review
Grey-Box RC Building Models for Intelligent Management of Large-Scale Energy Flexibility: From Mass Modeling to Decentralized Digital Twins
by Leonardo A. Bisogno Bernardini, Jérôme H. Kämpf, Umberto Desideri, Francesco Leccese and Giacomo Salvadori
Energies 2026, 19(1), 77; https://doi.org/10.3390/en19010077 - 23 Dec 2025
Abstract
Managing complex and large-scale building facilities requires reliable, easily interpretable, and computationally efficient models. Building on the electrical-circuit analogy, lumped-parameter resistance–capacitance (RC) thermal models have emerged as both simulation surrogates and advanced tools for energy management. This review synthesizes recent uses of RC models for building energy management in large facilities and aggregates. A systematic review of the most recent international literature, based on the analysis of 70 peer-reviewed articles, led to the classification of three main areas: (i) the physics and modeling potential of RC models; (ii) the methods for automation, calibration, and scalability; and (iii) applications in model predictive control (MPC), energy flexibility, and digital twins (DTs). The results show that these models achieve an efficient balance between accuracy and simplicity, allowing for real-time deployment in embedded control systems and building-automation platforms. In complex and large-scale settings, a growing integration with machine learning (ML) techniques, semantic frameworks, and stochastic methods within virtual environments is evident. Nonetheless, challenges persist regarding the standardization of performance metrics, input data quality, and real-scale validation. This review provides essential and up-to-date guidance for developing interoperable solutions for complex building energy systems, supporting integrated management at district, urban, and community levels. Full article
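The lumped-parameter RC analogy the review surveys can be made concrete with a minimal single-zone "1R1C" model. The sketch below is illustrative only (not code from the review); the resistance, capacitance, and time step are assumed values chosen for a plausible building zone.

```python
# Minimal single-zone 1R1C thermal model (hedged sketch, not the review's
# own code). Indoor temperature T evolves as
#   C * dT/dt = (T_out - T) / R + Q_hvac
# with R the envelope resistance [K/W] and C a lumped capacitance [J/K].

def simulate_1r1c(T0, T_out, Q_hvac, R=0.01, C=1e7, dt=60.0):
    """Explicit-Euler integration; T_out and Q_hvac are per-step sequences."""
    T = T0
    trajectory = [T]
    for To, Q in zip(T_out, Q_hvac):
        dT = ((To - T) / R + Q) / C   # heat balance divided by capacitance
        T += dT * dt
        trajectory.append(T)
    return trajectory

# Free-floating zone (no HVAC) relaxing toward a 10 degC exterior for a day:
steps = 24 * 60  # one day at 1-minute resolution
traj = simulate_1r1c(T0=20.0, T_out=[10.0] * steps, Q_hvac=[0.0] * steps)
```

Models of exactly this form, with a handful more resistances and capacitances, are what make embedded MPC tractable: the state update is a few arithmetic operations per step.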
35 pages, 11909 KB  
Review
Emerging Microfluidic Plasma Separation Technologies for Point-of-Care Diagnostics: Moving Beyond Conventional Centrifugation
by Ergun Alperay Tarim, Michael G. Mauk and Mohamed El-Tholoth
Biosensors 2026, 16(1), 14; https://doi.org/10.3390/bios16010014 - 23 Dec 2025
Abstract
Plasma separation is an essential step in blood-based diagnostics. While traditional centrifugation is effective, it is costly and usually restricted to centralized laboratories because it requires relatively expensive equipment, a supply of consumables, and trained personnel. In an effort to alleviate these shortcomings, microfluidic and point-of-care devices offering rapid and low-cost plasma separation from small sample volumes, such as finger-stick samples, are quickly emerging as an alternative. Such microscale plasma separation systems enable reduced costs, rapid test results, self-testing, and broader accessibility, particularly in resource-limited or remote settings, and facilitate the integration of separation, fluid handling, and downstream analysis into portable, automated lab-on-a-chip platforms. This review highlights advances in microfluidic systems and lab-on-a-chip devices for plasma separation, categorized by design strategies, separation principles and characteristics, and application purposes, along with future directions for the decentralization of healthcare and personalized medicine. Full article
(This article belongs to the Special Issue Advanced Microfluidic Devices and Lab-on-Chip (Bio)sensors)

22 pages, 4777 KB  
Article
Research on Automatic Recognition and Dimensional Quantification of Surface Cracks in Tunnels Based on Deep Learning
by Zhidan Liu, Xuqing Luo, Jiaqiang Yang, Zhenhua Zhang, Fan Yang and Pengyong Miao
Modelling 2026, 7(1), 4; https://doi.org/10.3390/modelling7010004 - 23 Dec 2025
Abstract
Cracks serve as a critical indicator of tunnel structural degradation. Manual inspections struggle to meet engineering requirements because they are time-consuming, labor-intensive, highly subjective, and error-prone, while traditional image processing methods perform poorly on complex backgrounds and irregular crack morphologies. To address these limitations, this study developed a high-quality dataset of tunnel crack images and proposed an improved lightweight semantic segmentation network, LiteSqueezeSeg, to enable precise crack identification and quantification. The model was systematically trained and optimized using a dataset comprising 10,000 high-resolution images. Experimental results demonstrate that the proposed model achieves an overall accuracy of 95.15% in crack detection. Validation on real-world tunnel surface images indicates that the method effectively suppresses background noise interference and enables high-precision quantification of crack length, average width, and maximum width, with all relative errors maintained within 5%. Furthermore, an integrated intelligent detection system was developed based on the MATLAB (R2023b) platform, facilitating automated crack feature extraction and standardized defect grading. This system supports routine tunnel maintenance and safety assessment, substantially enhancing both inspection efficiency and evaluation accuracy. Through synergistic innovations in lightweight network architecture, accurate quantitative analysis, and standardized assessment protocols, this research establishes a comprehensive technical framework for tunnel crack detection and structural health evaluation, offering an efficient and reliable intelligent solution for tunnel condition monitoring. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence in Modelling)
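The crack metrics the abstract quantifies (length, average width, maximum width) can be sketched from a segmentation mask with a deliberately simplified per-column proxy. This is not the paper's LiteSqueezeSeg pipeline; it assumes a roughly horizontal crack and a known `mm_per_pixel` scale, both illustrative assumptions.

```python
# Simplified crack quantification from a binary mask (hedged sketch).
# For a roughly horizontal crack, each column's pixel run approximates a
# local width sample:
#   length  ~ number of columns the crack spans
#   average width ~ crack area / length, max width ~ widest column.

def quantify_crack(mask, mm_per_pixel=1.0):
    """mask: 2-D list of 0/1 values, e.g. from a segmentation network."""
    col_widths = []
    for col in zip(*mask):          # iterate over columns
        w = sum(col)
        if w > 0:
            col_widths.append(w)
    length_px = len(col_widths)
    area_px = sum(col_widths)
    return {
        "length_mm": length_px * mm_per_pixel,
        "avg_width_mm": (area_px / length_px) * mm_per_pixel if length_px else 0.0,
        "max_width_mm": max(col_widths, default=0) * mm_per_pixel,
    }

# A 3-pixel-wide crack spanning 5 columns, imaged at 0.5 mm/pixel:
mask = [[0] * 5, [1] * 5, [1] * 5, [1] * 5, [0] * 5]
metrics = quantify_crack(mask, mm_per_pixel=0.5)
```

A production pipeline would instead skeletonize the mask and measure widths perpendicular to the crack axis, but the area/length bookkeeping is the same.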

18 pages, 679 KB  
Review
The Responsible Health AI Readiness and Maturity Index (RHAMI): Applications for a Global Narrative Review of Leading AI Use Cases in Public Health Nutrition
by Dominique J. Monlezun, Gary Marshall, Lillian Omutoko, Patience Oduor, Donald Kokonya, John Rayel, Claudia Sotomayor, Oleg Sinyavskiy, Timothy Aksamit, Keir MacKay, David Grindem, Dhairya Jarsania, Tarek Souaid, Alberto Garcia, Colleen Gallagher, Cezar Iliescu, Sagar B. Dugani, Maria Ines Girault, María Elizabeth De Los Ríos Uriarte and Nandan Anavekar
Nutrients 2026, 18(1), 38; https://doi.org/10.3390/nu18010038 - 22 Dec 2025
Abstract
Poor diet is the leading preventable risk factor for death worldwide, associated with over 10 million premature deaths and USD 8 trillion in related costs every year. Artificial intelligence (AI) is rapidly emerging as the most historically disruptive, innovatively dynamic, rapidly scaled, cost-efficient, and economically productive technology, increasingly providing transformative countermeasures to these negative health trends, especially in low- and middle-income countries (LMICs) and underserved communities, which bear the greatest burden from them. Yet widespread confusion persists among healthcare systems and policymakers on how to best identify, integrate, and evolve the safe, trusted, effective, affordable, and equitable AI solutions that are right for their communities, especially in public health nutrition. We therefore provide here the first known global, comprehensive, and actionable narrative review of the state of the art of AI-accelerated nutrition assessment and healthy eating for healthcare systems, generated by the first automated end-to-end empirical index for responsible health AI readiness and maturity: the Responsible Health AI Readiness and Maturity Index (RHAMI). The index was built, and the analysis and review conducted, by a multi-national team spanning the Global North and South, consisting of front-line clinicians, ethicists, engineers, executives, administrators, public health practitioners, and policymakers. RHAMI analysis identified the top-performing healthcare systems and their nutrition AI, along with leading use cases including multimodal edge AI nutrition assessments as ambient intelligence, the strategic scaling of practical embedded precision nutrition platforms, and sovereign swarm agentic AI social networks for sustainable healthy diets.
This index-based review is meant to facilitate standardized, continuous, automated, and real-time multi-disciplinary and multi-dimensional strategic planning, implementation, and optimization of AI capabilities and functionalities worldwide, aligned with healthcare systems’ strategic objectives, practical constraints, and local cultural values. The ultimate strategic objectives of the RHAMI’s application for AI-accelerated public health nutrition are to improve population health, financial efficiency, and societal equity through the global cooperation of the public and private sectors stretching across the Global North and South. Full article

20 pages, 3117 KB  
Article
Comprehensive Analysis of Different Subtypes of Oxylipins to Determine a LC–MS/MS Approach in Clinical Research
by Yurou Zhao, Zhengyu Fang, Zeyu Li, Yizhe Liu, Yang Bai, Xiaoqing Wang, Hongjun Yang and Na Guo
Metabolites 2026, 16(1), 4; https://doi.org/10.3390/metabo16010004 - 22 Dec 2025
Abstract
Background/Objectives: Different oxylipin subtypes have unique biological properties, requiring effective analytical protocols. However, establishing a complete pathway detection protocol for comprehensive oxylipin analysis is challenging. This study aimed to evaluate the adaptability and specificity of oxylipin subtypes under different extraction schemes and to develop a robust analytical platform for clinical biomarker investigation. Methods: We revealed the adaptability and specificity of oxylipin subtypes based on different single-step extraction schemes. A high-throughput, quantitative, automated solid-phase extraction coupled with liquid chromatography–tandem mass spectrometry (aSPE–LC–MS/MS) analytical platform was established for a broad panel of complex oxylipins. The method was applied to serum samples of patients with coronary heart disease (CHD). Results: Our results verified that oxo-oxylipins, resolvins, and eicosanoids showed the best extraction efficiency under the SPE protocol. Most hydroxy-oxylipins, dihydroxy-oxylipins, and HOTrEs are suited to the methanol protocol, HDHA to the acetonitrile protocol, and epoxy-oxylipins to the methyl tert-butyl ether protocol, while medium-chain HETEs suit the ethyl acetate protocol. Importantly, the aSPE–LC–MS/MS platform yielded a novel, sensitive, fast, and wide-coverage method with satisfactory sensitivity, accuracy and precision, extraction efficiency, low matrix effects, and linear calibration curves. Furthermore, we successfully applied this method and found that 5-HETE, 11-HETE, and 15-HETE can serve as integrated biomarkers for patients with CHD, with high diagnostic performance. Conclusions: The study provides the best protocol for the clinically targeted detection of oxylipins and an important means for studying disease biomarkers. Full article
(This article belongs to the Section Endocrinology and Clinical Metabolic Research)

17 pages, 2961 KB  
Article
SIPEREA: A Scalable Imaging Platform for Measuring Two-Dimensional Growth of Duckweed
by Sang-Kyu Jung, Somen Nandi and Karen A. McDonald
Appl. Sci. 2026, 16(1), 66; https://doi.org/10.3390/app16010066 - 20 Dec 2025
Abstract
Biomass production in organisms is closely linked to their growth rate, necessitating rapid, in situ, nondestructive, and accurate growth measurement. Existing imaging platforms are often limited by high cost, lack of scalability, wired connections, or insufficient automation, restricting their applicability for high-throughput growth monitoring. Here, we present SIPEREA, a scalable imaging platform built on cost-effective ESP32-CAM modules. SIPEREA comprises three graphical user interface (GUI) based applications: (1) an embedded program for the ESP32-CAM responsible for imaging, (2) an image acquisition program for automatic wireless image transmission from multiple ESP32-CAMs, and (3) an image analysis program that automatically segments organisms in the images using a deep neural network (DNN) and calculates their area. The implementation of asynchronous, sequential wireless image acquisition enables the efficient management of multiple ESP32-CAM modules. To demonstrate the usefulness of this platform, we analyzed images captured over a two-week period using four ESP32-CAM units during Lemna sp. (duckweed) cultivation to compute doubling time. Full article
(This article belongs to the Special Issue Advanced IoT/ICT Technologies in Smart Systems)
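The doubling-time computation the abstract ends on is a standard log-linear fit. The sketch below is not SIPEREA's code; it only illustrates the arithmetic: for exponential growth A(t) = A0·e^(kt), the doubling time is Td = ln(2)/k, with k estimated by least squares on ln A versus t.

```python
import math

# Doubling time from a frond-area time series (illustrative sketch).
# Fit ln(A) = ln(A0) + k*t by ordinary least squares, then Td = ln(2) / k.

def doubling_time(times_h, areas):
    logs = [math.log(a) for a in areas]
    n = len(times_h)
    t_mean = sum(times_h) / n
    y_mean = sum(logs) / n
    k = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, logs)) \
        / sum((t - t_mean) ** 2 for t in times_h)   # slope of the log fit
    return math.log(2) / k

# Synthetic series doubling every 48 h (areas in arbitrary pixel units):
times = [0, 24, 48, 72, 96]
areas = [100 * 2 ** (t / 48) for t in times]
td = doubling_time(times, areas)
```

Fitting all time points, rather than dividing two endpoint areas, makes the estimate robust to per-image segmentation noise.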

24 pages, 2076 KB  
Article
Construction Waste Documentation System in Poland: Current State and Prospects for Automation
by Joanna Sagan and Paula Wojtaszek
Sustainability 2026, 18(1), 77; https://doi.org/10.3390/su18010077 - 20 Dec 2025
Abstract
Efficient documentation and traceability of construction waste are essential for meeting the objectives of the European Green Deal and the Circular Economy. In Poland, the national Database on Products, Packaging, and Waste Management (BDO) serves as the central platform for recording and reporting waste flows, including those generated by the construction sector. However, its current structure imposes substantial administrative burdens, particularly on large-scale projects involving thousands of waste transports. This study examines the documentation workflow within the BDO system as applied to construction activities. Using process mapping, field studies, and interviews, the research identifies key bottlenecks and opportunities for improvement, especially through automation enabled by the integration of external applications connected to BDO via its public Application Programming Interface (API). Among nine identified systems, one was selected due to its comprehensive functionalities tailored to construction-sector needs. A study involving thirty users demonstrated that implementation of this system reduced the time required to issue a Waste Transfer Card (KPO) by 77% and fully automated entries in the Waste Records Register (KEO). As a result, the average administrative workload decreased by 87%. For a representative demolition company generating approximately 46,000 KPOs annually, the total time savings correspond to 8.2 months of full-time administrative work. This reduction translates into annual savings exceeding PLN 47,000 and yields a return on investment of over 100% within the first year. Sensitivity analysis indicates that the system’s effectiveness decreases with lower documentation volumes. The findings confirm that targeted automation and improved interface design can significantly enhance the efficiency, accuracy, and transparency of construction waste documentation. Full article

21 pages, 1745 KB  
Article
An Integrated Artificial Intelligence Tool for Predicting and Managing Project Risks
by Andreea Geamanu, Maria-Iuliana Dascalu, Ana-Maria Neagu and Raluca Ioana Guica
Mach. Learn. Knowl. Extr. 2026, 8(1), 1; https://doi.org/10.3390/make8010001 - 20 Dec 2025
Abstract
Artificial Intelligence (AI) is increasingly used to enhance project management practices, especially in risk analysis, where traditional tools often lack predictive capabilities. This study introduces an AI-based tool that supports project teams in identifying and interpreting risks through machine learning and integrated documentation features. A synthetic dataset of 5000 project instances was generated using deterministic rules across 27 input variables, enabling the training of multi-output Decision Tree and Random Forest models to predict risk type, impact, probability, and response strategy. Due to the rule-based structure of the dataset, both models achieved near-perfect classification performance, with Random Forest showing slightly better regression accuracy. These results validate the modelling pipeline but should not be interpreted as real-world predictive accuracy. The trained models were deployed within a web platform offering prediction visualization, automated PDF reporting, result storage, and access to a structured risk management plan template. Survey feedback highlights strong user interest in AI-assisted mitigation suggestions, dashboards, notifications, and mobile access. The findings demonstrate the potential of AI to improve proactive risk assessment and decision-making in project environments. Full article
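The abstract's caveat — near-perfect scores on a rule-generated dataset do not imply real-world accuracy — follows directly from how such data is built. The sketch below (variable names and thresholds hypothetical, covering 3 of the paper's 27 inputs) shows why: the labels are an exact deterministic function of the features, so any model that recovers the rules scores close to 100%.

```python
import random

# Hedged sketch of deterministic-rule dataset generation for risk models.
# Because label() is a pure function of the inputs, a tree-based learner can
# memorize it exactly; test accuracy then reflects rule recovery, not
# real-world predictive power.

def label(project):
    """Deterministic labelling rule returning (risk type, impact)."""
    if project["budget_overrun"] > 0.2 and project["team_turnover"] > 0.3:
        return ("schedule", "high")
    if project["scope_changes"] > 5:
        return ("scope", "medium")
    return ("technical", "low")

def generate(n, seed=0):
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        p = {
            "budget_overrun": rng.random(),
            "team_turnover": rng.random(),
            "scope_changes": rng.randint(0, 10),
        }
        rows.append((p, label(p)))
    return rows

data = generate(5000)   # same size as the paper's synthetic dataset
```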

21 pages, 3674 KB  
Article
scSelector: A Flexible Single-Cell Data Analysis Assistant for Biomedical Researchers
by Xiang Gao, Peiqi Wu, Jiani Yu, Xueying Zhu, Shengyao Zhang, Hongxiang Shao, Dan Lu, Xiaojing Hou and Yunqing Liu
Genes 2026, 17(1), 2; https://doi.org/10.3390/genes17010002 - 19 Dec 2025
Abstract
Background: Standard single-cell RNA sequencing (scRNA-seq) analysis workflows face significant limitations, particularly the rigidity of clustering-dependent methods that can obscure subtle cellular heterogeneity and the potential loss of biologically meaningful cells during stringent quality control (QC) filtering. This study aims to develop scSelector (v1.0), an interactive software toolkit designed to empower researchers to flexibly select and analyze cell populations directly from low-dimensional embeddings, guided by their expert biological knowledge. Methods: scSelector was developed using Python, relying on core dependencies such as Scanpy (v1.9.0), Matplotlib (v3.4.0), and NumPy (v1.20.0). It integrates an intuitive lasso selection tool with backend analytical modules for differential expression and functional enrichment analysis. Furthermore, it incorporates Large Language Model (LLM) assistance via API integration (DeepSeek/Gemini) to provide automated, contextually informed cell-type and state prediction reports. Results: Validation across multiple public datasets demonstrated that scSelector effectively resolves functional heterogeneity within broader cell types, such as identifying distinct alpha-cell subpopulations with unique remodeling capabilities in pancreatic tissue. It successfully characterized rare populations, including platelets in PBMCs and extremely low-abundance endothelial cells in liver tissue (as few as 53 cells). Additionally, scSelector revealed that cells discarded by standard QC can represent biologically functional subpopulations, and it accurately dissected the states of outlier cells, such as proliferative NK cells. Conclusions: scSelector provides a flexible, researcher-centric platform that moves beyond the constraints of automated pipelines. 
By combining interactive selection with AI-assisted interpretation, it enhances the precision of scRNA-seq analysis and facilitates the discovery of novel cell types and complex cellular behaviors. Full article
(This article belongs to the Section Bioinformatics)
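The core of a lasso selection on a 2-D embedding is a point-in-polygon test. scSelector builds on Matplotlib's interactive `LassoSelector`; the self-contained sketch below substitutes plain ray-casting so the logic runs without a GUI (coordinates and the selection polygon are made up for illustration).

```python
# Lasso-style selection of cells from a UMAP/t-SNE embedding (hedged sketch;
# not scSelector's code). inside() is the classic ray-casting test: a point
# is inside a polygon iff a horizontal ray from it crosses the boundary an
# odd number of times.

def inside(point, polygon):
    x, y = point
    n = len(polygon)
    hit = False
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
    return hit

def select_cells(embedding, lasso):
    """Return indices of embedding points enclosed by the lasso vertices."""
    return [i for i, pt in enumerate(embedding) if inside(pt, lasso)]

cells = [(0.5, 0.5), (2.0, 2.0), (0.1, 0.9)]
lasso = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # unit square
picked = select_cells(cells, lasso)
```

The selected indices then feed the downstream differential-expression and enrichment modules, exactly as a cluster assignment would.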

15 pages, 1610 KB  
Article
Machine Learning Approaches for Classifying Chess Game Outcomes: A Comparative Analysis of Player Ratings and Game Dynamics
by Kamil Samara, Aaron Antreassian, Matthew Klug and Mohammad Sakib Hasan
Electronics 2026, 15(1), 1; https://doi.org/10.3390/electronics15010001 - 19 Dec 2025
Abstract
Online chess platforms generate vast amounts of game data, presenting opportunities to analyze match outcomes using machine learning approaches. This study develops and compares four machine learning models to classify chess game results (White win, Black win, or Draw) by integrating player rating information with game dynamic metadata. We analyzed 11,510 complete games from the Lichess platform after preprocessing a dataset of 20,058 initial records. Seven key features were engineered to capture both pre-game skill parameters (player ratings, rating difference) and game complexity metrics (game duration, turn count). Four machine learning algorithms were implemented and optimized through grid search cross-validation: Multinomial Logistic Regression, Random Forest, K-Nearest Neighbors, and Histogram Gradient Boosting. The Gradient Boosting classifier achieved the highest performance with 83.19% accuracy on hold-out data and consistent 5-fold cross-validation scores (83.08% ± 0.009%). Feature importance analysis revealed that game complexity (number of turns) was the strongest correlate of the outcome across all models, followed by the rating difference between opponents. Draws represented only 5.11% of outcomes, creating class imbalance challenges that affected classification performance for this outcome category. The results demonstrate that ensemble methods, particularly gradient boosting, can effectively capture non-linear interactions between player skill and game length to classify chess outcomes. These findings have practical applications for chess platforms in automated content curation, post-game quality assessment, and engagement enhancement strategies. The study establishes a foundation for robust outcome analysis systems in online chess environments. Full article
(This article belongs to the Special Issue Machine Learning for Data Mining)
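The feature-engineering step the abstract describes — combining pre-game skill parameters with game-complexity metadata — can be sketched as below. Field names are hypothetical (the paper does not publish its schema); the point is the derived `rating_diff` and duration features that the models rank as top predictors.

```python
# Hedged sketch of chess-outcome feature engineering (field names assumed,
# not taken from the paper's dataset schema).

def engineer_features(game):
    white, black = game["white_rating"], game["black_rating"]
    return {
        "white_rating": white,
        "black_rating": black,
        "rating_diff": white - black,   # strongest pre-game predictor
        "turns": game["turns"],         # strongest overall outcome correlate
        "duration_s": game["end_ts"] - game["start_ts"],
    }

games = [
    {"white_rating": 1850, "black_rating": 1700, "turns": 61,
     "start_ts": 0, "end_ts": 900, "winner": "white"},
    {"white_rating": 1500, "black_rating": 1620, "turns": 34,
     "start_ts": 0, "end_ts": 420, "winner": "black"},
]
X = [engineer_features(g) for g in games]
y = [g["winner"] for g in games]
```

With features in this shape, a gradient-boosting classifier (as in the paper) can be trained directly on the dict-of-features rows after vectorization.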

15 pages, 2420 KB  
Article
A Pre-Trained Model Customization Framework for Accelerated PET/MR Segmentation of Abdominal Fat in Obstructive Sleep Apnea
by Valentin Fauveau, Heli Patel, Jennifer Prevot, Bolong Xu, Oren Cohen, Samira Khan, Philip M. Robson, Zahi A. Fayad, Christoph Lippert, Hayit Greenspan, Neomi Shah and Vaishnavi Kundel
Diagnostics 2025, 15(24), 3243; https://doi.org/10.3390/diagnostics15243243 - 18 Dec 2025
Abstract
Background: Accurate quantification of visceral (VAT) and subcutaneous adipose tissue (SAT) is critical for understanding the cardiometabolic consequences of obstructive sleep apnea (OSA) and other chronic diseases. This study validates a customization framework using pre-trained networks for the development of automated VAT/SAT segmentation models using hybrid positron emission tomography (PET)/magnetic resonance imaging (MRI) data from OSA patients. While the widespread adoption of deep learning models continues to accelerate the automation of repetitive tasks, establishing a customization framework is essential for developing models tailored to specific research questions. Methods: A UNet-ResNet50 model, pre-trained on RadImageNet, was iteratively trained on 59, 157, and 328 annotated scans within a closed-loop system on the Discovery Viewer platform. Model performance was evaluated against manual expert annotations in 10 independent test cases (with 80–100 MR slices per scan) using Dice similarity coefficients, segmentation time, intraclass correlation coefficients (ICC) for volumetric and metabolic agreement (VAT/SAT volume and standardized uptake values [SUVmean]), and Bland–Altman analysis to evaluate the bias. Results: The proposed deep learning pipeline substantially improved segmentation efficiency. Average annotation time per scan was 121.8 min (manual segmentation), 31.8 min (AI-assisted segmentation), and only 1.2 min (fully automated AI segmentation). Segmentation performance, assessed on 10 independent scans, demonstrated high Dice similarity coefficients for masks (0.98 for VAT and SAT), though lower for contours/boundary delineation (0.43 and 0.54). Agreement between AI-derived and manual volumetric and metabolic VAT/SAT measures was excellent, with all ICCs exceeding 0.98 for the best model and with minimal bias. 
Conclusions: This scalable and accurate pipeline enables efficient abdominal fat quantification using hybrid PET/MRI for simultaneous volumetric and metabolic fat analysis. Our framework streamlines research workflows and supports clinical studies in obesity, OSA, and cardiometabolic diseases through multi-modal imaging integration and AI-based segmentation. This facilitates the quantification of depot-specific adipose metrics that may strongly influence clinical outcomes. Full article
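The Dice similarity coefficient used to score the VAT/SAT masks has the standard definition DSC = 2|A ∩ B| / (|A| + |B|); the snippet below is that textbook formula, not code from the paper.

```python
# Dice similarity coefficient for binary segmentation masks.
# DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.

def dice(pred, truth):
    """pred, truth: flat 0/1 lists of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect

pred = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 0, 0, 0]
score = dice(pred, truth)   # 2*2 / (3+2) = 0.8
```

Note why the paper's mask Dice (0.98) can coexist with much lower contour Dice (0.43-0.54): boundary pixels are a tiny fraction of a filled mask, so small edge disagreements barely move the mask score but dominate the contour score.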

18 pages, 1544 KB  
Article
Intelligent Operational Risk Management Using the Enhanced FMEA Method and Artificial Intelligence—A Case Study
by Kinga Ratajszczak, Alexandru-Vasile Oancea, Agnieszka Misztal, Nadia Ionescu, Laurențiu Mihai Ionescu and Anna Wencek
Appl. Sci. 2025, 15(24), 13199; https://doi.org/10.3390/app152413199 - 16 Dec 2025
Abstract
The main purpose of the article is to demonstrate how large language models (LLMs) can enhance and automate the Failure Modes and Effects Analysis (FMEA) method to improve the identification, assessment, and management of operational risk in modern technological systems. The study aims to show that integrating AI into FMEA increases the efficiency, precision, and reliability of detecting potential failures and evaluating their consequences, provided that expert supervision and model transparency are maintained. The research combines a literature review with a case study using OpenAI’s model to generate an automated FMEA for a manufacturing process. The methodology defines process components, identifies potential failure modes, and evaluates their risk impact. Five specialized libraries—structure, function, failure, risk, and optimization—serve as structured inputs for the LLM. A feedback mechanism allows the system to learn from previous analyses, improving future risk assessments and supporting continuous process optimization. The developed platform enables engineers to initiate projects, input data, generate and validate AI-based FMEA reports, and export results. Overall, the study demonstrates that the integration of LLMs into FMEA can transform operational risk management, making it more intelligent, adaptive, and data-driven. Full article
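The risk evaluation the LLM pipeline automates rests on classic FMEA scoring: each failure mode receives Severity, Occurrence, and Detection ratings (conventionally 1-10) and a Risk Priority Number RPN = S × O × D. The sketch below shows that ranking step only; the failure modes and scores are invented examples, not from the case study.

```python
# Classic FMEA risk prioritisation (hedged sketch; example data invented).
# RPN = Severity * Occurrence * Detection; higher RPN = act first.

def rpn(mode):
    return mode["severity"] * mode["occurrence"] * mode["detection"]

failure_modes = [
    {"mode": "sensor drift",   "severity": 7, "occurrence": 4, "detection": 6},
    {"mode": "seal leakage",   "severity": 9, "occurrence": 2, "detection": 3},
    {"mode": "operator error", "severity": 5, "occurrence": 6, "detection": 2},
]
ranked = sorted(failure_modes, key=rpn, reverse=True)
```

In the paper's setup, the LLM proposes the failure modes and ratings from the five structured libraries, and engineers validate them; the arithmetic above is the part that stays deterministic.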

14 pages, 465 KB  
Article
Optimizing Cloudlets for Faster Feedback in LLM-Based Code-Evaluation Systems
by Daniel-Florin Dosaru, Alexandru-Corneliu Olteanu and Nicolae Țăpuș
Computers 2025, 14(12), 557; https://doi.org/10.3390/computers14120557 - 16 Dec 2025
Abstract
This paper addresses the challenge of optimizing cloudlet resource allocation in a code evaluation system. The study models the relationship between system load and response time when users submit code to an online code-evaluation platform, LambdaChecker, which operates a cloudlet-based processing pipeline. The pipeline includes code correctness checks, static analysis, and design-pattern detection using a local Large Language Model (LLM). To optimize the system, we develop a mathematical model and apply it to the LambdaChecker resource management problem. The proposed approach is evaluated using both simulations and real contest data, with a focus on improvements in average response time, resource utilization efficiency, and user satisfaction. The results indicate that adaptive scheduling and workload prediction effectively reduce waiting times without substantially increasing operational costs. Overall, the study suggests that systematic cloudlet optimization can enhance the educational value of automated code evaluation systems by improving responsiveness while preserving sustainable resource usage. Full article
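A load/response-time model of the kind the paper develops can be sketched by treating the cloudlet pool as an M/M/c queue (the paper's exact model is not reproduced here; arrival and service rates below are assumed numbers). Mean response time is the Erlang-C waiting time plus one service time.

```python
import math

# M/M/c response-time sketch for a pool of c identical cloudlets.
# lam: submission rate [jobs/s], mu: per-cloudlet service rate [jobs/s].
# Requires lam < c * mu for stability.

def mmc_response_time(lam, mu, c):
    a = lam / mu                               # offered load (Erlangs)
    rho = a / c                                # per-server utilisation
    partial = sum(a ** k / math.factorial(k) for k in range(c))
    tail = a ** c / math.factorial(c)
    p_wait = tail / ((1 - rho) * partial + tail)   # Erlang-C: P(queueing)
    return p_wait / (c * mu - lam) + 1 / mu        # mean wait + service

# Adding a cloudlet cuts mean response time at fixed load:
t2 = mmc_response_time(lam=8.0, mu=5.0, c=2)
t3 = mmc_response_time(lam=8.0, mu=5.0, c=3)
```

This is the trade-off the adaptive scheduler optimizes: provisioning just enough cloudlets to keep `p_wait` low during contest peaks without paying for idle capacity afterwards.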

29 pages, 1464 KB  
Article
Digital Transformation: Design and Implementation of a Blockchain Platform for Decentralized and Transparent Property Asset Transfer Using NFTs
by Dan Alexandru Mitrea, Constantin Viorel Marian and Rareş Alexandru Manolescu
World 2025, 6(4), 166; https://doi.org/10.3390/world6040166 - 15 Dec 2025
Abstract
In many jurisdictions, property registration and transfers remain constrained by inefficient, paper-based processes that depend on multiple intermediaries and bureaucratic approvals. This paper proposes a decentralized, blockchain-based property platform designed to streamline these processes using Non-Fungible Tokens (NFTs) and artificial intelligence (AI) agents to modernize public-sector asset management. The work addresses the persistent inefficiencies of paper-based property registration and ownership transfer by embedding legal and administrative logic within smart contracts and automating compliance through an intelligent conversational interface. The system was implemented using Ethereum-based ERC-721 standards, React for the user interface, and Langfuse-powered AI integration for guided user interaction. The pilot implementation presents secure, transparent, and auditable property-transfer transactions executed entirely on-chain, while hybrid IPFS-based storage and decentralized identifiers preserve privacy and legal validity. Comparative analysis against existing national initiatives indicates that the proposed architecture delivers decentralization, citizen control, and interoperability without compromising regulatory requirements. The system reduces bureaucratic overhead, simplifies transaction workflows, and lowers user error risk, thereby strengthening accountability and public trust. Overall, the paper outlines a viable foundation for legally aligned, AI-assisted digital property registries and offers a policy-oriented roadmap for integrating blockchain-enabled systems into public-sector governance infrastructures.
(This article belongs to the Special Issue Data-Driven Strategic Approaches to Public Management)

32 pages, 7383 KB  
Article
Vertebra Segmentation and Cobb Angle Calculation Platform for Scoliosis Diagnosis Using Deep Learning: SpineCheck
by İrfan Harun İlkhan, Halûk Gümüşkaya and Firdevs Turgut
Informatics 2025, 12(4), 140; https://doi.org/10.3390/informatics12040140 - 11 Dec 2025
Abstract
This study presents SpineCheck, a fully integrated deep-learning-based clinical decision support platform for automatic vertebra segmentation and Cobb angle (CA) measurement from scoliosis X-ray images. The system unifies end-to-end preprocessing, U-Net-based segmentation, geometry-driven angle computation, and a web-based clinical interface within a single deployable architecture. For secure clinical use, SpineCheck adopts a stateless “process-and-delete” design, ensuring that no radiographic data or Protected Health Information (PHI) are permanently stored. Five U-Net family models (U-Net, optimized U-Net-2, Attention U-Net, nnU-Net, and UNet3++) are systematically evaluated under identical conditions using Dice similarity, inference speed, GPU memory usage, and deployment stability, enabling deployment-oriented model selection. A robust CA estimation pipeline is developed by combining minimum-area rectangle analysis with Theil–Sen regression and spline-based anatomical modeling to suppress outliers and improve numerical stability. The system is validated on a large-scale dataset of 20,000 scoliosis X-ray images, demonstrating strong agreement with expert measurements based on Mean Absolute Error, Pearson correlation, and Intraclass Correlation Coefficient metrics. These findings confirm the reliability and clinical robustness of SpineCheck. By integrating large-scale validation, robust geometric modeling, secure stateless processing, and real-time deployment capabilities, SpineCheck provides a scalable and clinically reliable framework for automated scoliosis assessment.
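The core geometric idea behind Cobb angle estimation is to measure each vertebra's tilt and take the largest tilt difference between any two vertebrae in the curve. The sketch below illustrates that step with a Theil–Sen fit (the robust regressor the abstract names) on toy endplate points; it omits the paper's segmentation, minimum-area-rectangle, and spline stages, and all data are synthetic.

```python
import numpy as np
from scipy.stats import theilslopes

def vertebra_tilt_deg(xs, ys):
    """Robust tilt of one endplate: Theil-Sen slope of its contour points,
    converted to degrees (median-of-slopes, so resistant to outlier points)."""
    slope, _, _, _ = theilslopes(ys, xs)
    return np.degrees(np.arctan(slope))

def cobb_angle(endplates):
    """Cobb angle = largest tilt difference between any two vertebrae.
    endplates: list of (xs, ys) point arrays, one per vertebral endplate."""
    tilts = [vertebra_tilt_deg(xs, ys) for xs, ys in endplates]
    return max(tilts) - min(tilts)

# Synthetic example: three endplates tilted at about -10, 0, and +15 degrees,
# with small noise added to the points (roughly 25 degrees expected)
x = np.linspace(0.0, 40.0, 9)
rng = np.random.default_rng(0)
plates = [(x, np.tan(np.radians(a)) * x + rng.normal(0.0, 0.1, x.size))
          for a in (-10.0, 0.0, 15.0)]
print(round(cobb_angle(plates), 1))
```

Using a median-based estimator here matters because a few mis-segmented contour points can otherwise swing a least-squares slope, and the Cobb angle amplifies any single-vertebra tilt error.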
