Search Results (202)

Search Parameters:
Keywords = expected model augmentation

21 pages, 761 KB  
Review
Personalized Breast Reconstruction After Breast-Conserving Therapy: Risk-Informed Approaches to Technique Selection and Timing
by Thomas J. Sorenson, Carter J. Boyd, Rebecca Lisk and Nolan S. Karp
J. Pers. Med. 2026, 16(4), 197; https://doi.org/10.3390/jpm16040197 - 1 Apr 2026
Viewed by 264
Abstract
Breast-conserving therapy (BCT), consisting of lumpectomy followed by adjuvant radiation, provides oncologic outcomes equivalent to mastectomy for many patients with breast cancer. As survivorship increases, the demand for aesthetic restoration after BCT has grown; however, reconstructive strategies in this setting remain less standardized than those following mastectomy. Reconstruction after BCT presents distinct challenges due to partial tissue loss, nonuniform radiation injury, progressive fibrosis, and wide variability in patient expectations and tolerance for revision surgery. Consequently, mastectomy-based reconstructive algorithms are often insufficient for guiding care in this population. This review synthesizes contemporary reconstructive options following BCT through a personalized medicine framework, emphasizing patient-specific risk factors that influence technique selection, timing, and long-term outcomes. Key determinants include radiation exposure, breast morphology, comorbid conditions, prior breast surgery, and psychosocial preferences. Oncoplastic volume displacement, implant-based augmentation, fat grafting, and autologous reconstruction each demonstrate distinct risk profiles in the post-BCT tissue environment and require individualized application. Timing of reconstruction and willingness to undergo staged procedures play a central role in outcome durability and patient satisfaction. Across reconstructive strategies, revision burden emerges as a clinically meaningful, patient-centered outcome that is not adequately captured by traditional short-term complication metrics. A risk-informed approach that integrates individualized risk assessment with transparent counseling and shared decision-making may improve alignment between reconstructive planning and patient goals. Personalized reconstruction after BCT requires moving beyond technique-driven paradigms toward flexible, longitudinal care pathways. Future efforts should focus on developing BCT-specific predictive models and incorporating patient-reported outcomes to advance personalized reconstructive care. Full article
(This article belongs to the Section Personalized Therapy in Clinical Medicine)

20 pages, 502 KB  
Article
Design and Evaluation of a Retrieval-Augmented Generation LLM Chatbot with Structured Database Access
by Juan Burbano, Pablo Landeta-López, Cathy Guevara-Vega and Antonio Quiña-Mera
Appl. Sci. 2026, 16(7), 3147; https://doi.org/10.3390/app16073147 - 25 Mar 2026
Viewed by 508
Abstract
Context. The grocery sector is undergoing a massive shift in consumer behavior, with global chatbot usage projected to reach 8.4 billion units by 2024—surpassing the total human population—and online grocery revenue per shopper expected to hit USD 449.00 by 2023. In this competitive landscape, small grocery stores must adopt AI-driven tools to modernize their operations. However, these businesses often face significant inefficiencies in manual inventory management, resulting in errors and reduced competitiveness. Objective. This research aims to develop and validate a chatbot application using Large Language Models and Retrieval-Augmented Generation (RAG) for operational management of grocery stores. Method. The method employed a quantitative experimental approach with a five-component system architecture: a web interface, a FastAPI API, a Mistral-7B-Instruct-v0.2 model, a dynamic SQL generator, and a custom RAG application with an FAISS vector database, all integrated through SQLAlchemy 2.0.40. Results. The results demonstrate that a chatbot achieves an average response time of 0.08 s with 80% overall accuracy, showing a 96.2% improvement in information query time and a 92.9% reduction in operational errors. Conclusions. Major conclusions suggest that the chatbot system is effective for retail environments and has the potential to enhance the operational efficiency of grocery stores, serving as a foundation for future research in applied conversational assistance. Full article
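
The retrieval step at the heart of this architecture can be illustrated with a short sketch: index inventory rows in a FAISS vector store, retrieve the closest matches for a user question, and assemble a grounded prompt for the LLM. The embed() stub, sample rows, and prompt wording below are assumptions for illustration, not the authors' implementation (a real deployment would use a sentence encoder and the Mistral-7B-Instruct-v0.2 model named above).

# Minimal RAG sketch (assumed details): FAISS index over toy inventory rows,
# top-k retrieval, and a grounded prompt that would be sent to the LLM.
import hashlib
import numpy as np
import faiss

def embed(text, dim=64):
    # stand-in embedding; a real system would call a sentence encoder
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(dim).astype("float32")
    return v / np.linalg.norm(v)

inventory = ["rice 5 kg, stock 40", "olive oil 1 L, stock 3", "flour 2 kg, stock 0"]
index = faiss.IndexFlatIP(64)                     # inner product on unit vectors
index.add(np.stack([embed(r) for r in inventory]))

query = "which items are almost out of stock?"
_, ids = index.search(embed(query)[None, :], 2)   # top-2 nearest rows
context = "\n".join(inventory[i] for i in ids[0])
print(f"Answer using only this inventory data:\n{context}\n\nQ: {query}")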

24 pages, 1959 KB  
Article
LLM-Augmented Algorithmic Management: A Governance-Oriented Architecture for Explainable Organizational Decision Systems
by Nikolay Hinov and Maria Ivanova
AI 2026, 7(3), 102; https://doi.org/10.3390/ai7030102 - 10 Mar 2026
Viewed by 779
Abstract
Algorithmic management systems increasingly coordinate work, allocate resources, and support decisions in corporate, public sector, and research environments. Yet many such systems remain opaque: they optimize and score effectively but struggle to communicate rationales that are contextual, auditable, and defensible under emerging governance expectations. Large language models (LLMs) can help bridge this gap by translating quantitative signals into human-readable explanations and enabling interactive clarification. However, LLM integration also introduces new risks—hallucinated rationales, bias amplification, prompt-based security failures, and automation dependence—that must be governed rather than merely engineered. This article proposes a governance-oriented architecture for LLM-augmented algorithmic management. The model combines three elements: an algorithmic decision core; an LLM-based cognitive interface for explanation and dialogue; and a verification and governance layer that enforces policy constraints, provenance, audit trails, and human-in-command oversight. The framework is developed through targeted conceptual synthesis and normative alignment with key governance instruments (e.g., the EU AI Act, GDPR, and ISO/IEC 42001). It is illustrated through cross-domain scenarios and complemented by a demonstrative synthetic-trace simulation that highlights transparency–latency trade-offs under verification controls. Using the demonstrative simulation (n = 120 decision events), the framework illustrates a mean baseline latency of 100.3 ms and a mean LLM-augmented latency of 115.8 ms (≈15.5% increase), a mean explanation validity proxy of 85.6%, and a simulated constraint-satisfaction rate of 94.2% (113/120 events), with failed cases routed to review. These values are presented as design-level indicators of operational plausibility and governance trade-offs, not empirical performance benchmarks or state-of-the-art comparisons. The paper contributes a conceptual and governance-oriented architectural blueprint for integrating generative AI into organizational decision systems without sacrificing accountability, compliance, or operational reliability. Full article
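
As a rough illustration of how the three layers interact, the sketch below pushes a scored decision from a mock decision core through a mock LLM explainer and a verification gate that routes failures to human review. Every name, threshold, and check here is an assumption made for exposition; it is not the authors' framework.

# Toy three-layer flow (assumed): decision core -> LLM explanation ->
# verification and governance layer with human-in-command routing.
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    score: float           # output of the algorithmic decision core
    explanation: str = ""  # filled in by the LLM cognitive interface

def llm_explain(d: Decision) -> str:
    # mock LLM: blindly claims the policy threshold is exceeded, so the
    # governance layer must catch the case where that claim is false
    return f"{d.subject} was prioritized: score {d.score:.2f} exceeds 0.70."

def verify(d: Decision) -> bool:
    # the explanation must quote the true score (a crude hallucination
    # check) and the decision must actually satisfy the policy constraint
    return f"{d.score:.2f}" in d.explanation and d.score > 0.70

for d in [Decision("Request A", 0.91), Decision("Request B", 0.55)]:
    d.explanation = llm_explain(d)
    print(d.subject, "->", "released" if verify(d) else "routed to human review")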

30 pages, 1502 KB  
Article
Forecasting the Development of Renewable Energy Sources in Poland in the Context of Energy Policy of the European Union
by Piotr Bórawski, Rafał Wyszomierski, Aneta Bełdycka-Bórawska, Mariola Grzybowska-Brzezińska and Rafał Warżała
Energies 2026, 19(5), 1340; https://doi.org/10.3390/en19051340 - 6 Mar 2026
Viewed by 355
Abstract
Renewable energy sources (RES) will be the main source of energy in the future. The main goal of this study was to analyze and elaborate a prognosis for the development of renewable energy sources in Poland. Specific objectives included: evaluation of the prognosis developed as part of Poland’s energy policy (PEP), development of our own forecast of the share of renewable energy sources, and comparison of the forecast developed for Poland’s energy policy with our own forecast. We also formulated the hypothesis that the prognosis for the development of renewable energy sources for Poland prepared by PEP, and our own prognosis based on Autoregressive Integrated Moving Average (ARIMA) models, are both promising and confirm the development of the renewable energy sector in the future. We used the Augmented Dickey–Fuller (ADF) test as well as ARIMA models. Moreover, we compared our own RES prognosis with prognoses proposed by the European Union. Cumulative capital expenditures from 2021 to 2040, including financing costs, will amount to PLN 300 billion, of which PLN 195 billion will go towards renewable energy sources alone. Photovoltaics (PV) will account for the largest share of energy production, reaching 16 GW of achievable capacity, followed by onshore wind farms with 9.7 GW. Solid biomass accounts for the largest share of renewable energy consumption in heating and cooling, although its share is gradually decreasing from 98.6% in 2005 to a projected 75% in 2040. Heat pumps, which had no share in 2005, are expected to increase their share to a projected 11.8% in 2040. Solar technology has also increased from 0% in 2005 to a projected 5.6% in 2040. The share of renewable energy in this energy sector is increasing from 22.1% in 2020 to 31.8% in 2030 and 39.7% in 2040. The prognosis elaborated by PEP for 2025–2040 is very optimistic, and the prognosis elaborated based on ARIMA models is also promising. Both prognoses predict the development of RES in the future and the transformation of the energy sector in Poland. Full article
(This article belongs to the Special Issue Energy Policies and Sustainable Development)
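
The econometric workflow (an ADF stationarity test followed by ARIMA estimation and a multi-year forecast) can be sketched in a few lines of statsmodels; the annual series and the ARIMA order below are invented for illustration and are not the Polish data or the orders selected in the study.

# ADF test on first differences, then an ARIMA fit and a five-step forecast.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller

# hypothetical RES share (%) of energy consumption, one value per year
share = pd.Series(
    [8.7, 9.2, 10.9, 11.7, 12.1, 13.0, 13.9, 15.4, 16.1, 15.8,
     16.9, 18.2, 19.1, 20.3, 21.0, 22.1],
    index=pd.period_range("2009", periods=16, freq="Y"),
)

adf_stat, p_value, *_ = adfuller(share.diff().dropna(), maxlag=2)
print(f"ADF on first differences: stat={adf_stat:.2f}, p={p_value:.3f}")

model = ARIMA(share, order=(1, 1, 0)).fit()  # order chosen only for the sketch
print(model.forecast(steps=5))               # projection five years ahead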

25 pages, 25575 KB  
Article
Sea Ice Classification Enhancement Using Calibration-Focused Loss Functions
by Nima Ahmadian, Matthew Hamilton and Weimin Huang
Remote Sens. 2026, 18(5), 810; https://doi.org/10.3390/rs18050810 - 6 Mar 2026
Viewed by 259
Abstract
Deep learning has become a key approach for automated sea ice mapping in the AI4Arctic Sea Ice Challenge dataset, yet most studies focus on accuracy metrics and rarely evaluate whether predicted probabilities are reliable for operational use. This paper investigates calibration-aware training for multi-task sea ice segmentation of sea ice concentration (SIC), stage of development (SOD), and floe size (FLOE) using the U-Net model. We train the network with cross-entropy (CE) and augment the objective with focal loss, Brier loss, and an entropy-regularization term to reduce overconfidence and improve calibration. Experiments follow a scene-level Monte Carlo cross-validation protocol on the ready-to-train AI4Arctic Sea Ice Challenge (AI4Arctic) dataset and are evaluated using R2 for SIC, F1 for SOD and FLOE, a weighted combined score, expected calibration error (ECE), and reliability diagrams. Results show that calibration-aware loss functions improve test performance relative to the CE loss, and the full objective (CE + Brier + focal + entropy) achieves the highest combined score of 84.73% and reduces FLOE ECE to 0.044. Qualitative comparisons further indicate cleaner spatial structures and fewer scattered errors, particularly for FLOE. Overall, the proposed loss design improves both segmentation quality and confidence reliability, supporting more trustworthy sea ice products for decision-making. Full article
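
A minimal PyTorch rendering of such a combined objective is sketched below; the loss weights and focusing parameter are placeholders, since the abstract does not state the paper's exact weighting, and per-pixel segmentation logits would simply be flattened into the class dimension shown here.

# Sketch of a CE + Brier + focal + entropy-regularized objective (assumed weights).
import torch
import torch.nn.functional as F

def calibration_aware_loss(logits, target, gamma=2.0, w_brier=1.0,
                           w_focal=1.0, w_ent=0.1):
    ce = F.cross_entropy(logits, target)
    prob = logits.softmax(dim=1)
    onehot = F.one_hot(target, logits.shape[1]).float()
    brier = ((prob - onehot) ** 2).sum(dim=1).mean()      # Brier score
    pt = (prob * onehot).sum(dim=1)                       # prob of true class
    focal = ((1 - pt) ** gamma * -torch.log(pt + 1e-8)).mean()
    entropy = -(prob * torch.log(prob + 1e-8)).sum(dim=1).mean()
    # subtracting entropy penalizes overconfident (low-entropy) predictions
    return ce + w_brier * brier + w_focal * focal - w_ent * entropy

torch.manual_seed(0)
logits = torch.randn(8, 6)            # e.g., 6 discretized SIC classes
target = torch.randint(0, 6, (8,))
print(calibration_aware_loss(logits, target))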

30 pages, 6071 KB  
Article
Artificial Intelligence and Blockchain as Enablers of Resilient and Sustainable Multimodal Transport Chains: Evidence from a Multi-Actor Qualitative Study
by Badr Machkour, Naoufal Rouky, Ahmed Abriane and Othmane Benmoussa
Sustainability 2026, 18(5), 2381; https://doi.org/10.3390/su18052381 - 1 Mar 2026
Viewed by 449
Abstract
This research analyses how the joint integration of artificial intelligence and blockchain can contribute to the resilience and sustainability of multimodal transport chains. We adopt an interpretivist and constructivist stance in order to understand the modalities of appropriation, negotiation, and deployment of AI–blockchain mechanisms at the port–rail–road interfaces. The data come from 29 semi-structured interviews conducted with four categories of actors involved in multimodal corridors: digital-solution start-ups, transport–logistics SMEs, industrial shippers, and infrastructure managers. The thematic analysis, conducted through an abductive approach, highlights that the expected effects of AI and blockchain do not manifest directly on sustainability, but mainly pass through four mediating organizational mechanisms. First, shared logistics visibility appears as the decisive entry point. Second, inter-organizational coordination, supported by augmented governance mechanisms, conditions the translation of visibility into joint decisions. Third, distributed trust is built around shared evidence. Fourth, transactional automation unfolds gradually, with an ambivalence between efficiency gains and risks of rigidity in crisis situations. These mechanisms jointly fuel resilience as well as sustainability. The study proposes an integrated conceptual model and opens the way to a confirmatory phase by suggesting avenues for operationalizing the constructs. Full article
(This article belongs to the Section Sustainable Transportation)

17 pages, 1189 KB  
Article
Prediction of Reverse Osmosis Membrane Fouling Using Machine Learning: MLR, ANN, and SVM at a Seawater Desalination Plant
by Siham Kherraf, Fatima-Zahra Abahdou, Maria Benbouzid, Zakaria Izouaouen, Abdellatif Aarfane, Abdoullatif Baraket, Hamid Nasrellah, Meryem Bensemlali, Soumia Ziti, Najoua Labjar and Souad El Hajjaji
Eng 2026, 7(3), 106; https://doi.org/10.3390/eng7030106 - 28 Feb 2026
Viewed by 582
Abstract
Membrane fouling remains a major obstacle to the performance of the reverse osmosis (RO) desalination processes. Artificial intelligence (AI) is now a promising approach for the reliable modeling of these complex systems. This study evaluates three modeling techniques—multiple linear regression (MLR), artificial neural networks (ANNs), and support vector regression (SVR)—for predicting transmembrane pressure (TMP) at the Boujdour desalination plant, based on five input parameters: temperature, turbidity, pH, conductivity, and feedflow. The analysis is based on an original dataset of 195 daily measurements, and due to the absence of timestamps, the study focuses on state-to-TMP prediction rather than chronological forecasting, with no temporal generalization claimed. Approximately 2000 augmented training samples generated using a conservative SMOGN approach were used for model development, while performance evaluation relied exclusively on 39 independent real test observations. Two modeling strategies were adopted: (i) a minimalist approach based on significant variables identified by an ordinary least squares (OLS) model (pH and conductivity), and (ii) a multivariate approach integrating all parameters to capture non-linear interactions. A rigorous validation framework was put in place to avoid information leakage and ensure the robustness and generalizability of the models. Performance was evaluated using R2, RMSE, and MAE metrics, supplemented by robustness and significance analyses including bootstrap confidence intervals, paired statistical comparisons, and interpretability analyses based on permutation importance, partial dependence plots (PDPs), and individual conditional expectation (ICE) curves. The results indicate that the SVR model achieves the best average predictive accuracy among the tested models, albeit with moderate explanatory power. Full article
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications, 2nd Edition)
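
The multivariate strategy reduces to a standardized SVR over the five inputs; the synthetic data, hyperparameters, and split in the sketch below are assumptions that only mirror the 195-measurement/39-test-row setup described above.

# Standardized SVR predicting TMP from five inputs (synthetic stand-in data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# columns: temperature, turbidity, pH, conductivity, feed flow
X = rng.normal([20, 1.5, 7.8, 55000, 600], [3, 0.5, 0.2, 2000, 40], (195, 5))
tmp = 6 + 0.3 * X[:, 1] + (X[:, 3] - 55000) / 10000 + rng.normal(0, 0.2, 195)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X[:156], tmp[:156])          # train split; last 39 rows held out
print("R^2 on 39 held-out rows:", round(model.score(X[156:], tmp[156:]), 3))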

18 pages, 1352 KB  
Protocol
Codesigning a Nurse-Led, Large Language Model-Empowered Agent to Increase Hepatitis B Screening and Vaccination for Inclusion Health Populations: A Research Protocol
by Caixia Li, Wei Xia, Zheng Zhu, Marques Shek Nam Ng and Xia Fu
Nurs. Rep. 2026, 16(2), 74; https://doi.org/10.3390/nursrep16020074 - 19 Feb 2026
Viewed by 642
Abstract
Background/Objectives: We aim to codesign and test a nurse-led, large language model-empowered agent to increase hepatitis B screening and vaccination for inclusion health populations. Methods: This study employs a double diamond model-guided codesign methodology. It includes four phases: (i) Discover: To identify intervention targets, a systematic review was undertaken that synthesized 51 factors influencing hepatitis B screening and vaccination among inclusion health populations. A qualitative study will later be conducted to further elucidate specific cultural barriers in the Chinese context. (ii) Define: To delineate effective intervention designs, two systematic reviews were performed, informing the integration of nurse-led intervention components (e.g., counseling, case management, and care coordination) and adaptation of a large language model to address identified intervention targets. (iii) Develop: To codesign an agent, hepatitis B prevention datasets will be constructed with subsequent model adaptations through fine-tuning and retrieval-augmented generation, as well as collaborations among diverse stakeholders. It will facilitate human–agent interactive consultation, intelligent case management, and care coordination, as well as collaborate with a nurse-led multidisciplinary team to manage hepatitis B screening, vaccination, and care linkage. (iv) Deliver: To evaluate and refine the agent, a mixed-methodology will be adopted, encompassing quantitative evaluation of model response, as well as qualitative evaluation of user experience, technical barriers, and potential benefits. Discussion: This intervention is expected to improve hepatitis B screening and vaccination rates among inclusion health populations, thereby enhancing diagnosis, immunity, and care linkage. It will establish a codesign framework for nursing-specific large language models, broadening the impact of nurses on preventive health equity. Full article

28 pages, 1177 KB  
Article
Context-Aware Code Review Automation: A Retrieval-Augmented Approach
by Büşra İçöz and Göksel Biricik
Appl. Sci. 2026, 16(4), 1875; https://doi.org/10.3390/app16041875 - 13 Feb 2026
Viewed by 922
Abstract
Manual code review is essential for software quality, but often slows down development cycles due to the high time demands on developers. In this study, we propose an automated solution for Python (version 3.13) projects that generates code review comments by combining Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG). To achieve this, we first curated a dataset from GitHub pull requests (PRs) using the GitHub REST Application Programming Interface (API) (version 2022-11-28) and classified comments into semantic categories using a semi-supervised Support Vector Machine (SVM) model. During the review process, our system uses a vector database to retrieve the top-k most relevant historical comments, providing context for a diverse spectrum of open-weights LLMs, including DeepSeek-Coder-33B, Qwen2.5-Coder-32B, Codestral-22B, CodeLlama-13B, Mistral-Instruct-7B, and Phi-3-Mini. We evaluated the system using a multi-step validation that combined standard metrics (BLEU-4, ROUGE-L, cosine similarity) with an LLM-as-a-Judge approach, and verified the results through targeted human review to ensure consistency with expert standards. The findings show that retrieval augmentation improves feedback relevance for larger models, with DeepSeek-Coder’s alignment score increasing by 17.9% at a retrieval depth of k = 3. In contrast, smaller models such as Phi-3-Mini suffered from context collapse, where too much context reduced accuracy. To manage this trade-off, we built a hybrid expert system that routes each task to the most suitable model. Our results indicate that the proposed approach improved performance by 13.2% compared to the zero-shot baseline (k = 0). In addition, our proposed system reduces hallucinations and generates comments that closely align with the standards expected from the experts. Full article
(This article belongs to the Special Issue Artificial Intelligence in Software Engineering)
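
The routing rule implied by the context-collapse finding can be sketched directly: large coder models receive the top-k retrieved historical comments as context, while compact models run zero-shot. The sample comments, retrieval stub, and routing heuristic below are illustrative assumptions, not the paper's expert system.

# Toy retrieval and model-routing sketch for RAG-based code review.
HISTORICAL = [
    "catch specific exceptions instead of bare except",
    "missing None check before attribute access",
    "extract this loop into a helper function",
]

def retrieve(diff: str, k: int) -> list[str]:
    # stand-in for a vector-database similarity search over past PR comments
    return HISTORICAL[:k]

def retrieval_depth(model_name: str) -> int:
    # large models benefit from k = 3; small ones collapse, so use k = 0
    return 3 if any(s in model_name for s in ("33B", "32B", "22B")) else 0

for model in ["DeepSeek-Coder-33B", "Phi-3-Mini"]:
    k = retrieval_depth(model)
    context = retrieve("def f(x): return x.y", k)
    print(model, "-> k =", k, "| context:", context or "zero-shot")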

27 pages, 536 KB  
Article
Efficient EM Estimation for the Pogit Model via Polya-Gamma Augmentation
by Iván Gutiérrez, Sandra Ramírez and Leonardo Jofré
Entropy 2026, 28(2), 207; https://doi.org/10.3390/e28020207 - 11 Feb 2026
Viewed by 442
Abstract
The Poisson-logistic (pogit) model is widely used for count data with latent intensities, with applications including under-reporting correction and share-of-wallet estimation, yet existing estimation methods do not scale well to large datasets. We propose a new expectation-maximization (EM) algorithm for the standard pogit model based on Polya-Gamma data augmentation, which yields a conditionally Gaussian complete-data likelihood with closed-form EM-updates. The resulting EM algorithm has low per-iteration cost and naturally accommodates computational enhancements, including quasi-Newton acceleration and mini-batch implementations. These features enable efficient inference on datasets with millions of observations. Simulation studies and real-data applications demonstrate substantial computational improvements without loss of statistical accuracy, and comparisons with direct maximum-likelihood optimization routines show that the proposed method provides a scalable and competitive alternative for large-scale pogit estimation. Full article
(This article belongs to the Special Issue Statistical Inference: Theory and Methods)
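
For readers unfamiliar with the construction, a generic statement of the pogit model and of the Polya-Gamma identity that makes the E-step tractable is sketched below; the notation follows the standard Polson-Scott-Windle formulation rather than the paper's own.

% Pogit model (generic notation):
\[
  Y_i \mid \lambda_i, p_i \sim \mathrm{Poisson}(\lambda_i p_i), \qquad
  \log \lambda_i = \mathbf{x}_i^\top \boldsymbol{\beta}, \qquad
  \operatorname{logit} p_i = \mathbf{z}_i^\top \boldsymbol{\gamma}.
\]
% Polya-Gamma identity, with \kappa = a - b/2:
\[
  \frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}}
  = 2^{-b} e^{\kappa \psi} \int_0^{\infty} e^{-\omega \psi^{2}/2}
    \, f_{\mathrm{PG}}(\omega \mid b, 0) \, d\omega .
\]

With \(\psi_i = \mathbf{z}_i^\top \boldsymbol{\gamma}\), conditioning on the latent \(\omega_i\) makes the complete-data log-likelihood quadratic in \(\boldsymbol{\gamma}\), so the M-step reduces to a weighted least-squares update, which is the closed-form structure the abstract refers to.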

31 pages, 706 KB  
Article
Applying Action Research to Developing a GPT-Based Assistant for Construction Cost Code Verification in State-Funded Projects in Vietnam
by Quan T. Nguyen, Thuy-Binh Pham, Hai Phong Bui and Po-Han Chen
Buildings 2026, 16(3), 499; https://doi.org/10.3390/buildings16030499 - 26 Jan 2026
Viewed by 407
Abstract
Cost code verification in state-funded construction projects remains a labor-intensive and error-prone task, particularly given the structural heterogeneity of project estimates and the prevalence of malformed codes, inconsistent units of measurement (UoMs), and locally modified price components. This study evaluates a deterministic GPT-based assistant designed to automate regulatory cost verification in Vietnam. The assistant was developed and iteratively refined across four Action Research cycles. The system also enforces strict rule sequencing and dataset grounding via Python-governed computations. Rather than relying on probabilistic or semantic reasoning, the system performs strictly deterministic checks on code validity, UoM alignment, and unit price conformity for material (MTR), labor (LBR), and machinery (MCR) against the provincial unit price books (UPBs). Deterministic equality is evaluated either on raw numerical values or on values transformed through explicitly declared, rule-governed operations, preserving auditability without introducing tolerance-based or inferential reasoning. A dedicated exact-match mechanism, which is activated only when a code is invalid, enables the recovery of typographical errors only when a project item’s full price vector exactly matches a normative entry. Using twenty real construction estimates (16,100 rows) and twelve controlled error-injection cases, the study demonstrates that the assistant executes verification steps with high reliability across diverse spreadsheet structures, avoiding ambiguity and maintaining full auditability. Deterministic extraction and normalization routines facilitate robust handling of displaced headers, merged cells, and non-standard labeling, while structured reporting provides line-by-line traceability aligned with professional verification workflows. Practitioner feedback confirms that the system reduces manual tracing effort, improves evaluation consistency, and supports documentation compliance while preserving human judgment. This research contributes a framework for large language model (LLM)-orchestrated verification, demonstrating how Action Research can align AI tools with domain expectations. Furthermore, it establishes a methodology for deploying LLMs in safety-critical and regulation-driven environments. Limitations—including narrow diagnostic scope, unlisted quotation exclusion, single-province UPB compliance, and sensitivity to extreme spreadsheet irregularities—define directions for future deterministic extensions. Overall, the findings illustrate how tightly constrained LLM configurations can augment, rather than replace, professional cost verification practices in public-sector construction. Full article
(This article belongs to the Special Issue Knowledge Management in the Building and Construction Industry)
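
The deterministic check sequence (code validity, UoM alignment, then strict price equality against the UPB) can be sketched as follows; the code format, field names, and sample values are invented for illustration and do not reflect the actual Vietnamese cost code catalogs or the assistant's implementation.

# Hypothetical deterministic verification of one estimate row against a UPB.
UPB = {"AB.11213": {"uom": "m3", "MTR": 125000.0, "LBR": 48000.0, "MCR": 9500.0}}

def verify_row(row: dict) -> list[str]:
    norm = UPB.get(row["code"])
    if norm is None:
        # an exact-match recovery over the full price vector would run here
        return ["invalid code"]
    findings = []
    if row["uom"] != norm["uom"]:
        findings.append(f"UoM mismatch: {row['uom']} vs {norm['uom']}")
    for part in ("MTR", "LBR", "MCR"):
        if row[part] != norm[part]:       # strict equality, no tolerance
            findings.append(f"{part} price deviates from UPB")
    return findings or ["conforms"]

row = {"code": "AB.11213", "uom": "m3", "MTR": 125000.0, "LBR": 50000.0, "MCR": 9500.0}
print(verify_row(row))                    # -> ['LBR price deviates from UPB']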

19 pages, 7451 KB  
Article
PPE-EYE: A Deep Learning Approach to Personal Protective Equipment Compliance Detection
by Atta Rahman, Mohammed Salih Ahmed, Khaled Naif AlBugami, Abdullah Yousef Alabbad, Abdullah Abdulaziz AlFantoukh, Yousef Hassan Alshaikhahmed, Ziyad Saleh Alzahrani, Mohammad Aftab Alam Khan, Mustafa Youldash and Saeed Matar Alshahrani
Computers 2026, 15(1), 45; https://doi.org/10.3390/computers15010045 - 11 Jan 2026
Cited by 2 | Viewed by 1391
Abstract
Safety on construction sites is an essential yet challenging issue due to the inherently hazardous nature of these sites. Workers are expected to wear Personal Protective Equipment (PPE), such as helmets, vests, and safety glasses, to prevent or minimize their exposure to injuries. However, ensuring compliance remains difficult, particularly on large or complex sites, where manual inspection is time-consuming and usually error-prone. This research proposes an automated PPE detection system utilizing the deep learning model YOLO11, trained on the CHVG dataset, to identify in real time whether workers are adequately equipped with the necessary gear. The proposed PPE-EYE method, using YOLO11x, achieved a mAP50 of 96.9% and an inference time of 7.3 ms, which is sufficient for real-time PPE detection systems, in contrast to previous approaches on the same dataset, which required 170 ms. The model achieved these results by employing data augmentation and fine-tuning. The proposed solution provides continuous monitoring with reduced human oversight and ensures timely alerts when non-compliance is detected, allowing the site manager to act promptly. It further enhances the effectiveness and reliability of safety inspections and overall site safety, reduces accidents, and ensures consistent follow-through of safety procedures, creating a safer and more productive working environment for all involved in construction activities. Full article
(This article belongs to the Section AI-Driven Innovations)
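
A minimal sketch of this kind of training and inference loop with the Ultralytics YOLO API is shown below; the dataset config path, image file, and hyperparameters are placeholders rather than the PPE-EYE settings, and the CHVG dataset layout is not reproduced.

# Train a YOLO11x detector and run single-image inference (hedged sketch).
from ultralytics import YOLO

model = YOLO("yolo11x.pt")       # pretrained YOLO11x weights
model.train(
    data="chvg.yaml",            # placeholder dataset config
    epochs=100,
    imgsz=640,                   # Ultralytics applies default augmentations
)

results = model("site_camera_frame.jpg")   # placeholder image path
for box in results[0].boxes:               # print detected PPE classes
    print(results[0].names[int(box.cls)], float(box.conf))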

20 pages, 4726 KB  
Article
Enhancing SeeGround with Relational Depth Text for 3D Visual Grounding
by Hyun-Sik Jeon, Seong-Hui Kang and Jong-Eun Ha
Appl. Sci. 2026, 16(2), 652; https://doi.org/10.3390/app16020652 - 8 Jan 2026
Viewed by 529
Abstract
Three-dimensional visual grounding is a core technology that identifies specific objects within complex 3D scenes based on natural language instructions, enhancing human–machine interactions in robotics and augmented reality domains. Traditional approaches have focused on supervised learning, which relies on annotated data; however, zero-shot methodologies are emerging due to the high costs of data construction and limitations in generalization. SeeGround achieves state-of-the-art performance by integrating 2D rendered images and spatial text descriptions. Nevertheless, SeeGround exhibits vulnerabilities in clearly discerning relative depth relationships owing to its implicit depth representations in 2D views. This study proposes the relational depth text (RDT) technique to overcome these limitations, utilizing a Monocular Depth Estimation model to extract depth maps from rendered 2D images and applying the K-Nearest Neighbors algorithm to convert inter-object relative depth relations into natural language descriptions, thereby incorporating them into Vision–Language Model (VLM) prompts. This method distinguishes itself by augmenting spatial reasoning capabilities while preserving SeeGround’s existing pipeline, demonstrating a 3.54% improvement in the Acc@0.25 metric on the Nr3D dataset in a 7B VLM environment that is approximately 10.3 times lighter than the original model, along with a 6.74% increase in Unique cases on the ScanRefer dataset, albeit with a 1.70% decline in Multiple cases. The proposed technique enhances the robustness of grounding through viewpoint anchoring and candidate discrimination in complex query scenarios, and is expected to improve efficiency in practical applications through future multi-view fusion and conditional execution optimizations. Full article
(This article belongs to the Special Issue Advances in Computer Graphics and 3D Technologies)
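
The text-generation idea behind RDT can be conveyed with a toy sketch: given per-object depths (which the pipeline would take from a monocular depth map), emit nearest-neighbor depth relations as sentences for the VLM prompt. The object names, depths, and sentence template below are invented.

# Toy relational depth text: nearest neighbors by depth -> prompt sentences.
objects = {"chair": 1.8, "table": 2.1, "lamp": 3.4}   # median depth in meters

def depth_relations(objs: dict, k: int = 1) -> list[str]:
    lines = []
    for name in objs:
        # k-nearest neighbors of this object in depth
        nearest = sorted((abs(objs[o] - objs[name]), o)
                         for o in objs if o != name)[:k]
        for _, other in nearest:
            rel = ("closer to the camera than" if objs[name] < objs[other]
                   else "farther from the camera than")
            lines.append(f"The {name} is {rel} the {other}.")
    return lines

print("\n".join(depth_relations(objects)))  # text appended to the VLM prompt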

29 pages, 2200 KB  
Article
Statistical Analysis and Forecasting of the Number of Students, Teachers and Graduates in Romania’s Pre-University Education System
by Liviu Popescu, Vlad Ducu, Laurențiu-Stelian Mihai, Magdalena Mihai, Daniel Militaru and Valeri Sitnikov
Educ. Sci. 2026, 16(1), 73; https://doi.org/10.3390/educsci16010073 - 5 Jan 2026
Viewed by 667
Abstract
This study examines the evolution and main trends in the number of students, teaching staff and graduates in Romania’s pre-university education system over the period 1990–2024 (and 1990–2023 for graduates), employing ARIMA models to generate forecasts up to the year 2027. The research is grounded in the premise of profound structural transformations within the Romanian educational system, driven by demographic decline, external migration, recurrent reforms, and shifts in resource allocation. The descriptive analysis highlights a pronounced downward trend for all three variables (students, teaching staff and graduates), reflecting the continuous reduction in the school-age population and the restructuring of the educational network. The statistical tests employed, such as Shapiro–Wilk, Augmented Dickey–Fuller, Durbin–Watson, Breusch–Godfrey and ARCH, validate the selected optimal ARIMA models: ARIMA(1,1,1) for teaching staff, ARIMA(4,1,3) for students, and ARIMA(3,1,5) for graduates. The forecasting results indicate that this declining trend is expected to persist through 2027: the number of teaching staff is estimated to decrease to approximately 178,700 individuals, the number of students is estimated to decrease to around 2.78 million, and the number of graduates is projected to fall until 2026, followed by a potential slight stabilization in 2027. The Spearman correlation analysis indicates strong associations among all variables, suggesting that their dynamics are predominantly shaped by demographic and migratory factors. Granger causality analysis shows that changes in birth rates lead to rapid adjustments in teaching staff within 2–3 years. No significant short-term causality is found for the number of students or graduates, though demographic effects appear after 5–6 years for students, indicating long-term impacts on the school population. This study underscores the importance of econometric methods in informing educational policy, particularly in the context of the marked contraction of the school-age population. It also highlights the need for strategic planning regarding human resources in education, per-student funding, the reorganization of the school network, and curriculum adaptation. Full article
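
The Granger-causality step can be reproduced in outline with statsmodels on synthetic series; the lag structure and values below are invented solely to show the mechanics of the test, not to mimic the Romanian data.

# Does the birth-rate series help predict teaching-staff counts? (sketch)
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 35
births = 250 - 3 * np.arange(n) + rng.normal(0, 4, n)
# staff tracks births with a lag of about three years, plus noise
staff = np.r_[[240.0, 238.0, 236.0], 0.9 * births[:-3]] + rng.normal(0, 3, n)

# difference away the trend, then test whether the second column (births)
# Granger-causes the first (staff) at lags 1 to 3
data = pd.DataFrame({"staff": staff, "births": births}).diff().dropna()
grangercausalitytests(data[["staff", "births"]].values, maxlag=3)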

35 pages, 4409 KB  
Article
Hybrid Object-Based Augmentation and Histogram Matching for Cross-Domain Building Segmentation in Remote Sensing
by Chulsoo Ye and Youngman Ahn
Appl. Sci. 2026, 16(1), 543; https://doi.org/10.3390/app16010543 - 5 Jan 2026
Viewed by 538
Abstract
Cross-domain building segmentation in high-resolution remote sensing imagery underpins urban change monitoring, disaster assessment, and exposure mapping. However, differences in sensors, regions, and imaging conditions create structural and radiometric domain gaps that degrade model generalization. Most existing methods adopt model-centric domain adaptation with additional networks or losses, complicating training and deployment. We propose a data-centric framework, Hybrid Object-Based Augmentation and Histogram Matching (Hybrid OBA–HM), which improves cross-domain building segmentation without modifying the backbone architecture or using target-domain labels. The proposed framework comprises two stages: (i) object-based augmentation to increase structural diversity and building coverage, and (ii) histogram-based normalization to mitigate radiometric discrepancies across domains. Experiments on OpenEarthMap and cross-city transfer among three KOMPSAT-3A scenes show that Hybrid OBA–HM improves F1-scores from 0.808 to 0.840 and from 0.455 to 0.652, respectively, while maintaining an object-level intersection over union of 0.89 for replaced buildings. Domain-indicator analysis further reveals larger gains under stronger radiometric and geometric mismatches, indicating that the proposed framework strengthens cross-domain generalization and provides practical guidance by relating simple domain diagnostics (e.g., brightness/color and orientation mismatch indicators) to the expected benefits of augmentation and normalization when adapting to new domains. Full article
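
The histogram-matching stage can be sketched with scikit-image's match_histograms; the synthetic arrays below stand in for a target-domain scene and a source-domain reference, since the KOMPSAT-3A imagery itself is not reproduced here.

# Per-channel histogram matching of a target scene to a source reference.
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(0)
source = rng.normal(120, 25, (256, 256, 3)).clip(0, 255).astype(np.uint8)
target = rng.normal(80, 40, (256, 256, 3)).clip(0, 255).astype(np.uint8)

matched = match_histograms(target, source, channel_axis=-1)
# matched now has source-like radiometry while keeping the target's structure
print(source.mean(), target.mean(), matched.mean())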