Search Results (3,121)

Search Parameters:
Keywords = manual work

26 pages, 3829 KB  
Article
Time–Frequency and Spectral Analysis of Welding Arc Sound for Automated SMAW Quality Classification
by Alejandro García Rodríguez, Christian Camilo Barriga Castellanos, Jair Eduardo Rocha-Gonzalez and Everardo Bárcenas
Sensors 2026, 26(8), 2357; https://doi.org/10.3390/s26082357 - 11 Apr 2026
Abstract
This study investigates the feasibility of acoustic signal analysis for the assessment of weld bead quality in the shielded metal arc welding (SMAW) process. The work focuses on comparing time-domain acoustic signals and time–frequency spectrogram representations for the classification of welds as accepted or rejected according to standard welding inspection criteria. Two key acoustic descriptors, the fundamental frequency (F0) and the harmonics-to-noise ratio (HNR), were extracted and analyzed to evaluate statistical differences between the two weld quality classes. Statistical tests, including Anderson–Darling, Levene, ANOVA, and Kruskal–Wallis (α = 0.05), revealed significant differences between accepted and rejected welds. Accepted welds exhibited a bimodal HNR distribution associated with transient arc instability at the beginning and end of the bead, whereas rejected welds showed more uniform acoustic behavior throughout the process. Subsequently, the acoustic data were represented using both audio signals and spectrograms and used as inputs for ten supervised machine learning models, including Support Vector Classifier (SVC), Logistic Regression (LR), k-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), Extra Trees (ET), Gradient Boosting (GB), and Naïve Bayes (NB). The results demonstrate that spectrogram-based representations significantly outperform time-domain signals, achieving accuracies of 0.95–0.96, ROC-AUC values above 0.95, and false positive and false negative rates below 6%. These findings indicate that, while scalar acoustic descriptors provide statistically significant insight into weld quality, time–frequency representations combined with machine learning enable a more robust and reliable framework for automated non-destructive evaluation, particularly in manual SMAW processes under realistic operating conditions. Full article
(This article belongs to the Section Sensor Materials)
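As a rough illustration of the pipeline this abstract describes, the sketch below builds spectrograms of two synthetic arc-sound signals, derives a simplified HNR-style descriptor per frame, and applies the Kruskal–Wallis test at α = 0.05. The signals, the 220 Hz fundamental, the sampling rate, and the descriptor definition are all illustrative assumptions, not the study's data or feature extraction.

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(0)
fs = 44_100  # sampling rate in Hz (assumed; not given in the abstract)
t = np.arange(fs) / fs

# Synthetic stand-ins for the two weld classes: a harmonic-rich "accepted"
# arc sound and a noise-dominated "rejected" one (illustrative only).
accepted = np.sin(2 * np.pi * 220 * t) + 0.1 * rng.standard_normal(fs)
rejected = 0.3 * np.sin(2 * np.pi * 220 * t) + rng.standard_normal(fs)

def hnr_proxy(x):
    """Per-frame harmonics-to-noise proxy from a spectrogram, in dB."""
    f, _, S = signal.spectrogram(x, fs=fs, nperseg=1024)
    k = np.argmin(np.abs(f - 220))  # bin nearest the assumed fundamental F0
    return 10 * np.log10(S[k] / (S.sum(axis=0) - S[k] + 1e-12))

h_acc, h_rej = hnr_proxy(accepted), hnr_proxy(rejected)

# Kruskal-Wallis test at alpha = 0.05, one of the tests named in the abstract.
H, p = stats.kruskal(h_acc, h_rej)
print(f"H = {H:.1f}, p = {p:.2g}")
```

With cleanly separated classes like these, the test rejects the null comfortably; the study's real signals are far noisier.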

28 pages, 15639 KB  
Article
An Automated AI-Based Vision Inspection System for Bee Mite and Deformed Bee Detection Using YOLO Models
by Jeong-Yong Shin, Hong-Gu Lee, Su-bae Kim and Changyeun Mo
Agriculture 2026, 16(8), 840; https://doi.org/10.3390/agriculture16080840 - 10 Apr 2026
Abstract
Varroa destructor (bee mite) and Deformed Wing Virus are primary causes of honeybee colony collapse. This study developed an automated AI-based vision inspection system for detecting bee mites and deformed bees using the YOLO algorithm. The system integrates an RGB camera, a beecomb rotation motor, and an image transmission module to enable automated dual-sided image acquisition of the beecomb. The image characteristics of normal bees, bee mites, and deformed bees were analyzed, and YOLO-based object detection models were developed to classify them. Six YOLO models—based on YOLOv8 and YOLOv11 architectures across three model sizes (nano, small, and large)—were evaluated on 405 test images (6441 objects). The proposed system reduced the inspection time from the 240 s required by the manual method to 20 s per beecomb, a 12-fold efficiency improvement. Comparative analysis showed model-task specialization: YOLOv8l excelled in detecting small bee mites (F1: 92.5%, mAP[0.5]: 92.1%), while YOLOv11s achieved the highest performance for morphologically diverse deformed bees (F1: 95.1%). Error analysis indicated that detection performance was influenced by morphological characteristics. Deformed bee detection errors correlated with overlap in wing-to-body ratio: DB Type II exhibited an 18.6% miss rate, while DB Type III achieved perfect detection. In bee mite detection, a sensitivity–specificity trade-off was observed: YOLOv11l had the lowest false negative rate (2.5%) but the highest false positive rate, while YOLOv8l demonstrated superior discrimination. These results demonstrate the practical potential of the proposed system for field deployment in apiaries, supporting early pest diagnosis and improved colony health management. The model-task specialization framework provides guidance for architecture selection based on object characteristics. Future work will focus on multi-location validation and real-time monitoring integration. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
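The detection metrics quoted above (F1, miss rate, false positives) rest on IoU matching between predicted and ground-truth boxes. A minimal sketch, with made-up boxes and the common 0.5 IoU threshold (both assumptions, not the paper's evaluation code):

```python
# Boxes are (x1, y1, x2, y2) corner tuples.
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def f1_score(preds, truths, thr=0.5):
    """Greedy one-to-one matching at IoU >= thr, then F1 from P and R."""
    matched, tp = set(), 0
    for p in preds:
        for i, t in enumerate(truths):
            if i not in matched and iou(p, t) >= thr:
                matched.add(i); tp += 1; break
    fp, fn = len(preds) - tp, len(truths) - tp
    prec = tp / (tp + fp) if preds else 0.0
    rec = tp / (tp + fn) if truths else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

truths = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]  # one good match, one false positive
print(f1_score(preds, truths))  # tp=1, fp=1, fn=1 -> F1 = 0.5
```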
20 pages, 3204 KB  
Article
Eye-Tracking for Human Performance Assessment in Industry 5.0 Research
by Dana Hamarsheh, Caden Edwards and Mary Fendley
Theor. Appl. Ergon. 2026, 2(2), 5; https://doi.org/10.3390/tae2020005 - 10 Apr 2026
Abstract
In the Industry 5.0 era, manufacturing facilities with manual assembly face higher expectations, greater mass customization, and more human involvement, while also incorporating new digital technologies into smart workstations. Given these expectations, the cognitive load of manual assembly workers is increasing. Cognitive assessment systems are being added to manufacturing facilities to work in parallel with physical and sensory assistance systems to establish better work conditions for workers and better overall system performance. This paper presents an exploratory study using eye-tracking as an assessment system to identify potential locations of increased cognitive workload and errors to better understand where and how to employ assistance for workers to improve the manual assembly and inspection process. The results of this study indicate that the highest workload occurs with measuring and inspection tasks, and most errors occur during the assembly of parts, where their geometry impacts placement. It also demonstrates the feasibility of eye-tracking as a low-cost, integral part of the human–computer system in the assembly environment. Full article

32 pages, 7135 KB  
Article
Evolutionary Multi-Objective Prompt Learning for Synthetic Text Data Generation with Black-Box Large Language Models
by Diego Pastrián, Nicolás Hidalgo, Víctor Reyes and Erika Rosas
Appl. Sci. 2026, 16(8), 3623; https://doi.org/10.3390/app16083623 - 8 Apr 2026
Abstract
High-quality training data are essential for the performance and generalization of artificial intelligence systems, particularly in dynamic environments such as adaptive stream processing for disaster response. However, constructing large and representative datasets remains costly and time-consuming, especially in domains where real data are scarce or difficult to obtain. Large Language Models (LLMs) provide powerful capabilities for synthetic text generation, yet the quality of generated data strongly depends on the design of input prompts. Prompt engineering is therefore critical, but it remains largely manual and difficult to scale, particularly in black-box settings where model internals are inaccessible. This work introduces EVOLMD-MO, a multi-objective evolutionary framework for automated prompt learning aimed at generating high-quality synthetic text datasets using black-box LLMs. The proposed approach formulates prompt optimization as a multi-objective search problem in which candidate prompts evolve through genetic operators guided by two complementary objectives: semantic fidelity to reference data and generative diversity of the produced samples. To support scalable optimization, the framework integrates a modular multi-agent architecture that decouples prompt evolution, LLM interaction, and evaluation mechanisms. The evolutionary process is implemented using the NSGA-II algorithm, enabling the discovery of diverse Pareto-optimal prompts that balance semantic preservation and diversity. Experimental evaluation using large-scale disaster-related social media data demonstrates that the proposed approach consistently improves prompt quality across generations while maintaining a stable trade-off between fidelity and diversity. Compared with a single-objective baseline, EVOLMD-MO explores a significantly broader semantic search space and produces more diverse yet semantically coherent synthetic datasets. These results indicate that multi-objective evolutionary prompt learning constitutes a promising strategy for black-box LLM-driven data generation, with potential applicability to adaptive data analytics and real-time decision-support systems in highly dynamic environments, pending broader validation across domains and models. Full article
(This article belongs to the Special Issue Resource Management for AI-Centric Computing Systems)
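The Pareto selection at the core of NSGA-II keeps candidate prompts that are not dominated on the two objectives (fidelity, diversity, both maximised). A minimal sketch with fabricated scores, not the paper's actual implementation:

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated members of the population."""
    return [p for p in population
            if not any(dominates(q["scores"], p["scores"])
                       for q in population if q is not p)]

# Hypothetical candidate prompts with (fidelity, diversity) scores.
prompts = [
    {"prompt": "rewrite tersely",   "scores": (0.90, 0.40)},
    {"prompt": "paraphrase freely", "scores": (0.60, 0.80)},
    {"prompt": "copy verbatim",     "scores": (0.85, 0.35)},  # dominated by the first
]
front = pareto_front(prompts)
print([p["prompt"] for p in front])  # ['rewrite tersely', 'paraphrase freely']
```

Full NSGA-II adds non-dominated sorting into ranked fronts plus crowding-distance selection; this shows only the dominance criterion that defines the Pareto-optimal prompts.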

30 pages, 1924 KB  
Article
TinyML for Sustainable Edge Intelligence: Practical Optimization Under Extreme Resource Constraints
by Mohamed Echchidmi and Anas Bouayad
Technologies 2026, 14(4), 215; https://doi.org/10.3390/technologies14040215 - 7 Apr 2026
Abstract
Deep learning has emerged as an effective tool for automatic waste classification, supporting cleaner cities and more sustainable recycling systems. Because environmental protection is central to the United Nations Sustainable Development Goals (SDGs), improving the sorting and processing of everyday waste is a practical step toward this broader objective. In many real-world settings, however, waste is still sorted manually, which is slow, labor-intensive, and prone to human error. Although convolutional neural networks (CNNs) can automate this task with high accuracy, many state-of-the-art models remain too large and computationally demanding for low-cost edge devices intended for deployment in homes, schools, and small recycling facilities. In this work, we investigate lightweight waste-classification models suitable for TinyML deployment while preserving competitive accuracy. We first benchmark multiple CNN architectures to establish a strong baseline, then apply complementary compression strategies including quantization, pruning, singular value decomposition (SVD) low-rank approximation, and knowledge distillation. In addition, we evaluate an RL-guided multi-teacher selection benchmark that adaptively chooses one teacher per minibatch during distillation to improve student training stability, achieving up to 85% accuracy with only 0.496 M parameters (FP32 ≈ 1.89 MB; INT8 ≈ 0.47 MB). Across all experiments, the best accuracy–size trade-off is obtained by combining knowledge distillation with post-training quantization, reducing the model footprint from approximately 16 MB to 281 KB while maintaining 82% accuracy. The resulting model is feasible for deployment on mobile applications and resource-constrained embedded devices based on model size and TensorFlow Lite Micro compatibility. Full article
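The roughly 4x FP32 to INT8 shrink underlying the model-size figures above comes from post-training quantization. A per-tensor symmetric-quantization sketch on placeholder weights (the scheme and the random tensor are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(42)
w = rng.standard_normal((256, 128)).astype(np.float32)  # stand-in weight tensor

scale = np.abs(w).max() / 127.0                    # one scale per tensor
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale               # dequantize to check error

shrink = w.nbytes / q.nbytes                       # 4.0: float32 -> int8
err = np.abs(w - w_hat).max()                      # bounded by scale / 2
print(f"size ratio {shrink:.0f}x, max abs error {err:.4f}")
```

Real toolchains (e.g., TensorFlow Lite) add per-channel scales, zero points, and calibration data, but the storage saving follows the same arithmetic.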

14 pages, 2118 KB  
Article
AI Method for Classification of Diagnosis of Near-Infrared Breast Lesion Images
by Kaiquan Chen, Fangyang Shen, Honggang Wang, Zhengchao Dong, Jizhong Xiao, Ming Ma, Afroza Aktar, Christopher Chow and Wenxiong Zhang
AI 2026, 7(4), 133; https://doi.org/10.3390/ai7040133 - 7 Apr 2026
Abstract
In near-infrared optical breast lesion screening and diagnosis systems, high-speed four-dimensional scanners can dynamically acquire tens of thousands of lesion images within a five-minute period. Currently, manual computer annotation is required to generate standard samples from these scanned breast lesion images, a process that depends heavily on physicians with clinical expertise. On average, a single physician can annotate only approximately ten samples per working day. As a result, this process is time-consuming and labor-intensive, and the collected samples often suffer from low accuracy, large variability, and limited diagnostic reliability. Several AI-based annotation tools, such as QuPath, HALO AI™, and X-AnyLabeling, have been developed to assist this process. However, these tools are primarily manual or semi-automated and are unable to provide rapid and high-precision recognition. To address these limitations, this study proposes a new AI-based method for the rapid, accurate, and fully automated detection and diagnosis of breast lesions. The proposed approach complements existing AI-based annotation and diagnostic methods by enabling automated detection and classification of breast lesion samples. The proposed system employs a deep learning–based classification framework to construct a professional-level AI diagnostic model. The system automatically generates diagnostic outputs based on the annotation criteria used by professional physicians, including positive/negative classification and accuracy metrics. Compared with conventional manual diagnostic methods, the proposed approach provides faster and more reliable diagnostic estimates for new patients. These results demonstrate the potential of the proposed AI-based method to advance automated breast lesion screening and diagnosis and to contribute to future research and clinical applications in this field. Full article
(This article belongs to the Section AI Systems: Theory and Applications)

13 pages, 2293 KB  
Article
Operating Table Height Optimization Reduces Surgeon Postural Load During Total Knee Arthroplasty: An Ergonomic Simulation Study
by Marina Sánchez-Robles, Carmelo Marín-Martínez, Vicente J. León-Muñoz, Joaquín Moya-Angeler and Francisco Lajara-Marco
J. Clin. Med. 2026, 15(7), 2782; https://doi.org/10.3390/jcm15072782 - 7 Apr 2026
Abstract
Background: Work-related musculoskeletal disorders (WMSDs) are prevalent among orthopaedic surgeons as a result of prolonged exposure to non-neutral postures and forceful manual tasks during surgery. Although working height is a key determinant of trunk and upper-limb posture, the systematic evaluation of ergonomic working-height recommendations in orthopaedic surgery remains limited. Methods: A simulated left total knee arthroplasty (TKA) was divided into twelve critical surgical steps and analysed across four commonly used surgeon positions (A–D). Two conditions were compared: uncorrected working height (N) and working height corrected according to Canadian Centre for Occupational Health and Safety (CCOHS) recommendations (C). Joint angles were measured from standardized photographs using Kinovea software, and postural load was quantified with the Rapid Entire Body Assessment (REBA) method. Two trained evaluators conducted three independent assessments, yielding 288 REBA scores. Results: Mean REBA scores decreased across all surgeon positions following ergonomic correction, with statistically significant reductions observed in positions A, B, and D. When pooled across all position–step combinations (n = 48), the mean reduction was 0.92 REBA points (95% CI 0.50–1.33; p < 0.001). Notably, 27 of the 48 position–step comparisons exceeded the minimal detectable change threshold. The largest reductions occurred during force-intensive surgical steps, including bone cutting, drilling, and implant impaction. Conclusions: Adjusting working height in accordance with CCOHS ergonomic recommendations reduces surgeons’ postural load during TKA. These findings support the integration of evidence-based ergonomic adjustments into routine orthopaedic surgical practice. Full article
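A mean reduction with a 95% confidence interval, like the 0.92 REBA points [0.50–1.33] reported here, is computed from paired before/after scores as sketched below. All values are fabricated for illustration; they are not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 48  # position-step combinations, as in the abstract

# Hypothetical paired REBA scores: uncorrected height vs CCOHS-corrected.
uncorrected = rng.uniform(4, 9, size=n)
corrected = uncorrected - rng.normal(0.9, 1.0, size=n)

d = uncorrected - corrected                      # per-pair reduction
mean = d.mean()
half = stats.t.ppf(0.975, n - 1) * d.std(ddof=1) / np.sqrt(n)
t_stat, p = stats.ttest_rel(uncorrected, corrected)

print(f"mean reduction {mean:.2f}, "
      f"95% CI [{mean - half:.2f}, {mean + half:.2f}], p = {p:.2g}")
```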

20 pages, 11231 KB  
Article
YOLO-Based Shading Artifact Reduction for CBCT-to-MDCT Translation Using Two-Stage Learning
by Yangheon Lee and Hyun-Cheol Park
Mathematics 2026, 14(7), 1223; https://doi.org/10.3390/math14071223 - 6 Apr 2026
Abstract
Cone-beam computed tomography (CBCT) offers advantages of low radiation dose and rapid acquisition but suffers from scatter-induced shading artifacts that limit diagnostic value compared to multi-detector CT (MDCT). While CycleGAN enables unpaired image translation, its uniform loss application struggles with localized artifact removal. We propose a two-stage learning framework with YOLO-based region correction loss. Stage 1 trains a standard CycleGAN to establish stable CBCT-MDCT domain mapping. Stage 2 fine-tunes the model by applying gradient magnitude minimization loss selectively to artifact regions detected by a pretrained YOLO detector, enabling focused correction while preserving anatomical structures. Using 11,000 2D CBCT slices from 17 patients (14 training, 3 testing) and 23,500 2D MDCT slices from 50 patients, our method achieves a 14.0% reduction in artifact score compared to baseline CycleGAN while maintaining high structural similarity (SSIM > 0.96). Independent evaluation using integral nonuniformity (INU) and shading index (SI) confirms consistent improvement across physics-based metrics. The self-regulating mechanism, where YOLO detection confidence naturally decreases as artifacts diminish, provides automatic adjustment without manual intervention. This work demonstrates that combining staged learning with object detection offers an effective solution for localized artifact removal in medical image translation, potentially improving diagnostic accuracy while preserving the low-dose benefits of CBCT. Full article
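The Stage-2 idea of penalizing gradient magnitude only inside detector-flagged regions can be sketched in numpy terms; the image, the artifact, and the detection box below are synthetic placeholders, not CBCT data or the paper's loss code:

```python
import numpy as np

img = np.zeros((64, 64), dtype=np.float32)
img[20:40, 20:40] = 1.0          # sharp-edged stand-in for a shading artifact

mask = np.zeros_like(img, dtype=bool)
mask[16:44, 16:44] = True        # YOLO-style detected artifact region

# Gradient magnitude of the image; large only at the artifact's edges.
gy, gx = np.gradient(img)
grad_mag = np.sqrt(gx**2 + gy**2)

loss_region = grad_mag[mask].mean()      # penalised during fine-tuning
loss_elsewhere = grad_mag[~mask].mean()  # anatomy outside the box: untouched
print(loss_region > loss_elsewhere)      # the loss concentrates on the artifact
```

Restricting the penalty to the mask is what lets the generator smooth the artifact without being pushed to blur structures elsewhere.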

32 pages, 6103 KB  
Article
An Optimal Deep Hybrid Framework with Selective Kernel U-Net for Skin Lesion Detection and Classification
by Guzal Gulmirzaeva, Robert Hudec, Baxtiyorjon Akbaraliev and Batirbek Samandarov
Bioengineering 2026, 13(4), 427; https://doi.org/10.3390/bioengineering13040427 - 6 Apr 2026
Abstract
Early and accurate detection of skin cancer is critical for reducing mortality rates, particularly for malignant melanoma. Automated analysis of dermoscopic images has gained significant attention due to its potential to support clinical diagnosis and overcome the limitations of manual inspection. Motivated by challenges such as image noise, low contrast, lesion variability, and redundant feature representation, this study proposes an optimal deep hybrid framework for skin lesion detection and classification. The objective of this work is to design a robust and efficient system that integrates advanced preprocessing, precise segmentation, optimal feature selection, and accurate classification. Initially, contrast enhancement using Contrast Limited Adaptive Histogram Equalization (CLAHE) and noise reduction using Wiener filtering are applied to improve image quality. Lesion regions are then segmented using a Selective Kernel U-Net (SK-UNet), which adaptively captures multi-scale spatial information. Subsequently, discriminative color, texture, and shape features are extracted and optimized using the Fossa Optimization Algorithm (FOA) to eliminate redundancy. A hybrid one-dimensional Convolutional Neural Network–Gated Recurrent Unit (1D-CNN–GRU) classifier is employed for final classification, learning both spatial and sequential feature patterns. Experimental evaluation on the ISIC and DermMNIST datasets demonstrates that the proposed framework achieves classification accuracies of 97.6% and 95.6%, respectively, outperforming several existing methods. The results confirm that the proposed hybrid framework provides reliable, accurate, and scalable skin cancer diagnosis, highlighting its potential for assisting clinical decision-making and early detection. Full article
(This article belongs to the Special Issue Deep Learning for Medical Applications: Challenges and Opportunities)
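The contrast-enhancement step can be illustrated with plain global histogram equalization on a synthetic low-contrast image; note this is a simplified stand-in, since CLAHE additionally tiles the image and clips each local histogram before equalizing:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic low-contrast grayscale image: values packed into 100-139.
img = rng.integers(100, 140, size=(32, 32)).astype(np.uint8)

hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum()
cdf_min = cdf[cdf > 0].min()

# Map each gray level through the normalized CDF to spread the histogram.
lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
              0, 255).astype(np.uint8)
eq = lut[img]

print(img.max() - img.min(), eq.max() - eq.min())  # dynamic range widens
```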

52 pages, 18820 KB  
Article
Multimodal Industrial Scene Characterisation for Pouring Process Monitoring Using a Mixture of Experts
by Javier Nieves, Javier Selva, Guillermo Elejoste-Rementeria, Jorge Angulo-Pines, Jon Leiñena, Xuban Barberena and Fátima A. Saiz
Appl. Sci. 2026, 16(7), 3430; https://doi.org/10.3390/app16073430 - 1 Apr 2026
Abstract
Industrial pouring processes operate under highly dynamic conditions where small deviations can lead to defects, scrap, and production losses. Although modern foundries are equipped with multiple sensors and visual inspection systems, most monitoring approaches remain fragmented, unimodal, and difficult to interpret. Furthermore, annotated anomalous samples in industrial settings are scarce, hindering the development of traditional methods. As a result, many critical pouring anomalies are detected too late or lack sufficient contextual information for effective decision making. In this work, we propose a multimodal framework for industrial scene characterisation that combines visual information and process signals through an explainable Mixture-of-Experts (MoE)-style expert-fusion strategy. First, we deploy an ensemble of specialised modules that collaborate to identify regions of interest, assess pouring quality, and contextualise events within the production process, thereby generating an interpretable description of pouring events. Second, we introduce a novel anomaly detection method for multimodal video data, combining a self-supervised transformer with an outlier-aware clustering algorithm. Our approach effectively identifies rare anomalies without requiring extensive manual labelling. The resulting information is structured into a digital twin-ready representation, supporting synchronisation between the physical system and its virtual counterpart. This solution provides a scalable, deployable pathway to transform heterogeneous industrial data into actionable knowledge, supporting advanced monitoring, anomaly detection, and quality control in real foundry environments. Full article

26 pages, 738 KB  
Review
Analyzing Bias in LLM-Augmented Knowledge Graph Systems: Taxonomy, Interaction Mechanisms, and Evaluation
by Paria Zabihi, Dina Nawara, Ahmed Ibrahim and Rasha Kashef
Appl. Sci. 2026, 16(7), 3410; https://doi.org/10.3390/app16073410 - 1 Apr 2026
Abstract
Large Language Models (LLMs) are increasingly integrated into Knowledge Graph (KG) construction and augmentation pipelines to reduce manual effort and enable scalable knowledge extraction, completion, and reasoning. While this integration offers substantial benefits, it also introduces new forms of bias and unreliability that extend beyond those observed in standalone LLMs or traditional knowledge graphs. In particular, biases originating from language models, such as social and representational bias, hallucination, prompt sensitivity, and domain coverage limitations that interact with structural and content biases inherent to knowledge graphs, result in compounded distortions that propagate across the pipeline. This paper provides a structured and comprehensive analysis of bias in LLM-augmented knowledge graph systems. We first review bias mechanisms in LLMs and standalone KGs, and then examine how these biases interact and amplify during key stages of LLM-based entity extraction, relation generation, graph completion, and reasoning. Based on this analysis, we introduce a unified taxonomy that characterizes bias as a pipeline-level phenomenon rather than an isolated model. We further consolidate recent evaluation metrics adapted for LLM-generated graphs, including semantic and soft lexical measures. Additionally, we survey representative datasets and benchmarks used to study bias in LLMs, KGs, and hybrid LLM–KG systems and identify open research gaps in developing pipeline-aware evaluation frameworks. This work aims to support the design of more reliable, accurate, and fair LLM-augmented knowledge graphs for engineering and domain-specific applications. Full article
(This article belongs to the Special Issue Robust and Reliable Neural Networks for Real-World Data)

31 pages, 6750 KB  
Article
Measurement of Soil Moisture Using Capacitance Measurements: Development of a Low-Cost Device for Environmental and Very-Low-Enthalpy Geothermal Energy Applications
by Joaquín del Pino Fernández, Miguel A. Martínez Bohórquez, José Manuel Andújar Márquez, Manuel Jesús Roca Prieto and Juan M. Enrique Gómez
Electronics 2026, 15(7), 1453; https://doi.org/10.3390/electronics15071453 - 31 Mar 2026
Abstract
Measuring soil moisture is crucial for optimizing agricultural irrigation, but also, from an energy efficiency standpoint, for the proper design of very-low-enthalpy geothermal energy (VLEGE) facilities. VLEGE represents a renewable energy resource with great potential for residential and industrial applications, as it can provide heating and cooling with high energy efficiency and minimal environmental impact. Soil moisture plays a decisive role in the thermal performance of VLEGE facilities, where small variations in water content can significantly alter the thermal conductivity of the soil and, consequently, the efficiency of their horizontal heat exchangers. This paper presents a low-cost capacitive soil moisture sensor featuring optimized interdigitated electrodes and a controlled dielectric coating that ensures mechanical and electrical stability in subsurface environments. The novelty of this work lies in the validated integration of optimized IDE design, dielectric protection, embedded capacitance acquisition, and gravimetric calibration into a low-cost soil water content measurement device for environmental, agricultural, and VLEGE applications. The developed system converts capacitance variations into direct estimates of soil water content through an integrated microcontroller-based signal-conditioning stage. The developed device is robust, reliable, and readily reproducible. Furthermore, given its low cost (around €50 if manufactured manually; mass-produced, it would be much cheaper) and its excellent sensitivity and precision, it is ideal for setting up continuous monitoring networks, even for domestic applications, both in VLEGE installations and in other application domains, such as agriculture and environmental monitoring, where soil moisture measurement is a crucial parameter. This work contributes to the development of more efficient and accessible solutions for harnessing geothermal energy, particularly in installations where dynamic tracking of soil moisture is essential to ensure stable long-term performance. Full article
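The gravimetric calibration mentioned above amounts to fitting a curve from capacitance readings to reference water contents measured by oven drying. A sketch with invented numbers (not the device's calibration data), using a quadratic fit as one plausible choice:

```python
import numpy as np

# Hypothetical calibration pairs: sensor capacitance vs gravimetric
# volumetric water content (VWC) of the same soil samples.
cap_pF = np.array([48.0, 55.0, 63.0, 72.0, 80.0, 91.0])
vwc_ref = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])

# Quadratic calibration curve: VWC as a polynomial in capacitance.
coeffs = np.polyfit(cap_pF, vwc_ref, deg=2)

# A new reading from the field is converted to a moisture estimate.
estimate = float(np.polyval(coeffs, 67.0))
print(f"estimated VWC at 67 pF: {estimate:.3f}")
```

On the microcontroller the same polynomial can be evaluated directly from the stored coefficients, so the fit runs once offline and the device only does three multiplications per reading.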

27 pages, 5640 KB  
Article
An Integrated Hardware–Software Platform for Automated Thermodynamic Characterization of Gas–Solid Interfaces Using a Resonant Microcantilever
by Chunfeng Luo, Haitao Yu, Naidong Wang, Fan Long, Hua Hong, Weijie Zhou and Chang Chen
Micromachines 2026, 17(4), 428; https://doi.org/10.3390/mi17040428 - 31 Mar 2026
Abstract
Measurement of material thermodynamic parameters plays a crucial role in understanding the interactions between host materials and guest species. Therefore, developing a general-purpose system for thermodynamic parameter measurement is of great significance. In this work, a complete gas–solid interface thermodynamic parameter measurement platform was developed based on isothermal adsorption and a resonant microcantilever testing platform. Unlike conventional adsorption measurement systems that rely on manual, multi-cycle adsorption–desorption processes, the proposed platform integrates an automated hardware–software architecture together with a stepwise concentration-gradient protocol and on-chip thermal desorption, enabling continuous and efficient acquisition of adsorption isotherms. The study includes: (i) construction of an improved thermodynamic parameter extraction model based on the Sips model, (ii) development of an integrated resonant microcantilever control and acquisition module using a modified Fourier algorithm, and (iii) implementation of an automated testing and data analysis software framework developed in LabVIEW based on the Queued Message Handler (QMH) architecture. The system was validated from both hardware performance and material testing perspectives using CO2 adsorption on H-SSZ-13 as a representative case. The results show that the system achieves a maximum sampling rate of 10,000 points per second, with minimum root-mean-square (RMS) noise levels of 0.0083 Hz for frequency and 0.0109 °C for temperature. The PID temperature-control settling time (to within 0.1%) is 24.9 ms, and the frequency-response settling time (to within 0.01%) is 9.6 ms. Thermodynamic parameters including entropy change (ΔS), enthalpy change (ΔH), and Gibbs free energy change (ΔG) were successfully extracted during CO2 adsorption at 294.15 K under different relative uptakes.
Reproducibility was verified across three independent samples, yielding a standard deviation of 9.1 J·mol⁻¹ for ΔS at 2% relative uptake and relative standard deviations of 6.85% and 8.12% for ΔH and ΔG, respectively. These results demonstrate that the proposed thermodynamic measurement platform features a simple architecture, superior performance, and high reproducibility in gas–solid interface thermodynamic studies, showing strong potential for future commercialization.
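The paper's improved extraction model is not reproduced here; as a rough sketch of the underlying idea, the standard Sips isotherm and a textbook van't Hoff extraction of ΔH, ΔG, and ΔS from equilibrium constants at two temperatures can be written as follows. All numeric values (q_max, K, n, temperatures) are hypothetical.

```python
# Illustrative sketch only, not the paper's improved Sips-based model:
# standard Sips isotherm plus van't Hoff-style thermodynamic extraction.
import math

R = 8.314  # gas constant, J·mol⁻¹·K⁻¹

def sips_uptake(p, q_max, K, n):
    """Sips isotherm q(p) = q_max * (K*p)^n / (1 + (K*p)^n),
    bridging Langmuir (n = 1) and Freundlich-like behavior."""
    x = (K * p) ** n
    return q_max * x / (1.0 + x)

def thermo_from_K(K1, T1, K2, T2):
    """van't Hoff: ΔH from the slope of ln K vs 1/T;
    then ΔG = -R*T*ln K at T1 and ΔS = (ΔH - ΔG)/T1."""
    dH = R * (math.log(K2) - math.log(K1)) / (1.0 / T1 - 1.0 / T2)
    dG = -R * T1 * math.log(K1)
    dS = (dH - dG) / T1
    return dH, dG, dS

q = sips_uptake(p=10.0, q_max=3.0, K=0.5, n=0.9)
# K decreasing with temperature models exothermic adsorption.
dH, dG, dS = thermo_from_K(K1=2.0, T1=294.15, K2=1.2, T2=314.15)
```

For exothermic gas adsorption the extracted ΔH and ΔS are negative, consistent with heat release and loss of gas-phase entropy upon binding.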

37 pages, 6776 KB  
Article
Semantic Mapping and Cross-Model Data Integration in BIM: A Lightweight and Scalable Schedule-Level Workflow
by Tianjiao Zhao and Ri Na
Buildings 2026, 16(7), 1347; https://doi.org/10.3390/buildings16071347 - 28 Mar 2026
Abstract
Despite the widespread adoption of BIM, information exchange across disciplines remains hindered by heterogeneous structures at the tabular data level, particularly when integrating data across multiple discipline-specific models. Manual mapping, rigid templates, and one-off programming scripts are labor-intensive and difficult to scale, limiting automated querying, cross-model aggregation, and schedule-level analytics. This study proposes a lightweight, workflow-driven approach for semantic normalization and cross-model integration of BIM schedule data, with optional script support used only to assist in configuring deterministic, rule-guided mapping logic rather than serving as a core analytical method. By introducing a customizable subcategory layer, the workflow enables fine-grained semantic alignment and efficient normalization across diverse schedule datasets, implemented through lightweight Python scripting and rule-guided semantic matching that serve solely as supporting mechanisms for deterministic field mapping. Using structural, architectural, and HVAC models, we demonstrate a stepwise process comprising data cleaning, hierarchical classification, consistency checking, batch analytics, and automated computation of cross-model metrics such as opening-to-wall ratios. Sample-based validation confirms the workflow’s reliability, achieving semantic mapping agreement rates above 95% and reducing manual processing time by more than 85%. The workflow is readily extensible to other disciplines and modeling conventions, supporting high-throughput data integration for tasks such as design coordination, semantic alignment, RFI reduction, accelerated design reviews, and data-driven decision making.
Overall, rather than introducing a new algorithm, the contribution of this work lies in formalizing a reusable, schedule-level workflow abstraction that enables consistent semantic alignment and automated cross-model aggregation without relying on rigid ontologies or training-intensive learning-based models. Any optional tooling used during workflow configuration is auxiliary and does not constitute a standalone learning-based method requiring model training or performance benchmarking. This provides a reusable methodological foundation for scalable, schedule-level BIM data integration and cross-model analytics.
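Deterministic, rule-guided field mapping with a subcategory layer, as described above, can be illustrated with a minimal sketch. The schedule headers, matching rules, and subcategory label below are hypothetical and do not reproduce the paper's actual rule set or taxonomy.

```python
# Minimal sketch of rule-guided semantic field mapping for BIM schedule rows.
# Canonical field -> lowercase substrings that identify it in raw headers
# (all rules here are invented for illustration).
MAPPING_RULES = {
    "element_id": ["mark", "tag"],
    "width_mm":   ["width", "w (mm)"],
    "height_mm":  ["height", "h (mm)"],
    "area_m2":    ["area"],
}

def normalize_header(raw):
    """Map a raw schedule column header to a canonical field, or None."""
    key = raw.strip().lower()
    for canonical, patterns in MAPPING_RULES.items():
        if any(p in key for p in patterns):
            return canonical
    return None

def normalize_schedule(rows, subcategory):
    """Rename columns to the canonical schema and attach a subcategory label."""
    out = []
    for row in rows:
        mapped = {}
        for raw_key, value in row.items():
            canonical = normalize_header(raw_key)
            if canonical:
                mapped[canonical] = value
        mapped["subcategory"] = subcategory
        out.append(mapped)
    return out

# A row exported from a hypothetical architectural door schedule:
arch = [{"Door Mark": "D-01", "Width": 900, "Height": 2100}]
normalized = normalize_schedule(arch, subcategory="door.single_leaf")
```

Once rows from different discipline models share this canonical schema, cross-model aggregates such as opening-to-wall ratios reduce to ordinary grouped arithmetic over the normalized tables.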

23 pages, 6950 KB  
Article
Under-Canopy Archaeological Mapping Using LiDAR Data and AI Methods
by Gabriele Mazzacca and Fabio Remondino
Heritage 2026, 9(4), 134; https://doi.org/10.3390/heritage9040134 - 27 Mar 2026
Abstract
Airborne laser scanning (ALS) and UAV-mounted LiDAR sensors have become well-established tools for identifying and mapping archaeological features across varying scales and contexts. Numerous algorithms have been developed over the years for generating Digital Terrain Models and Digital Feature Models (DTMs/DFMs), which provide an accurate representation of the ground or structure surfaces and serve as the foundation for subsequent archaeological analyses. In this study, we report a multi-level, multi-resolution (MLMR) methodology, based on machine/deep learning methods, for DFM generation through point cloud semantic segmentation. The work also compares different approaches and the impact of resolution on their performance. To this end, each approach's performance is evaluated with a series of quantitative and qualitative analyses, with attention to hardware limitations and time constraints. Three test sites from Mediterranean and Alpine environments, with manually annotated ground truth data, are used to evaluate each methodological approach.
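As background for DTM generation (a classical baseline only, not the MLMR learning-based segmentation the paper develops), a grid-minimum filter illustrates how a crude terrain raster can be derived from a point cloud under canopy: the lowest return in each cell is taken as the ground estimate.

```python
# Classical grid-minimum baseline for terrain estimation from LiDAR returns.
# The toy point cloud below is invented for illustration.

def dtm_min_grid(points, cell_size):
    """points: iterable of (x, y, z) returns.
    Returns {(col, row): lowest z in that cell} as a sparse DTM raster."""
    dtm = {}
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        if cell not in dtm or z < dtm[cell]:
            dtm[cell] = z
    return dtm

# Toy cloud: ground returns near z ≈ 0 plus canopy returns several metres up.
cloud = [
    (0.2, 0.3, 0.05), (0.8, 0.6, 12.1),   # cell (0, 0): ground + canopy
    (1.4, 0.2, 0.10), (1.9, 0.9, 9.7),    # cell (1, 0): ground + canopy
]
dtm = dtm_min_grid(cloud, cell_size=1.0)
```

Such minimum filters fail on steep slopes and low vegetation, which is precisely the gap that learning-based point cloud semantic segmentation, as evaluated in the study, aims to close.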
