Search Results (919)

Search Parameters:
Keywords = early visual system

13 pages, 287 KB  
Brief Report
Diabetic Retinopathy Screening in Primary Care Real Practice: Study Procedures and Baseline Characteristics from the RETINAvalid Project
by Víctor-Miguel López-Lifante, Maria Palau-Antoja, Noemí Lamonja-Vicente, Cecilia Herrero-Alonso, Josefina Sala-Leal, Rosa García-Sierra, Adrià Prior-Rovira, Marina Alventosa-Zaidin, Meritxell Carmona-Cervelló, Erik Isusquiza Garcia, Idoia Besada and Pere Torán-Monserrat
Healthcare 2026, 14(3), 334; https://doi.org/10.3390/healthcare14030334 - 28 Jan 2026
Abstract
Background/Objectives: With rising diabetes rates, early detection of complications such as diabetic retinopathy (DR), a leading cause of visual impairment, is crucial. Incorporating DR screening into primary care has shown positive results, and integrating technological advances and artificial intelligence (AI) into these processes offers promising potential. The overall study aims to evaluate the agreement between primary care physicians, ophthalmologists, and an AI system in DR screening and referral decisions within a real-world primary care setting. Methods: In this brief report, we present the study protocol and provide an initial overview and description of our sample. A total of 1517 retinographies, obtained by a non-mydriatic retinal camera, were retrospectively collected from 301 patients with diabetes. Results: Primary care physicians referred 34.5% of the patients to ophthalmology, primarily due to opacification, suspicion of DR, or other retinal diseases. Overall, 13.62% of the participants were suspected of having DR, with 9.63% having a definitive diagnosis. Conclusions: These initial descriptive findings will be further explored in the next phase of the study through the analysis of concordance between primary care physicians, the AI-based software, and ophthalmology specialists. Future results are expected to provide valuable insights into the reliability of DR screening across different evaluators and support the integration of effective DR screening strategies into real-world clinical practice.
(This article belongs to the Special Issue The Latest Advances in Visual Health)
16 pages, 3576 KB  
Article
An Automated Parametric Design Tool to Expand Mass-Timber Utilization Based on Embodied Carbon
by Edward A. Barnett, David W. Dinehart and Steven M. Anastasio
Buildings 2026, 16(3), 527; https://doi.org/10.3390/buildings16030527 - 28 Jan 2026
Abstract
The building sector accounts for a large percentage of global greenhouse gas emissions, largely from the embodied carbon in common building materials like concrete and steel. Embodied carbon (EC) refers to the greenhouse gases released during the manufacturing, transportation, installation, maintenance, and disposal of building materials. Although growing in popularity, mass timber is still not nearly as common as other building materials. During the early building design stages, engineers often do not have the time or resources to holistically optimize material selection; consequently, concrete and steel remain the materials of choice. This research focused on the development of a fully automated parametric design tool (APDT) to showcase the viability of evaluating and optimizing mass timber in building construction. The APDT was developed using Autodesk's Revit 2022 and Dynamo, the visual programming tool housed within Revit. The automated designer uses parametric inputs of a building, including size, number of stories, and loading, to create a model of a mass timber building with designed glulam columns and beams and cross-laminated timber floor panels. The designer calculates overall material quantities, which are then used to determine the building's overall embodied carbon impact. Discussed herein is the development of a building design tool that highlights the benefits of optimized mass timber using existing software and databases. The tool allows the designer to expediently estimate material quantities and embodied carbon values, thereby making it easier to consider mass timber when determining the structural system at the infancy stage of the project. The methodology outlined herein provides a replicable approach for creating an APDT that bridges a critical gap in early-stage design, enabling rapid embodied carbon comparisons and fostering consideration of mass timber as a viable low-carbon alternative.
12 pages, 2780 KB  
Article
A Deep-Learning-Enhanced Ultrasonic Biosensing System for Artifact Suppression in Sow Pregnancy Diagnosis
by Xiaoying Wang, Jundong Wang, Ziming Gao, Xinjie Luo, Zitong Ding, Yiyang Chen, Zhe Zhang, Hao Yin, Yifan Zhang, Xuan Liang and Qiangqiang Ouyang
Biosensors 2026, 16(2), 75; https://doi.org/10.3390/bios16020075 - 27 Jan 2026
Abstract
The integration of artificial intelligence (AI) with ultrasonic biosensing presents a transformative opportunity for enhancing diagnostic accuracy in agricultural and biomedical applications. This study develops a data-driven deep learning model to address the challenge of acoustic artifacts in B-mode ultrasound imaging, specifically for sow pregnancy diagnosis. We designed a biosensing system centered on a mechanical sector-scanning ultrasound probe (5.0 MHz) as the core biosensor for data acquisition. To overcome the limitations of traditional filtering methods, we introduced a lightweight Deep Neural Network (DNN) based on the YOLOv8 architecture, which was data-driven and trained on a purpose-built dataset of sow pregnancy ultrasound images featuring typical artifacts like reverberation and acoustic shadowing. The AI model functions as an intelligent detection layer that identifies and masks artifact regions while simultaneously detecting and annotating key anatomical features. This combined detection–masking approach enables artifact-aware visualization enhancement, where artifact regions are suppressed and diagnostic structures are highlighted for improved clinical interpretation. Experimental results demonstrate the superiority of our AI-enhanced approach, achieving a mean Intersection over Union (IoU) of 0.89, a Peak Signal-to-Noise Ratio (PSNR) of 34.2 dB, a Structural Similarity Index (SSIM) of 0.92, and a clinically tested early gestation accuracy of 98.1%, significantly outperforming traditional methods (IoU: 0.65, PSNR: 28.5 dB, SSIM: 0.72, accuracy: 76.4%). Crucially, the system maintains a single-image processing time of 22 ms, fulfilling the requirement for real-time clinical diagnosis. This research not only validates a robust AI-powered ultrasonic biosensing system for improving reproductive management in livestock but also establishes a reproducible, scalable framework for intelligent signal enhancement in broader biosensor applications.
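Evaluation metrics like the IoU reported above have a compact definition. As a quick illustration (not the authors' evaluation code), a minimal NumPy sketch of IoU over binary masks:

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0

# Toy example: predicted vs. reference artifact region on a 4x4 grid.
pred = np.zeros((4, 4), dtype=bool)
ref = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True   # 4 pixels
ref[1:3, 1:4] = True    # 6 pixels
score = iou(pred, ref)  # intersection 4, union 6 -> about 0.667
```

A mean IoU of 0.89, as reported, corresponds to predicted regions overlapping their references far more tightly than in this toy case.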
24 pages, 5872 KB  
Article
Quantitative Characterization of Microfiltration Membrane Fouling Using Optical Coherence Tomography with Optimized Image Analysis
by Song Lee, Hyongrak Cho, Yongjun Choi, Juyoung Andrea Lee and Sangho Lee
Membranes 2026, 16(2), 50; https://doi.org/10.3390/membranes16020050 - 26 Jan 2026
Abstract
Membrane fouling reduces permeate flux and treatment efficiency, yet most diagnostic methods are destructive and require offline analysis. Optical coherence tomography (OCT) enables in situ, real-time visualization; however, quantitative image extraction of thin foulant layers is often limited by manual processing and subjective thresholding. Here, we develop a reproducible OCT image-analysis workflow that combines band-pass filtering, Gaussian smoothing, and unsharp masking with a dual-threshold subtraction strategy for automated fouling-layer segmentation. Seventeen global thresholding algorithms in ImageJ (289 threshold pairs) were benchmarked against SEM-measured cake thickness, identifying Triangle–Moments as the most robust combination. For humic-acid fouling, the OCT-derived endpoint thickness (14.23 ± 1.18 µm) closely agreed with SEM (15.29 ± 1.54 µm). The method was then applied to other microfiltration foulants, including kaolin and sodium alginate, to quantify thickness evolution alongside flux decline. OCT with the optimized image analysis captured rapid early deposition and revealed periods where flux loss continued despite minimal additional thickness growth, consistent with changes in layer permeability and compaction. The proposed framework advances OCT from qualitative visualization to quantitative, real-time fouling diagnostics and supports mechanistic interpretation and improved operational control of membrane systems.
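The dual-threshold subtraction idea is simple to sketch. The snippet below is a schematic illustration under our own assumptions, not the paper's workflow: a box blur stands in for the band-pass/Gaussian steps, and two plain numeric thresholds stand in for ImageJ's Triangle and Moments algorithms.

```python
import numpy as np

def box_blur(a, k=3):
    """Crude smoothing stand-in for the band-pass/Gaussian filtering steps."""
    pad = k // 2
    p = np.pad(a, pad, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)

def segment_layer(img, t_strict, t_loose, amount=0.5):
    """Dual-threshold subtraction: the loose threshold captures the membrane
    plus the dim foulant layer, the strict one only the bright membrane;
    their difference isolates the thin layer."""
    smoothed = box_blur(img)
    sharpened = smoothed + amount * (smoothed - box_blur(smoothed))  # unsharp mask
    return (sharpened >= t_loose) & ~(sharpened >= t_strict)

# Synthetic cross-section: dim foulant layer (rows 2-4) above a bright
# membrane (rows 5-7); intensity values are arbitrary.
img = np.zeros((10, 12))
img[2:5, :] = 0.4   # foulant layer
img[5:8, :] = 0.9   # membrane
layer = segment_layer(img, t_strict=0.7, t_loose=0.2)
thickness_px = layer.sum(axis=0)   # per-column layer thickness in pixels
```

Summing the segmented mask column-wise, as in the last line, is one way a per-position thickness profile can be read off the binary result.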
23 pages, 3420 KB  
Article
Design of a Wireless Monitoring System for Cooling Efficiency of Grid-Forming SVG
by Liqian Liao, Jiayi Ding, Guangyu Tang, Yuanwei Zhou, Jie Zhang, Hongxin Zhong, Ping Wang, Bo Yin and Liangbo Xie
Electronics 2026, 15(3), 520; https://doi.org/10.3390/electronics15030520 - 26 Jan 2026
Abstract
The grid-forming static var generator (SVG) is a key device that supports the stable operation of power grids with a high penetration of renewable energy. The cooling efficiency of its forced water-cooling system directly determines the reliability of the entire unit. However, existing wired monitoring methods suffer from complex cabling and limited capacity to provide a full perception of the water-cooling condition. To address these limitations, this study develops a wireless monitoring system based on multi-source information fusion for real-time evaluation of cooling efficiency and early fault warning. A heterogeneous wireless sensor network was designed and implemented by deploying liquid-level, vibration, sound, and infrared sensors at critical locations of the SVG water-cooling system. These nodes work collaboratively to collect multi-physical field data—thermal, acoustic, vibrational, and visual information—in an integrated manner. The system adopts a hybrid Wireless Fidelity/Bluetooth (Wi-Fi/Bluetooth) networking scheme with electromagnetic interference-resistant design to ensure reliable data transmission in the complex environment of converter valve halls. To achieve precise and robust diagnosis, a three-layer hierarchical weighted fusion framework was established, consisting of individual sensor feature extraction and preliminary analysis, feature-level weighted fusion, and final fault classification. Experimental validation indicates that the proposed system achieves highly reliable data transmission with a packet loss rate below 1.5%. Compared with single-sensor monitoring, the multi-source fusion approach improves the diagnostic accuracy for pump bearing wear, pipeline micro-leakage, and radiator blockage to 98.2% and effectively distinguishes fault causes and degradation tendencies of cooling efficiency. Overall, the developed wireless monitoring system overcomes the limitations of traditional wired approaches and, by leveraging multi-source fusion technology, enables a comprehensive assessment of cooling efficiency and intelligent fault diagnosis. This advancement significantly enhances the precision and reliability of SVG operation and maintenance, providing an effective solution to ensure the safe and stable operation of both grid-forming SVG units and the broader power grid.
(This article belongs to the Section Industrial Electronics)
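The feature-level weighted fusion step described in the SVG abstract can be illustrated with a toy computation. All scores, class labels, and weights below are invented for illustration; the paper's actual features and weighting scheme are not reproduced.

```python
import numpy as np

# Hypothetical per-sensor confidence scores for three fault classes
# (bearing wear, micro-leakage, radiator blockage), one row per sensor.
scores = np.array([
    [0.7, 0.2, 0.1],   # vibration
    [0.6, 0.3, 0.1],   # sound
    [0.5, 0.1, 0.4],   # infrared
    [0.4, 0.4, 0.2],   # liquid level
])
# Illustrative per-sensor reliability weights (sum to 1).
weights = np.array([0.35, 0.25, 0.25, 0.15])

fused = weights @ scores          # feature-level weighted fusion
fault = int(np.argmax(fused))     # final classification layer picks the max
```

Here the fused vector favors class 0 (bearing wear) because the most heavily weighted channels agree on it; a hierarchical framework like the paper's would add per-sensor feature extraction before, and a trained classifier after, this weighted combination.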
14 pages, 2030 KB  
Article
A Modular AI Workflow for Architectural Facade Style Transfer: A Deep-Style Synergy Approach Based on ComfyUI and Flux Models
by Chong Xu and Chongbao Qu
Buildings 2026, 16(3), 494; https://doi.org/10.3390/buildings16030494 - 25 Jan 2026
Abstract
This study focuses on the transfer of architectural facade styles. Using the node-based visual deep learning platform ComfyUI, the system integrates the Flux Redux and Flux Depth models to establish a modular workflow. This workflow achieves style transfer of building facades guided by deep perception, encompassing key stages such as style feature extraction, depth information extraction, positive prompt input, and style image generation. The core innovation of this study lies in two aspects: Methodologically, a modular low-code visual workflow has been established. Through the coordinated operation of different modules, it ensures the visual stability of architectural forms during style conversion. In response to the novel challenges posed by generative AI in altering architectural forms, the evaluation framework innovatively introduces a "semantic inheritance degree" assessment system. This elevates the evaluation perspective beyond traditional "geometric similarity" to a new level of "semantic and imagery inheritance." It should be clarified that the framework proposed by this research primarily provides innovative tools for architectural education, early design exploration, and visualization analysis. This workflow introduces an efficient "style-space" cognitive and generative tool for teaching architectural design. Students can use this tool to rapidly conduct comparative experiments to generate multiple stylistic facades, intuitively grasping the intrinsic relationships among different styles and architectural volumes/spatial structures. This approach encourages bold formal exploration and deepens understanding of architectural formal language.
14 pages, 788 KB  
Article
Anatomical and Systemic Predictors of Early Response to Subthreshold Micropulse Laser in Diabetic Macular Edema: A Retrospective Cohort Study
by Oscar Matteo Gagliardi, Giulia Gregori, Alessio Muzi, Lorenzo Mangoni, Veronica Mogetta, Jay Chhablani, Gregorio Pompucci, Clara Rizzo, Danilo Iannetta, Cesare Mariotti and Marco Lupidi
J. Clin. Med. 2026, 15(3), 955; https://doi.org/10.3390/jcm15030955 - 24 Jan 2026
Abstract
Background/Objectives: The aim of this study was to identify anatomical and systemic predictors of early (≤2 months) response to subthreshold micropulse laser (SMPL) in center-involving diabetic macular edema (DME) using automated AI-based OCT biomarker quantification. Methods: Retrospective observational study of 65 eyes. Spectral-domain optical coherence tomography (SD-OCT) volumes were analyzed with a CE-marked software (Ophthal v1.0; Mr. Doc s.r.l., Rome, Italy) to quantify intraretinal fluid (IRF) and subretinal fluid (SRF) volumes and outer retinal integrity (external limiting membrane, ELM; ellipsoid zone, EZ). SMPL (577 nm; 5% duty cycle; 200 ms; 150 µm; 250 mW) was applied in a high-density macular grid, sparing the foveal avascular zone. The primary endpoint was absolute and percentage change in IRF volume from baseline to follow-up; predictors of %IRF reduction were assessed by multivariable linear regression. Results: At 52 days (IQR 41–60), best-corrected visual acuity improved from 0.22 to 0.15 logMAR (p < 0.001). IRF volume decreased (median −0.045 mm³; p = 0.034) despite stable central subfield thickness. All eyes with baseline SRF (n = 5; median 0.026 mm³ [0.020–0.046]) achieved complete SRF resolution. Treatment-naïve eyes had greater %IRF reduction than pretreated eyes (59.6% vs. 11.5%; p = 0.029). High responders showed shorter diabetes duration than low responders (14.5 vs. 17 years; p = 0.025); however, treatment-naïve status was the strongest independent predictor of %IRF reduction (p = 0.028). Conclusions: AI-derived fluid volumetrics capture early SMPL response despite unchanged thickness. Treatment-naïve status and shorter diabetes duration may define a metabolic window for optimal early response in DME.
(This article belongs to the Section Ophthalmology)
36 pages, 3544 KB  
Article
Distinguishing a Drone from Birds Based on Trajectory Movement and Deep Learning
by Andrii Nesteruk, Valerii Nikitin, Yosyp Albrekht, Łukasz Ścisło, Damian Grela and Paweł Król
Sensors 2026, 26(3), 755; https://doi.org/10.3390/s26030755 - 23 Jan 2026
Abstract
Unmanned aerial vehicles (UAVs) increasingly share low-altitude airspace with birds, making early discrimination between drones and biological targets critical for safety and security. This work addresses long-range scenarios where objects occupy only a few pixels and appearance-based recognition becomes unreliable. We develop a model-driven simulation pipeline that generates synthetic data with a controlled camera model, atmospheric background and realistic motion of three aerial target types: multicopter, fixed-wing UAV and bird. From these sequences, each track is encoded as a time series of image-plane coordinates and apparent size, and a bidirectional long short-term memory (LSTM) network is trained to classify trajectories as drone-like or bird-like. The model learns characteristic differences in smoothness, turning behavior and velocity fluctuations, and achieves reliable separation between drone and bird motion patterns on synthetic test data. Motion-trajectory cues alone can support early discrimination of drones from birds when visual details are scarce, providing a complementary signal to conventional image-based detection. The proposed synthetic data and sequence classification pipeline forms a reproducible testbed that can be extended with real trajectories from radar or video tracking systems and used to prototype and benchmark trajectory-based recognizers for integrated surveillance solutions. The proposed method is designed to generalize naturally to real surveillance systems, as it relies on trajectory-level motion patterns rather than appearance-based features that are sensitive to sensor quality, illumination, or weather conditions.
(This article belongs to the Section Industrial Sensors)
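The kind of smoothness and velocity-fluctuation cues the LSTM above learns can be made concrete with simple kinematic statistics. The sketch below is our own illustration (function and feature names are ours, not the authors'): a steady near-linear track versus an erratic one, compared on turning-angle variability and speed fluctuation.

```python
import numpy as np

def trajectory_features(xy: np.ndarray) -> dict:
    """Kinematic cues from an (N, 2) image-plane track: coefficient of
    variation of speed and turning-angle spread, the sort of smoothness
    statistics that separate drone-like from bird-like motion."""
    v = np.diff(xy, axis=0)                      # frame-to-frame displacement
    speed = np.linalg.norm(v, axis=1)
    heading = np.arctan2(v[:, 1], v[:, 0])
    turn = np.diff(heading)
    turn = (turn + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return {
        "speed_cv": float(speed.std() / (speed.mean() + 1e-9)),
        "turn_std": float(turn.std()),
    }

t = np.linspace(0, 4 * np.pi, 200)
drone = np.c_[t, 0.05 * t]   # steady, near-linear "drone" track
bird = np.c_[t, np.sin(3 * t) + 0.3 * np.random.default_rng(0).standard_normal(200)]
f_drone, f_bird = trajectory_features(drone), trajectory_features(bird)
# The erratic track shows far larger turning-angle variability.
```

In the paper such cues are learned implicitly by the bidirectional LSTM from coordinate/size time series rather than hand-crafted as here.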
18 pages, 3064 KB  
Article
Non-Destructive Detection of Elasmopalpus lignosellus Infestation in Fresh Asparagus Using VIS–NIR Hyperspectral Imaging and Machine Learning
by André Rodríguez-León, Jimy Oblitas, Jhonsson Luis Quevedo-Olaya, William Vera, Grimaldo Wilfredo Quispe-Santivañez and Rebeca Salvador-Reyes
Foods 2026, 15(2), 355; https://doi.org/10.3390/foods15020355 - 19 Jan 2026
Abstract
The early detection of internal damage caused by Elasmopalpus lignosellus in fresh asparagus constitutes a challenge for the agro-export industry due to the limited sensitivity of traditional visual inspection. This study evaluated the potential of VIS–NIR hyperspectral imaging (390–1036 nm) combined with machine-learning models to discriminate between infested (PB) and sound (SB) asparagus spears. A balanced dataset of 900 samples was acquired, and preprocessing was performed using Savitzky–Golay and SNV. Four classifiers (SVM, MLP, Elastic Net, and XGBoost) were compared. The optimized SVM model achieved the best results (CV Accuracy = 0.9889; AUC = 0.9997). The spectrum was reduced to 60 bands using LOBO and RFE while maintaining high performance. In external validation (n = 3000), the model achieved an accuracy of 97.9% and an AUC of 0.9976. The results demonstrate the viability of implementing non-destructive systems based on VIS–NIR to improve the quality control of asparagus destined for export.
(This article belongs to the Section Food Analytical Methods)
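Of the preprocessing steps named in the asparagus abstract, SNV (Standard Normal Variate) is easy to show in full. This is a generic textbook sketch, not the authors' pipeline: each spectrum (row) is centered and scaled independently, removing scatter-related baseline and gain differences between samples.

```python
import numpy as np

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard Normal Variate: center and scale each spectrum (row)
    independently to remove scatter-related baseline/gain effects."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two toy spectra: the second is the first with a 10x gain difference.
x = np.array([[1.0, 2.0, 3.0, 4.0],
              [10.0, 20.0, 30.0, 40.0]])
z = snv(x)
# After SNV both rows have zero mean and unit variance, so the
# gain difference between the two samples is gone.
```

Savitzky–Golay smoothing, the other step mentioned, is typically applied per-spectrum before or alongside SNV (e.g. via `scipy.signal.savgol_filter`).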
17 pages, 4792 KB  
Article
A Deep Learning-Based Graphical User Interface for Predicting Corneal Ectasia Scores from Raw Optical Coherence Tomography Data
by Maziar Mirsalehi and Achim Langenbucher
Diagnostics 2026, 16(2), 310; https://doi.org/10.3390/diagnostics16020310 - 18 Jan 2026
Abstract
Background/Objectives: Keratoconus, a condition in which the cornea becomes thinner and steeper, can cause visual problems, particularly when it is progressive. Early diagnosis is important for preserving visual acuity. Raw data, unlike preprocessed data, are unaffected by software modifications. They retain their native structure across versions, providing consistency for analytical purposes. The objective of this study was to design a deep learning-based graphical user interface for predicting the corneal ectasia score using raw optical coherence tomography data. Methods: The graphical user interface was developed using Tkinter, a Python library for building graphical user interfaces. The user is allowed to select raw data from the cornea/anterior segment optical coherence tomography Casia2, which is generated in the 3dv format, from the local system. To view the predicted corneal ectasia score, the user must determine whether the selected 3dv file corresponds to the left or right eye. Extracted optical coherence tomography images are cropped, resized to 224 × 224 pixels and processed by the modified EfficientNet-B0 convolutional neural network to predict the corneal ectasia score. The predicted corneal ectasia score value is displayed along with a diagnosis: 'No detectable ectasia pattern', 'Suspected ectasia' or 'Clinical ectasia'. Performance metric values were rounded to four decimal places, and the mean absolute error value was rounded to two decimal places. Results: The modified EfficientNet-B0 obtained a mean absolute error of 6.65 when evaluated on the test dataset. For the two-class classification, it achieved an accuracy of 87.96%, a sensitivity of 82.41%, a specificity of 96.69%, a positive predictive value of 97.52% and an F1 score of 89.33%. For the three-class classification, it attained a weighted-average F1 score of 84.95% and an overall accuracy of 84.75%. Conclusions: The graphical user interface outputs numerical ectasia scores, which provide finer-grained information than categorical labels alone. The graphical user interface enables consistent diagnostics, regardless of software updates, by using raw data from the Casia2. The successful use of raw optical coherence tomography data indicates the potential for raw optical coherence tomography data to be used, rather than preprocessed optical coherence tomography data, for diagnosing keratoconus.
(This article belongs to the Special Issue Diagnosis of Corneal and Retinal Diseases)
16 pages, 8303 KB  
Article
Structural Vibration Analysis of UAVs Under Ground Engine Test Conditions
by Sara Isabel González-Cabrera, Nahum Camacho-Zamora, Sergio-Raul Rojas-Ramirez, Arantxa M. Gonzalez-Aguilar, Marco-Osvaldo Vigueras-Zuniga and Maria Elena Tejeda-del-Cueto
Sensors 2026, 26(2), 583; https://doi.org/10.3390/s26020583 - 15 Jan 2026
Abstract
Monitoring mechanical vibration is crucial for ensuring the structural integrity and optimal performance of unmanned aerial vehicles (UAVs). This study introduces a portable and low-cost system that enables integrated acquisition and analysis of UAV vibration data in a single step, using a Raspberry Pi 4B, data acquisition (DAQ) through an MCC128 DAQ HAT card, and six accelerometers positioned at strategic structural points. Ground-based engine tests at 2700 RPM allowed vibration data to be recorded under conditions similar to those of real operation. The data were processed with a Kalman filter and a Hann window, followed by frequency analysis via Fast Fourier Transform (FFT). The first and second wing bending natural frequencies were identified at 12.3 Hz and 17.5 Hz, respectively, as well as a significant component around 23 Hz, which is a subharmonic of the propulsion system excitation frequency near 45 Hz. The results indicate that the highest vibration amplitudes are concentrated at the wingtips and near the engine. The proposed system offers an accessible and flexible alternative to commercial equipment, integrating acquisition, processing, and real-time visualization. Moreover, its implementation facilitates the early detection of structural anomalies and improves the reliability and safety of UAVs.
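The Hann-window-plus-FFT step used to identify the wing bending frequencies is a standard recipe. A minimal NumPy sketch with synthetic data (sampling rate, duration, and noise level are our assumptions, not the paper's acquisition settings):

```python
import numpy as np

# Synthetic accelerometer trace: a 12.3 Hz bending mode plus noise,
# sampled at 1 kHz for 4 s (illustrative parameters only).
fs = 1000.0
n = 4000
t = np.arange(n) / fs
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 12.3 * t) + 0.2 * rng.standard_normal(n)

window = np.hanning(n)                      # Hann window reduces spectral leakage
spectrum = np.abs(np.fft.rfft(signal * window))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)      # bin spacing fs/n = 0.25 Hz
peak_hz = freqs[np.argmax(spectrum)]        # recovered dominant frequency
```

With a 0.25 Hz bin spacing the peak lands within half a bin of the true 12.3 Hz mode; in practice a Kalman filter (as in the paper) or detrending would precede the windowed FFT to suppress drift and measurement noise.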
23 pages, 1486 KB  
Article
AI-Based Emoji Recommendation for Early Childhood Education Using Deep Learning Techniques
by Shaya A. Alshaya
Computers 2026, 15(1), 59; https://doi.org/10.3390/computers15010059 - 15 Jan 2026
Abstract
The integration of emojis into Early Childhood Education (ECE) presents a promising avenue for enhancing student engagement, emotional expression, and comprehension. While prior studies suggest the benefit of visual aids in learning, systematic frameworks for pedagogically aligned emoji recommendation remain underdeveloped. This paper presents EduEmoji-ECE, a pedagogically annotated dataset of early-childhood learning text segments. Specifically, the proposed model incorporates Bidirectional Encoder Representations from Transformers (BERT) for contextual embedding extraction, Gated Recurrent Units (GRUs) for sequential pattern recognition, Deep Neural Networks (DNNs) for classification and emoji recommendation, and DECOC for improving emoji class prediction robustness. This hybrid BERT-GRU-DNN-DECOC architecture effectively captures textual semantics, emotional tone, and pedagogical intent, ensuring the alignment of emoji class recommendation with learning objectives. The experimental results show that the system is effective, with an accuracy of 95.3%, a precision of 93%, a recall of 91.8%, and an F1-score of 92.3%, outperforming baseline models in terms of contextual understanding and overall accuracy. This work helps fill a gap in AI-based education by combining learning with visual support for young children. The results suggest an association between emoji-enhanced materials and improved engagement/comprehension indicators in our exploratory classroom setting; however, causal attribution to the AI placement mechanism is not supported by the current study design.
23 pages, 1740 KB  
Article
Print Exposure Interaction with Neural Tuning on Letter/Non-Letter Processing During Literacy Acquisition: An ERP Study on Dyslexic and Typically Developing Children
by Elizaveta Galperina, Olga Kruchinina, Polina Boichenkova and Alexander Kornev
Languages 2026, 11(1), 15; https://doi.org/10.3390/languages11010015 - 14 Jan 2026
Abstract
Background/Objectives: The first step in learning an alphabetic writing system is to establish letter–sound associations. This process is more difficult for children with dyslexia (DYS) than for typically developing (TD) children. Cerebral mechanisms underlying these associations are not fully understood and are [...] Read more.
Background/Objectives: The first step in learning an alphabetic writing system is to establish letter–sound associations. This process is more difficult for children with dyslexia (DYS) than for typically developing (TD) children. Cerebral mechanisms underlying these associations are not fully understood and are expected to change during the training course. This study aimed to identify the neurophysiological correlates and developmental changes of visual letter processing in children with DYS compared to TD children, using event-related potentials (ERPs) during a letter/non-letter classification task. Methods: A total of 71 Russian-speaking children aged 7–11 years participated in the study, including 38 with dyslexia and 33 TD children. The participants were divided into younger (7–8 y.o.) and older (9–11 y.o.) subgroups. EEG recordings were taken while participants classified letters and non-letter characters. We analyzed ERP components (N/P150, N170, P260, P300, N320, and P600) in left-hemisphere regions of interest related to reading: the ventral occipito-temporal cortex (VWFA ROI) and the inferior frontal cortex (frontal ROI). Results: Behavioral differences, specifically lower accuracy in children with dyslexia, were observed only in the younger subgroup. ERP analysis indicated that both groups displayed common stimulus effects, such as a larger N170 for letters in younger children. However, their developmental trajectories diverged. The DYS group showed an age-related increase in the amplitude of early components (N/P150 in VWFA ROI), which contrasts with the typical decrease observed in TD children. In contrast, the late P600 component in the frontal ROI revealed an age-related decrease in the DYS group, along with overall reduced amplitudes compared to their TD peers. Additionally, the N320 component differentiated stimuli exclusively in the DYS group. 
Conclusions: The data obtained in this study confirmed that the mechanisms of letter recognition in children with dyslexia differ in some ways from those of their TD peers. This atypical developmental pattern involves a failure to efficiently specialize early visual processing, as evidenced by the increasing N/P150. Additionally, there is a progressive reduction in the cognitive resources available for higher-order reanalysis and control, indicated by the decreasing frontal P600. This disruption in neural specialization and automation ultimately hinders the development of fluent reading. Full article
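The quantification step behind such ERP comparisons can be sketched in a few lines: a component's amplitude is commonly taken as the mean voltage over its time window at the ROI electrodes, after averaging across trials. The sketch below is a minimal illustration, assuming a generic epoched-EEG array; the 150–200 ms window and the channel indices are placeholders, not the study's actual parameters.

```python
import numpy as np

def erp_component_amplitude(epochs, times, window, roi_channels):
    """Mean ERP amplitude for one component.

    epochs: array (n_trials, n_channels, n_samples) of epoched EEG (µV)
    times: array (n_samples,) of sample times in seconds
    window: (start, end) of the component's time window in seconds
    roi_channels: indices of the ROI electrodes
    """
    erp = epochs.mean(axis=0)                      # average across trials
    mask = (times >= window[0]) & (times <= window[1])
    return erp[np.ix_(roi_channels, mask)].mean()  # mean over ROI and window

# Toy example: 40 trials, 4 channels, 500 samples spanning -0.1 to 0.9 s.
# The 150-200 ms window below is an illustrative choice for an N170-like
# component, not the window used in the study.
rng = np.random.default_rng(0)
times = np.linspace(-0.1, 0.9, 500)
epochs = rng.normal(size=(40, 4, 500))
amp = erp_component_amplitude(epochs, times, (0.15, 0.20), [0, 1])
print(round(float(amp), 3))
```

Group or age effects are then tested on these per-participant amplitudes, which is how contrasts such as the diverging N/P150 trajectories above are obtained.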

11 pages, 868 KB  
Article
Physiological Effects of Far-Infrared-Emitting Garments on Sleep, Thermoregulation, and Autonomic Function Assessed Using Wearable Sensors
by Masaki Nishida, Taku Nishii, Shutaro Suyama and Sumi Youn
Sensors 2026, 26(2), 550; https://doi.org/10.3390/s26020550 - 14 Jan 2026
Abstract
Far-infrared (FIR)-emitting textiles are increasingly used in sleepwear; however, their influence on sleep physiology has not been comprehensively evaluated with multi-modal wearable sensing. This randomized, double-blind, placebo-controlled crossover study examined whether FIR-emitting garments modulate nocturnal thermoregulation, autonomic activity, and sleep architecture. Fifteen healthy young men completed two overnight laboratory sleep sessions wearing either FIR-emitting garments or visually matched polyester controls. Tympanic membrane temperature (TMT), sweating rate, skin temperature, and humidity were continuously monitored using wearable sensors, and sleep stages and heart rate variability (HRV) were assessed using validated portable systems. Compared with control garments, FIR garments produced consistently lower TMT across the night (p = 0.004) and reduced mid-sleep sweating (condition × time interaction: p = 0.026). The proportion of rapid eye movement (REM) sleep was higher in the FIR condition (22.2% ± 6.5% vs. 18.6% ± 6.5%, p = 0.027), despite no changes in total sleep time or sleep efficiency. A transient increase in low-frequency power during early sleep (p = 0.027) suggested baroreflex-related thermal adjustments without sympathetic activation. These findings indicate that FIR-emitting garments facilitate mild nocturnal heat dissipation and support REM expression, demonstrating their potential as a passive intervention to improve sleep-related thermal environments. Full article
(This article belongs to the Special Issue State of the Art in Wearable Sensors for Health Monitoring)
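Because each participant slept under both garment conditions, paired within-subject statistics are the natural analysis for outcomes like REM percentage. A minimal numpy sketch of the paired t statistic is below; the values are illustrative toy numbers, not the study's measurements.

```python
import numpy as np

# Toy paired data for a two-condition crossover: one REM-percentage value
# per participant per condition (illustrative numbers only).
rem_fir     = np.array([24.1, 20.3, 18.9, 27.5, 21.0, 23.8, 19.6, 25.2])
rem_control = np.array([20.0, 18.1, 17.5, 24.9, 19.2, 20.6, 18.3, 22.0])

d = rem_fir - rem_control                    # within-participant differences
n = d.size
# Paired t statistic: mean difference over its standard error, df = n - 1
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
print(f"mean difference = {d.mean():.2f} %, t({n - 1}) = {t:.2f}")
```

Pairing removes between-subject variability, which is why a crossover design can detect a few-percentage-point REM difference in a sample of fifteen.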

14 pages, 1359 KB  
Proceeding Paper
Non-Parametric Model for Curvature Classification of Departure Flight Trajectory Segments
by Lucija Žužić, Ivan Štajduhar, Jonatan Lerga and Renato Filjar
Eng. Proc. 2026, 122(1), 1; https://doi.org/10.3390/engproc2026122001 - 13 Jan 2026
Abstract
This study introduces a novel approach for classifying flight trajectory curvature, focusing on early-stage flight characteristics to detect anomalies and deviations. The method intentionally avoids direct coordinate data and instead leverages a combination of trajectory-derived and meteorological features. This research analysed 9849 departure flight trajectories originating from 14 different airports. Two distinct trajectory classes were established through manual visual inspection, differentiated by curvature patterns. This categorisation formed the ground truth for evaluating trained machine learning (ML) classifiers from different families. The comparative analysis demonstrates that the Random Forest (RF) algorithm provides the most effective classification model. RF excels at summarising complex trajectory information and identifying non-linear relationships within the early-flight data. A key contribution of this work is the validation of specific predictors. The theoretical definitions of direction change (using vector values to capture dynamic movement) and diffusion distance (using scalar values to represent static displacement) proved highly effective. Their selection as primary predictors is supported by their ability to represent the essential static and dynamic properties of the trajectory, enabling the model to accurately classify flight paths and potential deviations before the flight is complete. This approach offers significant potential for enhancing real-time air traffic monitoring and safety systems. Full article
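The two predictor families can be illustrated with a short numpy sketch. The abstract does not give the exact feature formulas, so the heading-change sum (a dynamic, vector-derived quantity) and the straightness ratio (a static displacement measure) below are plausible stand-ins, not the paper's definitions.

```python
import numpy as np

def trajectory_features(xy):
    """Curvature-oriented features from an early-flight track.

    xy: array (n_points, 2) of planar positions. 'total_turn' stands in for
    a direction-change predictor; 'straightness' stands in for a diffusion-
    distance-style predictor (both illustrative, not the study's formulas).
    """
    steps = np.diff(xy, axis=0)                      # per-sample displacement vectors
    headings = np.arctan2(steps[:, 1], steps[:, 0])  # heading of each step
    dtheta = np.diff(headings)
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    total_turn = np.abs(dtheta).sum()                # cumulative direction change
    path_len = np.linalg.norm(steps, axis=1).sum()   # length along the track
    net_disp = np.linalg.norm(xy[-1] - xy[0])        # straight-line displacement
    straightness = net_disp / path_len               # 1.0 for a straight departure
    return total_turn, straightness

# A straight track vs. a quarter-circle turn
t = np.linspace(0, 1, 50)
straight = np.c_[t, np.zeros_like(t)]
arc = np.c_[np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)]
print(trajectory_features(straight), trajectory_features(arc))
```

Features like these, computed on the early segment only, would then be fed (together with meteorological variables) to the classifier families compared in the study, avoiding raw coordinates as input.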
