Search Results (2,885)

Search Parameters:
Keywords = attention and awareness

16 pages, 6746 KB  
Article
Cross-Attentive CNNs for Joint Spectral and Pitch Feature Learning in Predominant Instrument Recognition from Polyphonic Music
by Lekshmi Chandrika Reghunath, Rajeev Rajan, Christian Napoli and Cristian Randieri
Technologies 2026, 14(1), 3; https://doi.org/10.3390/technologies14010003 (registering DOI) - 19 Dec 2025
Abstract
Identifying instruments in polyphonic audio is challenging due to overlapping spectra and variations in timbre and playing styles. This task is central to music information retrieval, with applications in transcription, recommendation, and indexing. We propose a dual-branch Convolutional Neural Network (CNN) that processes Mel-spectrograms and binary pitch masks, fused through a cross-attention mechanism to emphasize pitch-salient regions. On the IRMAS dataset, the model achieves competitive performance with state-of-the-art methods, reaching a micro F1 of 0.64 and a macro F1 of 0.57 with only 0.878M parameters. Ablation studies and t-SNE analyses further highlight the benefits of cross-modal attention for robust predominant instrument recognition. Full article
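For readers who want a concrete picture of the cross-attention fusion described above, here is a minimal PyTorch sketch of a dual-branch network in which spectrogram tokens attend to pitch-mask tokens. Layer sizes, the pooling scheme, and the 11-class output (matching IRMAS's predominant-instrument labels) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a dual-branch CNN with cross-attention fusion
# (illustrative only; layer sizes and pooling are assumptions, not the paper's exact model).
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Small CNN that turns a (B, 1, n_mels, T) input into a token sequence (B, T', D)."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, dim, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((1, 32)),
        )
    def forward(self, x):
        f = self.conv(x)                     # (B, D, 1, 32)
        return f.squeeze(2).transpose(1, 2)  # (B, 32, D) tokens over time

class CrossAttentiveCNN(nn.Module):
    """Spectrogram tokens attend to pitch-mask tokens, emphasizing pitch-salient regions."""
    def __init__(self, dim=128, n_classes=11):
        super().__init__()
        self.spec_branch = Branch(dim)
        self.pitch_branch = Branch(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, n_classes)
    def forward(self, mel, pitch_mask):
        q = self.spec_branch(mel)            # queries from the Mel-spectrogram
        kv = self.pitch_branch(pitch_mask)   # keys/values from the binary pitch mask
        fused, _ = self.cross_attn(q, kv, kv)
        return self.head(fused.mean(dim=1))  # per-instrument logits

logits = CrossAttentiveCNN()(torch.randn(2, 1, 128, 256), torch.randn(2, 1, 128, 256))
print(logits.shape)  # torch.Size([2, 11])
```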

21 pages, 1244 KB  
Article
Dynamic Evolution and Relation Perception for Temporal Knowledge Graph Reasoning
by Yuan Huang, Pengwei Shi, Xiaozheng Zhou and Ruizhi Yin
Future Internet 2026, 18(1), 3; https://doi.org/10.3390/fi18010003 - 19 Dec 2025
Abstract
Temporal knowledge graphs (TKGs) incorporate temporal information into traditional triplets, enhancing the dynamic representation of real-world events. Temporal knowledge graph reasoning aims to infer unknown quadruples at future timestamps through dynamic modeling and learning of nodes and edges in the knowledge graph. Existing TKG reasoning approaches often suffer from two main limitations: neglecting the influence of temporal information during entity embedding and insufficient or unreasonable processing of relational structures. To address these issues, we propose DERP, a relation-aware reasoning model with dynamic evolution mechanisms. The model enhances entity embeddings by jointly encoding time-varying and static features. It processes graph-structured data through relational graph convolutional layers, which effectively capture complex relational patterns between entities. Notably, it introduces an innovative relational-aware attention mechanism (RAGAT) that dynamically adapts the importance weights of relations between entities. This facilitates enhanced information aggregation from neighboring nodes and strengthens the model’s ability to capture local structural features. Subsequently, prediction scores are generated utilizing a convolutional decoder. The proposed model significantly enhances the accuracy of temporal knowledge graph reasoning and effectively handles dynamically evolving entity relationships. Experimental results on four public datasets demonstrate the model’s superior performance, as evidenced by strong results on standard evaluation metrics, including Mean Reciprocal Rank (MRR), Hits@1, Hits@3, and Hits@10. Full article
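As a rough illustration of relation-aware attention over a node's incident edges (the role RAGAT plays in DERP), the sketch below weights each neighbor's message by a score that depends on both the neighbor and relation embeddings. Dimensions and the scoring function are assumptions; this is not the published model.

```python
# Illustrative sketch of relation-aware attention aggregation for a TKG snapshot
# (assumed shapes and scoring; not the published DERP/RAGAT implementation).
import torch
import torch.nn as nn

class RelationAwareAttention(nn.Module):
    """Weights each (neighbor, relation) message by an attention score that
    depends on both the neighbor embedding and the relation embedding."""
    def __init__(self, dim=64):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message from [neighbor ; relation]
        self.att = nn.Linear(3 * dim, 1)     # score from [target ; neighbor ; relation]
    def forward(self, target, neighbors, relations):
        # target: (D,), neighbors/relations: (N, D) for N incident edges
        t = target.expand(neighbors.size(0), -1)
        scores = self.att(torch.cat([t, neighbors, relations], dim=-1))  # (N, 1)
        alpha = torch.softmax(scores, dim=0)
        messages = torch.tanh(self.msg(torch.cat([neighbors, relations], dim=-1)))
        return target + (alpha * messages).sum(dim=0)  # updated entity embedding

layer = RelationAwareAttention()
print(layer(torch.randn(64), torch.randn(5, 64), torch.randn(5, 64)).shape)  # torch.Size([64])
```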
24 pages, 7857 KB  
Article
MTFM: Multi-Teacher Feature Matching for Cross-Dataset and Cross-Architecture Adversarial Robustness Transfer in Remote Sensing Applications
by Ravi Kumar Rogannagari and Kazi Aminul Islam
Remote Sens. 2026, 18(1), 8; https://doi.org/10.3390/rs18010008 - 19 Dec 2025
Abstract
Remote sensing plays a critical role in environmental monitoring, land use analysis, and disaster response by enabling large-scale, data-driven observation of Earth’s surface. Image classification models are central to interpreting remote sensing data, yet they remain vulnerable to adversarial attacks that can mislead predictions and compromise reliability. While adversarial training improves robustness, the challenge of transferring this robustness across models and domains remains underexplored. This study investigates robustness transfer as a defense strategy, aiming to enhance the resilience of remote sensing classifiers against adversarial patch attacks. We propose a novel Multi-Teacher Feature Matching (MTFM) framework to align feature spaces between clean and adversarially robust teacher models and the student model, aiming to achieve an optimal trade-off between accuracy and robustness against adversarial patch attacks. The proposed method consistently outperforms traditional standard models and matches—or in some cases, surpasses—conventional defense strategies across diverse datasets and architectures. The MTFM approach also supersedes the self-attention module-based adversarial robustness transfer. Importantly, it achieves these gains with less training effort compared to traditional adversarial defenses. These results highlight the potential of robustness-aware knowledge transfer as a scalable and efficient solution for building resilient geospatial AI systems. Full article
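A hedged sketch of the core idea of multi-teacher feature matching follows: the student's features are pulled toward both a clean teacher and an adversarially robust teacher while the usual task loss is kept. The loss weights and the use of MSE feature matching are assumptions for illustration, not the paper's exact objective.

```python
# Hedged sketch of a multi-teacher feature-matching objective: the student's features are
# regressed toward a clean and an adversarially robust teacher, plus the usual task loss.
# Weights and the MSE matching choice are illustrative assumptions.
import torch
import torch.nn.functional as F

def mtfm_loss(student_feat, clean_teacher_feat, robust_teacher_feat,
              student_logits, labels, w_clean=0.5, w_robust=0.5, w_task=1.0):
    """Feature-matching distillation from two frozen teachers plus the standard task loss."""
    match_clean = F.mse_loss(student_feat, clean_teacher_feat.detach())
    match_robust = F.mse_loss(student_feat, robust_teacher_feat.detach())
    task = F.cross_entropy(student_logits, labels)
    return w_task * task + w_clean * match_clean + w_robust * match_robust

# toy usage with random tensors standing in for penultimate-layer features
s, tc, tr = torch.randn(8, 512), torch.randn(8, 512), torch.randn(8, 512)
logits, y = torch.randn(8, 10), torch.randint(0, 10, (8,))
print(mtfm_loss(s, tc, tr, logits, y).item())
```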

14 pages, 17578 KB  
Article
A Two-Stage High-Precision Recognition and Localization Framework for Key Components on Industrial PCBs
by Li Wang, Liu Ouyang, Huiying Weng, Xiang Chen, Anna Wang and Kexin Zhang
Mathematics 2026, 14(1), 4; https://doi.org/10.3390/math14010004 - 19 Dec 2025
Abstract
Precise recognition and localization of electronic components on printed circuit boards (PCBs) are crucial for industrial automation tasks, including robotic disassembly, high-precision assembly, and quality inspection. However, strong visual interference from silkscreen characters, copper traces, solder pads, and densely packed small components often degrades the accuracy of deep learning-based detectors, particularly under complex industrial imaging conditions. This paper presents a two-stage, coarse-to-fine PCB component localization framework based on an optimized YOLOv11 architecture and a sub-pixel geometric refinement module. The proposed method enhances the backbone with a Convolutional Block Attention Module (CBAM) to suppress background noise and strengthen discriminative features. It also integrates a tiny-object detection branch and a weighted Bi-directional Feature Pyramid Network (BiFPN) for more effective multi-scale feature fusion, and it employs a customized hybrid loss with vertex-offset supervision to enable pose-aware bounding box regression. In the second stage, the coarse predictions guide contour-based sub-pixel fitting using template geometry to achieve industrial-grade precision. Experiments show significant improvements over baseline YOLOv11, particularly for small and densely arranged components, indicating that the proposed approach meets the stringent requirements of industrial robotic disassembly. Full article
(This article belongs to the Special Issue Complex Process Modeling and Control Based on AI Technology)
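Since the backbone enhancement hinges on a Convolutional Block Attention Module, a compact CBAM-style block is sketched below with the standard channel-then-spatial attention ordering; the reduction ratio and 7x7 spatial kernel are common CBAM defaults rather than the paper's exact settings.

```python
# Minimal CBAM-style channel + spatial attention block, as one might insert into a YOLO
# backbone (generic sketch; not the paper's configuration).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(            # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)
    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)   # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))          # spatial attention

print(CBAM(64)(torch.randn(1, 64, 40, 40)).shape)  # torch.Size([1, 64, 40, 40])
```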

22 pages, 1099 KB  
Article
Cross-Attention Diffusion Model for Semantic-Aware Short-Term Urban OD Flow Prediction
by Hongxiang Li, Zhiming Gui and Zhenji Gao
ISPRS Int. J. Geo-Inf. 2026, 15(1), 2; https://doi.org/10.3390/ijgi15010002 - 19 Dec 2025
Abstract
Origin–destination (OD) flow prediction is fundamental to intelligent transportation systems, yet existing diffusion-based models face two critical limitations. First, they inadequately exploit spatial semantics, focusing primarily on temporal dependencies or topological correlations while neglecting urban functional heterogeneity encoded in Points of Interest (POIs). Second, static embedding fusion cannot dynamically capture semantic importance variations during denoising—particularly during traffic surges in POI-dense areas. To address these gaps, we propose the Cross-Attention Diffusion Model (CADM), a semantically conditioned framework for short-term OD flow forecasting. CADM integrates POI embeddings as spatial semantic priors and employs cross-attention to enable semantic-guided denoising, facilitating dynamic spatiotemporal feature fusion. This design adaptively reweights regional representations throughout reverse diffusion, enhancing the model’s capacity to capture complex mobility patterns. Experiments on real-world datasets demonstrate that CADM achieves balanced performance across multiple metrics. At the 30 min horizon, CADM attains the lowest RMSE of 5.77, outperforming iTransformer by 1.9%, while maintaining competitive performance at the 15 min horizon. Ablation studies confirm that removing POI features increases prediction errors by 15–20%, validating the critical role of semantic conditioning. These findings advance semantic-aware generative modeling for spatiotemporal prediction and provide practical insights for intelligent transportation systems, particularly for newly established transportation hubs or functional zone reconfigurations where semantic understanding is essential. Full article
(This article belongs to the Special Issue Spatial Data Science and Knowledge Discovery)
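The following sketch shows how semantic conditioning via cross-attention could sit inside one denoising step: noisy per-region OD features act as queries and POI embeddings as keys/values. Tensor shapes, the timestep embedding, and the output head are illustrative assumptions, not CADM's actual architecture.

```python
# Illustrative sketch of semantic-guided denoising: OD-flow tokens attend to POI embeddings
# inside one reverse-diffusion step (assumed shapes; not the published CADM code).
import torch
import torch.nn as nn

class SemanticDenoiser(nn.Module):
    def __init__(self, dim=64, n_heads=4):
        super().__init__()
        self.time_embed = nn.Linear(1, dim)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.out = nn.Linear(dim, dim)
    def forward(self, noisy_od, poi_embed, t):
        # noisy_od: (B, R, D) per-region OD features; poi_embed: (B, R, D) POI priors
        h = noisy_od + self.time_embed(t.view(-1, 1, 1).float())
        attended, _ = self.cross_attn(h, poi_embed, poi_embed)  # semantic-guided denoising
        return self.out(h + attended)                           # predicted noise

eps = SemanticDenoiser()(torch.randn(2, 100, 64), torch.randn(2, 100, 64), torch.tensor([10, 50]))
print(eps.shape)  # torch.Size([2, 100, 64])
```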

23 pages, 6967 KB  
Article
Semantics- and Physics-Guided Generative Network for Radar HRRP Generalized Zero-Shot Recognition
by Jiaqi Zhou, Tao Zhang, Siyuan Mu, Yuze Gao, Feiming Wei and Wenxian Yu
Remote Sens. 2026, 18(1), 4; https://doi.org/10.3390/rs18010004 - 19 Dec 2025
Abstract
High-resolution range profile (HRRP) target recognition has garnered significant attention in radar automatic target recognition (RATR) research for its rich structural information and low computational costs. With the rapid advancements in deep learning, methods for HRRP target recognition that leverage deep neural networks have emerged as the dominant approaches. Nevertheless, these traditional closed-set recognition methods require labeled data for every class in training, while in reality, seen classes and unseen classes coexist. Therefore, it is necessary to explore methods that can identify both seen and unseen classes simultaneously. To this end, a semantic- and physical-guided generative network (SPGGN) was innovatively proposed for HRRP generalized zero-shot recognition; it combines a constructed knowledge graph with attribute vectors to comprehensively represent semantics and reconstructs strong scattering points to introduce physical constraints. Specifically, to boost the robustness, we reconstructed the strong scattering points from deep features of HRRPs, where class-aware contrastive learning in the middle layer effectively mitigates the influence of target-aspect variations. In the classification stage, discriminative features are produced through attention-based feature fusion to capture multi-faceted information, while the design of balancing loss abates the bias towards seen classes. Experiments on two measured aircraft HRRP datasets validated the superior recognition performance of our method. Full article
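To make the generalized zero-shot mechanism concrete, the sketch below shows a generator that maps class semantics (standing in for the knowledge-graph-plus-attribute vectors) and noise to synthetic HRRP features for unseen classes, which could then train a conventional classifier. All dimensions and the two-layer generator are assumptions.

```python
# Hedged sketch of the generalized zero-shot idea: a semantics-conditioned generator
# synthesizes features for unseen classes (illustrative; not the SPGGN implementation).
import torch
import torch.nn as nn

class SemanticFeatureGenerator(nn.Module):
    def __init__(self, sem_dim=32, noise_dim=16, feat_dim=128):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )
    def forward(self, semantics):
        z = torch.randn(semantics.size(0), self.noise_dim, device=semantics.device)
        return self.net(torch.cat([semantics, z], dim=-1))

# synthesize pseudo-features for an unseen class from its semantic vector
gen = SemanticFeatureGenerator()
unseen_semantics = torch.randn(64, 32)   # 64 draws for one unseen class
fake_feats = gen(unseen_semantics)       # (64, 128) features to train the classifier
print(fake_feats.shape)
```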

20 pages, 16618 KB  
Article
Walking the Soundscape: Creative Learning Pathways to Environmental Education in Chilean Schools
by André Rabello-Mestre, Felipe Otondo and Gabriel Morales
Sustainability 2026, 18(1), 21; https://doi.org/10.3390/su18010021 - 19 Dec 2025
Abstract
This article explores the pedagogical potential of soundscapes as creative learning tools for advancing environmental education in Chilean primary schools. Drawing on the Soundlapse project, we designed and implemented a school workshop that combined activity sheets, an online bird-sound repository, structured soundwalks, and immersive audio concerts with teachers and students in Valdivia. The study employed a qualitative, participatory design, analyzing teacher interviews through reflexive thematic analysis. Four themes emerged: (1) listening as pedagogical practice, (2) learning through place and the senses, (3) creativity and cross-disciplinarity, and (4) implementation challenges and opportunities. Teachers emphasized the transformative role of attentive listening, which reconfigured classroom dynamics through shared silence and cultivated students’ capacity for self-regulation. Soundwalks and sensory encounters with local wetlands positioned the environment as a ‘living laboratory,’ fostering ecological awareness, attachment to place, and intergenerational knowledge. Creative activities such as sound mapping legitimized symbolic and artistic modes of representation, while interdisciplinary collaborations between science and music expanded curricular possibilities. At the same time, institutional rigidity and lack of resources highlighted the importance of teacher agency, co-designed materials, and flexible frameworks to sustain these practices. We argue that soundscape-based education offers a timely opportunity to integrate sensory, creative, and ecological dimensions into school curricula, aligning with national and international calls for interdisciplinary sustainability education. By treating listening and creativity as core rather than peripheral, such approaches may open new pathways for cultivating ecological awareness, cultural belonging, and pedagogical innovation. Full article

28 pages, 4151 KB  
Article
FANet: Frequency-Aware Attention-Based Tiny-Object Detection in Remote Sensing Images
by Zixiao Wen, Peifeng Li, Yuhan Liu, Jingming Chen, Xiantai Xiang, Yuan Li, Huixian Wang, Yongchao Zhao and Guangyao Zhou
Remote Sens. 2025, 17(24), 4066; https://doi.org/10.3390/rs17244066 - 18 Dec 2025
Abstract
In recent years, deep learning-based remote sensing object detection has achieved remarkable progress, yet the detection of tiny objects remains a significant challenge. Tiny objects in remote sensing images typically occupy only a few pixels, resulting in low contrast, poor resolution, and high sensitivity to localization errors. Their diverse scales and appearances, combined with complex backgrounds and severe class imbalance, further complicate the detection tasks. Conventional spatial feature extraction methods often struggle to capture the discriminative characteristics of tiny objects, especially in the presence of noise and occlusion. To address these challenges, we propose a frequency-aware attention-based tiny-object detection network with two plug-and-play modules that leverage frequency-domain information to enhance the targets. Specifically, we introduce a Multi-Scale Frequency Feature Enhancement Module (MSFFEM) to adaptively highlight the contour and texture details of tiny objects while suppressing background noise. Additionally, a Channel Attention-based RoI Enhancement Module (CAREM) is proposed to selectively emphasize high-frequency responses within RoI features, further improving object localization and classification. Furthermore, to mitigate sample imbalance, we employ multi-directional flip sample augmentation and redundancy filtering strategies, which significantly boost detection performance for few-shot categories. Extensive experiments on public object detection datasets, i.e., AI-TOD, VisDrone2019, and DOTA-v1.5, demonstrate that the proposed FANet consistently improves detection performance for tiny objects, outperforming existing methods and providing new insights into the integration of frequency-domain analysis and attention mechanisms for robust tiny-object detection in remote sensing applications. Full article
(This article belongs to the Special Issue Deep Learning-Based Small-Target Detection in Remote Sensing)
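As an illustration of frequency-domain feature enhancement in the spirit of MSFFEM, the module below moves a feature map into the frequency domain with an FFT, re-weights the spectrum with learnable per-frequency weights, and adds the result back as a residual. It is a generic sketch, not the paper's module.

```python
# Generic frequency-aware enhancement: FFT, learnable spectral re-weighting, inverse FFT,
# residual addition (illustrative only; not FANet's MSFFEM/CAREM modules).
import torch
import torch.nn as nn

class FrequencyEnhance(nn.Module):
    def __init__(self, channels, h, w):
        super().__init__()
        # learnable per-frequency weights (rfft2 halves the last dimension)
        self.weight = nn.Parameter(torch.ones(channels, h, w // 2 + 1))
    def forward(self, x):
        spec = torch.fft.rfft2(x, norm="ortho")                # (B, C, H, W//2+1) complex
        spec = spec * self.weight                              # emphasize informative frequencies
        enhanced = torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")
        return x + enhanced                                    # residual enhancement

print(FrequencyEnhance(64, 32, 32)(torch.randn(2, 64, 32, 32)).shape)
```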

25 pages, 8304 KB  
Article
STAIR-DETR: A Synergistic Transformer Integrating Statistical Attention and Multi-Scale Dynamics for UAV Small Object Detection
by Linna Hu, Penghao Xue, Bin Guo, Yiwen Chen, Weixian Zha and Jiya Tian
Sensors 2025, 25(24), 7681; https://doi.org/10.3390/s25247681 - 18 Dec 2025
Abstract
Detecting small objects in unmanned aerial vehicle (UAV) imagery remains a challenging task due to the limited target scale, cluttered backgrounds, severe occlusion, and motion blur commonly observed in dynamic aerial environments. This study presents STAIR-DETR, a real-time synergistic detection framework derived from RT-DETR, featuring comprehensive enhancements in feature extraction, resolution transformation, and detection head design. A Statistical Feature Attention (SFA) module is incorporated into the neck to replace the original AIFI, enabling token-level statistical modeling that strengthens fine-grained feature representation while effectively suppressing background interference. The backbone is reinforced with a Diverse Semantic Enhancement Block (DSEB), which employs multi-branch pathways and dynamic convolution to enrich semantic expressiveness without sacrificing spatial precision. To mitigate information loss during scale transformation, an Adaptive Scale Transformation Operator (ASTO) is proposed by integrating Context-Guided Downsampling (CGD) and Dynamic Sampling (DySample), achieving context-aware compression and content-adaptive reconstruction across resolutions. In addition, a high-resolution P2 detection head is introduced to leverage shallow-layer features for accurate classification and localization of extremely small targets. Extensive experiments conducted on the VisDrone2019 dataset demonstrate that STAIR-DETR attains 41.7% mAP@50 and 23.4% mAP@50:95, outperforming contemporary state-of-the-art (SOTA) detectors while maintaining real-time inference efficiency. These results confirm the effectiveness and robustness of STAIR-DETR for precise small object detection in complex UAV-based imaging scenarios. Full article
(This article belongs to the Special Issue Dynamics and Control System Design for Robotics)
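One plausible reading of token-level statistical modeling is sketched below: per-token mean and standard deviation drive a gating weight that suppresses background tokens. This is an interpretation for illustration only, not the SFA module's actual design.

```python
# Rough sketch of "statistical" token gating: first- and second-order token statistics
# produce a per-token weight (an assumed reading of the idea, not the paper's SFA module).
import torch
import torch.nn as nn

class StatisticalFeatureAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2, dim // 4), nn.ReLU(), nn.Linear(dim // 4, 1))
    def forward(self, tokens):                # tokens: (B, N, D) flattened feature map
        stats = torch.stack([tokens.mean(-1), tokens.std(-1)], dim=-1)  # (B, N, 2)
        weights = torch.sigmoid(self.gate(stats))                       # (B, N, 1)
        return tokens * weights               # damp background tokens, keep salient ones

print(StatisticalFeatureAttention(256)(torch.randn(2, 400, 256)).shape)
```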

17 pages, 866 KB  
Article
Dual Routing Mixture-of-Experts for Multi-Scale Representation Learning in Multimodal Emotion Recognition
by Da-Eun Chae and Seok-Pil Lee
Electronics 2025, 14(24), 4972; https://doi.org/10.3390/electronics14244972 - 18 Dec 2025
Abstract
Multimodal emotion recognition (MER) often relies on single-scale representations that fail to capture the hierarchical structure of emotional signals. This paper proposes a Dual Routing Mixture-of-Experts (MoE) model that dynamically selects between local (fine-grained) and global (contextual) representations extracted from speech and text encoders. The framework first obtains local–global embeddings using WavLM and RoBERTa, then employs a scale-aware routing mechanism to activate the most informative expert before bidirectional cross-attention fusion. Experiments on the IEMOCAP dataset show that the proposed model achieves stable performance across all folds, reaching an average unweighted accuracy (UA) of 75.27% and weighted accuracy (WA) of 74.09%. The model consistently outperforms single-scale baselines and simple concatenation methods, confirming the importance of dynamic multi-scale cue selection. Ablation studies highlight that neither local-only nor global-only representations are sufficient, while routing behavior analysis reveals emotion-dependent scale preferences—such as strong reliance on local acoustic cues for anger and global contextual cues for low-arousal emotions. These findings demonstrate that emotional expressions are inherently multi-scale and that scale-aware expert activation provides a principled approach beyond conventional single-scale fusion. Full article
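A minimal sketch of scale-aware routing follows: a learned router softly mixes a local and a global expert per utterance before classification. The encoder outputs are stand-in tensors (WavLM/RoBERTa feature extraction and the bidirectional cross-attention fusion are omitted), and the four-class head is an assumption.

```python
# Minimal sketch of dual-routing between a "local" and a "global" expert
# (illustrative; not the paper's full architecture).
import torch
import torch.nn as nn

class DualRoutingMoE(nn.Module):
    def __init__(self, dim=768, n_classes=4):
        super().__init__()
        self.local_expert = nn.Linear(dim, dim)
        self.global_expert = nn.Linear(dim, dim)
        self.router = nn.Linear(2 * dim, 2)          # soft weights over the two experts
        self.classifier = nn.Linear(dim, n_classes)
    def forward(self, local_feat, global_feat):
        w = torch.softmax(self.router(torch.cat([local_feat, global_feat], -1)), dim=-1)
        mixed = w[:, :1] * self.local_expert(local_feat) + w[:, 1:] * self.global_expert(global_feat)
        return self.classifier(mixed)

# local_feat could be a frame-level pooled embedding, global_feat an utterance-level one
print(DualRoutingMoE()(torch.randn(4, 768), torch.randn(4, 768)).shape)  # torch.Size([4, 4])
```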
31 pages, 4844 KB  
Article
GAME-YOLO: Global Attention and Multi-Scale Enhancement for Low-Visibility UAV Detection with Sub-Pixel Localization
by Ruohai Di, Hao Fan, Yuanzheng Ma, Jinqiang Wang and Ruoyu Qian
Entropy 2025, 27(12), 1263; https://doi.org/10.3390/e27121263 - 18 Dec 2025
Abstract
Detecting low-altitude, slow-speed, small (LSS) UAVs is especially challenging in low-visibility scenes (low light, haze, motion blur), where inherent uncertainties in sensor data and object appearance dominate. We propose GAME-YOLO, a novel detector that integrates a Bayesian-inspired probabilistic reasoning framework with Global Attention and Multi-Scale Enhancement to improve small-object perception and sub-pixel-level localization. Built on YOLOv11, our framework comprises: (i) a visibility restoration front-end that probabilistically infers and enhances latent image clarity; (ii) a global-attention-augmented backbone that performs context-aware feature selection; (iii) an adaptive multi-scale fusion neck that dynamically weights feature contributions; (iv) a sub-pixel-aware small-object detection head (SOH) that leverages high-resolution feature grids to model sub-pixel offsets; and (v) a novel Shape-Aware IoU loss combined with focal loss. Extensive experiments on the LSS2025-DET dataset demonstrate that GAME-YOLO achieves state-of-the-art performance, with an AP@50 of 52.0% and AP@[0.50:0.95] of 32.0%, significantly outperforming strong baselines such as LEAF-YOLO (48.3% AP@50) and YOLOv11 (36.2% AP@50). The model maintains high efficiency, operating at 48 FPS with only 7.6 M parameters and 19.6 GFLOPs. Ablation studies confirm the complementary gains from our probabilistic design choices, including a +10.5 pp improvement in AP@50 over the baseline. Cross-dataset evaluation on VisDrone-DET2021 further validates its generalization capability, achieving 39.2% AP@50. These results indicate that GAME-YOLO offers a practical and reliable solution for vision-based UAV surveillance by effectively marrying the efficiency of deterministic detectors with the robustness principles of Bayesian inference. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Discovery)
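To illustrate sub-pixel localization as used by the small-object detection head, the sketch below predicts a fractional (dx, dy) offset per grid cell so that decoded centers can fall between integer pixel positions; the grid size and stride of 4 are assumptions.

```python
# Hedged sketch of a sub-pixel localization head on a high-resolution grid
# (illustrative; not GAME-YOLO's SOH implementation).
import torch
import torch.nn as nn

class SubPixelHead(nn.Module):
    def __init__(self, in_ch=64):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2, 1)     # per-cell (dx, dy) logits
    def forward(self, feat, stride=4.0):
        b, _, h, w = feat.shape
        dxdy = torch.sigmoid(self.offset(feat))  # fractional offsets in [0, 1)
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid = torch.stack([xs, ys], dim=0).to(feat.dtype)   # (2, H, W) integer cells
        centers = (grid.unsqueeze(0) + dxdy) * stride        # sub-pixel image coordinates
        return centers                           # (B, 2, H, W)

print(SubPixelHead()(torch.randn(1, 64, 20, 20)).shape)  # torch.Size([1, 2, 20, 20])
```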

25 pages, 9939 KB  
Article
RAC-RTDETR: A Lightweight, Efficient Real-Time Small-Object Detection Algorithm for Steel Surface Defect Detection
by Zhenping Xu and Nengxi Wang
Electronics 2025, 14(24), 4968; https://doi.org/10.3390/electronics14244968 - 18 Dec 2025
Abstract
Steel, a fundamental material in modern industry, is widely used across manufacturing, construction, and energy sectors. Steel surface defects exhibit characteristics such as multiple classes, multi-scale features, small detection targets, and low-contrast backgrounds, making detection difficult. We propose RAC-RTDETR, a lightweight real-time detection algorithm designed for accurately identifying small surface defects on steel. Key improvements include: (1) The ARNet network, combining the ADown module and the RepNCSPELAN4-CAA module with a CAA-based attention mechanism, results in a lighter backbone network with better feature extraction and enhanced small-object detection by integrating contextual information; (2) The novel AIFI-ASMD module, composed of Adaptive Sparse Self-Attention (ASSA), Spatially Enhanced Feedforward Network (SEFN), Multi-Cognitive Visual Adapter (Mona), and Dynamic Tanh (DyT), optimizes feature interactions at different scales, reduces noise interference, and improves spatial awareness and long-range dependency modeling for better detection of multi-scale objects; (3) The Converse2D upsampling module replaces traditional upsampling methods, preserving details and enhancing small-object recognition in low-contrast, sparse feature scenarios. Experimental results on the NEU-DET and GC10-DET datasets show that RAC-RTDETR outperforms baseline models with MAP improvements of 3.56% and 3.47%, a 36.18% reduction in Parameters, a 40.70% decrease in GFLOPs, and a 7.96% increase in FPS. Full article
(This article belongs to the Special Issue Advances in Real-Time Object Detection and Tracking)
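Of the AIFI-ASMD ingredients, Dynamic Tanh (DyT) is the simplest to show: a learnable-scale tanh with per-channel affine parameters used in place of a normalization layer. The initial value of alpha below is an assumption.

```python
# Small sketch of a Dynamic Tanh (DyT) style layer (alpha initialization is an assumption).
import torch
import torch.nn as nn

class DyT(nn.Module):
    def __init__(self, dim, alpha_init=0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))  # learnable input scale
        self.gamma = nn.Parameter(torch.ones(dim))            # per-channel gain
        self.beta = nn.Parameter(torch.zeros(dim))            # per-channel bias
    def forward(self, x):                  # x: (..., dim)
        return self.gamma * torch.tanh(self.alpha * x) + self.beta

print(DyT(256)(torch.randn(2, 100, 256)).shape)
```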

12 pages, 213 KB  
Article
Patient Satisfaction with the Expanded Nurses Service in Primary Health Care: Evidence from Kazakhstan
by Indira Abdikadirova, Lyudmila Yermukhanova, Aurelija Blaževičiene, Zhanar Dostanova, Zaure Baigozhina, Maiya Taushanova, Gulnar Sultanova and Kauysheva Almagul
Healthcare 2025, 13(24), 3314; https://doi.org/10.3390/healthcare13243314 - 18 Dec 2025
Abstract
Background/Objectives: The implementation of advanced practice nursing in Kazakhstan is aimed at improving the accessibility and quality of primary healthcare. One of the key indicators of the effectiveness of this model is patient satisfaction, which reflects the perceived quality of care and directly influences treatment adherence. The aim of the study was to assess patient satisfaction with nurse-led consultations in primary healthcare institutions in Kazakhstan. Methods: A cross-sectional study was conducted using a questionnaire developed on the basis of Karin Bergman’s instrument and adapted to the Kazakhstani context. A total of 621 patients who attended independent nursing consultations in polyclinics in Aktobe, Almaty, Astana, and the village of Merke participated in the survey. Descriptive statistics and Pearson’s χ2 test were applied, with statistical significance set at p < 0.05. Results: The majority of respondents were women, with a median age of 61 years. Awareness of independent consultations was higher among patients who regularly visited nurses (97.1% vs. 86.9%; p < 0.006). High satisfaction levels were associated with service accessibility, quality of examination, and clarity of recommendations. Among regular visitors, 99.2% reported satisfaction with the nurse’s work, and 76.6% rated the service as “excellent”. In contrast, patients with irregular visits more often reported dissatisfaction due to insufficient attention and limited knowledge of nurses. Conclusions: The findings confirm a high level of patient satisfaction with advanced practice nursing services and highlight the importance of this model in strengthening primary healthcare in Kazakhstan. Full article
(This article belongs to the Special Issue Patient Experience and the Quality of Health Care)
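For readers unfamiliar with the Pearson χ² test of independence named in the Methods, a purely illustrative call is shown below; the contingency counts are made-up placeholders, not the study's data.

```python
# Illustrative chi-square test of independence; the table below is hypothetical,
# not the study's actual visitor/awareness counts.
from scipy.stats import chi2_contingency

# rows: regular vs. irregular visitors; columns: aware vs. not aware (placeholder counts)
table = [[340, 10],
         [230, 41]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")  # significant if p < 0.05
```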
24 pages, 2210 KB  
Article
Deep Transfer Learning for UAV-Based Cross-Crop Yield Prediction in Root Crops
by Suraj A. Yadav, Yanbo Huang, Kenny Q. Zhu, Rayyan Haque, Wyatt Young, Lorin Harvey, Mark Hall, Xin Zhang, Nuwan K. Wijewardane, Ruijun Qin, Max Feldman, Haibo Yao and John P. Brooks
Remote Sens. 2025, 17(24), 4054; https://doi.org/10.3390/rs17244054 - 17 Dec 2025
Abstract
Limited annotated data often constrain accurate yield prediction in underrepresented crops. To address this challenge, we developed a cross-crop deep transfer learning (TL) framework that leverages potato (Solanum tuberosum L.) as the source domain to predict sweet potato (Ipomoea batatas L.) yield using multi-temporal uncrewed aerial vehicle (UAV)-based multispectral imagery. A hybrid convolutional–recurrent neural network (CNN–RNN–Attention) architecture was implemented with a robust parameter-based transfer strategy to ensure temporal alignment and feature-space consistency across crops. Cross-crop feature migration analysis showed that predictors capturing canopy vigor, structure, and soil–vegetation contrast exhibited the highest distributional similarity between potato and sweet potato. In comparison, pigment-sensitive and agronomic predictors were less transferable. These robustness patterns were reflected in model performance, as all architectures showed substantial improvement when moving from the minimal 3 predictor subset to the 5–7 predictor subsets, where the most transferable indices were introduced. The hybrid CNN–RNN–Attention model achieved peak accuracy (R² ≈ 0.64 and RMSE ≈ 18%) using time-series data up to the tuberization stage with only 7 predictors. In contrast, convolutional neural network (CNN), bidirectional gated recurrent unit (BiGRU), and bidirectional long short-term memory (BiLSTM) baseline models required 11–13 predictors to achieve comparable performance and often showed reduced or unstable accuracy at higher dimensionality due to redundancy and domain-shift amplification. Two-way ANOVA further revealed that cover crop type significantly influenced yield, whereas nitrogen rate and the interaction term were not significant. Overall, this study demonstrates that combining robustness-aware feature design with a hybrid deep TL model enables accurate, data-efficient, and physiologically interpretable yield prediction in sweet potato, offering a scalable pathway for applying TL in other underrepresented root and tuber crops. Full article
(This article belongs to the Special Issue Application of UAV Images in Precision Agriculture)
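A hedged sketch of parameter-based transfer: initialize the target-crop (sweet potato) model from source-crop (potato) weights, freeze the shared encoder, and fine-tune the remainder. The stand-in model and layer names are placeholders, not the paper's CNN–RNN–Attention architecture.

```python
# Illustrative parameter-based transfer: copy source-crop weights, freeze the encoder,
# fine-tune the rest (placeholder model; not the paper's architecture).
import torch
import torch.nn as nn

class YieldNet(nn.Module):                       # stand-in time-series yield predictor
    def __init__(self, n_features=7, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                        # x: (B, T, n_features) UAV time series
        h, _ = self.temporal(self.encoder(x))
        return self.head(h[:, -1])               # yield estimate from the last time step

source = YieldNet()                              # pretend this was trained on potato
target = YieldNet()
target.load_state_dict(source.state_dict())      # parameter-based transfer
for p in target.encoder.parameters():            # freeze the shared feature encoder
    p.requires_grad = False

optim = torch.optim.Adam([p for p in target.parameters() if p.requires_grad], lr=1e-3)
print(target(torch.randn(4, 5, 7)).shape)        # torch.Size([4, 1])
```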

29 pages, 5168 KB  
Article
Effects of Dual-Operator Modes on Team Situation Awareness: A Non-Dyadic HMI Perspective in Intelligent Coal Mines
by Xiaofang Yuan, Xinxiang Zhang, Jiawei He and Linhui Sun
Appl. Sci. 2025, 15(24), 13222; https://doi.org/10.3390/app152413222 - 17 Dec 2025
Abstract
Under the context of non-dyadic human–machine interaction in intelligent coal mines, this study investigates the impact of different dyadic collaboration modes on Team Situation Awareness (TSA). Based on a simulated coal mine monitoring task, the experiment compares four working modes—Individual Operation, Supervised Operation, Cooperative Operation, and Divided-task Operation—across tasks of varying complexity. TSA was assessed using both objective (SAGAT) and subjective (SART) measures, alongside parallel evaluations of task performance and workload (NASA-TLX). The results demonstrate that, compared to Individual or Supervised Operation, both Cooperative and Divided-task Operation significantly enhance TSA and task performance. Cooperative Operation improves information integration and comprehension, while Divided-task Operation enhances response efficiency by enabling focused attention on role-specific demands. Moreover, dyadic collaboration reduces cognitive workload, with the task-sharing mode showing the lowest cognitive and temporal demands. The findings indicate that clear task structuring and real-time information exchange can alleviate cognitive bottlenecks and promote accurate environmental perception. Theoretically, this study extends the application of non-dyadic interaction theory to intelligent coal mine scenarios and empirically validates a “Collaboration Mode–TSA–Performance” model. Practically, it provides design implications for adaptive collaboration frameworks in high-risk, high-complexity industrial systems, highlighting the value of dynamic role allocation in optimizing cognitive resource utilization and enhancing operational safety. Full article
