Search Results (2,122)

Search Parameters:
Keywords = dual-attention

16 pages, 4587 KiB  
Article
FAMNet: A Lightweight Stereo Matching Network for Real-Time Depth Estimation in Autonomous Driving
by Jingyuan Zhang, Qiang Tong, Na Yan and Xiulei Liu
Symmetry 2025, 17(8), 1214; https://doi.org/10.3390/sym17081214 - 1 Aug 2025
Abstract
Accurate and efficient stereo matching is fundamental to real-time depth estimation from symmetric stereo cameras in autonomous driving systems. However, existing high-accuracy stereo matching networks typically rely on computationally expensive 3D convolutions, which limit their practicality in real-world environments. In contrast, real-time methods often sacrifice accuracy or generalization capability. To address these challenges, we propose FAMNet (Fusion Attention Multi-Scale Network), a lightweight and generalizable stereo matching framework tailored for real-time depth estimation in autonomous driving applications. FAMNet consists of two novel modules: Fusion Attention-based Cost Volume (FACV) and Multi-scale Attention Aggregation (MAA). FACV constructs a compact yet expressive cost volume by integrating multi-scale correlation, attention-guided feature fusion, and channel reweighting, thereby reducing reliance on heavy 3D convolutions. MAA further enhances disparity estimation by fusing multi-scale contextual cues through pyramid-based aggregation and dual-path attention mechanisms. Extensive experiments on the KITTI 2012 and KITTI 2015 benchmarks demonstrate that FAMNet achieves a favorable trade-off between accuracy, efficiency, and generalization. On KITTI 2015, with the incorporation of FACV and MAA, the prediction accuracy of the baseline model is improved by 37% and 38%, respectively, and a total improvement of 42% is achieved by our final model. These results highlight FAMNet’s potential for practical deployment in resource-constrained autonomous driving systems requiring real-time and reliable depth perception. Full article
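
The FACV idea described above, replacing heavy 3D convolutions with a correlation-based cost volume that is then reweighted channel-wise, can be pictured with a short PyTorch sketch. Everything below (shapes, module names, the SE-style reweighting) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

def correlation_cost_volume(feat_l, feat_r, max_disp):
    """Correlation cost volume: cost[b, d, h, w] is the mean feature correlation
    between the left pixel (h, w) and the right pixel (h, w - d).
    feat_l, feat_r: (B, C, H, W); output: (B, D, H, W)."""
    B, C, H, W = feat_l.shape
    cost = feat_l.new_zeros(B, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            cost[:, d] = (feat_l * feat_r).mean(dim=1)
        else:
            cost[:, d, :, d:] = (feat_l[..., d:] * feat_r[..., :-d]).mean(dim=1)
    return cost

class ChannelReweight(nn.Module):
    """Squeeze-and-excitation style reweighting over the disparity/channel axis."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # (B, C) globally pooled weights
        return x * w[:, :, None, None]    # rescale each cost slice

# Toy usage on quarter-resolution features from a stereo pair.
fl, fr = torch.randn(2, 32, 64, 128), torch.randn(2, 32, 64, 128)
cost = ChannelReweight(48)(correlation_cost_volume(fl, fr, max_disp=48))
print(cost.shape)  # torch.Size([2, 48, 64, 128])
```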

26 pages, 8736 KiB  
Article
Uncertainty-Aware Fault Diagnosis of Rotating Compressors Using Dual-Graph Attention Networks
by Seungjoo Lee, YoungSeok Kim, Hyun-Jun Choi and Bongjun Ji
Machines 2025, 13(8), 673; https://doi.org/10.3390/machines13080673 - 1 Aug 2025
Abstract
Rotating compressors are foundational in various industrial processes, particularly in the oil-and-gas sector, where reliable fault detection is crucial for maintaining operational continuity. While Graph Attention Network (GAT) frameworks are widely available, this study advances the state of the art by introducing a Bayesian GAT method specifically tailored for vibration-based compressor fault diagnosis. The approach integrates domain-specific digital-twin simulations built with Rotordynamic software (1.3.0), and constructs dual adjacency matrices to encode both physically informed and data-driven sensor relationships. Additionally, a hybrid forecasting-and-reconstruction objective enables the model to capture short-term deviations as well as long-term waveform fidelity. Monte Carlo dropout further decomposes prediction uncertainty into aleatoric and epistemic components, providing a more robust and interpretable model. Comparative evaluations against conventional Long Short-Term Memory (LSTM)-based autoencoder and forecasting methods demonstrate that the proposed framework achieves superior fault-detection performance across multiple fault types, including misalignment, bearing failure, and unbalance. Moreover, uncertainty analyses confirm that fault severity correlates with increasing levels of both aleatoric and epistemic uncertainty, reflecting heightened noise and reduced model confidence under more severe conditions. By enhancing GAT fundamentals with a domain-tailored dual-graph strategy, specialized Bayesian inference, and digital-twin data generation, this research delivers a comprehensive and interpretable solution for compressor fault diagnosis, paving the way for more reliable and risk-aware predictive maintenance in complex rotating machinery. Full article
(This article belongs to the Section Machines Testing and Maintenance)
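
The uncertainty decomposition referred to above is commonly obtained with Monte Carlo dropout: the network predicts a mean and a variance, dropout stays active at inference, and repeated stochastic passes separate epistemic from aleatoric uncertainty. A minimal sketch, assuming a generic PyTorch regressor rather than the paper's Bayesian GAT:

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Toy forecaster that predicts a mean and a log-variance per output,
    with dropout kept active at test time for Monte Carlo sampling."""
    def __init__(self, in_dim, out_dim, hidden=64, p=0.2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p))
        self.mean_head = nn.Linear(hidden, out_dim)
        self.logvar_head = nn.Linear(hidden, out_dim)
    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples=50):
    """Epistemic = variance of the sampled means (model uncertainty);
    aleatoric = average predicted variance (data noise)."""
    model.train()                      # keep dropout stochastic
    means, variances = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        means.append(mu)
        variances.append(logvar.exp())
    means, variances = torch.stack(means), torch.stack(variances)
    return means.var(dim=0), variances.mean(dim=0)

model = HeteroscedasticRegressor(in_dim=16, out_dim=1)
epistemic, aleatoric = mc_dropout_uncertainty(model, torch.randn(8, 16))
print(epistemic.shape, aleatoric.shape)  # torch.Size([8, 1]) twice
```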

20 pages, 994 KiB  
Article
Analyzing Influencing Factors of Low-Carbon Technology Adoption in Hospital Construction Projects Based on TAM-TOE Framework
by Lei Jin, Dezhi Li, Yubin Zhang and Yi Zhao
Buildings 2025, 15(15), 2703; https://doi.org/10.3390/buildings15152703 - 31 Jul 2025
Abstract
Hospitals rank among the most energy-intensive public building typologies and offer substantial potential for carbon mitigation. However, their construction phase has received limited scholarly attention within China’s ‘dual carbon’ agenda. To address this research gap, this study develops and empirically validates an integrated Technology Acceptance Model and Technology-Organization-Environment framework tailored for hospital construction projects. The study not only identifies 12 critical adoption factors but also offers recommendations and discusses the relevance to multiple Sustainable Development Goals. This research provides both theoretical and practical insights for promoting sustainable hospital construction practices. Full article
(This article belongs to the Special Issue Urban Infrastructure and Resilient, Sustainable Buildings)

25 pages, 10331 KiB  
Article
Forest Fire Detection Method Based on Dual-Branch Multi-Scale Adaptive Feature Fusion Network
by Qinggan Wu, Chen Wei, Ning Sun, Xiong Xiong, Qingfeng Xia, Jianmeng Zhou and Xingyu Feng
Forests 2025, 16(8), 1248; https://doi.org/10.3390/f16081248 - 31 Jul 2025
Abstract
There are significant scale and morphological differences between fire and smoke features in forest fire detection. This paper proposes a detection method based on a dual-branch multi-scale adaptive feature fusion network (DMAFNet). In this method, a convolutional neural network (CNN) and a Transformer form a dual-branch backbone network that extracts local texture and global context information, respectively. To overcome the differences in feature distribution and response scale between the two branches, a feature correction module (FCM) is designed; through spatial and channel correction mechanisms, it adaptively aligns the features of the two branches. A Fusion Feature Module (FFM) is further introduced to fully integrate the dual-branch features through a two-way cross-attention mechanism and to suppress redundant information. Finally, a Multi-Scale Fusion Attention Unit (MSFAU) is designed to enhance the multi-scale detection capability for fire targets. Experimental results show that DMAFNet achieves significant improvements in mAP (mean average precision) over existing mainstream detection methods. Full article
(This article belongs to the Section Natural Hazards and Risk Management)
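
A two-way (bidirectional) cross-attention fusion of a CNN branch and a Transformer branch, as described for the FFM above, can be sketched as follows; the token flattening, head count, and output projection are illustrative assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttentionFusion(nn.Module):
    """Each branch queries the other (CNN tokens attend to Transformer tokens
    and vice versa); the two attended streams are concatenated and projected."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.cnn_to_tr = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.tr_to_cnn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, cnn_feat, tr_feat):
        # cnn_feat, tr_feat: (B, C, H, W) -> token sequences (B, H*W, C)
        B, C, H, W = cnn_feat.shape
        c = cnn_feat.flatten(2).transpose(1, 2)
        t = tr_feat.flatten(2).transpose(1, 2)
        c_attn, _ = self.cnn_to_tr(query=c, key=t, value=t)   # local queries global
        t_attn, _ = self.tr_to_cnn(query=t, key=c, value=c)   # global queries local
        fused = self.proj(torch.cat([c_attn, t_attn], dim=-1))
        return fused.transpose(1, 2).reshape(B, C, H, W)

fusion = BidirectionalCrossAttentionFusion(dim=64)
out = fusion(torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16))
print(out.shape)  # torch.Size([2, 64, 16, 16])
```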

23 pages, 1422 KiB  
Article
Large Vision Language Model: Enhanced-RSCLIP with Exemplar-Image Prompting for Uncommon Object Detection in Satellite Imagery
by Taiwo Efunogbon, Abimbola Efunogbon, Enjie Liu, Dayou Li and Renxi Qiu
Electronics 2025, 14(15), 3071; https://doi.org/10.3390/electronics14153071 - 31 Jul 2025
Abstract
Large Vision Language Models (LVLMs) have shown promise in remote sensing applications, yet they struggle with “uncommon” objects that lack sufficient publicly available labeled data. This paper presents Enhanced-RSCLIP, a novel dual-prompt architecture that combines text prompting with exemplar-image processing for cattle herd detection in satellite imagery. The key innovation is an exemplar-image preprocessing module that uses crop-based or attention-based algorithms to extract focused object features; together with the textual description, these features form a dual stream fed to a contrastive learning framework that fuses text and visual exemplar embeddings. We evaluated the method on a custom dataset of 260 satellite images across UK and Nigerian regions. Enhanced-RSCLIP with crop-based exemplar processing achieved 72% accuracy in cattle detection and 56.2% overall accuracy on cross-domain transfer tasks, significantly outperforming text-only CLIP (31% overall accuracy). The dual-prompt architecture enables effective few-shot learning and cross-regional transfer from data-rich (UK) to data-sparse (Nigeria) environments, demonstrating a 41% improvement over baseline approaches for uncommon object detection in satellite imagery. Full article
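
The dual-prompt idea, fusing a text embedding with an exemplar-image embedding and scoring image regions against the fused prompt, reduces to a few lines once encoder outputs are available. The sketch below uses random stand-ins for the encoder outputs and a fixed fusion weight alpha, both of which are assumptions; Enhanced-RSCLIP learns this fusion through contrastive training.

```python
import torch
import torch.nn.functional as F

def dual_prompt_scores(text_emb, exemplar_emb, patch_embs, alpha=0.5):
    """Fuse a text-prompt embedding with an exemplar-image embedding, then
    score candidate image patches by cosine similarity to the fused prompt.
    text_emb, exemplar_emb: (D,); patch_embs: (N, D); returns (N,) scores."""
    text_emb = F.normalize(text_emb, dim=-1)
    exemplar_emb = F.normalize(exemplar_emb, dim=-1)
    prompt = F.normalize(alpha * text_emb + (1 - alpha) * exemplar_emb, dim=-1)
    return F.normalize(patch_embs, dim=-1) @ prompt

# Toy usage with random stand-ins for encoder outputs (D = 512, 100 patches).
scores = dual_prompt_scores(torch.randn(512), torch.randn(512), torch.randn(100, 512))
print(scores.argmax())  # index of the patch most similar to the fused dual prompt
```
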
25 pages, 21950 KiB  
Article
ESL-YOLO: Edge-Aware Side-Scan Sonar Object Detection with Adaptive Quality Assessment
by Zhanshuo Zhang, Changgeng Shuai, Chengren Yuan, Buyun Li, Jianguo Ma and Xiaodong Shang
J. Mar. Sci. Eng. 2025, 13(8), 1477; https://doi.org/10.3390/jmse13081477 - 31 Jul 2025
Abstract
Focusing on the problem of insufficient detection accuracy caused by blurred target boundaries, variable scales, and severe noise interference in side-scan sonar images, this paper proposes a high-precision detection network named ESL-YOLO, which integrates edge perception and adaptive quality assessment. Firstly, an Edge Fusion Module (EFM) is designed, which integrates the Sobel operator into depthwise separable convolution. Through a dual-branch structure, it realizes effective fusion of edge features and spatial features, significantly enhancing the ability to recognize targets with blurred boundaries. Secondly, a Self-Calibrated Dual Attention (SCDA) Module is constructed. By means of feature cross-calibration and multi-scale channel attention fusion mechanisms, it achieves adaptive fusion of shallow details and deep-rooted semantic content, improving the detection accuracy for small-sized targets and targets with elaborate shapes. Finally, a Location Quality Estimator (LQE) is introduced, which quantifies localization quality using the statistical characteristics of bounding box distribution, effectively reducing false detections and missed detections. Experiments on the SIMD dataset show that the mAP@0.5 of ESL-YOLO reaches 84.65%. The precision and recall rate reach 87.67% and 75.63%, respectively. Generalization experiments on additional sonar datasets further validate the effectiveness of the proposed method across different data distributions and target types, providing an effective technical solution for side-scan sonar image target detection. Full article
(This article belongs to the Section Ocean Engineering)
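
Integrating the Sobel operator into a depthwise separable convolution, as described for the EFM above, amounts to running fixed Sobel kernels as a grouped convolution and fusing the result with a plain spatial branch. The following PyTorch sketch is a generic dual-branch version under assumed shapes, not the paper's module.

```python
import torch
import torch.nn as nn

class SobelDepthwiseEdgeBranch(nn.Module):
    """Depthwise convolution whose kernels are fixed Sobel filters, so each
    input channel yields horizontal/vertical gradient (edge) responses."""
    def __init__(self, channels):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # One (gx, gy) pair per input channel, run as a grouped (depthwise) conv.
        weight = torch.stack([gx, gy]).repeat(channels, 1, 1).unsqueeze(1)  # (2C, 1, 3, 3)
        self.conv = nn.Conv2d(channels, 2 * channels, 3, padding=1,
                              groups=channels, bias=False)
        self.conv.weight = nn.Parameter(weight, requires_grad=False)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # project edges back to C channels

    def forward(self, x):
        return self.fuse(self.conv(x))

class EdgeSpatialFusion(nn.Module):
    """Dual-branch block: a plain 3x3 spatial branch plus the Sobel edge branch,
    summed to strengthen responses at blurred target boundaries."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.edge = SobelDepthwiseEdgeBranch(channels)
    def forward(self, x):
        return torch.relu(self.spatial(x) + self.edge(x))

block = EdgeSpatialFusion(16)
print(block(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```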
29 pages, 15488 KiB  
Article
GOFENet: A Hybrid Transformer–CNN Network Integrating GEOBIA-Based Object Priors for Semantic Segmentation of Remote Sensing Images
by Tao He, Jianyu Chen and Delu Pan
Remote Sens. 2025, 17(15), 2652; https://doi.org/10.3390/rs17152652 - 31 Jul 2025
Abstract
Geographic object-based image analysis (GEOBIA) has demonstrated substantial utility in remote sensing tasks. However, its integration with deep learning remains largely confined to image-level classification. This is primarily due to the irregular shapes and fragmented boundaries of segmented objects, which limit its applicability in semantic segmentation. While convolutional neural networks (CNNs) excel at local feature extraction, they inherently struggle to capture long-range dependencies. In contrast, Transformer-based models are well suited for global context modeling but often lack fine-grained local detail. To overcome these limitations, we propose GOFENet (Geo-Object Feature Enhanced Network)—a hybrid semantic segmentation architecture that effectively fuses object-level priors into deep feature representations. GOFENet employs a dual-encoder design combining CNN and Swin Transformer architectures, enabling multi-scale feature fusion through skip connections to preserve both local and global semantics. An auxiliary branch incorporating cascaded atrous convolutions is introduced to inject information of segmented objects into the learning process. Furthermore, we develop a cross-channel selection module (CSM) for refined channel-wise attention, a feature enhancement module (FEM) to merge global and local representations, and a shallow–deep feature fusion module (SDFM) to integrate pixel- and object-level cues across scales. Experimental results on the GID and LoveDA datasets demonstrate that GOFENet achieves superior segmentation performance, with 66.02% mIoU and 51.92% mIoU, respectively. The model exhibits strong capability in delineating large-scale land cover features, producing sharper object boundaries and reducing classification noise, while preserving the integrity and discriminability of land cover categories. Full article
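
Injecting GEOBIA object priors into pixel-level features can be pictured as pooling deep features within each segmented object and broadcasting the object mean back onto the pixel grid before fusion. The sketch below shows only that generic pooling step; the segment map and the fusion-by-addition are assumptions for illustration, not GOFENet's SDFM.

```python
import torch

def object_prior_pooling(features, segment_ids):
    """Average pixel features within each GEOBIA segment and broadcast the mean
    back to every pixel of that segment, yielding object-level features aligned
    with the pixel grid.
    features: (C, H, W) deep features; segment_ids: (H, W) integer object map."""
    C, H, W = features.shape
    flat_feat = features.reshape(C, -1)        # (C, H*W)
    flat_ids = segment_ids.reshape(-1)         # (H*W,)
    n_seg = int(flat_ids.max()) + 1
    sums = torch.zeros(C, n_seg).index_add_(1, flat_ids, flat_feat)
    counts = torch.zeros(n_seg).index_add_(0, flat_ids,
                                           torch.ones_like(flat_ids, dtype=torch.float))
    seg_means = sums / counts.clamp(min=1)     # (C, n_seg) per-object mean features
    return seg_means[:, flat_ids].reshape(C, H, W)

# Toy usage: 4 segments over an 8x8 grid, fused with pixel features by addition.
feat = torch.randn(16, 8, 8)
segs = torch.randint(0, 4, (8, 8))
fused = feat + object_prior_pooling(feat, segs)
print(fused.shape)  # torch.Size([16, 8, 8])
```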

23 pages, 1396 KiB  
Article
Unsupervised Anomaly Detection Method for Electrical Equipment Based on Audio Latent Representation and Parallel Attention Mechanism
by Wei Zhou, Shaoping Zhou, Yikun Cao, Junkang Yang and Hongqing Liu
Appl. Sci. 2025, 15(15), 8474; https://doi.org/10.3390/app15158474 - 30 Jul 2025
Abstract
The stable operation of electrical equipment is critical for industrial safety, yet traditional anomaly detection methods often suffer from limitations such as high resource demands, dependency on expert knowledge, and limited applicability in real-world conditions. To address these challenges, this article proposes an unsupervised anomaly detection method for electrical equipment that utilizes audio latent representations and a parallel attention mechanism. The framework employs an autoencoder to extract low-dimensional features from audio signals and introduces a phase-aware parallel attention block to dynamically weight these features for improved anomaly sensitivity. With adversarial training and a dual-encoding mechanism, the proposed method demonstrates robust performance in complex scenarios. On public datasets (MIMII and ToyADMOS) and our collected real-world wind turbine data, it achieves high AUC scores, surpassing the best baselines and demonstrating that the framework design is suitable for industrial applications. Full article
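
At its core, this family of methods scores a recording by how poorly an autoencoder trained on normal machine sounds reconstructs it. A minimal latent-representation baseline is sketched below; the paper's phase-aware parallel attention, adversarial training, and dual encoding build on top of this idea and are not shown.

```python
import torch
import torch.nn as nn

class AudioFeatureAutoencoder(nn.Module):
    """Compress framed spectral features to a low-dimensional latent code and
    reconstruct them; large reconstruction error flags anomalous machine sounds."""
    def __init__(self, in_dim=128, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

@torch.no_grad()
def anomaly_score(model, frames):
    """Per-clip score = mean squared reconstruction error over its frames."""
    recon, _ = model(frames)
    return ((frames - recon) ** 2).mean()

# Toy usage: 200 spectral frames of dimension 128 from one recording.
model = AudioFeatureAutoencoder()
print(float(anomaly_score(model, torch.randn(200, 128))))
```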

21 pages, 2267 KiB  
Article
Dual-Branch Network for Blind Quality Assessment of Stereoscopic Omnidirectional Images: A Spherical and Perceptual Feature Integration Approach
by Zhe Wang, Yi Liu and Yang Song
Electronics 2025, 14(15), 3035; https://doi.org/10.3390/electronics14153035 - 30 Jul 2025
Abstract
Stereoscopic omnidirectional images (SOIs) have gained significant attention for the immersive viewing experience they provide by combining binocular depth with panoramic scenes. However, evaluating their visual quality remains challenging due to their unique spherical geometry, binocular disparity, and viewing conditions. To address these challenges, this paper proposes a dual-branch deep learning framework that integrates spherical structural features and perceptual binocular cues to assess the quality of SOIs without reference. Specifically, the global branch leverages spherical convolutions to capture wide-range spatial distortions, while the local branch utilizes a binocular difference module based on the discrete wavelet transform to extract depth-aware perceptual information. A feature complementarity module is introduced to fuse global and local representations for final quality prediction. Experimental evaluations on two public SOIQA datasets (NBU-SOID and SOLID) demonstrate that the proposed method achieves state-of-the-art performance, with PLCC/SROCC values of 0.926/0.918 and 0.918/0.891, respectively. These results validate the effectiveness and robustness of our approach in stereoscopic omnidirectional image quality assessment tasks. Full article
(This article belongs to the Special Issue AI in Signal and Image Processing)
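
The binocular difference module is described as operating on a discrete wavelet transform of the left/right difference signal. A minimal sketch of that step, using the PyWavelets package (an assumed dependency) and simple sub-band statistics as stand-in features:

```python
import numpy as np
import pywt

def binocular_difference_subbands(left_gray, right_gray, wavelet="haar"):
    """One-level 2-D DWT of the left/right difference signal: the approximation
    band captures coarse disparity structure, while the detail bands carry the
    depth-aware high-frequency cues used as quality features."""
    diff = left_gray.astype(np.float32) - right_gray.astype(np.float32)
    cA, (cH, cV, cD) = pywt.dwt2(diff, wavelet)
    # Simple perceptual statistics per sub-band (mean absolute coefficient).
    bands = {"approx": cA, "horizontal": cH, "vertical": cV, "diagonal": cD}
    return {name: float(np.abs(band).mean()) for name, band in bands.items()}

# Toy usage on random stand-ins for rectified left/right views.
left, right = np.random.rand(256, 512), np.random.rand(256, 512)
print(binocular_difference_subbands(left, right))
```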

16 pages, 5301 KiB  
Article
TSINet: A Semantic and Instance Segmentation Network for 3D Tomato Plant Point Clouds
by Shanshan Ma, Xu Lu and Liang Zhang
Appl. Sci. 2025, 15(15), 8406; https://doi.org/10.3390/app15158406 - 29 Jul 2025
Abstract
Accurate organ-level segmentation is essential for achieving high-throughput, non-destructive, and automated plant phenotyping. To address the challenge of intelligent acquisition of phenotypic parameters in tomato plants, we propose TSINet, an end-to-end dual-task segmentation network designed for effective and precise semantic labeling and instance recognition of tomato point clouds, based on the Pheno4D dataset. TSINet adopts an encoder–decoder architecture, where a shared encoder incorporates four Geometry-Aware Adaptive Feature Extraction Blocks (GAFEBs) to effectively capture local structures and geometric relationships in raw point clouds. Two parallel decoder branches are employed to independently decode shared high-level features for the respective segmentation tasks. Additionally, a Dual Attention-Based Feature Enhancement Module (DAFEM) is introduced to further enrich feature representations. The experimental results demonstrate that TSINet achieves superior performance in both semantic and instance segmentation, particularly excelling in challenging categories such as stems and large-scale instances. Specifically, TSINet achieves 97.00% mean precision, 96.17% recall, 96.57% F1-score, and 93.43% IoU in semantic segmentation and 81.54% mPrec, 81.69% mRec, 81.60% mCov, and 86.40% mWCov in instance segmentation. Compared with state-of-the-art methods, TSINet achieves balanced improvements across all metrics, significantly reducing false positives and false negatives while enhancing spatial completeness and segmentation accuracy. Furthermore, we conducted ablation studies and generalization tests to systematically validate the effectiveness of each TSINet component and the overall robustness of the model. This study provides an effective technological approach for high-throughput automated phenotyping of tomato plants, contributing to the advancement of intelligent agricultural management. Full article
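
The dual-task layout, one shared encoder feeding parallel semantic and instance decoders, can be sketched with a toy point-wise network; the GAFEBs and DAFEM are omitted, and all dimensions and class names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DualTaskPointSegNet(nn.Module):
    """Shared point-wise encoder with two parallel decoders: one predicts
    per-point semantic logits, the other a per-point embedding that is later
    clustered into organ instances."""
    def __init__(self, in_dim=3, num_classes=3, embed_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 128), nn.ReLU())
        self.semantic_head = nn.Linear(128, num_classes)   # e.g. soil / stem / leaf
        self.instance_head = nn.Linear(128, embed_dim)     # embedding for clustering

    def forward(self, points):
        # points: (B, N, 3) raw xyz coordinates
        shared = self.encoder(points)
        return self.semantic_head(shared), self.instance_head(shared)

net = DualTaskPointSegNet()
logits, embeddings = net(torch.randn(2, 1024, 3))
print(logits.shape, embeddings.shape)  # (2, 1024, 3) and (2, 1024, 8)
```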

21 pages, 2965 KiB  
Article
Inspection Method Enabled by Lightweight Self-Attention for Multi-Fault Detection in Photovoltaic Modules
by Shufeng Meng and Tianxu Xu
Electronics 2025, 14(15), 3019; https://doi.org/10.3390/electronics14153019 - 29 Jul 2025
Abstract
Bird-dropping fouling and hotspot anomalies remain the most prevalent and detrimental defects in utility-scale photovoltaic (PV) plants; their co-occurrence on a single module markedly curbs energy yield and accelerates irreversible cell degradation. However, markedly disparate visual–thermal signatures of the two phenomena impede high-fidelity concurrent detection in existing robotic inspection systems, while stringent onboard compute budgets also preclude the adoption of bulky detectors. To resolve this accuracy–efficiency trade-off for dual-defect detection, we present YOLOv8-SG, a lightweight yet powerful framework engineered for mobile PV inspectors. First, a rigorously curated multi-modal dataset—RGB for stains and long-wave infrared for hotspots—is assembled to enforce robust cross-domain representation learning. Second, the HSV color space is leveraged to disentangle chromatic and luminance cues, thereby stabilizing appearance variations across sensors. Third, a single-head self-attention (SHSA) block is embedded in the backbone to harvest long-range dependencies at negligible parameter cost, while a global context (GC) module is grafted onto the detection head to amplify fine-grained semantic cues. Finally, an auxiliary bounding box refinement term is appended to the loss to hasten convergence and tighten localization. Extensive field experiments demonstrate that YOLOv8-SG attains 86.8% mAP@0.5, surpassing the vanilla YOLOv8 by 2.7 pp while trimming 12.6% of parameters (18.8 MB). Grad-CAM saliency maps corroborate that the model’s attention consistently coincides with defect regions, underscoring its interpretability. The proposed method, therefore, furnishes PV operators with a practical low-latency solution for concurrent bird-dropping and hotspot surveillance. Full article
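
A single-head self-attention (SHSA) block over a feature map is essentially one scaled dot-product attention across spatial positions with a residual connection. The sketch below shows that generic pattern; the channel width and the 1x1 QKV projection are assumptions, not the exact block used in YOLOv8-SG.

```python
import torch
import torch.nn as nn

class SingleHeadSelfAttention(nn.Module):
    """Single-head self-attention over the spatial positions of a feature map,
    keeping parameter cost low by avoiding multi-head projections."""
    def __init__(self, channels):
        super().__init__()
        self.qkv = nn.Conv2d(channels, 3 * channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, x):
        B, C, H, W = x.shape
        q, k, v = self.qkv(x).flatten(2).chunk(3, dim=1)                   # each (B, C, H*W)
        attn = torch.softmax(q.transpose(1, 2) @ k * self.scale, dim=-1)   # (B, HW, HW)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(B, C, H, W)
        return x + out                                                     # residual connection

shsa = SingleHeadSelfAttention(32)
print(shsa(torch.randn(1, 32, 20, 20)).shape)  # torch.Size([1, 32, 20, 20])
```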

22 pages, 2525 KiB  
Article
mmHSE: A Two-Stage Framework for Human Skeleton Estimation Using mmWave FMCW Radar Signals
by Jiake Tian, Yi Zou and Jiale Lai
Appl. Sci. 2025, 15(15), 8410; https://doi.org/10.3390/app15158410 - 29 Jul 2025
Abstract
We present mmHSE, a two-stage framework for human skeleton estimation using dual millimeter-Wave (mmWave) Frequency-Modulated Continuous-Wave (FMCW) radar signals. To enable data-driven model design and evaluation, we collect and process over 30,000 range–angle maps from 12 users across three representative indoor environments using a dual-node radar acquisition platform. Leveraging the collected data, we develop a two-stage neural architecture for human skeleton estimation. The first stage employs a dual-branch network with depthwise separable convolutions and self-attention to extract multi-scale spatiotemporal features from dual-view radar inputs. A cross-modal attention fusion module is then used to generate initial estimates of 21 skeletal keypoints. The second stage refines these estimates using a skeletal topology module based on graph convolutional networks, which captures spatial dependencies among joints to enhance localization accuracy. Experiments show that mmHSE achieves a Mean Absolute Error (MAE) of 2.78 cm. In cross-domain evaluations, the MAE remains at 3.14 cm, demonstrating the method’s generalization ability and robustness for non-intrusive human pose estimation from mmWave FMCW radar signals. Full article
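
The second-stage refinement described here, a graph convolution over the skeletal topology, boils down to aggregating each joint's estimate from its neighbours through a normalized adjacency matrix. A toy sketch with a 5-joint chain graph standing in for the 21-joint skeleton; the layer sizes are assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class SkeletonGCNRefiner(nn.Module):
    """Refines coarse joint estimates by aggregating neighbours through a
    row-normalized adjacency matrix (a light GCN-style residual update)."""
    def __init__(self, adjacency, in_dim=3, hidden=32):
        super().__init__()
        A = adjacency + torch.eye(adjacency.shape[0])      # add self-loops
        self.register_buffer("A_norm", A / A.sum(dim=1, keepdim=True))
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, in_dim)

    def forward(self, joints):                             # joints: (B, J, 3) coarse keypoints
        h = torch.relu(self.fc1(self.A_norm @ joints))     # aggregate neighbour coordinates
        return joints + self.fc2(self.A_norm @ h)          # residual refinement

# Toy 5-joint chain graph standing in for the 21-joint skeleton topology.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
A = torch.zeros(5, 5)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
refiner = SkeletonGCNRefiner(A)
print(refiner(torch.randn(2, 5, 3)).shape)  # torch.Size([2, 5, 3])
```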

28 pages, 7240 KiB  
Article
MF-FusionNet: A Lightweight Multimodal Network for Monitoring Drought Stress in Winter Wheat Based on Remote Sensing Imagery
by Qiang Guo, Bo Han, Pengyu Chu, Yiping Wan and Jingjing Zhang
Agriculture 2025, 15(15), 1639; https://doi.org/10.3390/agriculture15151639 - 29 Jul 2025
Abstract
To improve the identification of drought-affected areas in winter wheat, this paper proposes a lightweight network called MF-FusionNet, based on multimodal fusion of RGB images and vegetation indices (NDVI and EVI). A multimodal dataset covering various drought levels in winter wheat was constructed. To enable deep fusion of the modalities, a Lightweight Multimodal Fusion Block (LMFB) was designed, and a Dual-Coordinate Attention Feature Extraction module (DCAFE) was introduced to enhance semantic feature representation and improve drought region identification. To address differences in scale and semantics across network layers, a Cross-Stage Feature Fusion Strategy (CFFS) was proposed to integrate multi-level features and enhance overall performance. The effectiveness of each module was validated through ablation experiments. Compared to traditional single-modal methods, MF-FusionNet improved accuracy, recall, and F1-score by 1.35%, 1.43%, and 1.29%, respectively, reaching 96.71%, 96.71%, and 96.64%. This study provides a basis for real-time monitoring and precise irrigation management under winter wheat drought stress. Full article
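
The vegetation indices fused with RGB are computed from standard band formulas: NDVI = (NIR - Red) / (NIR + Red) and EVI = 2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1). The sketch below computes both and stacks them with RGB as a simple early-fusion input tensor; the channel stacking is an illustrative assumption, since MF-FusionNet fuses the modalities through its LMFB rather than by plain concatenation.

```python
import numpy as np

def vegetation_indices(red, nir, blue, eps=1e-6):
    """NDVI and EVI from surface-reflectance bands (arrays scaled to [0, 1]).
    NDVI = (NIR - Red) / (NIR + Red)
    EVI  = 2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1)"""
    ndvi = (nir - red) / (nir + red + eps)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0 + eps)
    return ndvi, evi

def build_multimodal_input(rgb, nir):
    """Stack RGB with the two index channels into a 5-channel (5, H, W) tensor,
    the kind of early-fusion input a multimodal network can consume."""
    red, _, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    ndvi, evi = vegetation_indices(red, nir, blue)
    return np.concatenate([rgb.transpose(2, 0, 1), ndvi[None], evi[None]], axis=0)

# Toy usage on a random reflectance patch.
rgb = np.random.rand(64, 64, 3).astype(np.float32)
nir = np.random.rand(64, 64).astype(np.float32)
print(build_multimodal_input(rgb, nir).shape)  # (5, 64, 64)
```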

20 pages, 307 KiB  
Article
Curious and Critical: A Delphi Study of Middle School Teachers’ Competencies in Support, Literacy, and Technology
by Kristian Blomberg Kjellström, Petra Magnusson and Daniel Östlund
Educ. Sci. 2025, 15(8), 973; https://doi.org/10.3390/educsci15080973 - 29 Jul 2025
Abstract
Providing inclusive education and engaging all students in reading and writing activities presents an ongoing challenge for teachers, not necessarily resolved by implementing digital technology. This study addresses the need to better understand teacher competencies within the digitally infused classroom, specifically in relation to inclusive education and reading and writing practices. The study investigates the competencies and supportive strategies of middle school teachers who perceive themselves as successful in this area. The study employs the Delphi technique, using iterative surveys through which these teachers describe and rate aspects of their competencies and strategies. The results are analyzed through a modified version of the Technological, Pedagogical, and Content Knowledge (TPACK) framework, with particular attention to how teachers support students using their content knowledge and digital competency. Findings reveal a range of strategies and competency aspects related to both proactive accessibility and reactive individualization, using a variety of digital tools and text modalities. The teachers describe a dual orientation in their ability to curiously explore digital tools while simultaneously being able to critically appraise their usefulness. The findings contribute insights on what can support teachers when collaboratively developing knowledge of local practices and their agency in relation to available digital tools. Full article
(This article belongs to the Special Issue Students with Special Educational Needs in Reading and Writing)
21 pages, 2831 KiB  
Review
IL-20 Subfamily Biological Effects: Mechanistic Insights and Therapeutic Perspectives in Cancer
by Valentina Maggisano, Maria D’Amico, Saveria Aquila, Francesca Giordano, Anna Martina Battaglia, Adele Chimento, Flavia Biamonte, Diego Russo, Vincenzo Pezzi, Stefania Bulotta and Francesca De Amicis
Int. J. Mol. Sci. 2025, 26(15), 7320; https://doi.org/10.3390/ijms26157320 - 29 Jul 2025
Abstract
The interleukin-20 (IL-20) cytokine subfamily, a subset of the IL-10 superfamily, includes IL-19, IL-20, IL-22, IL-24, and IL-26. Recently, their involvement in cancer biology has gained attention, particularly due to their impact on the tumor microenvironment (TME). Notably, IL-20 subfamily cytokines can exert both pro-tumorigenic and anti-tumorigenic effects, depending on the context. For example, IL-22 promotes tumor growth by enhancing cancer cell proliferation and protecting against apoptosis, whereas IL-24 demonstrates anti-tumor activity by inducing cancer cell death and inhibiting metastasis. Additionally, these cytokines influence macrophage polarization—an essential factor in the immune landscape of tumors—thereby modulating the inflammatory environment and immune evasion strategies. Understanding the dual role of IL-20 subfamily cytokines within the TME and their interactions with cancer cell hallmarks presents a promising avenue for therapeutic development. Interleukin-20 receptor antagonists are being researched for their role in cancer therapy, since they potentially inhibit tumor growth and progression. This review explores the relationship between IL-20 cytokines and key cancer-related processes, including growth and proliferative advantages, angiogenesis, invasion, metastasis, and TME support. Further research is necessary to unravel the specific mechanisms underlying their contributions to tumor progression and to determine their potential for targeted therapeutic strategies. Full article
(This article belongs to the Special Issue Advanced Research on Immune Cells and Cytokines (2nd Edition))