Search Results (1,273)

Search Parameters:
Keywords = feature similarity attention

18 pages, 1589 KiB  
Article
EEG-Based Attention Classification for Enhanced Learning Experience
by Madiha Khalid Syed, Hong Wang, Awais Ahmad Siddiqi, Shahnawaz Qureshi and Mohamed Amin Gouda
Appl. Sci. 2025, 15(15), 8668; https://doi.org/10.3390/app15158668 (registering DOI) - 5 Aug 2025
Abstract
This paper presents a novel EEG-based learning system designed to enhance the efficiency and effectiveness of studying by dynamically adjusting the difficulty level of learning materials based on real-time attention levels. In the training phase, EEG signals corresponding to high and low concentration levels are recorded while participants engage in quizzes to learn and memorize Chinese characters. The attention levels are determined based on performance metrics derived from the quiz results. Following extensive preprocessing, the EEG data undergoes several feature extraction steps: removal of artifacts due to eye blinks and facial movements, segregation of waves based on their frequencies, similarity indexing with respect to delay, binary thresholding, and principal component analysis (PCA). These extracted features are then fed into a k-NN classifier, which accurately distinguishes between high and low attention brain wave patterns, with the labels derived from the quiz performance indicating high or low attention. During the implementation phase, the system continuously monitors the user’s EEG signals while studying. When low attention levels are detected, the system increases the repetition frequency and reduces the difficulty of the flashcards to refocus the user’s attention. Conversely, when high concentration levels are identified, the system escalates the difficulty level of the flashcards to maximize the learning challenge. This adaptive approach ensures a more effective learning experience by maintaining optimal cognitive engagement, resulting in improved learning rates, reduced stress, and increased overall learning efficiency.
Our results indicate that this EEG-based adaptive learning system holds significant potential for personalized education, fostering better retention and understanding of Chinese characters. Full article
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)
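The final classification stage described above — a k-NN vote over the extracted EEG features — can be sketched as follows; the feature vectors, labels, and k below are toy values for illustration, not the authors' data:

```python
from collections import Counter
import math

def knn_classify(train, labels, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training vectors (Euclidean distance) — a basic k-NN classifier."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy feature vectors standing in for per-band EEG features
# (labels: 1 = high attention, 0 = low attention).
train = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
labels = [1, 1, 0, 0]
print(knn_classify(train, labels, (0.85, 0.15)))  # -> 1 (high attention)
```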

17 pages, 37081 KiB  
Article
MADet: A Multi-Dimensional Feature Fusion Model for Detecting Typical Defects in Weld Radiographs
by Shuai Xue, Wei Xu, Zhu Xiong, Jing Zhang and Yanyan Liang
Materials 2025, 18(15), 3646; https://doi.org/10.3390/ma18153646 - 3 Aug 2025
Abstract
Accurate weld defect detection is critical for ensuring structural safety and evaluating welding quality in industrial applications. Manual inspection methods have inherent limitations, including inefficiency and inadequate sensitivity to subtle defects. Existing detection models, primarily designed for natural images, struggle to adapt to the characteristic challenges of weld X-ray images, such as high noise, low contrast, and inter-defect similarity, particularly leading to missed detections and false positives for small defects. To address these challenges, a multi-dimensional feature fusion model (MADet), which is a multi-branch deep fusion network for weld defect detection, was proposed. The framework incorporates two key innovations: (1) A multi-scale feature fusion network integrated with lightweight attention residual modules to enhance the perception of fine-grained defect features by leveraging low-level texture information. (2) An anchor-based feature-selective detection head was used to improve the discrimination and localization accuracy for five typical defect categories. Extensive experiments on both public and proprietary weld defect datasets demonstrated that MADet achieved significant improvements over the state-of-the-art YOLO variants. Specifically, it surpassed the second-best model by 7.41% in mAP@0.5, indicating strong industrial applicability. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
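The reported mAP@0.5 gain counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5; a minimal IoU computation, with illustrative boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143, below the 0.5 bar
```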

21 pages, 4400 KiB  
Article
BFLE-Net: Boundary Feature Learning and Enhancement Network for Medical Image Segmentation
by Jiale Fan, Liping Liu and Xinyang Yu
Electronics 2025, 14(15), 3054; https://doi.org/10.3390/electronics14153054 - 30 Jul 2025
Abstract
Multi-organ medical image segmentation is essential for accurate clinical diagnosis, effective treatment planning, and reliable prognosis, yet it remains challenging due to complex backgrounds, irrelevant noise, unclear organ boundaries, and wide variations in organ size. To address these challenges, the boundary feature learning and enhancement network is proposed. This model integrates a dedicated boundary learning module combined with an auxiliary loss function to strengthen the semantic correlations between boundary pixels and regional features, thus reducing category mis-segmentation. Additionally, channel and positional compound attention mechanisms are employed to selectively filter features and minimize background interference. To further enhance multi-scale representation capabilities, the dynamic scale-aware context module dynamically selects and fuses multi-scale features, significantly improving the model’s adaptability. The model achieves average Dice similarity coefficients of 81.67% on the Synapse dataset and 90.55% on the ACDC dataset, outperforming state-of-the-art methods. This network significantly improves segmentation by emphasizing boundary accuracy, noise reduction, and multi-scale adaptability, enhancing clinical diagnostics and treatment planning. Full article
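The Dice similarity coefficient used to score these segmentations reduces to a short computation over binary masks (the masks below are illustrative):

```python
def dice(pred, target):
    """Dice similarity coefficient between two binary masks
    (flat sequences of 0/1): 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 2 * inter / (sum(pred) + sum(target))

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 1, 1]
print(dice(pred, target))  # 2*2 / (3 + 3) ≈ 0.667
```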

15 pages, 4592 KiB  
Article
SSAM_YOLOv5: YOLOv5 Enhancement for Real-Time Detection of Small Road Signs
by Fatima Qanouni, Hakim El Massari, Noreddine Gherabi and Maria El-Badaoui
Digital 2025, 5(3), 30; https://doi.org/10.3390/digital5030030 - 29 Jul 2025
Abstract
Many traffic-sign detection systems are available to assist drivers with particular conditions such as small and distant signs, multiple signs on the road, objects similar to signs, and other challenging conditions. Real-time object detection is an indispensable aspect of these detection systems, with detection speed and efficiency being critical parameters. To improve on these parameters and enhance road-sign detection under diverse conditions, we proposed a comprehensive methodology, SSAM_YOLOv5, targeting feature extraction and small-road-sign detection performance. The method was based on a modified version of YOLOv5s. First, we introduced attention modules into the backbone to focus on the region of interest within video frames; secondly, we replaced the activation function with the SwishT_C activation function to enhance feature extraction and achieve a balance between inference, precision, and mean average precision (mAP@50) rates. Compared to the YOLOv5 baseline, the proposed improvements achieved remarkable increases of 1.4% and 1.9% in mAP@50 on the Tiny LISA and GTSDB datasets, respectively, confirming their effectiveness. Full article
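SwishT_C belongs to the Swish activation family; its exact form is not given in the abstract, so the sketch below shows only the base Swish, x · sigmoid(βx), that such variants extend:

```python
import math

def swish(x, beta=1.0):
    """Base Swish activation: x * sigmoid(beta * x). SwishT_C (used in the
    paper) is a variant of this family; its exact form is not reproduced here."""
    return x / (1.0 + math.exp(-beta * x))

print(swish(0.0))            # 0.0
print(round(swish(1.0), 4))  # ≈ 0.7311, i.e. sigmoid(1) scaled by x = 1
```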

16 pages, 2943 KiB  
Article
Long Short-Term Memory-Based Fall Detection by Frequency-Modulated Continuous Wave Millimeter-Wave Radar Sensor for Seniors Living Alone
by Yun Seop Yu, Seongjo Wie, Hojin Lee, Jeongwoo Lee and Nam Ho Kim
Appl. Sci. 2025, 15(15), 8381; https://doi.org/10.3390/app15158381 - 28 Jul 2025
Abstract
In this study, four types of fall detection systems for seniors living alone using x-y scatter and Doppler range images measured from frequency-modulated continuous wave (FMCW) millimeter-wave (mmWave) sensors were introduced. Despite advancements in fall detection, existing long short-term memory (LSTM)-based approaches often struggle with effectively distinguishing falls from similar activities of daily living (ADLs) due to their uniform treatment of all time steps, potentially overlooking critical motion cues. To address this limitation, an attention mechanism has been integrated. Data were collected from seven participants, resulting in a dataset of 669 samples, including 285 falls and 384 ADLs with walking, lying, inactivity, and sitting. Four LSTM-based architectures for fall detection were proposed and evaluated: Raw-LSTM, Raw-LSTM-Attention, HOG-LSTM, and HOG-LSTM-Attention. The histogram of oriented gradient (HOG) method was used for feature extraction, while LSTM networks captured temporal dependencies. The attention mechanism further enhanced model performance by focusing on relevant input features. The Raw-LSTM model processed raw mmWave radar images through LSTM layers and dense layers for classification. The Raw-LSTM-Attention model extended Raw-LSTM with an added self-attention mechanism within the traditional attention framework. The HOG-LSTM model added a preprocessing step to the Raw-LSTM model in which HOG features were extracted and classified using an SVM. The HOG-LSTM-Attention model built upon the HOG-LSTM model by incorporating a self-attention mechanism to enhance the model’s ability to accurately classify activities. Evaluation metrics such as Sensitivity, Precision, Accuracy, and F1-Score were used to compare the four architectures. The results showed that the HOG-LSTM-Attention model achieved the highest performance, with an Accuracy of 95.3% and an F1-Score of 95.5%. The optimal self-attention configuration was a 2:64 ratio of attention heads to key/query channels. Full article
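The attention mechanism's role here — reweighting time steps so that informative motion cues dominate instead of treating all steps uniformly — can be illustrated with a toy softmax-weighted pooling over hidden states (not the authors' architecture):

```python
import math

def temporal_attention(hidden_states, scores):
    """Weight per-time-step hidden vectors by softmax(scores) and sum,
    so higher-scoring (more informative) time steps dominate the result."""
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]           # numerically stable softmax
    total = sum(exp)
    weights = [e / total for e in exp]
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# Three time steps; the last two carry the "fall-like" cue (second feature).
h = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
ctx = temporal_attention(h, [0.0, 2.0, 2.0])
print(ctx)  # second component dominates: the high-scoring steps win
```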

23 pages, 4920 KiB  
Article
Vocative Che in Falkland Islands English: Identity, Contact, and Enregisterment
by Yliana Virginia Rodríguez and Miguel Barrientos
Languages 2025, 10(8), 182; https://doi.org/10.3390/languages10080182 - 28 Jul 2025
Abstract
Falkland Islands English (FIE) began its development in the first half of the 19th century. In part, as a consequence of its youth, FIE is an understudied variety. It shares some morphosyntactic features with other anglophone countries in the Southern Hemisphere, but it also shares lexical features with regional varieties of Spanish, including Rioplatense Spanish. Che is one of many South American words that have entered FIE through Spanish, with its spelling ranging from “chay” and “chey” to “ché”. The word has received some marginal attention in terms of its meaning. It is said to be used in a similar way to the British dear or love and the Australian mate, and it has been compared to chum or pal, and is taken as an equivalent of the River Plate, hey!, hi!, or I say!. In this work, we explore the hypothesis that che entered FIE through historical contact with Rioplatense Spanish, drawing on both linguistic and sociohistorical evidence, and presenting survey, corpus, and ethnographic data that illustrate its current vitality, usage, and social meanings among FIE speakers. In situ observations, fieldwork, and an online survey were used to look into the vitality of che. Concomitantly, by crawling social media and the local press, enough data was gathered to build a small corpus to further study its vitality. A thorough literature review was conducted to hypothesise about the borrowing process involving its entry into FIE. The findings confirm that the word is primarily a vocative, it is commonly used, and it is indicative of a sense of belonging to the Falklands community. Although there is no consensus on the origin of che in the River Plate region, it seems to be the case that it entered FIE during the intense Spanish–English contact that took place during the second half of the 19th century. Full article

24 pages, 3480 KiB  
Article
MFPI-Net: A Multi-Scale Feature Perception and Interaction Network for Semantic Segmentation of Urban Remote Sensing Images
by Xiaofei Song, Mingju Chen, Jie Rao, Yangming Luo, Zhihao Lin, Xingyue Zhang, Senyuan Li and Xiao Hu
Sensors 2025, 25(15), 4660; https://doi.org/10.3390/s25154660 - 27 Jul 2025
Abstract
To improve semantic segmentation performance for complex urban remote sensing images with multi-scale object distribution, class similarity, and small object omission, this paper proposes MFPI-Net, an encoder–decoder-based semantic segmentation network. It includes four core modules: a Swin Transformer backbone encoder, a diverse dilation rates attention shuffle decoder (DDRASD), a multi-scale convolutional feature enhancement module (MCFEM), and a cross-path residual fusion module (CPRFM). The Swin Transformer efficiently extracts multi-level global semantic features through its hierarchical structure and window attention mechanism. The DDRASD’s diverse dilation rates attention (DDRA) block combines convolutions with diverse dilation rates and channel-coordinate attention to enhance multi-scale contextual awareness, while Shuffle Block improves resolution via pixel rearrangement and avoids checkerboard artifacts. The MCFEM enhances local feature modeling through parallel multi-kernel convolutions, forming a complementary relationship with the Swin Transformer’s global perception capability. The CPRFM employs multi-branch convolutions and a residual multiplication–addition fusion mechanism to enhance interactions among multi-source features, thereby improving the recognition of small objects and similar categories. Experiments on the ISPRS Vaihingen and Potsdam datasets show that MFPI-Net outperforms mainstream methods, achieving 82.57% and 88.49% mIoU, validating its superior segmentation performance in urban remote sensing. Full article
(This article belongs to the Section Sensing and Imaging)
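The pixel rearrangement behind shuffle-style upsampling (as in the Shuffle Block above) can be sketched in a few lines; the 1×1 channels below are illustrative:

```python
def pixel_shuffle(channels, r):
    """Rearrange r*r feature channels of shape (H, W) into one (r*H, r*W)
    map — the pixel-rearrangement upsampling used by shuffle blocks, which
    avoids the checkerboard artifacts of transposed convolution.
    `channels` is a list of r*r equal-sized 2-D lists."""
    H, W = len(channels[0]), len(channels[0][0])
    out = [[0.0] * (W * r) for _ in range(H * r)]
    for c, ch in enumerate(channels):
        dy, dx = divmod(c, r)                 # sub-pixel offset of this channel
        for y in range(H):
            for x in range(W):
                out[y * r + dy][x * r + dx] = ch[y][x]
    return out

# Four 1x1 channels -> one 2x2 map
print(pixel_shuffle([[[1]], [[2]], [[3]], [[4]]], 2))  # [[1, 2], [3, 4]]
```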

21 pages, 5527 KiB  
Article
SGNet: A Structure-Guided Network with Dual-Domain Boundary Enhancement and Semantic Fusion for Skin Lesion Segmentation
by Haijiao Yun, Qingyu Du, Ziqing Han, Mingjing Li, Le Yang, Xinyang Liu, Chao Wang and Weitian Ma
Sensors 2025, 25(15), 4652; https://doi.org/10.3390/s25154652 - 27 Jul 2025
Abstract
Segmentation of skin lesions in dermoscopic images is critical for the accurate diagnosis of skin cancers, particularly malignant melanoma, yet it is hindered by irregular lesion shapes, blurred boundaries, low contrast, and artifacts, such as hair interference. Conventional deep learning methods, typically based on UNet or Transformer architectures, often face limitations in regard to fully exploiting lesion features and incur high computational costs, compromising precise lesion delineation. To overcome these challenges, we propose SGNet, a structure-guided network, integrating a hybrid CNN–Mamba framework for robust skin lesion segmentation. The SGNet employs the Visual Mamba (VMamba) encoder to efficiently extract multi-scale features, followed by the Dual-Domain Boundary Enhancer (DDBE), which refines boundary representations and suppresses noise through spatial and frequency-domain processing. The Semantic-Texture Fusion Unit (STFU) adaptively integrates low-level texture with high-level semantic features, while the Structure-Aware Guidance Module (SAGM) generates coarse segmentation maps to provide global structural guidance. The Guided Multi-Scale Refiner (GMSR) further optimizes boundary details through a multi-scale semantic attention mechanism. Comprehensive experiments based on the ISIC2017, ISIC2018, and PH2 datasets demonstrate SGNet’s superior performance, with average improvements of 3.30% in terms of the mean Intersection over Union (mIoU) value and 1.77% in regard to the Dice Similarity Coefficient (DSC) compared to state-of-the-art methods. Ablation studies confirm the effectiveness of each component, highlighting SGNet’s exceptional accuracy and robust generalization for computer-aided dermatological diagnosis. Full article
(This article belongs to the Section Biomedical Sensors)
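The boundary-enhancement idea — amplifying the high-frequency detail that a low-pass filter removes — can be illustrated in one dimension with a toy unsharp-masking sketch (not the DDBE's actual spatial/frequency-domain design):

```python
def boundary_enhance(signal, alpha=1.0):
    """Toy boundary enhancement: subtract a 3-tap moving average (low-pass)
    from the signal and add back the amplified high-frequency residual —
    the unsharp-masking idea behind frequency-domain boundary refinement."""
    n = len(signal)
    smooth = [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]
    return [s + alpha * (s - lo) for s, lo in zip(signal, smooth)]

step = [0, 0, 0, 1, 1, 1]
print(boundary_enhance(step))  # over/undershoot around index 2-3 marks the boundary
```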

24 pages, 8553 KiB  
Article
DO-MDS&DSCA: A New Method for Seed Vigor Detection in Hyperspectral Images Targeting Significant Information Loss and High Feature Similarity
by Liangquan Jia, Jianhao He, Jinsheng Wang, Miao Huan, Guangzeng Du, Lu Gao and Yang Wang
Agriculture 2025, 15(15), 1625; https://doi.org/10.3390/agriculture15151625 - 26 Jul 2025
Abstract
Hyperspectral imaging for seed vigor detection faces the challenges of handling high-dimensional spectral data, information loss after dimensionality reduction, and low feature differentiation between vigor levels. To address the above issues, this study proposes an improved dynamic optimize MDS (DO-MDS) dimensionality reduction algorithm based on multidimensional scaling transformation. DO-MDS better preserves key features between samples during dimensionality reduction. Secondly, a dual-stream spectral collaborative attention (DSCA) module is proposed. The DSCA module adopts a dual-modal fusion approach combining global feature capture and local feature enhancement, deepening the characterization capability of spectral features. This study selected commonly used rice seed varieties in Zhejiang Province and constructed three individual spectral datasets and a mixed dataset through aging, spectral acquisition, and germination experiments. The experiments involved using the DO-MDS processed datasets with a convolutional neural network embedded with the DSCA attention module, and the results demonstrate vigor discrimination accuracy rates of 93.85%, 93.4%, and 96.23% for the Chunyou 83, Zhongzao 39, and Zhongzu 53 datasets, respectively, achieving 94.8% for the mixed dataset. This study provides effective strategies for spectral dimensionality reduction in hyperspectral seed vigor detection and enhances the differentiation of spectral information for seeds with similar vigor levels. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
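MDS-family reductions such as DO-MDS aim to preserve pairwise sample distances through the projection; a Kruskal-style stress measure makes that objective concrete (the points below are illustrative):

```python
import math

def mds_stress(high_pts, low_pts):
    """Kruskal-style stress: how far pairwise distances in the reduced space
    deviate from those in the original space (0 = perfectly preserved).
    MDS-family methods minimise exactly this kind of objective."""
    num = den = 0.0
    n = len(high_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d_hi = math.dist(high_pts[i], high_pts[j])
            d_lo = math.dist(low_pts[i], low_pts[j])
            num += (d_hi - d_lo) ** 2
            den += d_hi ** 2
    return math.sqrt(num / den)

high = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
low_perfect = [(0, 0), (1, 0), (0, 2)]
print(mds_stress(high, low_perfect))  # 0.0 — all pairwise distances preserved
```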

28 pages, 3794 KiB  
Article
A Robust System for Super-Resolution Imaging in Remote Sensing via Attention-Based Residual Learning
by Rogelio Reyes-Reyes, Yeredith G. Mora-Martinez, Beatriz P. Garcia-Salgado, Volodymyr Ponomaryov, Jose A. Almaraz-Damian, Clara Cruz-Ramos and Sergiy Sadovnychiy
Mathematics 2025, 13(15), 2400; https://doi.org/10.3390/math13152400 - 25 Jul 2025
Abstract
Deep learning-based super-resolution (SR) frameworks are widely used in remote sensing applications. However, existing SR models still face limitations, particularly in recovering contours, fine features, and textures, as well as in effectively integrating channel information. To address these challenges, this study introduces a novel residual model named OARN (Optimized Attention Residual Network) specifically designed to enhance the visual quality of low-resolution images. The network operates on the Y channel of the YCbCr color space and integrates LKA (Large Kernel Attention) and OCM (Optimized Convolutional Module) blocks. These components can restore large-scale spatial relationships and refine textures and contours, improving feature reconstruction without significantly increasing computational complexity. The performance of OARN was evaluated using satellite images from WorldView-2, GaoFen-2, and Microsoft Virtual Earth. Evaluation was conducted using objective quality metrics, such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Edge Preservation Index (EPI), and Learned Perceptual Image Patch Similarity (LPIPS), demonstrating superior results compared to state-of-the-art methods in both objective measurements and subjective visual perception. Moreover, OARN achieves this performance while maintaining computational efficiency, offering a balanced trade-off between processing time and reconstruction quality. Full article
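PSNR, the first of the quality metrics listed, is a direct formula over pixel differences (the pixel values below are illustrative):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio between two images (flat pixel lists):
    10 * log10(peak^2 / MSE). Higher is better; identical images -> infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

ref  = [50, 100, 150, 200]
test = [52, 98, 149, 201]
print(round(psnr(ref, test), 2))  # small errors -> PSNR around 44 dB
```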

21 pages, 4388 KiB  
Article
An Omni-Dimensional Dynamic Convolutional Network for Single-Image Super-Resolution Tasks
by Xi Chen, Ziang Wu, Weiping Zhang, Tingting Bi and Chunwei Tian
Mathematics 2025, 13(15), 2388; https://doi.org/10.3390/math13152388 - 25 Jul 2025
Abstract
The goal of single-image super-resolution (SISR) tasks is to generate high-definition images from low-quality inputs, with practical uses spanning healthcare diagnostics, aerial imaging, and surveillance systems. Although CNNs have considerably improved image reconstruction quality, existing methods still face limitations, including inadequate restoration of high-frequency details, high computational complexity, and insufficient adaptability to complex scenes. To address these challenges, we propose an Omni-dimensional Dynamic Convolutional Network (ODConvNet) tailored for SISR tasks. Specifically, ODConvNet comprises four key components: a Feature Extraction Block (FEB) that captures low-level spatial features; an Omni-dimensional Dynamic Convolution Block (DCB), which utilizes a multidimensional attention mechanism to dynamically reweight convolution kernels across spatial, channel, and kernel dimensions, thereby enhancing feature expressiveness and context modeling; a Deep Feature Extraction Block (DFEB) that stacks multiple convolutional layers with residual connections to progressively extract and fuse high-level features; and a Reconstruction Block (RB) that employs subpixel convolution to upscale features and refine the final HR output. This mechanism significantly enhances feature extraction and effectively captures rich contextual information. Additionally, we employ an improved residual network structure combined with a refined Charbonnier loss function to alleviate gradient vanishing and exploding, thereby enhancing the robustness of model training. Extensive experiments conducted on widely used benchmark datasets, including DIV2K, Set5, Set14, B100, and Urban100, demonstrate that, compared with existing deep learning-based SR methods, our ODConvNet method improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), and the visual quality of SR images is also improved.
Ablation studies further validate the effectiveness and contribution of each component in our network. The proposed ODConvNet offers an effective, flexible, and efficient solution for the SISR task and provides promising directions for future research. Full article
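The Charbonnier loss mentioned above is a smooth variant of L1 often used to stabilise SR training; a minimal sketch (the ε value is a common default, not necessarily the paper's):

```python
import math

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: mean of sqrt((p - t)^2 + eps^2), an everywhere-
    differentiable relaxation of L1 that behaves like L2 near zero."""
    return sum(
        math.sqrt((p - t) ** 2 + eps ** 2) for p, t in zip(pred, target)
    ) / len(pred)

print(round(charbonnier([0.5, 0.2], [0.4, 0.2]), 4))  # ≈ 0.0505
```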

23 pages, 3875 KiB  
Article
Soil Water-Soluble Ion Inversion via Hyperspectral Data Reconstruction and Multi-Scale Attention Mechanism: A Remote Sensing Case Study of Farmland Saline–Alkali Lands
by Meichen Liu, Shengwei Zhang, Jing Gao, Bo Wang, Kedi Fang, Lu Liu, Shengwei Lv and Qian Zhang
Agronomy 2025, 15(8), 1779; https://doi.org/10.3390/agronomy15081779 - 24 Jul 2025
Abstract
The salinization of agricultural soils is a serious threat to farming and ecological balance in arid and semi-arid regions. Accurate estimation of soil water-soluble ions (calcium, carbonate, magnesium, and sulfate) is necessary for correct monitoring of soil salinization and sustainable land management. Hyperspectral ground-based data are valuable in soil salinization monitoring, but the acquisition cost is high, and the coverage is small. Therefore, this study proposes a two-stage deep learning framework with multispectral remote-sensing images. First, the wavelet transform is used to enhance the Transformer and extract fine-grained spectral features to reconstruct the ground-based hyperspectral data. A comparison of ground-based hyperspectral data shows that the reconstructed spectra match the measured data in the 450–998 nm range, with R2 up to 0.98 and MSE = 0.31. This high similarity compensates for the low spectral resolution and weak feature expression of multispectral remote-sensing data. Subsequently, this enhanced spectral information was integrated and fed into a novel multiscale self-attentive Transformer model (MSATransformer) to invert four water-soluble ions. Compared with BPANN, MLP, and the standard Transformer model, our model remains robust across different spectra, achieving an R2 of up to 0.95 and reducing the average relative error by more than 30%. Among them, for the strongly responsive ions magnesium and sulfate, R2 reaches 0.92 and 0.95 (with RMSE of 0.13 and 0.29 g/kg, respectively). For the weakly responsive ions calcium and carbonate, R2 stays above 0.80 (RMSE is below 0.40 g/kg). The MSATransformer framework provides a low-cost and high-accuracy solution to monitor soil salinization at large scales and supports precision farmland management. Full article
(This article belongs to the Special Issue Water and Fertilizer Regulation Theory and Technology in Crops)
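The wavelet-transform step can be illustrated with one level of an (unnormalised) Haar decomposition, which splits a signal into coarse averages and fine details — the kind of split used to expose fine-grained spectral features (the signal is illustrative):

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (averaging form):
    split an even-length signal into a low-pass half (pairwise averages)
    and a high-pass half (pairwise half-differences)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

approx, detail = haar_dwt([4, 6, 10, 12])
print(approx, detail)  # [5.0, 11.0] [-1.0, -1.0]
```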

9 pages, 666 KiB  
Case Report
Severe Elimination Disorders and Normal Intelligence in a Case of MAP1B Related Syndrome: A Case Report
by Aniel Jessica Leticia Brambila-Tapia, María Teresa Magaña-Torres, Luis E. Figuera, María Guadalupe Domínguez-Quezada, Thania Alejandra Aguayo-Orozco, Jesua Iván Guzmán-González, Hugo Ceja and Ingrid Patricia Dávalos-Rodríguez
Genes 2025, 16(8), 870; https://doi.org/10.3390/genes16080870 - 24 Jul 2025
Abstract
Pathogenic variants in the MAP1B gene have been associated with neurological impairment, including intellectual disability, attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder, brain malformations, cognitive hearing loss, short stature, and dysmorphic features. However, few cases with detailed clinical characterization have been reported. We describe a 12-year-old boy carrying a loss-of-function MAP1B variant, presenting with severe elimination disorders despite normal intelligence. He was referred to the genetics service due to persistent elimination issues, including daytime urinary incontinence, nocturnal enuresis, and fecal incontinence. He had normal motor and cognitive development, with an IQ of 99; however, he also presented with ADHD, short stature, microcephaly, and myopia. Brain MRI revealed bilateral subependymal periventricular nodular heterotopia (PVNH). Audiometry showed normal bilateral hearing. Testing for fragile X syndrome (FXS) and karyotype analysis yielded normal results. Whole exome sequencing (WES) revealed a nonsense pathogenic variant in MAP1B (c.895 C>T; p.Arg299*). No other family members showed a similar phenotype; however, a great-uncle and a great-aunt had a history of nocturnal enuresis until age 10. The patient’s deceased mother had short stature and psychiatric disorders, and a history of consanguinity was reported on the maternal side. This case broadens the phenotypic spectrum associated with MAP1B syndrome, suggesting that elimination disorders, frequently reported in FXS, should also be evaluated in MAP1B pathogenic variant carriers. In addition, the presence of short stature also appears to be part of the syndrome. Full article
(This article belongs to the Special Issue Genetic Diagnostics: Precision Tools for Disease Detection)
25 pages, 5445 KiB  
Article
HyperspectralMamba: A Novel State Space Model Architecture for Hyperspectral Image Classification
by Jianshang Liao and Liguo Wang
Remote Sens. 2025, 17(15), 2577; https://doi.org/10.3390/rs17152577 - 24 Jul 2025
Abstract
Hyperspectral image classification faces challenges with high-dimensional spectral data and complex dependencies between bands. This paper proposes HyperspectralMamba, a novel architecture for hyperspectral image classification that integrates state space modeling (SSM) with adaptive recalibration mechanisms. The method addresses limitations in existing techniques through three key innovations: (1) a novel dual-stream architecture that combines SSM global modeling with parallel convolutional local feature extraction, distinguishing our approach from existing single-stream SSM methods; (2) a band-adaptive feature recalibration mechanism specifically designed for hyperspectral data that adaptively adjusts the importance of different spectral band features; and (3) an effective feature fusion strategy that integrates global and local features through residual connections. Experimental results on three benchmark datasets (Indian Pines, Pavia University, and Salinas Valley) demonstrate that the proposed method achieves overall accuracies of 95.31%, 98.60%, and 96.40%, respectively, significantly outperforming existing convolutional neural networks, attention-enhanced networks, and Transformer-based methods. HyperspectralMamba demonstrates exceptional performance in small-sample class recognition and in distinguishing spectrally similar terrain, while maintaining lower computational complexity, providing a new technical approach for high-precision hyperspectral image classification. Full article
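The abstract does not detail the band-adaptive recalibration mechanism; a minimal NumPy sketch of one plausible squeeze-and-excitation-style design follows, where the function name, the bottleneck ratio `r`, and the weight shapes are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def band_recalibrate(x, w1, w2):
    """SE-style band-adaptive recalibration over spectral bands.

    x  : (bands, H, W) hyperspectral feature cube
    w1 : (bands // r, bands) squeeze projection
    w2 : (bands, bands // r) excitation projection
    """
    # Squeeze: global average pooling per spectral band -> (bands,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gate per band
    h = np.maximum(w1 @ z, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # per-band weights in (0, 1)
    # Recalibrate: rescale each band's feature map by its learned weight
    return x * s[:, None, None]

rng = np.random.default_rng(0)
bands, r = 16, 4
x = rng.standard_normal((bands, 8, 8))
w1 = rng.standard_normal((bands // r, bands))
w2 = rng.standard_normal((bands, bands // r))
y = band_recalibrate(x, w1, w2)
print(y.shape)  # (16, 8, 8)
```

Because the gate is a sigmoid, each band is attenuated rather than amplified, which matches the abstract's description of adaptively adjusting the importance of spectral band features.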

25 pages, 6911 KiB  
Article
Image Inpainting Algorithm Based on Structure-Guided Generative Adversarial Network
by Li Zhao, Tongyang Zhu, Chuang Wang, Feng Tian and Hongge Yao
Mathematics 2025, 13(15), 2370; https://doi.org/10.3390/math13152370 - 24 Jul 2025
Abstract
To address the challenges of image inpainting in scenarios with extensive or irregular missing regions, particularly detail oversmoothing, structural ambiguity, and textural incoherence, this paper proposes an Image Structure-Guided (ISG) framework that hierarchically integrates structural priors with semantic-aware texture synthesis. The proposed methodology advances a two-stage restoration paradigm: (1) Structural Prior Extraction, where adaptive edge detection algorithms identify residual contours in corrupted regions, and a transformer-enhanced network reconstructs globally consistent structural maps through contextual feature propagation; and (2) Structure-Constrained Texture Synthesis, wherein a multi-scale generator with hybrid dilated convolutions and channel attention mechanisms iteratively refines high-fidelity textures under explicit structural guidance. The framework introduces three innovations: (1) a hierarchical feature fusion architecture that synergizes multi-scale receptive fields with spatial-channel attention to preserve long-range dependencies and local details simultaneously; (2) a spectral-normalized Markovian discriminator with gradient-penalty regularization, enabling adversarial training stability while enforcing patch-level structural consistency; and (3) a dual-branch loss formulation combining perceptual similarity metrics with edge-aware constraints to align synthesized content with both semantic coherence and geometric fidelity. Experiments on two benchmark datasets (Places2 and CelebA) demonstrate that the framework produces more coherent textures and structures, bringing the restored images closer to their original semantic content. Full article
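The spectral normalization used to stabilize the Markovian discriminator constrains each weight matrix to have a largest singular value of 1, typically estimated by power iteration. A self-contained NumPy sketch of that idea, not the paper's implementation, with an illustrative diagonal test matrix:

```python
import numpy as np

def spectral_normalize(w, n_iters=30):
    """Divide a weight matrix by its largest singular value,
    estimated via power iteration (as in spectral normalization
    for GAN discriminators)."""
    u = np.random.default_rng(1).standard_normal(w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = w @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ w @ v  # estimated top singular value
    return w / sigma

# Diagonal example: singular values are 3.0, 1.0, 0.5
w = np.diag([3.0, 1.0, 0.5])
w_sn = spectral_normalize(w)
print(np.linalg.norm(w_sn, 2))  # close to 1.0
```

Bounding the spectral norm of every discriminator layer bounds the network's Lipschitz constant, which is what gives the adversarial training stability the abstract refers to.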
