Search Results (7,347)

Search Parameters:
Keywords = ablation

24 pages, 1873 KB  
Article
A Multi-Scale Vision–Sensor Collaborative Framework for Small-Target Insect Pest Management
by Chongyu Wang, Yicheng Chen, Shangshan Chen, Ranran Chen, Ziqi Xia, Ruoyu Hu and Yihong Song
Insects 2026, 17(3), 281; https://doi.org/10.3390/insects17030281 - 4 Mar 2026
Abstract
In complex agricultural production environments, small-target pests—characterized by tiny scales, strong background confusion, and close dependence on environmental conditions—pose major challenges to precise monitoring and green pest control. To facilitate the transition from experience-driven to data-driven pest management, a multi-scale vision–sensor collaborative recognition method is proposed for field and protected agriculture scenarios to improve the accuracy and stability of small-target pest recognition under complex conditions. The method jointly models multi-scale visual representations and pest ecological mechanisms: a multi-scale visual feature module enhances fine-grained texture and morphological cues of small targets in deep networks, alleviating feature sparsity and scale mismatch, while environmental sensor data, including temperature, humidity, and illumination, are introduced as priors to modulate visual features and explicitly incorporate ecological constraints into the discrimination process. Stable multimodal fusion and pest category prediction are then achieved through a vision–sensor collaborative discrimination module. Experiments on a multimodal dataset collected from real farmland and greenhouse environments in Linhe District, Bayannur City, Inner Mongolia, demonstrate that the proposed method achieves approximately 93.1% accuracy, 92.0% precision, 91.2% recall, and a 91.6% F1-score on the test set, significantly outperforming traditional machine learning approaches, single-scale deep learning models, and multi-scale vision baselines without environmental priors. Category-level evaluations show balanced performance across multiple small-target pests, including aphids, thrips, whiteflies, leafhoppers, spider mites, and leaf beetles, while ablation studies confirm the critical contributions of multi-scale visual modeling, environmental prior modulation, and vision–sensor collaborative discrimination. Full article
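The abstract does not specify how the environmental sensor readings modulate the visual features; a FiLM-style scale-and-shift is one common way to inject such priors. The sketch below is a minimal illustration under that assumption — the array shapes, the linear maps `W_gamma`/`W_beta`, and the random data are all hypothetical, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a batch of visual feature vectors and matching
# environmental readings (temperature, humidity, illumination).
visual_feats = rng.normal(size=(4, 16))   # (batch, channels)
sensor_feats = rng.normal(size=(4, 3))    # (batch, 3 sensor values)

# Hypothetical prior network: two small linear maps producing a
# per-channel scale (gamma) and shift (beta) from the sensor reading.
W_gamma = rng.normal(scale=0.1, size=(3, 16))
W_beta = rng.normal(scale=0.1, size=(3, 16))

gamma = 1.0 + sensor_feats @ W_gamma      # scale around identity
beta = sensor_feats @ W_beta

# FiLM-style modulation: sensors rescale and shift the visual features,
# injecting the ecological prior before fusion and classification.
modulated = gamma * visual_feats + beta
```

In a trained model the two linear maps would be learned jointly with the visual backbone, so the network decides how strongly each environmental variable gates each visual channel.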
26 pages, 20080 KB  
Article
GS-USTNet: Global–Local Adaptive Convolution with Skip-Guided Attention for Remote Sensing Image Segmentation
by Haoran Qian, Xuan Liu, Zhuang Li, Yongjie Ma and Zhenyu Lu
Remote Sens. 2026, 18(5), 785; https://doi.org/10.3390/rs18050785 - 4 Mar 2026
Abstract
Semantic segmentation of remote sensing imagery is crucial for applications such as land resource management and urban planning, yet it remains challenging due to low inter-class variation, ambiguous boundaries, and the coexistence of multi-scale geospatial features. To tackle these issues, we propose GS-USTNet, a novel architecture that enhances both feature representation and boundary recovery. First, we introduce a Global–Local Adaptive Convolution (GLAConv) module that dynamically fuses global contextual cues with local responses to generate content-aware convolutional weights, thereby improving feature discriminability. Second, we design a Skip-Guided Attention (SGA) mechanism that leverages spatial–channel joint attention to guide the decoder, effectively mitigating attention dispersion in complex scenes or under class imbalance and significantly sharpening object boundaries. Built upon the efficient USTNet framework, our model achieves substantial performance gains without compromising computational efficiency. Extensive experiments on benchmark datasets demonstrate that GS-USTNet achieves consistent improvements over the original USTNet, with gains of approximately 3.5% in overall accuracy and 6.0% in F1-score across datasets. Ablation studies further confirm the effectiveness of the proposed GLAConv and SGA modules. This work provides an efficient and robust approach for fine-grained semantic segmentation of high-resolution remote sensing imagery. Full article
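The abstract describes GLAConv as fusing global context with local responses to produce content-aware weights, but gives no formula. One minimal reading — global average pooling turned into a per-channel sigmoid gate that re-weights a local filtering branch — can be sketched as follows; the mean-filter stand-in for the local branch and all shapes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feature map: (channels, height, width).
x = rng.normal(size=(8, 32, 32))

# Local response branch: a 3x3 mean filter as a simple stand-in
# for the local convolution.
pad = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
local = np.zeros_like(x)
for dy in range(3):
    for dx in range(3):
        local += pad[:, dy:dy + 32, dx:dx + 32]
local /= 9.0

# Global contextual cue: per-channel global average pooling,
# mapped to a content-aware per-channel weight via a sigmoid gate.
context = x.mean(axis=(1, 2))             # (channels,)
gate = 1.0 / (1.0 + np.exp(-context))     # (channels,) in (0, 1)

# Fuse: the global context re-weights the local response per channel,
# so convolutional output depends on image content, not only the kernel.
out = gate[:, None, None] * local
```

The actual module presumably learns the gating transform and generates full kernel weights rather than a scalar gate; this only illustrates the global-modulates-local pattern.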

29 pages, 7884 KB  
Article
Differences in the Natural Course of Chronic Kidney Disease Progression Induced by the 5/6 Renal Ablation Model in Three Different Rat Strains: Wistar, Lewis, and Fischer 344
by Samuel de Jesus, Paloma Souza Noda, Ana Laura Rubio Francini, Flavio Teles Filho, Mariana Matera Veras, Ane Claudia Fernandes Nunes, Irene de Lourdes Noronha and Camilla Fanelli
Life 2026, 16(3), 420; https://doi.org/10.3390/life16030420 - 4 Mar 2026
Abstract
Almost 10% of the global population suffers from chronic kidney disease (CKD). The lack of any therapy able to restore renal function motivates the scientific community to search for new treatments. The 5/6 nephrectomy (Nx) rat model is widely used to mimic human CKD, but the impact of strain-specific responses on disease progression remains unclear. Here, we aimed to compare CKD development in Wistar, Lewis, and Fischer rats submitted to the Nx model. In summary, even when submitted to the same surgical procedure, the three studied rat strains presented distinct patterns of CKD progression: Wistar rats exhibited fast and sustained renal function loss, with marked hypertension, proteinuria, and renal inflammation, making them excellent models for studying rapidly progressive human nephropathy. Lewis animals, in turn, presented mild, slowly progressive CKD, which makes this strain especially useful for simulating intermediate degrees of human CKD and for long-term studies. Finally, Fischer rats submitted to Nx did not develop hypertension, proteinuria, or glomerular damage within 30 days. Moreover, compared to Wistar rats, both Lewis and Fischer animals have a relatively higher basal number of nephrons and a lower number of whole-blood leukocytes, which may have contributed to the renoprotection exhibited by these strains. Full article
(This article belongs to the Special Issue Research Progress in Kidney Diseases)

19 pages, 1359 KB  
Article
ESO-Enhanced Actor–Critic Reinforcement Learning-Optimised Trajectory Tracking Control for 3-DOF Marine Vessels
by Xiaoling Liang and Jiajian Li
Mathematics 2026, 14(5), 867; https://doi.org/10.3390/math14050867 - 4 Mar 2026
Abstract
This paper develops an extended-state-observer (ESO)-enhanced actor–critic reinforcement learning (RL) scheme for the trajectory tracking control of 3-DOF marine vessels subject to uncertain hydrodynamics and environmental disturbances. A coordinate-consistent error construction is provided to obtain an exact strict-feedback second-order uncertain template. On this basis, a Hamilton–Jacobi–Bellman (HJB)-inspired optimised control structure is implemented: the critic approximates the optimal value-gradient and the actor generates the optimised control law. A key simplification is employed: rather than minimising the squared Bellman residual via complex gradients, we introduce an HJB-inspired actor–critic consistency regularisation through a weight-matching coupling. This yields computationally light online update laws and enables transparent Lyapunov-based stability analysis while not claiming exact HJB satisfaction or policy optimality. The ESO estimates lumped uncertainty and provides feedforward compensation, so the RL module learns only the observer residual. A composite Lyapunov analysis establishes the semi-global uniform ultimate boundedness of tracking errors and boundedness of all observer signals. Practical implementation with thruster allocation, explicit wind–wave–current disturbance shaping filters, and a theory-aligned ablation protocol are provided for reproducibility. Full article

25 pages, 1948 KB  
Article
VDTAR-Net: A Cooperative Dual-Path Convolutional Neural Network–Transformer Network for Robust Highlight Reflection Segmentation
by Qianlong Zhang and Yue Zeng
Computers 2026, 15(3), 168; https://doi.org/10.3390/computers15030168 - 4 Mar 2026
Abstract
In medical endoscopic imaging, specular reflection (SR) frequently leads to local overexposure, obscuring essential tissue information and complicating computer-aided diagnosis (CAD). Traditional convolutional neural networks (CNNs) face difficulties in modeling global illumination phenomena due to their biased local receptive fields and the inherent “object assumption.” Conversely, pure transformer models often lose high-frequency boundary details and incur substantial computational costs. To tackle these challenges, this paper introduces VDTAR-Net, a specialized framework adapted to address the unique optical characteristics of specular reflections. Building upon hybrid architectures, our contribution focuses on two core mechanisms: (1) a Cross-architecture Fusion Module (CFM) that enables deep, bidirectional information flow, allowing the Transformer’s global illumination modeling to continuously correct the CNN’s local texture biases; and (2) a Reflective-Aware Module (RAM), which explicitly integrates the physical prior of high-intensity saturation into the attention mechanism. This task-specific design significantly enhances sensitivity to boundary details in overexposed regions. We also created the first large-scale, expert-labeled cervical white light segmentation dataset, Cervix-WL-900. High-quality ground truth labels were generated through rigorous double-blind annotation and arbitration by senior experts. Experimental results show that VDTAR-Net achieves a Dice score of 92.56% and a mean Intersection over Union (mIoU) score of 87.31% on Cervix-WL-900, demonstrating superior performance compared to methods like U-Net, DeepLabv3+, SegFormer, and PSPNet. Ablation studies further confirm the substantial contributions of dual-path collaboration, CFM deep fusion, and RAM task-specific priors. 
VDTAR-Net provides a robust baseline for precise highlight segmentation, laying a foundation for subsequent image quality assessment, restoration, and feature decoupling in diagnostic models. Full article
(This article belongs to the Special Issue AI in Bioinformatics)

24 pages, 11178 KB  
Article
FLAMA: Frame-Level Alignment Margin Attack for Scene Text and Automatic Speech Recognition
by Yikun Xu, Zhiheng Xu and Pengwen Dai
Electronics 2026, 15(5), 1064; https://doi.org/10.3390/electronics15051064 - 4 Mar 2026
Abstract
Scene text recognition (STR) and automatic speech recognition (ASR) translate visual or acoustic signals into linguistic sequences and underpin many modern perception systems. Although their front-ends and decoders differ (e.g., CTC-based, attention-based, or variants), both tasks ultimately rely on aligning input frames to output tokens by deep learning techniques, which exposes a shared vulnerability to adversarial perturbations. Existing attacks commonly optimize global sequence-level objectives. As a result, decisive frames are treated implicitly, and optimization can become unnecessarily diffuse over long input sequences, hindering convergence and perceptual quality. To address the above issues, we propose FLAMA, a unified Frame-Level Alignment Margin Attack, which could be used for both STR and ASR models. FLAMA explicitly targets alignment by maximizing per frame (or per step) recognition margins. The design is decoder-agnostic and applies to both CTC-based and attention-based pipelines. It employs a recognition-score-aware Step/Halt gate that concentrates updates on the most critical frames, and a stabilization stage that suppresses late-iteration oscillations to improve optimization stability and perceptual control. Ablation analyses show that stabilization consistently enhances attack success and reduces distortion. We evaluate FLAMA on STR benchmarks (SVT, CUTE80, and IC13) with CRNN, STAR, and TRBA, and on the ASR benchmark (LibriSpeech) with a Wav2Vec 2.0 model. Across modalities and architectures, FLAMA achieves near-100% attack success while substantially reducing ℓ2 distortion and improving perceptual metrics compared with FGSM/PGD baselines. These results highlight frame-level alignment as a shared weak point across visual and audio sequence recognizers and suggest localized margin objectives as a principled route to effective sequence attacks. Full article
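The frame-level margin and the score-aware gate can be illustrated with a toy computation: per frame, the margin is the aligned label's score minus the best competing class, and the gate keeps only the k lowest-margin (most easily flipped) frames for perturbation updates. This is a hedged sketch of the general idea, not the paper's implementation; the logits, alignment labels, and k below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

T, C = 10, 5                         # frames, classes
logits = rng.normal(size=(T, C))     # per-frame recognition scores
labels = rng.integers(0, C, size=T)  # aligned target token per frame

# Per-frame recognition margin: aligned label's score minus the
# best competitor. Small margins mark the "decisive" frames.
true_scores = logits[np.arange(T), labels]
masked = logits.copy()
masked[np.arange(T), labels] = -np.inf
runner_up = masked.max(axis=1)
margins = true_scores - runner_up    # (T,)

# Score-aware gate: concentrate the attack budget on the k frames
# with the smallest margins, instead of diffusing over all frames.
k = 3
critical = np.argsort(margins)[:k]
gate = np.zeros(T, dtype=bool)
gate[critical] = True
```

An attack loop would then update the input only through the gated frames, driving their margins negative while leaving high-margin frames (and thus perceptual quality) largely untouched.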

20 pages, 304 KB  
Review
From Feasibility to Individualization: Surgery for Breast Cancer Liver and Lung Metastases
by Martina Greco, Calogero Cipolla, Chiara Mesi, Alessio Ciminna, Daniela Sambataro, Giuseppa Scandurra, Simona Lupo, Gaspare Cannata, Luca Giacomelli, Vittorio Gebbia and Maria Rosaria Valerio
Cancers 2026, 18(5), 822; https://doi.org/10.3390/cancers18050822 - 3 Mar 2026
Abstract
Surgical resection of liver and lung metastases in breast cancer is increasingly considered a viable option for select patients with oligometastatic disease. Historically regarded as palliative, surgery is now supported by retrospective data suggesting potential survival benefits, particularly in patients with hormone receptor-positive or HER2-positive tumors, long disease-free intervals, and limited metastatic burden. This narrative review summarizes recent evidence on the surgical management of breast cancer metastases to the liver and lung, with a focus on patient selection, perioperative outcomes, and long-term survival. Liver metastasectomy has shown 5-year overall survival rates of up to 60% in well-selected patients, while pulmonary metastasectomy is associated with comparable outcomes when resection is complete and nodal involvement is absent. Minimally invasive techniques and non-surgical approaches, such as microwave ablation and stereotactic radiotherapy, expand treatment options for patients unfit for surgery. The review also explores emerging tools influencing surgical decision-making, including circulating tumor DNA for minimal residual disease detection, transcriptomic profiling to predict organotropism, and artificial intelligence (AI)-driven platforms that assist with surgical planning and multidisciplinary case evaluation. While prospective validation remains limited, these technologies may help redefine surgical candidacy through biologically informed algorithms. Ultimately, the integration of surgery within a multimodal, personalized treatment strategy—guided by systemic control, tumor biology, and evolving digital tools—represents an evolving and biologically informed direction for rigorously selected patients with visceral breast cancer metastases. Full article
(This article belongs to the Special Issue Surgery in Metastatic Cancer (2nd Edition))
26 pages, 3634 KB  
Article
A Multi-Temporal Agricultural Remote Sensing Framework for Sustainable Crop Yield Estimation with Economic Impact
by Shengyuan Tang, Chenlu Jiang, Jingdan Zhang, Mingran Tian, Yang Zhang, Yating Yang and Min Dong
Sustainability 2026, 18(5), 2466; https://doi.org/10.3390/su18052466 - 3 Mar 2026
Abstract
Under the intensifying impacts of climate change, tightening agricultural resource constraints, and escalating food security pressures, the development of high-accuracy and interpretable crop yield estimation methods has become a critical technical issue in sustainable agricultural engineering. In this study, multi-temporal and multi-spectral remote sensing imagery is utilized as the core input. A multi-scale visual feature extraction module is designed to characterize canopy texture, field structure, and regional heterogeneity, while a temporal growth modeling module captures the dynamic evolution of crops from emergence to maturity. Yield regression is further integrated with economic mapping and explainability mechanisms, thereby forming an end-to-end prediction framework. Experimental results across multiple regions and years demonstrate that the proposed method outperforms various representative models. In the primary regression experiment, the framework achieves approximately R² = 0.76, with MAE reduced to 0.60 and MSE to 0.62, representing an error reduction of over 25% compared with traditional regression approaches and classical machine learning models. In classification experiments for yield-grade evaluation, the model attains an accuracy of approximately 0.85, with both precision and recall exceeding 0.82, demonstrating its effectiveness in both continuous yield prediction and stable yield-level region identification. Cross-region and cross-year validation further indicate strong generalization capability, with R² remaining above 0.65 in unseen regions and around 0.67 under cross-year prediction settings. Ablation studies confirm the synergistic contributions of multi-scale spatial modeling, temporal growth modeling, and explainability constraints, as performance consistently declines when any individual module is removed.
Overall, the results highlight that the proposed framework provides reliable data support for precision agricultural management, resource optimization, and agricultural engineering decision-making, while also offering a scalable and reproducible pathway for sustainable agricultural engineering development. Full article
(This article belongs to the Special Issue Agricultural Engineering for Sustainable Development)
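The reported R², MAE, and MSE are standard regression metrics; for reference, they can be computed directly from predictions and ground truth as follows (the toy `y_true`/`y_pred` arrays are invented purely for illustration):

```python
import numpy as np

# Hypothetical yield observations and model predictions (t/ha).
y_true = np.array([3.0, 4.2, 5.1, 2.8, 4.9, 3.6])
y_pred = np.array([3.4, 4.0, 4.6, 3.1, 5.2, 3.3])

err = y_true - y_pred
mae = np.abs(err).mean()                       # mean absolute error
mse = (err ** 2).mean()                        # mean squared error

# R^2: fraction of variance in y_true explained by the predictions.
ss_res = (err ** 2).sum()
ss_tot = ((y_true - y_true.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
```

Note that R² compares residual error against a predict-the-mean baseline, which is why a model can have a small MAE yet a modest R² when the target itself has low variance.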
25 pages, 1851 KB  
Systematic Review
Laser Energy Application in Endoscopic Kidney-Sparing Surgery for Upper Tract Urothelial Carcinoma: A Systematic Review of Oncological Outcomes and Surgical Complications
by Federico Zorzi, Pietro Scilipoti, Stefano Moretto, Carlos Gonzalez-Gonzalez, Nicola Nannola, Daniele Robesti, Andrea Folcia, Marie Chicaud, Stessy Kutchukian, Luigi Candela, Berthe Laurent, Eugenio Ventimiglia, Francesco Montorsi, Alberto Briganti, Andrea Salonia, Luca Villa, Steeve Doizi, Olivier Traxer and Frédéric Panthier
Cancers 2026, 18(5), 821; https://doi.org/10.3390/cancers18050821 - 3 Mar 2026
Abstract
Background: Endoscopic kidney-sparing surgery (eKSS) is increasingly adopted for the management of selected patients with upper tract urothelial carcinoma (UTUC). Laser energy is central to tumor ablation during eKSS; however, multiple laser platforms with distinct physical and thermal properties are currently available, and their comparative oncological and safety profiles remain poorly defined. This systematic review aims to summarize the available evidence on oncological outcomes and perioperative complications associated with laser-based endoscopic treatment of UTUC and to explore potential differences according to laser technology. Methods: A systematic literature search identified 25 eligible studies published between 1997 and 2024, including 1344 patients treated with laser-assisted eKSS. All included studies were non-randomized, predominantly retrospective, and characterized by moderate-to-serious risk of bias. Holmium:YAG (Ho:YAG), Thulium:YAG (thu:YAG, continuous-wave and pulsed), thulium fiber laser (TFL), Neodymium:YAG (Nd:YAG), diode lasers, and combination platforms were reported. Results: Ipsilateral upper tract recurrence was common across all laser categories, with weighted proportions ranging approximately from 27% to 52% and substantial inter-study heterogeneity. Progression and conversion to radical nephroureterectomy (RNU) were relatively infrequent overall, with numerically lower weighted proportions observed in thu:YAG-based cohorts. Major complications (Clavien–Dindo ≥ III) were rare across all laser technologies, although a trend toward higher weighted proportions was observed in Ho:YAG- and Nd:YAG-based series. Minor complications were more frequently reported and highly heterogeneous. Conclusions: Available evidence supporting laser selection in endoscopic kidney-sparing management of UTUC is limited and largely descriptive. Thulium:YAG and TFL platforms seem to demonstrate encouraging trends toward lower rates of progression and conversion to radical nephroureterectomy; however, these findings are derived from heterogeneous, non-comparative studies with limited follow-up. No single laser platform can currently be recommended over the others on the basis of existing data. Prospective, comparative, and methodologically robust studies are required to determine whether laser technologies confer clinically meaningful advantages in oncological control or safety for UTUC treated with eKSS. Full article
(This article belongs to the Special Issue Symptom Burden in Cancer: Assessment and Management: 2nd Edition)

24 pages, 2827 KB  
Article
Balanced Index-Encoding Genetic Algorithm for Extreme Prototype Reduction in k-Nearest Neighbor Classification
by Victor Ayala-Ramirez, Jose-Gabriel Aguilera-Gonzalez, Antonio Tierrasnegras-Badillo and Uriel Calderon-Uribe
Algorithms 2026, 19(3), 188; https://doi.org/10.3390/a19030188 - 3 Mar 2026
Abstract
Nearest-neighbor classifiers are accurate and easy to deploy, but their memory footprint and inference time grow with the size of the reference set. This paper studies an evolutionary prototype selection strategy for k-nearest neighbor (K-NN) classification aimed at extreme, class-balanced reduction. A compact genetic algorithm (GA) evolves a fixed number of prototype indices per class drawn from a disjoint design partition; the selected prototypes are then used by a 1-NN classifier, with fitness defined as the number of correctly classified test instances. To address concerns about generality and baseline strength, we evaluate an experimental suite including synthetic 2D Gaussians (σ = 0.5 and σ = 1.0) and a 3D three-moons geometry, as well as public benchmarks spanning binary and multi-class settings and higher-dimensional data (Breast Cancer Wisconsin, Wine, Reduced MNIST/Digits 8 × 8, Forest CoverType with seven classes, and a 10D five-class spiral benchmark). We compare against K-NN baselines with k ∈ {1, 3, 5, 7} using all design samples, and include GA operator ablations (GA1/GA2/GA3). Each scenario is repeated over 30 independent runs, reporting mean ± std, min/max, per-run distributions, win/tie/loss counts, and non-parametric significance tests (paired Wilcoxon with Holm correction; Friedman where applicable). Across datasets, the GA-selected prototype banks—often orders of magnitude smaller than the full design set—match or improve accuracy, with frequent statistically supported wins against strong K-NN baselines, and in the hardest cases provide substantial compression with no loss relative to the best baseline. These results establish a reproducible baseline for extreme, class-balanced prototype reduction suitable for memory- and latency-constrained deployments and for fair comparison against more elaborate prototype selection methods. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
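The evolutionary loop described above — per-class index chromosomes drawn from a design partition, 1-NN fitness counted on held-out data, selection plus in-class index mutation — can be sketched compactly. Everything below is a toy: the 2D Gaussian data, the population size, the generation count, and P prototypes per class are placeholders, not the paper's settings or operators:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-class 2D Gaussian data: disjoint design and test partitions.
def make_data(n, shift):
    return rng.normal(loc=shift, scale=1.0, size=(n, 2))

design = np.vstack([make_data(100, 0.0), make_data(100, 2.5)])
design_y = np.repeat([0, 1], 100)
test = np.vstack([make_data(50, 0.0), make_data(50, 2.5)])
test_y = np.repeat([0, 1], 50)

P = 2  # prototypes per class: extreme, class-balanced reduction

def fitness(idx):
    """Count test points correctly classified by 1-NN over the bank."""
    protos, proto_y = design[idx], design_y[idx]
    d = ((test[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return (proto_y[d.argmin(axis=1)] == test_y).sum()

class_idx = [np.where(design_y == c)[0] for c in (0, 1)]

def random_individual():
    # Chromosome = P design-set indices per class, concatenated.
    return np.concatenate(
        [rng.choice(ci, size=P, replace=False) for ci in class_idx])

# Compact GA: elitism + binary tournament + in-class index mutation.
pop = [random_individual() for _ in range(20)]
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[int(scores.argmax())]]          # keep the elite
    while len(new_pop) < len(pop):
        a, b = rng.integers(0, len(pop), size=2)
        child = pop[a if scores[a] >= scores[b] else b].copy()
        g = rng.integers(0, child.size)            # mutate one gene,
        c = 0 if g < P else 1                      # staying in-class
        child[g] = rng.choice(class_idx[c])
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
acc = fitness(best) / len(test_y)                  # 4 prototypes vs 200
```

The index encoding keeps the bank class-balanced by construction: mutation replaces a gene only with another index from the same class, so no repair step is needed.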

20 pages, 955 KB  
Article
Impact of Flap Thickness on Refractive Outcomes and Corneal Biomechanics Following Myopic Femtosecond Laser-Assisted LASIK
by Joanna Wierzbowska, Marcin Smorawski, Janusz Sierdziński, Łukasz Stróżecki and Anna Maria Roszkowska
J. Clin. Med. 2026, 15(5), 1923; https://doi.org/10.3390/jcm15051923 - 3 Mar 2026
Abstract
Background/Objectives: Femtosecond laser-assisted LASIK (FS-LASIK) is currently the most commonly performed procedure for the correction of myopia and myopic astigmatism. However, it inherently weakens the biomechanical integrity of the cornea due to flap creation and stromal ablation. This prospective study aimed to compare refractive and corneal biomechanical parameters after myopic FS-LASIK with different flap thicknesses and to identify parameters that may influence the change in corneal biomechanics after surgery. Methods: A total of 246 eyes were enrolled and divided into two groups based on flap thickness: 110 µm (n = 129) and 140 µm (n = 117). All procedures were performed using a femtosecond LDV Ziemer laser and standardized ablation profiles with similar ablation depths. Visual acuity, refractive outcomes, and corneal biomechanical parameters—corneal hysteresis (CH) and corneal resistance factor (CRF)—were assessed preoperatively and during a 6-month follow-up using the Ocular Response Analyzer (ORA). Multivariate regression analysis was used to identify predictors of biomechanical change. Results: The groups did not differ in preoperative values of the mean refractive spherical equivalent, keratometry, central corneal thickness, CH and CRF. At 6 months, both groups achieved comparable refractive outcomes, with no significant differences in uncorrected or corrected distance visual acuity, efficacy index and safety index. However, the thicker flap group exhibited significantly greater reductions in CH (−2.89 vs. −2.04 mmHg, p < 0.05) and CRF (−3.61 vs. −2.77 mmHg, p < 0.05), as well as greater biomechanical weakening per micron of ablation. Multivariate regression identified anterior weighted biomechanical index (AWBI) and flap thickness as the strongest predictors of CH reduction, while flap thickness, residual stromal bed thickness, ablation depth, and central corneal thickness contributed to CRF changes. 
Conclusions: While FS-LASIK with both flap thicknesses achieved equally effective visual outcomes, thicker flaps were associated with significantly greater biomechanical weakening. Flap thickness had a stronger influence on corneal biomechanics than ablation depth. These findings support consideration of flap thickness in surgical planning to optimize corneal biomechanical stability. Full article

19 pages, 2191 KB  
Article
Mask-Aware Spatiotemporal Classification of Millimeter-Wave Radar Point Cloud Sequences Using DGCNN and Transformer for Child–Pet Recognition in Enclosed Spaces
by Yehui Shi and Jianhong Shi
Sensors 2026, 26(5), 1580; https://doi.org/10.3390/s26051580 - 3 Mar 2026
Abstract
Applications in enclosed spaces, such as in-cabin presence detection, human–pet separation, and pet care, place higher demands on non-contact target recognition. Millimeter-wave radar point clouds offer advantages such as privacy friendliness and robustness against low light and occlusion. However, these point clouds are generally sparse, with pronounced noise and multipath interference, and the fluctuation of point counts over time makes alignment and feature learning difficult, degrading the performance of existing point cloud classification methods in complex environments. To this end, this paper proposes a spatiotemporal joint classification framework for millimeter-wave point cloud sequences: in the spatial dimension, an effective-point mask mechanism suppresses the interference of alignment-generated invalid points on neighborhood composition and feature aggregation, improving the reliability of local geometric representation; in the temporal dimension, attention-based sequence modeling exploits cross-frame dynamic patterns to enhance category separability. Experimental results show that the proposed method achieves 97.8% accuracy on the three-class Child/Cat/Dog task, and ablation analysis verifies the key contributions of the mask mechanism and temporal modeling to robust recognition. The framework provides a deployable, more generalizable millimeter-wave point cloud solution for identifying life forms in confined spaces. Full article
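The effect of the effective-point mask can be shown with a toy aggregation step: when frames are zero-padded to a fixed point count so sequences align, an unmasked mean is biased toward zero by the padding slots, whereas a mask-aware mean pools only real radar returns. All sizes and values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

N, C = 8, 4        # padded points per frame, feature dimension
n_valid = 5        # actual radar returns in this frame

# Frames are zero-padded to a fixed point count for alignment;
# the padded slots carry no geometry and must not pollute pooling.
feats = np.zeros((N, C))
feats[:n_valid] = rng.normal(size=(n_valid, C))
mask = np.arange(N) < n_valid     # True for valid points only

# Naive mean over all slots is shrunk toward zero by the padding...
naive_mean = feats.mean(axis=0)

# ...whereas the mask-aware mean aggregates only valid points.
masked_mean = feats[mask].mean(axis=0)
```

The same masking idea extends to neighborhood construction: invalid slots are excluded from k-NN graphs before local feature aggregation, which is what DGCNN-style edge convolution would otherwise corrupt.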
23 pages, 2178 KB  
Article
GDFSIC: A Few-Shot Image Classification Framework Integrating Global–Local Attention with Distance–Direction Similarity
by Biao Geng and Liping Pu
Math. Comput. Appl. 2026, 31(2), 38; https://doi.org/10.3390/mca31020038 - 3 Mar 2026
Abstract
For few-shot image classification tasks, the recognition accuracy of existing models remains limited due to the inherent complexity of the few-shot learning setting. To address this challenge, this paper proposes a few-shot image classification approach, termed GDFSIC, which integrates a Global–Local Channel Attention Module (GLCAM) with a graph-propagation-based Distance–Direction Similarity Earth Mover’s Distance (DDS-EMD). The GLCAM is incorporated into the feature extractor to sharpen focus on discriminative regions and critical feature areas. Furthermore, a Distance–Direction Similarity (DDS) metric is introduced as a more effective distance criterion for capturing subtle differences in latent spatial representations. The proposed method is evaluated on four widely used few-shot image classification benchmarks: CIFAR-FS, CUB-200-2011, mini-ImageNet, and Tiered-ImageNet. Experimental results demonstrate that the approach achieves a clear competitive advantage in classification accuracy across these datasets, and ablation studies and further analyses confirm the effectiveness of each component of the proposed framework.
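The abstract does not give the DDS formula. As a purely hypothetical sketch of the general idea of blending a distance term with a directional (cosine) term into a single similarity score — the paper's actual DDS-EMD formulation may differ — one could write:

```python
import numpy as np

def dds_similarity(a, b, alpha=0.5):
    """Hypothetical distance-direction blend: cosine similarity captures
    direction, an inverse-distance term captures magnitude proximity, and
    alpha weights the two. Illustrative only, not the paper's DDS-EMD metric."""
    direction = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    distance = 1.0 / (1.0 + np.linalg.norm(a - b))
    return alpha * direction + (1.0 - alpha) * distance

v = np.array([1.0, 2.0, 3.0])
w = np.array([2.0, 4.0, 6.0])   # same direction as v, different magnitude
print(dds_similarity(v, v))      # identical vectors score highest
print(dds_similarity(v, w))      # cosine alone would call these identical
```

The point of combining both terms: a pure cosine metric cannot separate `v` from `w` above, while a pure Euclidean metric ignores directional agreement; a blend is sensitive to both.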
20 pages, 4390 KB  
Article
NeuroFusion-ViT: A Hybrid CNN–EVA Transformer Model with Cross-Attention Fusion for MRI-Based Alzheimer’s Stage Classification
by Derya Öztürk Söylemez and Sevinç Ay Doğru
Diagnostics 2026, 16(5), 754; https://doi.org/10.3390/diagnostics16050754 - 3 Mar 2026
Abstract
Background: Alzheimer’s disease is the most common type of dementia, a progressive neurodegenerative disease that begins with neuronal damage and leads to a reduction in brain tissue. Currently, there is no cure, and existing approaches focus on alleviating symptoms. Methods: This study proposes NeuroFusion-ViT, a highly accurate and computationally efficient hybrid deep learning model for early-stage detection of Alzheimer’s disease. The model combines an EVA-02-based Vision Transformer (ViT) with the ConvNeXt-Small CNN architecture, providing powerful representation learning that captures both global context and local details. The proposed Gated Cross-Attention Fusion (G-CAF) mechanism dynamically combines the two feature streams, offering high discriminative power and model stability. Results: In experiments on the OASIS MRI dataset, the model achieved 99.86% accuracy, a Macro F1 of 0.9989, and a ROC-AUC of 0.999, demonstrating clear superiority over single-modal and hybrid models described in the literature. Five-fold cross-validation results further support the model’s generalizability, and ablation studies showed that each component (cross-attention, gate mechanism, Dual LayerNorm, and FFN-Dropout) makes a meaningful contribution to performance. Conclusions: The results demonstrate that the NeuroFusion-ViT architecture offers a reliable, stable, and clinically applicable solution for Alzheimer’s stage classification.
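The gating part of a fusion mechanism like G-CAF can be pictured as a learned, per-dimension convex combination of the two branch features. The NumPy sketch below shows only that gating idea under assumed shapes and random weights; the actual G-CAF also involves cross-attention and normalization, which are omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feat_a, feat_b, W, bias):
    """Per-dimension gate g in (0,1), computed from both branches, decides
    how much of each branch enters the fused representation."""
    g = sigmoid(np.concatenate([feat_a, feat_b]) @ W + bias)
    return g * feat_a + (1.0 - g) * feat_b

d = 8                                   # assumed feature dimension
W = rng.normal(size=(2 * d, d)) * 0.1   # toy gate weights
bias = np.zeros(d)
fa = rng.normal(size=d)                 # e.g. ViT-branch features
fb = rng.normal(size=d)                 # e.g. CNN-branch features
fused = gated_fusion(fa, fb, W, bias)
print(fused.shape)                      # (8,)
```

Because the gate is a convex combination, each fused coordinate is guaranteed to lie between the corresponding coordinates of the two branches, which is one source of the stability such gates provide.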
(This article belongs to the Special Issue Alzheimer's Disease Diagnosis Based on Deep Learning)
21 pages, 4214 KB  
Article
A Lightweight and Sustainable UAV-Based Forest Fire Detection Algorithm Based on an Improved YOLO11 Model
by Shuangbao Ma, Yongji Hui, Yapeng Zhang and Yurong Wu
Sustainability 2026, 18(5), 2436; https://doi.org/10.3390/su18052436 - 3 Mar 2026
Abstract
Unmanned aerial vehicle (UAV) forest fire detection is vital for forest safety. However, early-stage UAV fire scenarios often involve small targets, weak smoke signals, and strict onboard resource constraints, which pose significant challenges to existing detectors. To improve the speed and accuracy of UAV forest fire detection, this paper proposes a lightweight fire detection algorithm, AHE-YOLO, specifically designed for UAVs. The proposed method adopts a coordinated lightweight design to improve feature preservation and cross-scale representation under limited computational budgets. Specifically, the Adaptive Downsampling (ADown) module preserves shallow fire-related cues during spatial reduction, improving sensitivity to small flame and smoke targets. The high-level screening-feature fusion pyramid network (HS-FPN) introduces cross-scale attention to promote more discriminative multi-level feature interaction while reducing redundant computation. Furthermore, the Efficient Mobile Inverted Bottleneck Convolution (EMBC) module improves receptive-field efficiency and feature selectivity under lightweight constraints, further enhancing detection accuracy and inference speed. Finally, the performance of AHE-YOLO is comprehensively evaluated through ablation and comparative experiments on the same dataset. Experimental results show that AHE-YOLO achieves a mean average precision (mAP) of 94.8% while reducing model parameters by 39.7%, FLOPs by 27.0%, and model size by 36.4%; its inference speed also improves by 16.5%. Beyond detection performance, the proposed framework supports sustainable forest monitoring by enabling early fire warning with reduced computational and energy demands, showing strong potential for real-time deployment on resource-constrained UAV and edge platforms.
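The motivation behind ADown-style downsampling is to halve resolution without discarding the weak responses of small flame or smoke targets. A simplified, hypothetical way to picture that (not the actual ADown block, which uses strided convolutions and channel splitting) is to keep both average-pooled and max-pooled responses when reducing a feature map:

```python
import numpy as np

def dual_pool_downsample(x):
    """Halve spatial resolution while keeping both smooth (average) and
    salient (max) responses, doubling the channel count.

    x: (C, H, W) feature map with even H and W.
    Returns a (2*C, H//2, W//2) array: [avg-pooled | max-pooled] channels.
    """
    C, H, W = x.shape
    # Group pixels into non-overlapping 2x2 blocks:
    # blocks[c, i, di, j, dj] == x[c, 2*i + di, 2*j + dj]
    blocks = x.reshape(C, H // 2, 2, W // 2, 2)
    avg = blocks.mean(axis=(2, 4))   # smooth context, dilutes tiny peaks
    mx = blocks.max(axis=(2, 4))     # preserves the strongest (small-target) cue
    return np.concatenate([avg, mx], axis=0)

x = np.arange(16, dtype=float).reshape(1, 4, 4)  # toy 1-channel 4x4 map
y = dual_pool_downsample(x)
print(y.shape)  # (2, 2, 2)
```

Average pooling alone would attenuate a single bright flame pixel by a factor of four per stage; retaining the max channel keeps that cue intact through the reduction.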