Search Results (2,916)

Search Parameters:
Keywords = lightweight imaging

25 pages, 649 KB  
Article
A Multimodal Biomedical Sensing Approach for Muscle Activation Onset Detection
by Qiang Chen, Haofei Li, Zhe Xiang, Moxian Lin, Yinfei Yi, Haoran Tang and Yan Zhan
Sensors 2026, 26(6), 1907; https://doi.org/10.3390/s26061907 - 18 Mar 2026
Abstract
Muscle onset detection is a fundamental problem in electromyography signal analysis, human–machine interaction, and rehabilitation assessment. In medical and biomedical applications, slow muscle activation onset processes are widely encountered in scenarios such as rehabilitation training, postural regulation, and fine motor control. Such processes are typically characterized by slowly varying amplitudes, long temporal durations, and high susceptibility to noise interference, which poses significant challenges for accurate identification of onset timing. To address these issues, a lightweight temporal attention method for slow muscle activation onset detection is proposed and systematically validated under multimodal experimental settings. The proposed method takes surface electromyography signals as the primary input, while synchronously acquired optical motion image data are incorporated into the experimental design and result analysis, thereby aligning with the common joint use of optical imaging and physiological signals in medical and biomedical research. From a methodological perspective, the proposed framework is composed of lightweight temporal feature encoding, a slow activation-aware temporal attention mechanism, and noise suppression with stable decision strategies. Under the constraint of low computational complexity, the ability to model progressive activation signals is effectively enhanced. Experiments are conducted on a dataset containing multiple types of slow activation movements, and model performance is evaluated using five-fold cross-validation. The results demonstrate that under regular signal-to-noise ratio conditions, the proposed method significantly outperforms traditional threshold-based approaches, classical machine learning models, and several deep learning baselines in terms of onset detection accuracy, recall, and precision. 
Specifically, onset detection accuracy reaches approximately 92%, recall is around 90%, and precision is approximately 93%. Meanwhile, the average onset detection error and detection delay are reduced to about 41 ms and 28 ms, respectively, with the false positive rate controlled at approximately 2.2%. Stable performance is further maintained under different noise levels and cross-subject settings, indicating strong robustness and generalization capability. Full article
(This article belongs to the Special Issue Application of Optical Imaging in Medical and Biomedical Research)
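The traditional threshold-based baselines the paper compares against can be illustrated with a minimal sketch (function names and parameter values here are illustrative, not taken from the paper): rectify the sEMG signal, smooth it with a moving average, and flag onset when the envelope first exceeds the resting-baseline mean plus k standard deviations.

```python
import math

def detect_onset(signal, fs, win_ms=50, baseline_ms=200, k=3.0):
    """Classic amplitude-threshold onset detector (illustrative baseline)."""
    win = max(1, int(fs * win_ms / 1000))
    rect = [abs(x) for x in signal]                        # full-wave rectification
    env = [sum(rect[max(0, i - win + 1):i + 1]) / min(win, i + 1)
           for i in range(len(rect))]                      # moving-average envelope
    base = env[:int(fs * baseline_ms / 1000)]              # resting-baseline window
    mu = sum(base) / len(base)
    sd = math.sqrt(sum((x - mu) ** 2 for x in base) / len(base))
    thr = mu + k * sd                                      # adaptive threshold
    for i, e in enumerate(env):
        if e > thr:
            return i                                       # first supra-threshold sample
    return None
```

On an abrupt synthetic burst this detector fires within a sample or two of the transition; the slowly ramping activations this article targets are precisely where such fixed thresholds trigger late or falsely, motivating the proposed temporal attention method.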

25 pages, 8614 KB  
Article
Underwater Image Restoration Integrating Monocular Depth Estimation with a Physical Imaging Model
by Tianchi Zhang, Hongwei Qin, Qiang Liu and Xing Liu
J. Mar. Sci. Eng. 2026, 14(6), 563; https://doi.org/10.3390/jmse14060563 - 18 Mar 2026
Abstract
Underwater images suffer from quality degradation such as haze, detail blurring, color distortion, and low contrast due to factors like light scattering and wavelength-dependent attenuation in water. This severely hinders the high-quality completion of target detection tasks for Autonomous Underwater Vehicles (AUV) relying on image information. Although deep learning-based methods have gained widespread attention, existing approaches still face challenges such as insufficient feature extraction and limited generalization in complex real-world scenes. Methods based on physical models, on the other hand, heavily rely on depth information which is difficult to obtain accurately. To address these issues, this paper proposes a novel underwater image restoration method that integrates depth estimation with the Akkaynak-Treibitz physical imaging model. In the depth estimation stage, efficient and robust feature extraction is achieved through a lightweight encoder–decoder architecture combined with a channel–spatial hybrid attention mechanism. To overcome the inherent scale ambiguity problem in monocular depth estimation, which prevents direct output of absolute depth consistent with the real scene, sparse depth priors are introduced. Subsequently, adaptive depth binning and depth map optimization are realized via m-Vision Transformer and convolutional regression. In the image restoration stage, the acquired high-quality depth map is combined with the Akkaynak-Treibitz physical imaging model for inverse solving, achieving high-quality restoration from degraded to clear images. Experimental results demonstrate that the proposed method outperforms mainstream depth estimation methods (LapDepth, UDepth, etc.) and mainstream image restoration methods (CLAHE, FUnIE-GAN, etc.) in terms of evaluation metrics and visual perceptual quality. 
When processing the extremely degraded UIEB-S dataset, the proposed method achieves evaluation metrics of SSIM = 0.8954, UCIQE = 0.6107, and PSNR = 23.35 dB. Compared to the CLAHE and FUnIE-GAN methods, SSIM improved by 2.8% and 16.7%, UCIQE improved by 9.6% and 14.3%, and PSNR improved by 22.5% and 13.9%, respectively. Comprehensive subjective and objective evaluation results validate the effectiveness of the proposed method in addressing image quality degradation, particularly demonstrating outstanding capability in severe color cast correction and detail recovery. Full article
(This article belongs to the Section Ocean Engineering)
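The inverse-solving step can be sketched per channel under the Akkaynak-Treibitz formation model (coefficients and values below are illustrative placeholders, not the paper's fitted parameters): subtract the depth-dependent backscatter, then undo the exponential attenuation of the direct signal.

```python
import math

def degrade(J, z, beta_d, beta_b, B_inf):
    """Forward model: attenuated direct signal plus depth-dependent backscatter."""
    return J * math.exp(-beta_d * z) + B_inf * (1.0 - math.exp(-beta_b * z))

def restore(I, z, beta_d, beta_b, B_inf):
    """Invert the model for one channel, given an estimated scene depth z."""
    backscatter = B_inf * (1.0 - math.exp(-beta_b * z))
    return (I - backscatter) * math.exp(beta_d * z)
```

The round trip is exact when depth and coefficients are known, which is why restoration quality in this pipeline hinges on the accuracy of the monocular depth-estimation stage.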

40 pages, 2214 KB  
Article
A CNN-ViT Hybrid Architecture Res101-MViT-Ens for Accurate and Lightweight Automated Ocular Disease Diagnosis
by Hao Wang, Ting Ke and Hui Lv
Appl. Sci. 2026, 16(6), 2905; https://doi.org/10.3390/app16062905 - 18 Mar 2026
Abstract
Automated ocular disease diagnosis faces critical challenges including insufficient diagnostic precision, local–global feature imbalance, rigid feature fusion, weak cross-domain generalization, and difficult lightweight deployment. This study aims to develop a high-performance, generalizable, and deployable hybrid deep learning architecture for accurate multi-class ocular disease diagnosis. We propose the Res101-MViT-Ens hybrid architecture, which fuses ResNet101 for local fine-grained feature extraction and MobileViT-XXS for global contextual modeling via an end-to-end dynamic learnable weight fusion mechanism, with class-balanced sampling and medically adaptive augmentation for data preprocessing. The model is validated on the ODIR-5K dataset and cross-evaluated on three heterogeneous datasets (MESSIDOR-2, Kaggle DR, EyePACS). It achieves 99.44% accuracy, a 99.41% F1-score, and 99.32% Kappa on ODIR-5K, with a 99.46% average cross-dataset accuracy, outperforming state-of-the-art models. With 54 M parameters and 42.6 ms per-image inference latency on the Snapdragon 8 Gen2 edge module (Qualcomm Technologies, Inc., San Diego, CA, USA), it outperforms mainstream edge architectures. This proposed architecture achieves state-of-the-art diagnostic precision; balances accuracy, generalization and practicality; and is suitable for lightweight grassroots deployment in ocular disease screening. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
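The end-to-end dynamic learnable weight fusion can be sketched as a softmax over two trainable logits that yields convex branch weights (a minimal sketch under assumed shapes; not the paper's implementation):

```python
import math

def fuse(local_feats, global_feats, logits):
    """Blend CNN (local) and ViT (global) features with learned convex weights."""
    wa, wb = math.exp(logits[0]), math.exp(logits[1])
    s = wa + wb
    wa, wb = wa / s, wb / s                 # softmax -> weights sum to 1
    return [wa * a + wb * b for a, b in zip(local_feats, global_feats)]
```

Because the logits are trained jointly with both branches, the network can shift weight toward whichever branch is more informative for a given disease class, rather than fixing the mixture by hand.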

19 pages, 6716 KB  
Article
Multi-Type Weld Defect Detection in Galvanized Sheet MIG Welding Using an Improved YOLOv10 Model
by Bangzhi Xiao, Yadong Yang, Yinshui He and Guohong Ma
Materials 2026, 19(6), 1178; https://doi.org/10.3390/ma19061178 - 17 Mar 2026
Abstract
Shop-floor weld inspection may appear to be a solved problem until a camera is deployed near a galvanized-sheet MIG welding line. The seam reflects light, the texture changes from frame to frame, and the defects of interest are often small and visually subtle. Additionally, the hardware near the line is rarely a data-center GPU. With those constraints in mind, this paper presents YOLO-MIG, a compact detector built on YOLOv10n for weld-seam inspection in practical production conditions. We make three focused changes to the baseline: a C2f-EMSCP backbone block to better preserve weak defect cues with modest parameter growth, a BiFPN neck to keep small-target information alive during feature fusion, and a C2fCIB head to clean up predictions that otherwise get distracted by seam edges and illumination artifacts. On a workshop-collected dataset containing 326 original images, with the training subset expanded through augmentation to 2608 labeled samples in total, YOLO-MIG achieves 98.4% mAP@0.5 and 56.29% mAP@0.5:0.95 on the test set while remaining lightweight (1.83 M parameters, 3.87 MB FP16 weights). Compared with YOLOv10n, the proposed model improves mAP@0.5 by 9.36 points and mAP@0.5:0.95 by 4.89 points, while reducing parameters, GFLOPs, and model size by 43.4%, 19.9%, and 29.9%, respectively. The results suggest that YOLO-MIG is not only accurate but also realistic to deploy at the edge for intelligent weld quality control. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
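The mAP@0.5 figures count a predicted box as correct when its intersection-over-union with a ground-truth box reaches 0.5. A minimal IoU sketch (corner-format boxes assumed):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area, 0 if disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

mAP@0.5:0.95, the stricter metric also reported, averages the AP over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is much lower (56.29%) than mAP@0.5 (98.4%) for the same model.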

20 pages, 48606 KB  
Article
GMUD-Net: Global Modulated Unbalanced Dual-Branch Network for Image Restoration in Various Degraded Environments
by Shengchun Wang, Yingjie Liu and Huijie Zhu
Appl. Sci. 2026, 16(6), 2854; https://doi.org/10.3390/app16062854 - 16 Mar 2026
Abstract
Image restoration has wide applications in the field of computer vision, yet existing methods suffer from limitations. CNNs struggle to capture long-range dependencies, while transformers exhibit insufficient performance in handling local details and high computational complexity. Additionally, existing dual-branch networks fail to define a clear dominant–auxiliary role between branches, leading to redundancy and high computational costs. This paper proposes a Global Modulated Unbalanced Dual-Branch Network (GMUD-Net), which innovatively adopts an unbalanced structure with a CNN as the main branch and a transformer as the auxiliary branch. Specifically, the CNN branch achieves strong restoration capability by integrating the global–local hybrid backbone block (GLBB) and the frequency-based global attention module (FGAM). As the key building block in the CNN branch, GLBB integrates a local backbone branch, a global Fourier branch, and a residual branch to fuse local details with global context. Meanwhile, FGAM leverages the fast Fourier transform at the bottleneck to enhance cross-channel interaction and improve global restoration performance. In addition, the lightweight transformer branch employs efficient cross-channel attention to provide complementary global cues, which are filtered and injected into the CNN branch via the global attention guidance block (GAG). These designs integrate the advantages of both CNNs and transformers while significantly reducing computational burden, offering a new paradigm to address the limitations of traditional dual-branch architectures. Experimental results demonstrate that compared with existing algorithms, the proposed method achieves state-of-the-art or highly competitive performance in both quantitative evaluations and qualitative results across nine datasets. Full article
(This article belongs to the Special Issue AI-Driven Image and Signal Processing)
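The Fourier-branch idea behind GLBB and FGAM can be illustrated in one dimension (a toy sketch: the paper operates on 2-D feature maps with fast transforms, and the gains would be learned rather than fixed): a per-frequency gain applied after a DFT modulates every spatial position at once, which is what gives a frequency-domain branch its global receptive field at low cost.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def global_filter(x, gain):
    """Scale each frequency bin by a gain: one elementwise multiply in the
    frequency domain mixes information from all spatial positions."""
    X = dft(x)
    return idft([g * v for g, v in zip(gain, X)])
```

With unit gains the filter is the identity; a learned gain vector realizes a global convolution in a single frequency-domain product.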

18 pages, 23505 KB  
Article
ArtUnmasked: A Multimodal Classifier for Real, AI, and Imitated Artworks
by Akshad Chidrawar and Garima Bajwa
J. Imaging 2026, 12(3), 133; https://doi.org/10.3390/jimaging12030133 - 16 Mar 2026
Abstract
Differentiating AI-generated, real, or imitated artworks is becoming a tedious and computationally challenging problem in digital art analysis. AI-generated art has become nearly indistinguishable from human-made works, posing a significant threat to copyrighted content. This content is appearing on online platforms, at exhibitions, and in commercial galleries, thereby escalating the risk of copyright infringement. This sudden increase in generative images raises concerns about authenticity, intellectual property, and the preservation of cultural heritage. Without an automated, comprehensible system to determine whether an artwork is AI-generated, authentic (real), or imitated, artists risk having their unique works devalued, and institutions struggle to curate and safeguard authentic pieces. As the variety of generative models continues to grow, it becomes a cultural necessity to build a robust, efficient, and transparent framework for determining whether a piece of art or an artist is involved in potential copyright infringement. To address these challenges, we introduce ArtUnmasked, a practical and interpretable framework comprising (i) a lightweight Spectral Artifact Identification (SPAI) module that efficiently distinguishes AI-generated artworks from real ones, (ii) a TagMatch-based artist filtering module for stylistic attribution, and (iii) a DINOv3–CLIP similarity module with patch-level correspondence that leverages the one-shot generalization ability of modern vision transformers to determine whether an artwork is authentic or imitated. We also created a custom dataset of ∼24K imitated artworks to complement our evaluation and support future research. The complete implementation is available in our GitHub repository. Full article
(This article belongs to the Section AI in Imaging)

19 pages, 9160 KB  
Article
Machine Vision-Based Intelligent Intrusion Detection Method for Obstacles on Open Railways in Low-Light Environments
by Heng Zhou, Fengkui Chen, Xinyao Dong, Jikang Sun, Qing Yang and Dexin Gao
Appl. Sci. 2026, 16(6), 2848; https://doi.org/10.3390/app16062848 - 16 Mar 2026
Abstract
Railway obstacle detection in low-light environments faces challenges such as complex scenes and frequent obstacle intrusion across boundaries. To address these issues, this paper proposes an improved RT-DETR-based method for low-light railway obstacle detection, named SWC-DETR. Firstly, the low-light image enhancement network SCINet is introduced to improve image quality in low-light environments and to stabilize feature extraction in the model. Secondly, WTConv is integrated into the RepC3 module, combining wavelet transform with convolution to balance a large receptive field with a low parameter count. Thirdly, the CloFormer dual-branch structure is incorporated into the AIFI module to further suppress background noise under low-light conditions and to strengthen the representation of edge features for small targets. In this paper, a low-light open railway obstacle detection dataset is constructed, and extensive comparative experiments along with multiple independent runs are conducted. The results demonstrate that the improved model reduces the number of parameters and the computational complexity by 18.1% and 33.8%, respectively, while consistently achieving a mAP@0.5 of 72.2% and a recall of 66.6% (average improvements of 5.9% and 5.5% over the original model, respectively). The model thus becomes lighter while significantly improving obstacle detection accuracy in low-light environments, providing effective technical support for the safety protection of open railways. Full article
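The wavelet side of WTConv can be illustrated with a single-level 1-D Haar transform (an illustrative sketch, not the paper's implementation): the averages give a downsampled, large-receptive-field view while the details retain edges, and the transform is exactly invertible, so no information is lost when convolutions operate in the wavelet domain.

```python
def haar_forward(x):
    """One-level Haar DWT of an even-length sequence: (averages, details)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    """Exact reconstruction from averages and details."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out
```

Stacking levels doubles the effective receptive field of a fixed-size convolution at each level, which is how WTConv trades receptive field against parameter count.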

18 pages, 11760 KB  
Article
Innovative Real-Time Palm Tree Detection, Geo-Localization and Counting from Unmanned Aerial Vehicle (UAV) Aerial Images Using Deep Learning
by Ali Mazinani, Mostafa Norouzi, Amin Talaeizadeh, Aria Alasty, Mahmoud Saadat Foumani and Amin Kolahdooz
Automation 2026, 7(2), 51; https://doi.org/10.3390/automation7020051 - 16 Mar 2026
Abstract
Accurate real-time detection, geolocation, and counting of palm trees are essential for plantation management, yield estimation, and resource allocation in precision agriculture. Traditional approaches such as manual surveys or offline image processing are labor-intensive and unsuitable for large-scale applications. This study introduces a fully onboard real-time framework that integrates Unmanned Aerial Vehicle (UAV) imagery, the YOLOv12 deep learning model, and a camera projection technique to detect, geolocate, and count palm trees directly during flight. The lightweight YOLOv12n variant, deployed on an NVIDIA Jetson Nano edge device, achieved a detection precision of 92.4%, an average geolocation error of 2.14 m, and a counting error of only 0.2% across 915 trees. Unlike many existing methods that rely on offline processing or offboard computation, the proposed system performs all computations in real time, enabling immediate decision-making for tasks such as plantation density analysis, replanting planning, and yield forecasting. Experimental results demonstrate that the proposed approach provides a scalable, cost-effective, and autonomous solution for modern precision agriculture. Full article
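The camera-projection step can be sketched for the simplest case of a nadir-pointing pinhole camera (a toy sketch with made-up values; a real system such as this one must additionally account for camera attitude and GPS antenna offsets): a pixel offset from the principal point maps to a ground offset proportional to altitude divided by focal length.

```python
def pixel_to_ground(u, v, cx, cy, f_px, altitude_m):
    """Ground-plane offset (east, north) in meters of pixel (u, v) for a
    nadir-pointing pinhole camera.

    (cx, cy) is the principal point, f_px the focal length in pixels,
    and altitude_m the height above flat ground.
    """
    dx = (u - cx) * altitude_m / f_px
    dy = (v - cy) * altitude_m / f_px
    return dx, dy
```

Adding the offset to the UAV's GNSS position yields each detected tree's geolocation; detections projected to nearly identical ground coordinates across frames can then be merged so each tree is counted once.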

22 pages, 2762 KB  
Article
Automated Classification of Medical Image Modality and Anatomy
by Jean de Smidt, Kian Anderson and Andries Engelbrecht
Algorithms 2026, 19(3), 222; https://doi.org/10.3390/a19030222 - 16 Mar 2026
Abstract
Radiological departments face challenges in efficiency and diagnostic consistency. The interpretation of radiographs remains highly variable between practitioners, which creates potential disparities in patient care. This study explores how artificial intelligence (AI), specifically transfer learning techniques, can automate parts of the radiological workflow to improve service quality and efficiency. Transfer learning methods were applied to various convolutional neural network (CNN) architectures and compared to classify medical images across different modalities, i.e., X-rays, ultrasound, magnetic resonance imaging (MRI), and angiography, through a two-component model: medical image modality prediction and anatomical region prediction. Several publicly available datasets were combined to create a representative dataset to evaluate residual networks (ResNet), dense networks (DenseNet), efficient networks (EfficientNet), and the Swin Transformer (Swin-T). The models were evaluated through accuracy, precision, recall, and F1-score metrics with macro-averaging to account for class imbalance. The results demonstrate that lightweight transfer learning methods effectively classify medical imagery, with an accuracy of 97.21% on test data for the combined transfer learning pipeline. EfficientNet-B4 demonstrated the best performance on both components of the proposed pipeline and achieved a 99.6% accuracy for modality prediction and 99.21% accuracy for anatomical region prediction on unseen test data. This approach offers the potential for streamlined radiological workflows while maintaining diagnostic quality. The strong model performance across diverse modalities and anatomical regions indicates robust generalisability for practical implementation in clinical settings. Full article
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
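Macro-averaging, used here to counter class imbalance, computes per-class precision, recall, and F1 and then averages with equal class weight, so rare modalities count as much as common ones. A minimal sketch (integer class labels assumed):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Unlike plain accuracy, this score drops sharply if a model ignores a minority class, which is the failure mode class imbalance otherwise hides.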

28 pages, 12746 KB  
Article
PSTNet: A Hyperspectral Image Classification Method Based on Adaptive Spectral–Spatial Tokens and Parallel Attention
by Shaokang Yu, Yong Mei, Xiangsuo Fan, Song Guo, Wujun Xu and Jinlong Fan
Remote Sens. 2026, 18(6), 901; https://doi.org/10.3390/rs18060901 - 15 Mar 2026
Abstract
Hyperspectral image classification holds significant applications across multiple domains due to its rich spectral and spatial information. However, it faces challenges such as spectral variation within the same object, spectral variation across different objects, and noise interference. Existing methods like convolutional neural networks perform well in local feature extraction but inadequately model long-range dependencies. While Transformers can capture global relationships, they struggle to effectively coordinate spectral and spatial information modeling. To address these limitations, this paper proposes a dual-branch collaborative Transformer network (PST-Net). This architecture integrates an adaptive spectral–spatial token (ASST) module, a Parallel Attention-Augmented lightweight CNN branch (PA-SSCNN), and a collaborative fusion layer (CHIB). The ASST constructs joint representation tokens through local spectral smoothing and learnable spatial embedding. PA-SSCNN employs 3D-2D cascaded convolutions and channel–spatial attention mechanisms to enhance local texture and spatial feature extraction. CHIB enables deep interaction and synergistic fusion of dual-branch features across different levels and scales. Experimental results demonstrate that with only 2% labeled samples, PST-Net achieves overall classification accuracies of 96.31%, 96.59%, 95.27%, and 89.06% on the Salinas and Whuhh datasets and on the two complex urban scene datasets Qingyun and Houston, respectively. It exhibits strong robustness, especially in fine-grained categories and complex scenes. Ablation experiments further validated the effectiveness and complementarity of each module. This study provides an efficient collaborative modeling framework for hyperspectral image classification that balances global dependencies and local details. Full article

17 pages, 2662 KB  
Article
A Swin-Transformer-Based Network for Adaptive Backlight Optimization
by Jin Li, Rui Pu, Junbang Jiang and Man Zhu
Symmetry 2026, 18(3), 502; https://doi.org/10.3390/sym18030502 - 15 Mar 2026
Abstract
Mini-LED local dimming systems commonly suffer from luminance discontinuity, halo artifacts, and temporal instability in dynamic scenes. Traditional heuristic-based methods and standard convolutional neural networks often fail to capture long-range spatial dependencies and struggle to balance spatial smoothness, content fidelity, and real-time performance under hardware constraints. To address these challenges, this paper proposes SwinLightNet, an efficient adaptive backlight optimization network tailored for Mini-LED displays. Built upon a Swin Transformer framework, SwinLightNet integrates five hardware-aware design strategies: (i) a lightweight Swin variant (window size = 8, MLP ratio = 2.0) for efficient global context modeling; (ii) CNN encoder–decoder integration for multi-scale feature extraction; (iii) a partition-level alignment module ensuring spatial consistency; (iv) a backlight constraint module enforcing local luminance consistency and contrast preservation; and (v) a change-aware temporal decision framework stabilizing dynamic sequences. These components synergistically resolve core limitations: global modeling suppresses halo artifacts while preserving content fidelity; alignment and constraint modules eliminate luminance discontinuity without compromising contrast; and the temporal framework guarantees flicker-free output under motion. Evaluated on DIV2K (static images) and a custom 2K-resolution video dataset (dynamic scenes), SwinLightNet demonstrates robust reconstruction quality while maintaining only 1.18 million parameters and a computational cost of 0.088 GFLOPs. The results confirm SwinLightNet’s effectiveness in holistically addressing spatial, temporal, and hardware constraints, demonstrating strong potential for practical deployment in resource-constrained Mini-LED backlight control systems. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Optimization Algorithms and Control Systems)
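The partition-level idea behind local dimming can be sketched as computing one backlight value per block of the input luminance (a toy sketch using the classic max-of-block heuristic that learned methods like this one aim to improve on; the network would predict these values and the constraint module then smooth them):

```python
def block_backlight(lum, rows, cols, br, bc):
    """Max-luminance backlight per (br x bc) grid block of a rows x cols image."""
    bl = [[0.0] * bc for _ in range(br)]
    rh, cw = rows // br, cols // bc
    for i in range(rows):
        for j in range(cols):
            b = bl[min(i // rh, br - 1)]          # block row for pixel row i
            k = min(j // cw, bc - 1)              # block column for pixel col j
            b[k] = max(b[k], lum[i][j])
    return bl
```

The max heuristic never clips content but wastes power and leaks light around bright objects, which is exactly the halo artifact the learned global modeling is designed to suppress.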

20 pages, 2694 KB  
Article
Formability of AA7021-T4 Sheet Alloy Under Changing Strain Path Conditions: Experiments and Crystal Plasticity Modeling
by Md. Zahidul Sarkar, Joshua Lim, Sarah Sanderson, David T. Fullwood, Marko Knecevic and Michael P. Miles
Crystals 2026, 16(3), 199; https://doi.org/10.3390/cryst16030199 - 15 Mar 2026
Abstract
The formability of AA7021-T4 sheets under changing strain paths was investigated via a novel crystal plasticity model and associated experimentation. The motivation was to advance simulation tools for process design of limited-ductility 7xxx alloys, with important applications in the automotive industry. Pre-strains were applied in biaxial and plane-strain tension using Marciniak tooling, followed by uniaxial tensile testing to failure. Strain measurements were obtained by digital image correlation, while dislocation structures were characterized using high-resolution EBSD. A strain-gradient elasto-plastic self-consistent (SG-EPSC) model incorporating dislocation density-based hardening and backstress from geometrically necessary dislocations (GNDs) was employed to predict the stress–strain response and dislocation evolution. Results showed that pre-strains normalized by forming limit diagram (FLD) criteria produced comparable residual uniaxial tensile ductility, regardless of whether biaxial or plane-strain tension was applied, despite differences in absolute pre-strain levels. Both experiments and simulations revealed that GND density correlated with remaining ductility better than simple strain magnitude values. These findings indicate that AA7021-T4 retains greater formability under multiaxial strain path changes than expected from FLD-based considerations. The combined experimental–modeling approach demonstrates the value of incorporating microstructure-based variables, such as GNDs, into forming assessments of high-strength aluminum alloys, with implications for their potential use in automotive lightweighting development. Full article
(This article belongs to the Section Crystalline Metals and Alloys)

26 pages, 3266 KB  
Article
High-Capacity Dual-Image Reversible Data Hiding in AMBTC Using Difference Expansion with Block-Wise HMAC Authentication
by Cheonshik Kim, Ching-Nung Yang and Lu Leng
Appl. Sci. 2026, 16(6), 2815; https://doi.org/10.3390/app16062815 - 15 Mar 2026
Abstract
Reversible data hiding (RDH) is a key technique in secure multimedia applications, enabling the exact recovery of both embedded data and the original cover content. To further enhance security and embedding capacity, this paper proposes a dual-image reversible data hiding (DIRDH) method based on absolute moment block truncation coding (AMBTC). In the proposed scheme, two identical AMBTC-decoded images are exploited as twin covers, and secret bits are adaptively embedded into paired pixels using a variable embedding rate. To ensure data integrity, a lightweight Hash-based Message Authentication Code (HMAC) mechanism is integrated, allowing reliable detection of tampering without additional side information. Experimental results demonstrate that the proposed method achieves high embedding capacity while preserving good visual quality and provides effective authentication against representative tampering cases, including pixel modification, noise addition, and cropping. These contributions highlight the advantages of combining DIRDH with AMBTC, offering a practical and secure solution for high-capacity reversible data hiding. Full article
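The difference-expansion primitive named in the title operates on a pixel pair: expand the difference, append the secret bit, and rebuild the pair from the unchanged average. A textbook sketch of classic Tian-style DE follows (this paper adapts the idea to AMBTC quantization levels and twin cover images, which this sketch does not show):

```python
def de_embed(x, y, bit):
    """Embed one bit into integer pixel pair (x, y) by difference expansion."""
    l = (x + y) // 2              # average, preserved by embedding
    h = x - y                     # difference
    h2 = 2 * h + bit              # expanded difference carries the bit in its LSB
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the bit and the original pair exactly (reversibility)."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1                  # the embedded bit is the LSB of the difference
    h = h2 >> 1                   # floor-halving undoes the expansion
    return bit, l + (h + 1) // 2, l - h // 2
```

Because the average is invariant under embedding, extraction recovers both the payload bit and the original pixels bit-exactly, which is the defining property of reversible data hiding.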

27 pages, 5256 KB  
Article
AntID_APP: Empowering Citizen Scientists with YOLO Models for Ant Identification in Taiwan
by Nan-Yuan Hsiung, Jen-Shin Hong, Shiu-Wu Chau and Chung-Der Hsiao
Biology 2026, 15(6), 470; https://doi.org/10.3390/biology15060470 - 14 Mar 2026
Abstract
Ants are vital bioindicators that contribute to soil health and food webs, making accurate identification essential for biodiversity monitoring and conservation. However, traditional taxonomic methods are time-consuming and require specialized expertise, limiting large-scale data collection and public participation. This paper presents AntID_APP, a web-based application designed to support citizen scientists in Taiwan by enabling real-time, image-based detection and identification of native ant genera. Fine-tuned YOLO models first detect ants in user-uploaded images and then classify them at the genus level. The models were trained on a curated dataset of 60,429 open-access images from iNaturalist, covering 54 native ant species. To ensure robustness in real-world conditions, we applied targeted data augmentation and evaluated multiple YOLO versions (v9–v12). The best-performing models achieved mean Average Precision scores of 0.935–0.948 (mAP50) and 0.777–0.807 (mAP50-95) on the detection task, followed by accurate genus-level identification. The application features an intuitive interface and a lightweight asynchronous server architecture, allowing users to upload images and receive both visual detection results (bounding boxes) and genus predictions efficiently. By combining high accuracy with accessibility, AntID_APP offers a scalable solution for biodiversity monitoring and public engagement in ecological research. Full article
(This article belongs to the Special Issue AI Deep Learning Approach to Study Biological Questions (2nd Edition))
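The mAP50 figures above count a detection as correct when its bounding box overlaps the ground truth with an intersection-over-union (IoU) of at least 0.5. A minimal IoU sketch (my own illustration, not AntID_APP code), assuming boxes given as `(x1, y1, x2, y2)` corners:

```python
def iou(a, b):
    """Overlap ratio in [0, 1] between two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = min(a[2], b[2]) - max(a[0], b[0])   # intersection width
    ih = min(a[3], b[3]) - max(a[1], b[1])   # intersection height
    inter = max(0, iw) * max(0, ih)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# At the mAP50 operating point, a predicted box counts as a true positive
# only if iou(pred, truth) >= 0.5.
```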

24 pages, 4692 KB  
Article
SSTNT: A Spatial–Spectral Similarity Guided Transformer-in-Transformer for Hyperspectral Unmixing
by Xinyu Cui, Xinyue Zhang, Aoran Dai and Da Sun
Photonics 2026, 13(3), 276; https://doi.org/10.3390/photonics13030276 - 13 Mar 2026
Abstract
Vision Transformers (ViTs), owing to their strong capability in modeling global contextual dependencies, have been widely adopted in hyperspectral image unmixing (HU). However, standard ViTs process images by partitioning them into non-overlapping patches, which disrupts spatial continuity at the pixel level and neglects the fine-grained structural relationships among pixels within local regions. Consequently, effectively capturing the detailed spatial–spectral features required for accurate unmixing remains challenging. Furthermore, the high computational complexity of global self-attention and its sensitivity to noise limit the applicability of conventional Transformers to HU. To address these issues, we propose a spatial–spectral similarity guided Transformer-in-Transformer (SSTNT) framework. The proposed network adopts a modified TNT architecture, in which the inner Transformer employs a linear self-attention (LSA) mechanism to efficiently exploit pixel-level local features within sliding windows, while the outer Transformer preserves global attention to aggregate contextual information, thereby forming a cooperative local–global optimization scheme. Furthermore, a lightweight spatial–spectral similarity module is introduced to enhance the modeling of neighborhood structures. Finally, spectral reconstruction is achieved through a trainable endmember decoder and a normalized abundance estimation module. Extensive experiments conducted on both synthetic and real hyperspectral datasets demonstrate the effectiveness and robustness of the proposed method. Full article
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
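The linear self-attention (LSA) idea above can be sketched with the standard kernel feature-map formulation, whose cost is linear in the number of tokens; SSTNT's exact variant may differ, and the names below are my own. Using phi(x) = elu(x) + 1 keeps all attention weights positive.

```python
import numpy as np

def phi(x):
    """Feature map elu(x) + 1; strictly positive, so weights are valid."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """phi(Q) @ (phi(K)^T V) / (phi(Q) @ sum_j phi(k_j)): O(n d^2), not O(n^2 d)."""
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                     # (d, d) summary of keys and values
    z = Qp @ Kp.sum(axis=0)           # per-token normalizer, shape (n,)
    return (Qp @ kv) / z[:, None]
```

Because a small (d, d) key–value summary replaces the (n, n) attention map, pixel-level attention inside sliding windows stays cheap even for many pixels per window, and each output row remains a convex combination of the value rows.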
