Search Results (6,543)

Search Parameters:
Keywords = fine structure

18 pages, 3550 KB  
Article
Using Biopolymers to Control Hydraulic Degradation of Natural Expansive-Clay Liners Due to Fines Migration: Long-Term Performance
by Ahmed M. Al-Mahbashi, Abdullah Shaker and Abdullah Almajed
Polymers 2026, 18(2), 272; https://doi.org/10.3390/polym18020272 - 20 Jan 2026
Abstract
Liners made of natural materials, such as expansive soil with sand, have a wide range of applications, including geotechnical and geoenvironmental applications. Besides being environmentally friendly, these materials are locally available and can be constructed at a low cost. The concern regarding these liners is sustainability and serviceability in the long run. The research conducted revealed significant degradation in hydraulic performance after periods of operation under continuous flow, which was attributed to the migration of fines. This study investigated the stabilization of these liners by using biopolymers as a cementitious agent to prevent the migration of fines and enhance sustainability in the long run. Two different biopolymers were examined in this study, including guar gum (GG) and sodium alginate (SA). The hydraulic conductivity tests were conducted in the laboratory under continuous flow for a long period (i.e., more than 360 days). The results revealed that incorporating biopolymers into these liners is of great significance for enhancing their sustainability and hydraulic performance stability. Further in-depth identification of the interaction mechanisms demonstrates that biopolymer–soil interactions create cross-links between soil particles through adhesive bonding, forming a cementitious gel that stabilizes fines and enhances the stability of the liners’ internal structure. Both examined biopolymers show significant stabilization of fines and stable hydraulic performance within the acceptable range, with high superiority of SA with EC20. The outcomes of this study are valuable for conducting an adequate and sustainable design for liner protection layers as hydraulic barriers or covers.
(This article belongs to the Special Issue Polymers in the Face of Sustainable Development)

16 pages, 8073 KB  
Article
Bifaciality Optimization of TBC Silicon Solar Cells Based on Quokka3 Simulation
by Fen Yang, Zhibin Jiang, Yi Xie, Taihong Xie, Jingquan Zhang, Xia Hao, Guanggen Zeng, Zhengguo Yuan and Lili Wu
Materials 2026, 19(2), 405; https://doi.org/10.3390/ma19020405 - 20 Jan 2026
Abstract
Tunnel Oxide-Passivated Back Contact (TBC) solar cells represent a next-generation photovoltaic technology with significant potential for achieving both high efficiency and low cost. This study addresses the challenge of low bifaciality inherent to the rear-side structure of TBC cells. Using the Quokka3 simulation and assuming high-quality surface passivation and fine-line printing accuracy, a systematic optimization was conducted. The optimization encompassed surface morphology, optical coatings, bulk material parameters (carrier lifetime and resistivity), and rear-side geometry (emitter fraction, metallization pattern and gap width). Through a multi-parameter co-optimization process aimed at enhancing conversion efficiency, a simulated conversion efficiency of 27.26% and a bifaciality ratio of 92.96% were achieved. The simulation analysis quantified the trade-off relationships between FF, bifaciality, and efficiency under different parameter combinations. This enables accurate prediction of final performance outcomes when prioritizing different metrics, thereby providing scientific decision-making support for addressing the core design challenges in the industrialization of TBC cells.
(This article belongs to the Section Electronic Materials)
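The bifaciality ratio quoted in this abstract is conventionally defined as rear-side conversion efficiency divided by front-side conversion efficiency. The abstract does not spell the definition out, so the helper below is an assumption based on that convention, with the paper's headline numbers used only for illustration:

```python
def bifaciality_ratio(front_efficiency: float, rear_efficiency: float) -> float:
    """Bifaciality as the ratio of rear-side to front-side conversion
    efficiency (the conventional definition; assumed, not from the paper)."""
    if front_efficiency <= 0:
        raise ValueError("front-side efficiency must be positive")
    return rear_efficiency / front_efficiency

# With the reported front-side efficiency (27.26%) and bifaciality
# ratio (92.96%), the implied rear-side efficiency follows directly:
rear = 0.2726 * 0.9296
print(f"implied rear-side efficiency: {rear:.2%}")  # ≈ 25.34%
```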

31 pages, 4972 KB  
Article
Minutiae-Free Fingerprint Recognition via Vision Transformers: An Explainable Approach
by Bilgehan Arslan
Appl. Sci. 2026, 16(2), 1009; https://doi.org/10.3390/app16021009 - 19 Jan 2026
Abstract
Fingerprint recognition systems have relied on fragile workflows based on minutiae extraction, which suffer from significant performance losses under real-world conditions such as sensor diversity and low image quality. This study introduces a fully minutiae-free fingerprint recognition framework based on self-supervised Vision Transformers. A systematic evaluation of multiple DINOv2 model variants is conducted, and the proposed system ultimately adopts the DINOv2-Base Vision Transformer as the primary configuration, as it offers the best generalization performance trade-off under conditions of limited fingerprint data. Larger variants are additionally analyzed to assess scalability and capacity limits. The DINOv2 pretrained network is fine-tuned using self-supervised domain adaptation on 64,801 fingerprint images, eliminating all classical enhancement, binarization, and minutiae extraction steps. Unlike the single-sensor protocols commonly used in the literature, the proposed approach is extensively evaluated in a heterogeneous testbed with a wide range of sensors, qualities, and acquisition methods, including 1631 unique fingers from 12 datasets. The achieved EER of 5.56% under these challenging conditions demonstrates clear cross-sensor superiority over traditional systems such as VeriFinger (26.90%) and SourceAFIS (41.95%) on the same testbed. A systematic comparison of different model capacities shows that moderate-scale ViT models provide optimal generalization under limited-data conditions. Explainability analyses indicate that the attention maps of the model trained without any minutiae information exhibit meaningful overlap with classical structural regions (IoU = 0.41 ± 0.07). Openly sharing the full implementation and evaluation infrastructure makes the study reproducible and provides a standardized benchmark for future research.
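The equal error rate (EER) figures compared above are a standard biometric verification metric: the operating point where the false accept rate equals the false reject rate. As a generic illustration (not the author's evaluation code), the EER can be estimated from genuine and impostor match-score distributions:

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Approximate the EER: the threshold where the false accept rate
    (impostor scores above threshold) equals the false reject rate
    (genuine scores at or below threshold). Higher score = better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor > t).mean() for t in thresholds])
    frr = np.array([(genuine <= t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))      # closest crossing point
    return (far[idx] + frr[idx]) / 2.0

# Toy score distributions: genuine pairs score higher than impostor pairs.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)
print(f"EER: {equal_error_rate(genuine, impostor):.2%}")
```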

22 pages, 1072 KB  
Article
A Meta-Contrastive Optimization Framework for Multilabel Bug Dependency Classification
by Jantima Polpinij, Manasawee Kaenampornpan and Bancha Luaphol
Mathematics 2026, 14(2), 334; https://doi.org/10.3390/math14020334 - 19 Jan 2026
Abstract
Software maintenance and release management demand proper identification of bug dependencies since priority violations or unresolved dependent issues can often lead to a chain of failures. However, dependency annotations in bug reports are extremely sparse and imbalanced. These dependencies are often expressed implicitly through natural language descriptions rather than explicit metadata. This creates challenges for automated multilabel dependency classification systems. To tackle these drawbacks, we introduce a meta-contrastive optimization framework (MCOF). This framework integrates established learning paradigms to enhance transformer-based classification through two key mechanisms: (1) a meta-contrastive objective adapted for enhancing discriminative representation learning under few-shot supervision, particularly for rare dependency types, and (2) dependency-aware Laplacian regularization that captures relational structures among different dependency types, reducing confusion between semantically related labels. Experimental evaluation on a real-world dataset demonstrates that MCOF achieves significant improvement over strong baselines, including BM25-based clustering and standard BERT fine-tuning. The framework attains a micro-F1 score of 0.76 and macro-F1 score of 0.58, while reducing Hamming loss to 0.14. Label-wise analysis shows significant performance gain on low-frequency dependency types, with improvements of up to 16% in F1 score. Runtime analysis exhibits only modest inference overhead at 15%, confirming that MCOF remains practical for deployment in CI/AT pipelines. These results demonstrate that integrating meta-contrastive learning and structural regularization is an effective approach for robust bug dependency discovery. The framework provides both practical and accurate solutions for supporting real-world software engineering workflows.
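Micro-F1, macro-F1, and Hamming loss as reported here are standard multilabel metrics. A self-contained sketch of how they are computed from binary indicator matrices (toy data, unrelated to the paper's dataset):

```python
import numpy as np

def multilabel_metrics(y_true: np.ndarray, y_pred: np.ndarray):
    """Micro-F1, macro-F1, and Hamming loss for binary indicator
    matrices of shape (n_samples, n_labels)."""
    tp = ((y_true == 1) & (y_pred == 1)).sum(axis=0).astype(float)
    fp = ((y_true == 0) & (y_pred == 1)).sum(axis=0).astype(float)
    fn = ((y_true == 1) & (y_pred == 0)).sum(axis=0).astype(float)

    # Micro-F1: pool counts over all labels before computing F1.
    micro_p = tp.sum() / max(tp.sum() + fp.sum(), 1e-12)
    micro_r = tp.sum() / max(tp.sum() + fn.sum(), 1e-12)
    micro_f1 = 2 * micro_p * micro_r / max(micro_p + micro_r, 1e-12)

    # Macro-F1: per-label F1, then unweighted mean (sensitive to rare labels).
    per_label_f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
    macro_f1 = per_label_f1.mean()

    # Hamming loss: fraction of individual label slots predicted wrongly.
    hamming = (y_true != y_pred).mean()
    return micro_f1, macro_f1, hamming

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])
micro, macro, ham = multilabel_metrics(y_true, y_pred)
```

Note how macro-F1 drops when a rare label (here, the third column) is missed entirely, which is why the paper reports both averages.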
24 pages, 2082 KB  
Article
An Optical–SAR Remote Sensing Image Automatic Registration Model Based on Multi-Constraint Optimization
by Yaqi Zhang, Shengbo Chen, Xitong Xu, Jiaqi Yang, Yuqiao Suo, Jinchen Zhu, Menghan Wu, Aonan Zhang and Qiqi Li
Remote Sens. 2026, 18(2), 333; https://doi.org/10.3390/rs18020333 - 19 Jan 2026
Abstract
Accurate registration of optical and synthetic aperture radar (SAR) images is a fundamental prerequisite for multi-source remote sensing data fusion and analysis. However, due to the substantial differences in imaging mechanisms, optical–SAR image pairs often exhibit significant radiometric discrepancies and spatially varying geometric inconsistencies, which severely limit the robustness of traditional feature or region-based registration methods in cross-modal scenarios. To address these challenges, this paper proposes an end-to-end Optical–SAR Registration Network (OSR-Net) based on multi-constraint joint optimization. The proposed framework explicitly decouples cross-modal feature alignment and geometric correction, enabling robust registration under large appearance variation. Specifically, a multi-modal feature extraction module constructs a shared high-level representation, while a multi-scale channel attention mechanism adaptively enhances cross-modal feature consistency. A multi-scale affine transformation prediction module provides a coarse-to-fine geometric initialization, which stabilizes parameter estimation under complex imaging conditions. Furthermore, an improved spatial transformer network is introduced to perform structure-preserving geometric refinement, mitigating spatial distortion induced by modality discrepancies. In addition, a multi-constraint loss formulation is designed to jointly enforce geometric accuracy, structural consistency, and physical plausibility. By employing a dynamic weighting strategy, the optimization process progressively shifts from global alignment to local structural refinement, effectively preventing degenerate solutions and improving robustness. Extensive experiments on public optical–SAR datasets demonstrate that the proposed method achieves accurate and stable registration across diverse scenes, providing a reliable geometric foundation for subsequent multi-source remote sensing data fusion.
(This article belongs to the Section Remote Sensing Image Processing)
22 pages, 3383 KB  
Article
A Degradation-Aware Dual-Path Network with Spatially Adaptive Attention for Underwater Image Enhancement
by Shasha Tian, Adisorn Sirikham, Jessada Konpang and Chuyang Wang
Electronics 2026, 15(2), 435; https://doi.org/10.3390/electronics15020435 - 19 Jan 2026
Abstract
Underwater image enhancement remains challenging due to wavelength-dependent absorption, spatially varying scattering, and non-uniform illumination, which jointly cause severe color distortion, contrast degradation, and structural information loss. To address these issues, we propose UCS-Net, a degradation-aware dual-path framework that exploits the complementarity between global and local representations. A spatial color balance module first stabilizes the chromatic distribution of degraded inputs through a learnable gray-world-guided normalization, mitigating wavelength-induced color bias prior to feature extraction. The network then adopts a dual-branch architecture, where a hierarchical Swin Transformer branch models long-range contextual dependencies and global color relationships, while a multi-scale residual convolutional branch focuses on recovering local textures and structural details suppressed by scattering. Furthermore, a multi-scale attention fusion mechanism adaptively integrates features from both branches in a degradation-aware manner, enabling dynamic emphasis on global or local cues according to regional attenuation severity. A hue-preserving reconstruction module is finally employed to suppress color artifacts and ensure faithful color rendition. Extensive experiments on UIEB, EUVP, and UFO benchmarks demonstrate that UCS-Net consistently outperforms state-of-the-art methods in both full-reference and non-reference evaluations. Qualitative results further confirm its effectiveness in restoring fine structural details while maintaining globally consistent and visually realistic colors across diverse underwater scenes.
(This article belongs to the Special Issue Image Processing and Analysis)
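The "gray-world-guided normalization" mentioned in this abstract builds on the classic gray-world assumption: the average color of a scene should be achromatic. The paper's module is learnable; the fixed, non-learned correction it is guided by looks roughly like this sketch:

```python
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Classic gray-world color correction: rescale each RGB channel so
    its mean matches the image's overall mean intensity. Expects float
    RGB in [0, 1], shape (H, W, 3)."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-12)
    return np.clip(img * gains, 0.0, 1.0)

# A blue-green-shifted underwater frame: red channel heavily attenuated.
img = np.dstack([np.full((4, 4), 0.1),   # R
                 np.full((4, 4), 0.5),   # G
                 np.full((4, 4), 0.7)])  # B
balanced = gray_world_balance(img)
```

After balancing, all three channel means coincide, which is the color-bias correction the network applies before feature extraction.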
15 pages, 1505 KB  
Article
Enhancing Early Alzheimer’s Disease Detection via Transfer Learning: From Big Structural MRI Datasets to Ethnically Distinct Small Cohorts
by Minjae Lee, Suwon Lee and Hyeon Seo
Appl. Sci. 2026, 16(2), 1004; https://doi.org/10.3390/app16021004 - 19 Jan 2026
Abstract
Deep learning-based analysis of brain magnetic resonance imaging (MRI) plays a crucial role in the early diagnosis of Alzheimer’s disease (AD). However, data scarcity and racial bias present significant challenges to the generalization of diagnostic models. Large-scale public datasets, which are predominantly composed of Caucasian individuals, often lead to performance degradation when applied to other ethnic groups owing to domain shifts. To address these issues, this study proposes a two-stage transfer learning framework. Initially, a 3D ResNet model was pretrained on a large-scale Alzheimer’s disease neuroimaging initiative (ADNI) dataset to learn structural brain features. Subsequently, the pretrained weights were transferred and fine-tuned on a small-scale Korean dataset utilizing only 30% of the data for training. The proposed model achieved superior performance in classifying mild cognitive impairment (MCI), which is crucial for early diagnosis, compared with a model trained from scratch using 70% of the Korean data. Furthermore, it effectively mitigated the significant performance degradation observed when directly applying the pretrained model, demonstrating its ability to resolve the domain-shift issue. This study explored the feasibility of transfer learning to address data scarcity and domain shift issues in AD classification, underscoring its potential for developing AI-based diagnostic systems tailored to specific ethnic populations.
24 pages, 15825 KB  
Article
Enhancing High-Resolution Land Cover Classification Using Multi-Level Cross-Modal Attention Fusion
by Yangwei Jiang, Ting Liu, Junhao Zhou, Yihan Guo and Tangao Hu
Land 2026, 15(1), 181; https://doi.org/10.3390/land15010181 - 19 Jan 2026
Abstract
High-precision land cover classification is fundamental to environmental monitoring, urban planning, and sustainable land-use management. With the growing availability of multimodal remote sensing data, combining spectral and structural information has become an effective strategy for improving classification performance in complex high-resolution scenes. However, most existing methods predominantly rely on shallow feature concatenation, which fails to capture long-range dependencies and cross-modal interactions that are critical for distinguishing fine-grained land cover categories. This study proposes a multi-level cross-modal attention fusion network, Cross-Modal Cross-Attention UNet (CMCAUNet), which integrates a Cross-Modal Cross-Attention Fusion (CMCA) module and a Skip-Connection Attention Gate (SCAG) module. The CMCA module progressively enhances multimodal feature representations throughout the encoder, while the SCAG module leverages high-level semantics to refine spatial details during decoding and improve boundary delineation. Together, these modules enable more effective integration of spectral–textural and structural information. Experiments conducted on the ISPRS Vaihingen and Potsdam datasets demonstrate the effectiveness of the proposed approach. CMCAUNet achieves a mean Intersection over Union (mIoU) of 81.49% and 84.76%, with Overall Accuracy (OA) of 90.74% and 90.28%, respectively. The model also shows superior performance on small objects, achieving 90.85% and 96.98% OA for the “Car” category. Ablation studies further confirm that the combination of CMCA and SCAG modules significantly improves feature discriminability and leads to more accurate and detailed land cover maps.
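mIoU and OA as reported here are the standard semantic-segmentation metrics. A minimal sketch of computing both from a class confusion matrix (toy numbers, unrelated to the paper's results):

```python
import numpy as np

def mean_iou(conf: np.ndarray) -> float:
    """Mean Intersection over Union from a confusion matrix where
    conf[i, j] counts pixels of true class i predicted as class j."""
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    return (inter / np.maximum(union, 1e-12)).mean()

def overall_accuracy(conf: np.ndarray) -> float:
    """Fraction of all pixels assigned their true class."""
    return np.diag(conf).sum() / conf.sum()

# Toy 3-class confusion matrix (rows: ground truth, columns: prediction).
conf = np.array([[50,  2,  3],
                 [ 4, 40,  1],
                 [ 1,  2, 30]])
print(f"mIoU: {mean_iou(conf):.2%}, OA: {overall_accuracy(conf):.2%}")
```

Because mIoU averages per-class IoU without frequency weighting, it penalizes errors on rare classes (such as "Car") far more than OA does, which is why segmentation papers report both.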

23 pages, 2211 KB  
Article
BEMF-Net: A Boundary-Enhanced Multi-Scale Feature Fusion Network
by Jiayi Zhang, Chao Xu and Zhengping Li
Electronics 2026, 15(2), 430; https://doi.org/10.3390/electronics15020430 - 19 Jan 2026
Abstract
The elevated morbidity and mortality of kidney cancer make the precise, automated segmentation of kidneys and tumors essential for supporting clinical diagnosis and guiding surgical interventions. Recently, the segmentation of kidney tumors has been significantly advanced by deep learning. However, persistent challenges include the fuzzy boundaries of kidney tumors, multi-scale problems with kidney and renal tumors regarding location and size, and the strikingly similar textural characteristics of malignant lesions and the surrounding renal parenchyma. To overcome the aforementioned constraints, this study introduces a boundary-enhanced multi-scale feature fusion network (BEMF-Net) for endoscopic image segmentation of kidney tumors. This network incorporates a boundary-selective attention module (BSA) to cope with the renal tumor boundary ambiguity problem and obtain more accurate tumor boundaries. Furthermore, we introduce a multi-scale feature fusion attention module (MFA) designed to handle four distinct feature hierarchies captured by the encoder, enabling it to effectively accommodate the diverse size variations observed in kidney tumors. Finally, a hybrid cross-modal attention module (HCA) is introduced to conclude our design. It is designed with a dual-branch structure combining Transformer and CNN, thereby integrating both global contextual relationships and fine-grained local patterns. On the Re-TMRS dataset, our approach achieved mDice and mIoU scores of 91.2% and 85.7%. These results confirm its superior segmentation quality and generalization performance compared to leading existing methods.

26 pages, 3132 KB  
Article
An Unsupervised Cloud-Centric Intrusion Diagnosis Framework Using Autoencoder and Density-Based Learning
by Suresh K. S, Thenmozhi Elumalai, Radhakrishnan Rajamani, Anubhav Kumar, Balamurugan Balusamy, Sumendra Yogarayan and Kaliyaperumal Prabu
Future Internet 2026, 18(1), 54; https://doi.org/10.3390/fi18010054 - 19 Jan 2026
Abstract
Cloud computing environments generate high-dimensional, large-scale, and highly dynamic network traffic, making intrusion diagnosis challenging due to evolving attack patterns, severe traffic imbalance, and limited availability of labeled data. To address these challenges, this study presents an unsupervised, cloud-centric intrusion diagnosis framework that integrates autoencoder-based representation learning with density-based attack categorization. A dual-stage autoencoder is trained exclusively on benign traffic to learn compact latent representations and to identify anomalous flows using reconstruction-error analysis, enabling effective anomaly detection without prior attack labels. The detected anomalies are subsequently grouped using density-based learning to uncover latent attack structures and support fine-grained multiclass intrusion diagnosis under varying attack densities. Experiments conducted on the large-scale CSE-CIC-IDS2018 dataset demonstrate that the proposed framework achieves an anomaly detection accuracy of 99.46%, with high recall and low false-negative rates in the optimal latent-space configuration. The density-based classification stage achieves an overall multiclass attack classification accuracy of 98.79%, effectively handling both majority and minority attack categories. Clustering quality evaluation reports a Silhouette Score of 0.9857 and a Davies–Bouldin Index of 0.0091, indicating strong cluster compactness and separability. Comparative analysis against representative supervised and unsupervised baselines confirms the framework’s scalability and robustness under highly imbalanced cloud traffic, highlighting its suitability for future Internet cloud security ecosystems.
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)
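The detection stage described in this abstract trains only on benign traffic and flags flows whose reconstruction error is anomalously high. As an illustrative stand-in for the paper's dual-stage autoencoder (which is not reproduced here), a linear PCA reconstruction conveys the same idea: benign traffic lies near a learned low-dimensional structure, off-structure flows reconstruct poorly.

```python
import numpy as np

def fit_benign_subspace(X_benign: np.ndarray, k: int):
    """Learn a k-dimensional linear subspace from benign traffic only
    (a linear stand-in for an autoencoder's latent space)."""
    center = X_benign.mean(axis=0)
    _, _, vt = np.linalg.svd(X_benign - center, full_matrices=False)
    return center, vt[:k]

def reconstruction_error(X: np.ndarray, center, components) -> np.ndarray:
    """Per-sample distance between a flow and its subspace reconstruction."""
    centered = X - center
    recon = centered @ components.T @ components
    return np.linalg.norm(centered - recon, axis=1)

rng = np.random.default_rng(1)
# Benign flows live near a low-dimensional structure; attacks do not.
benign = rng.normal(0, 0.1, (500, 10))
benign[:, 0] = benign[:, 1] * 2            # correlated benign features
center, comps = fit_benign_subspace(benign, k=5)

# Threshold set from benign data alone -- no attack labels needed.
threshold = np.percentile(reconstruction_error(benign, center, comps), 99)
attack = rng.normal(3, 1.0, (50, 10))      # off-manifold traffic
flagged = reconstruction_error(attack, center, comps) > threshold
print(f"flagged {flagged.mean():.0%} of attack flows")
```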

22 pages, 13507 KB  
Article
Integrating AI for In-Depth Segmentation of Coastal Environments in Remote Sensing Imagery
by Pelagia Drakopoulou, Paraskevi Tzouveli, Aikaterini Karditsa and Serafim Poulos
Remote Sens. 2026, 18(2), 325; https://doi.org/10.3390/rs18020325 - 19 Jan 2026
Abstract
Mapping coastal landforms is critical for the sustainable management of ecosystems influenced by both natural dynamics and human activity. This study investigates the application of Transformer-based semantic segmentation models for pixel-level classification of key surface types such as water, sandy shores, rocky areas, vegetation, and built structures. We utilize a diverse, multi-resolution dataset that includes NAIP (1 m), Quadrangle (6 m), Sentinel-2 (10 m), and Landsat-8 (15 m) imagery from U.S. coastlines, along with high-resolution aerial images of the Greek coastline provided by the Hellenic Land Registry. Due to the lack of labeled Greek data, models were pre-trained on U.S. datasets and fine-tuned using a manually annotated subset of Greek images. We evaluate the performance of three advanced Transformer architectures, with Mask2Former achieving the most robust results, further improved through a coastal-class weighted focal loss to enhance boundary precision. The findings demonstrate that Transformer-based models offer an effective, scalable, and cost-efficient solution for automated coastal monitoring. This work highlights the potential of AI-driven remote sensing to replace or complement traditional in-situ surveys, and lays the foundation for future research in multimodal data integration and regional adaptation for environmental analysis.

12 pages, 1944 KB  
Article
Extracting Metasystem: A Novel Paradigm to Perceive Complex Systems
by Xue Li and Ying’an Cui
Big Data Cogn. Comput. 2026, 10(1), 36; https://doi.org/10.3390/bdcc10010036 - 19 Jan 2026
Abstract
Abundant evidence shows that there is a core component within a complex system, referred to as the metasystem, that fundamentally shapes the structural and dynamical characteristics of a complex system. The limitations of existing techniques for analyzing complex systems have made it increasingly desirable to extract metasystems for modeling, measuring, and analyzing complex phenomena. However, the methods of extracting metasystems are still in their infancy with various shortcomings. Here, we propose a universal framework based on divide and conquer to extract fine-grained metasystems. The method comprises three stages performed in sequence: partitioning, sampling, and optimizing. It can decompose a complex system into interconnected metasystem and non-metasystem components, providing a lightweight perspective for studying complex systems: essential insights can be gained by merely examining the internal mechanisms of each component and their interaction patterns.
(This article belongs to the Special Issue Advances in Complex Networks)

23 pages, 1750 KB  
Article
LLM-Generated Samples for Android Malware Detection
by Nik Rollinson and Nikolaos Polatidis
Digital 2026, 6(1), 5; https://doi.org/10.3390/digital6010005 - 18 Jan 2026
Abstract
Android malware continues to evolve through obfuscation and polymorphism, posing challenges for both signature-based defenses and machine learning models trained on limited and imbalanced datasets. Synthetic data has been proposed as a remedy for scarcity, yet the role of Large Language Models (LLMs) in generating effective malware data for detection tasks remains underexplored. In this study, we fine-tune GPT-4.1-mini to produce structured records for three malware families: BankBot, Locker/SLocker, and Airpush/StopSMS, using the KronoDroid dataset. After addressing generation inconsistencies with prompt engineering and post-processing, we evaluate multiple classifiers under three settings: training with real data only, real-plus-synthetic data, and synthetic data alone. Results show that real-only training achieves near-perfect detection, while augmentation with synthetic data preserves high performance with only minor degradations. In contrast, synthetic-only training produces mixed outcomes, with effectiveness varying across malware families and fine-tuning strategies. These findings suggest that LLM-generated tabular malware feature records can enhance scarce datasets without compromising detection accuracy, but remain insufficient as a standalone training source.

26 pages, 7469 KB  
Article
Generalized Vision-Based Coordinate Extraction Framework for EDA Layout Reports and PCB Optical Positioning
by Pu-Sheng Tsai, Ter-Feng Wu and Wen-Hai Chen
Processes 2026, 14(2), 342; https://doi.org/10.3390/pr14020342 - 18 Jan 2026
Abstract
Automated optical inspection (AOI) technologies are widely used in PCB and semiconductor manufacturing to improve accuracy and reduce human error during quality inspection. While existing AOI systems can perform defect detection, they often rely on pre-defined camera positions and lack flexibility for interactive inspection, especially when the operator needs to visually verify solder pad conditions or examine specific layout regions. This study focuses on the front-end optical positioning and inspection stage of the AOI workflow, providing an automated mechanism to link digitally generated layout reports from EDA layout tools with real PCB inspection tasks. The proposed system operates on component-placement reports exported by EDA layout environments and uses them to automatically guide the camera to the corresponding PCB coordinates. Since PCB design reports may vary in format and structure across EDA tools, this study proposes a vision-based extraction approach that employs Hough transform-based region detection and a CNN-based digit recognizer to recover component coordinates from visually rendered design data. A dual-axis sliding platform is driven through a hierarchical control architecture, where coarse positioning is performed via TB6600 stepper control and Bluetooth-based communication, while fine alignment is achieved through a non-contact, gesture-based interface designed for clean-room operation. A high-resolution autofocus camera subsequently displays the magnified solder pads on a large screen for operator verification. Experimental results show that the proposed platform provides accurate, repeatable, and intuitive optical positioning, improving inspection efficiency while maintaining operator ergonomics and system modularity. Rather than replacing defect-classification AOI systems, this work complements them by serving as a positioning-assisted inspection module for interactive and semi-automated PCB quality evaluation.

28 pages, 5548 KB  
Article
CVMFusion: ConvNeXtV2 and Visual Mamba Fusion for Remote Sensing Segmentation
by Zelin Wang, Li Qin, Cheng Xu, Dexi Liu, Zeyu Guo, Yu Hu and Tianyu Yang
Sensors 2026, 26(2), 640; https://doi.org/10.3390/s26020640 - 18 Jan 2026
Abstract
In recent years, extracting coastlines from high-resolution remote sensing imagery has proven difficult due to complex details and variable targets. Current methods struggle with the fact that CNNs cannot model long-range dependencies, while Transformers incur high computational costs. To address these issues, we propose CVMFusion: a land–sea segmentation network based on a U-shaped encoder–decoder structure, whereby both the encoder and decoder are hierarchically organized. This architecture integrates the local feature extraction capabilities of CNNs with the global interaction efficiency of Mamba. The encoder uses parallel ConvNeXtV2 and VMamba branches to capture fine-grained details and long-range context, respectively. This network incorporates Dynamic Multi-Scale Attention (DyMSA) and Dynamic Weighted Cross-Attention (DyWCA) modules, which replace the traditional concatenation with an adaptive fusion mechanism to effectively fuse the features from the dual-branch encoder and utilize skip connections to complete the fusion between the encoder and decoder. Experiments on two public datasets demonstrate that CVMFusion attained MIoU accuracies of 98.05% and 96.28%, outperforming existing methods. It performs particularly well in segmenting small objects and intricate boundary regions.
(This article belongs to the Special Issue Smart Remote Sensing Images Processing for Sensor-Based Applications)
