Search Results (792)

Search Parameters:
Keywords = masking task

25 pages, 3263 KB  
Article
Combining MTCNN and Enhanced FaceNet with Adaptive Feature Fusion for Robust Face Recognition
by Sasan Karamizadeh, Saman Shojae Chaeikar and Hamidreza Salarian
Technologies 2025, 13(10), 450; https://doi.org/10.3390/technologies13100450 - 3 Oct 2025
Abstract
Face recognition systems typically face real-world challenges such as facial pose, illumination, occlusion, and ageing that significantly impact recognition accuracy. In this paper, a robust face recognition system is presented that uses Multi-task Cascaded Convolutional Networks (MTCNN) for face detection and alignment together with an enhanced FaceNet for facial embedding extraction. The enhanced FaceNet uses attention mechanisms to achieve more discriminative facial embeddings, especially in challenging scenarios. In addition, an Adaptive Feature Fusion module combines identity-specific embeddings with context information such as pose, lighting, and the presence of masks, thereby enhancing robustness and accuracy. Training takes place on the CelebA dataset, and testing is conducted independently on LFW and IJB-C to enable subject-disjoint evaluation. CelebA has over 200,000 faces of 10,177 individuals, LFW consists of more than 13,000 faces of 5749 individuals in unconstrained conditions, and IJB-C has 31,000 faces and 117,000 video frames with extreme pose and occlusion changes. The proposed system achieves 99.6% on CelebA, 94.2% on LFW, and 91.5% on IJB-C, outperforming baselines such as plain MTCNN-FaceNet and AFF-Net as well as state-of-the-art models such as ArcFace, CosFace, and AdaCos. These findings demonstrate that the proposed framework generalizes effectively across datasets and is resilient in real-world scenarios.
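The abstract does not spell out how the Adaptive Feature Fusion module combines identity and context cues; the sketch below is one plausible minimal gated fusion in PyTorch, assuming a 512-dimensional FaceNet-style embedding and a small hypothetical context vector (pose, lighting, mask flags).

```python
import torch
import torch.nn as nn

class AdaptiveFeatureFusion(nn.Module):
    """Fuse an identity embedding with a small context vector via a learned gate."""
    def __init__(self, embed_dim=512, context_dim=8):
        super().__init__()
        self.context_proj = nn.Linear(context_dim, embed_dim)
        self.gate = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim),
            nn.Sigmoid(),          # per-dimension weights in [0, 1]
        )

    def forward(self, identity_emb, context):
        ctx = self.context_proj(context)                      # (B, embed_dim)
        g = self.gate(torch.cat([identity_emb, ctx], dim=-1))
        fused = g * identity_emb + (1.0 - g) * ctx            # convex combination
        return nn.functional.normalize(fused, dim=-1)         # unit-norm embedding

# Example: a FaceNet-style 512-D embedding plus pose/lighting/mask indicators.
emb = torch.randn(4, 512)
ctx = torch.randn(4, 8)
print(AdaptiveFeatureFusion()(emb, ctx).shape)   # torch.Size([4, 512])
```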

23 pages, 838 KB  
Article
Applied with Caution: Extreme-Scenario Testing Reveals Significant Risks in Using LLMs for Humanities and Social Sciences Paper Evaluation
by Hua Liu, Ling Dai and Haozhe Jiang
Appl. Sci. 2025, 15(19), 10696; https://doi.org/10.3390/app151910696 - 3 Oct 2025
Abstract
The deployment of large language models (LLMs) in academic paper evaluation is increasingly widespread, yet their trustworthiness remains debated; to expose fundamental flaws often masked under conventional testing, this study employed extreme-scenario testing to systematically probe the lower performance boundaries of LLMs in assessing the scientific validity and logical coherence of papers from the humanities and social sciences (HSS). Through a highly credible quasi-experiment, 40 high-quality Chinese papers from philosophy, sociology, education, and psychology were selected, for which domain experts created versions with implanted “scientific flaws” and “logical flaws”. Three representative LLMs (GPT-4, DeepSeek, and Doubao) were evaluated against a baseline of 24 doctoral candidates, following a protocol progressing from ‘broad’ to ‘targeted’ prompts. Key findings reveal poor evaluation consistency, with significantly low intra-rater and inter-rater reliability for the LLMs, and limited flaw detection capability, as all models failed to distinguish between original and flawed papers under broad prompts, unlike human evaluators; although targeted prompts improved detection, LLM performance remained substantially inferior, particularly in tasks requiring deep empirical insight and logical reasoning. The study proposes that LLMs operate on a fundamentally different “task decomposition-semantic understanding” mechanism, relying on limited text extraction and shallow semantic comparison rather than the human process of “worldscape reconstruction → meaning construction and critique”, resulting in a critical inability to assess argumentative plausibility and logical coherence. It concludes that current LLMs possess fundamental limitations in evaluations requiring depth and critical thinking, are not reliable independent evaluators, and that over-trusting them carries substantial risks, necessitating rational human-AI collaborative frameworks, enhanced model adaptation through downstream alignment techniques like prompt engineering and fine-tuning, and improvements in general capabilities such as logical reasoning.
(This article belongs to the Section Computing and Artificial Intelligence)

20 pages, 162180 KB  
Article
Annotation-Efficient and Domain-General Segmentation from Weak Labels: A Bounding Box-Guided Approach
by Ammar M. Okran, Hatem A. Rashwan, Sylvie Chambon and Domenec Puig
Electronics 2025, 14(19), 3917; https://doi.org/10.3390/electronics14193917 - 1 Oct 2025
Abstract
Manual pixel-level annotation remains a major bottleneck in deploying deep learning models for dense prediction and semantic segmentation tasks across domains. This challenge is especially pronounced in applications involving fine-scale structures, such as cracks in infrastructure or lesions in medical imaging, where annotations are time-consuming, expensive, and subject to inter-observer variability. To address these challenges, this work proposes a weakly supervised and annotation-efficient segmentation framework that integrates sparse bounding-box annotations with a limited subset of strong (pixel-level) labels to train robust segmentation models. The fundamental element of the framework is a lightweight Bounding Box Encoder that converts weak annotations into multi-scale attention maps. These maps guide a ConvNeXt-Base encoder, and a lightweight U-Net–style convolutional neural network (CNN) decoder—using nearest-neighbor upsampling and skip connections—reconstructs the final segmentation mask. This design enables the model to focus on semantically relevant regions without relying on full supervision, drastically reducing annotation cost while maintaining high accuracy. We validate our framework on two distinct domains, road crack detection and skin cancer segmentation, demonstrating that it achieves performance comparable to fully supervised segmentation models using only 10–20% of strong annotations. Given the ability of the proposed framework to generalize across varied visual contexts, it has strong potential as a general annotation-efficient segmentation tool for domains where strong labeling is costly or infeasible.
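As a rough illustration of the bounding-box guidance idea (the paper's actual Bounding Box Encoder is learned, which this sketch is not), boxes can be rasterized into soft multi-scale attention maps that modulate encoder features; the feature sizes and the modulation rule below are assumptions.

```python
import torch
import torch.nn.functional as F

def boxes_to_attention(boxes, image_size, feature_sizes):
    """Rasterize bounding boxes into a soft attention map at several feature scales.

    boxes: (N, 4) tensor of (x1, y1, x2, y2) in pixels.
    image_size: (H, W) of the input image.
    feature_sizes: list of (h, w) of the encoder feature maps to guide.
    """
    H, W = image_size
    full = torch.zeros(1, 1, H, W)
    for x1, y1, x2, y2 in boxes.round().long().tolist():
        full[..., y1:y2, x1:x2] = 1.0                      # box interior = 1
    maps = []
    for h, w in feature_sizes:
        m = F.interpolate(full, size=(h, w), mode="bilinear", align_corners=False)
        maps.append(m.clamp(0, 1))                         # one map per scale
    return maps

# Example: two boxes guiding three encoder stages of a 256x256 input.
boxes = torch.tensor([[20., 30., 120., 90.], [140., 160., 230., 240.]])
for m in boxes_to_attention(boxes, (256, 256), [(64, 64), (32, 32), (16, 16)]):
    print(m.shape)
# Encoder features at each scale could then be modulated as feat * (1 + m).
```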

19 pages, 2933 KB  
Article
Image-Based Detection of Chinese Bayberry (Myrica rubra) Maturity Using Cascaded Instance Segmentation and Multi-Feature Regression
by Hao Zheng, Li Sun, Yue Wang, Han Yang and Shuwen Zhang
Horticulturae 2025, 11(10), 1166; https://doi.org/10.3390/horticulturae11101166 - 1 Oct 2025
Abstract
The accurate assessment of Chinese bayberry (Myrica rubra) maturity is critical for intelligent harvesting. This study proposes a novel cascaded framework combining instance segmentation and multi-feature regression for accurate maturity detection. First, a lightweight SOLOv2-Light network is employed to segment each fruit individually, which significantly reduces computational costs with only a marginal drop in accuracy. Then, a multi-feature extraction network is developed to fuse deep semantic, color (LAB space), and multi-scale texture features, enhanced by a channel attention mechanism for adaptive weighting. The maturity ground truth is defined using the a*/b* ratio measured by a colorimeter, which correlates strongly with anthocyanin accumulation and visual ripeness. Experimental results demonstrated that the proposed method achieves a mask mAP of 0.788 on the instance segmentation task, outperforming Mask R-CNN and YOLACT. For maturity prediction, a mean absolute error of 3.946% is attained, which is a significant improvement over the baseline. When the data are discretized into three maturity categories, the overall accuracy reaches 95.51%, surpassing YOLOX-s and Faster R-CNN by a considerable margin while reducing processing time by approximately 46%. The modular design facilitates easy adaptation to new varieties. This research provides a robust and efficient solution for in-field bayberry maturity detection, offering substantial value for the development of automated harvesting systems.
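The maturity label here is the colorimeter-measured a*/b* ratio; below is a minimal sketch of how an image-side estimate of that ratio could be computed over a predicted fruit mask (the paper regresses maturity from fused deep features rather than computing it directly like this).

```python
import numpy as np
from skimage import color

def maturity_index(rgb_image, fruit_mask):
    """Mean a*/b* ratio over the segmented fruit region (CIELAB colour space).

    rgb_image: float array (H, W, 3) with values in [0, 1].
    fruit_mask: boolean array (H, W) from the instance-segmentation stage.
    """
    lab = color.rgb2lab(rgb_image)                 # channels: L*, a*, b*
    a = lab[..., 1][fruit_mask]
    b = lab[..., 2][fruit_mask]
    eps = 1e-6                                     # avoid division by zero
    return float(np.mean(a / (b + eps)))

# Example with a synthetic dark-red patch (ripe bayberries are deep red/purple).
img = np.zeros((64, 64, 3))
img[16:48, 16:48] = [0.45, 0.05, 0.15]
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(round(maturity_index(img, mask), 3))   # larger ratio -> riper fruit
```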

32 pages, 1846 KB  
Article
Joint Scheduling and Placement for Vehicular Intelligent Applications Under QoS Constraints: A PPO-Based Precedence-Preserving Approach
by Wei Shi and Bo Chen
Mathematics 2025, 13(19), 3130; https://doi.org/10.3390/math13193130 - 30 Sep 2025
Abstract
The increasing demand for low-latency, computationally intensive vehicular applications, such as autonomous navigation and real-time perception, has led to the adoption of cloud–edge–vehicle infrastructures. These applications are often modeled as Directed Acyclic Graphs (DAGs) with interdependent subtasks, where precedence constraints enforce causal ordering while allowing concurrency. We propose a task offloading framework that decomposes applications into precedence-constrained subtasks and formulates the joint scheduling and offloading problem as a Markov Decision Process (MDP) to capture the latency–energy trade-off. The system state incorporates vehicle positions, wireless link quality, server load, and task-buffer status. To address the high dimensionality and sequential nature of scheduling, we introduce DepSchedPPO, a dependency-aware sequence-to-sequence policy that processes subtasks in topological order and generates placement decisions using action masking to ensure partial-order feasibility. This policy is trained using Proximal Policy Optimization (PPO) with clipped surrogates, ensuring stable and sample-efficient learning under dynamic task dependencies. Extensive simulations show that our approach consistently reduces task latency and energy consumption under QoS constraints compared to conventional heuristic and DRL-based methods. The proposed solution demonstrates strong applicability to real-time vehicular scenarios such as autonomous navigation, cooperative sensing, and edge-based perception.
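A minimal sketch of the action-masking idea for precedence-constrained subtasks, assuming a dictionary DAG representation and a categorical scheduling head; this is not the paper's DepSchedPPO policy, only the feasibility-masking mechanism it relies on.

```python
import torch

def schedulable_mask(predecessors, finished):
    """Boolean mask over subtasks: True if all predecessors have finished.

    predecessors: dict {task_id: set of prerequisite task_ids} (the DAG edges).
    finished: set of task_ids already executed.
    """
    n = len(predecessors)
    mask = torch.zeros(n, dtype=torch.bool)
    for t, preds in predecessors.items():
        mask[t] = (t not in finished) and preds.issubset(finished)
    return mask

def masked_policy(logits, mask):
    """Zero out the probability of infeasible actions before sampling."""
    logits = logits.masked_fill(~mask, float("-inf"))
    return torch.distributions.Categorical(logits=logits)

# Example DAG: 0 -> {1, 2} -> 3.  Only task 0 is schedulable at the start.
dag = {0: set(), 1: {0}, 2: {0}, 3: {1, 2}}
mask = schedulable_mask(dag, finished=set())
dist = masked_policy(torch.randn(4), mask)
print(mask.tolist(), dist.sample().item())   # [True, False, False, False] 0
```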
24 pages, 4755 KB  
Article
Transfer Entropy and O-Information to Detect Grokking in Tensor Network Multi-Class Classification Problems
by Domenico Pomarico, Roberto Cilli, Alfonso Monaco, Loredana Bellantuono, Marianna La Rocca, Tommaso Maggipinto, Giuseppe Magnifico, Marlis Ontivero Ortega, Ester Pantaleo, Sabina Tangaro, Sebastiano Stramaglia, Roberto Bellotti and Nicola Amoroso
Technologies 2025, 13(10), 438; https://doi.org/10.3390/technologies13100438 - 29 Sep 2025
Abstract
Quantum-enhanced machine learning, encompassing both quantum algorithms and quantum-inspired classical methods such as tensor networks, offers promising tools for extracting structure from complex, high-dimensional data. In this work, we study the training dynamics of Matrix Product State (MPS) classifiers applied to three-class problems, using both fashion MNIST and hyperspectral satellite imagery as representative datasets. We investigate the phenomenon of grokking, where generalization emerges suddenly after memorization, by tracking entanglement entropy, local magnetization, and model performance across training sweeps. Additionally, we employ information-theory tools to gain deeper insights: transfer entropy is used to reveal causal dependencies between label-specific quantum masks, while O-information captures the shift from synergistic to redundant correlations among class outputs. Our results show that grokking in the fashion MNIST task coincides with a sharp entanglement transition and a peak in redundant information, whereas the overfitted hyperspectral model retains synergistic, disordered behavior. These findings highlight the relevance of high-order information dynamics in quantum-inspired learning and emphasize the distinct learning behaviors that emerge in multi-class classification, offering a principled framework to interpret generalization in quantum machine learning architectures.
(This article belongs to the Section Quantum Technologies)
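For readers unfamiliar with the sign convention the abstract uses (positive O-information indicates redundancy-dominated interactions, negative indicates synergy), here is a toy estimator under a joint-Gaussian assumption; the paper's estimator for quantum-mask outputs may differ.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian with covariance `cov` (nats)."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(samples):
    """O-information of jointly Gaussian signals estimated from the sample covariance.

    samples: array (T, n) of n signals over T steps.
    Omega > 0 indicates redundancy-dominated interactions, Omega < 0 synergy.
    """
    cov = np.cov(samples, rowvar=False)
    n = cov.shape[0]
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        omega += gaussian_entropy(cov[i, i]) - gaussian_entropy(cov[np.ix_(rest, rest)])
    return omega

# Redundant example: three noisy copies of one latent signal -> positive Omega.
rng = np.random.default_rng(0)
z = rng.normal(size=5000)
X = np.stack([z + 0.1 * rng.normal(size=5000) for _ in range(3)], axis=1)
print(round(o_information(X), 2))   # clearly > 0
```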

21 pages, 5230 KB  
Article
Attention-Guided Differentiable Channel Pruning for Efficient Deep Networks
by Anouar Chahbouni, Khaoula El Manaa, Yassine Abouch, Imane El Manaa, Badre Bossoufi, Mohammed El Ghzaoui and Rachid El Alami
Mach. Learn. Knowl. Extr. 2025, 7(4), 110; https://doi.org/10.3390/make7040110 - 29 Sep 2025
Abstract
Deploying deep learning (DL) models in real-world environments remains a major challenge, particularly under resource-constrained conditions where achieving both high accuracy and compact architectures is essential. Conventional pruning methods, while effective, often suffer from high computational overhead, accuracy degradation, or disruption of the end-to-end training process, limiting their practicality for embedded and real-time applications. We present Dynamic Attention-Guided Pruning (DAGP), a dynamic attention-guided soft channel pruning framework that overcomes these limitations by embedding learnable, differentiable pruning masks directly within convolutional neural networks (CNNs). These masks act as implicit attention mechanisms, adaptively suppressing non-informative channels during training. A progressively scheduled L1 regularization, activated after a warm-up phase, enables gradual sparsity while preserving early learning capacity. Unlike prior methods, DAGP is retraining-free, introduces minimal architectural overhead, and supports optional hard pruning for deployment efficiency. Joint optimization of classification and sparsity objectives ensures stable convergence and task-adaptive channel selection. Experiments on CIFAR-10 (VGG16, ResNet56) and PlantVillage (custom CNN) achieve up to 98.82% FLOPs reduction with accuracy gains over baselines. Real-world validation on an enhanced PlantDoc dataset for agricultural monitoring achieves 60 ms inference with only 2.00 MB RAM on a Raspberry Pi 4, confirming efficiency under field conditions. These results illustrate DAGP’s potential to scale beyond agriculture to diverse edge-intelligent systems requiring lightweight, accurate, and deployable models.
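A minimal sketch of a differentiable channel mask with a warm-up-scheduled L1 penalty, in the spirit of the soft pruning described above; the gating function, schedule, and threshold are assumptions, not DAGP's exact design.

```python
import torch
import torch.nn as nn

class SoftChannelMask(nn.Module):
    """Learnable per-channel gate acting as a differentiable pruning mask."""
    def __init__(self, channels):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(channels))    # sigmoid(0) = 0.5 at init

    def forward(self, x):                                     # x: (B, C, H, W)
        gate = torch.sigmoid(self.scores).view(1, -1, 1, 1)
        return x * gate

    def l1_penalty(self):
        return torch.sigmoid(self.scores).sum()               # pushes gates toward 0

def sparsity_weight(epoch, warmup=10, target=1e-4):
    """Keep the penalty off during warm-up, then ramp it in linearly."""
    if epoch < warmup:
        return 0.0
    return target * min(1.0, (epoch - warmup) / warmup)

# Usage inside a training step (the squared-feature term stands in for the task loss).
mask = SoftChannelMask(64)
feat = mask(torch.randn(2, 64, 8, 8))
loss = feat.pow(2).mean() + sparsity_weight(epoch=15) * mask.l1_penalty()
loss.backward()
# After training, channels whose gate falls below a threshold can be hard-pruned.
```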

21 pages, 2380 KB  
Article
Edge-Embedded Multi-Feature Fusion Network for Automatic Checkout
by Jicai Li, Meng Zhu and Honge Ren
J. Imaging 2025, 11(10), 337; https://doi.org/10.3390/jimaging11100337 - 27 Sep 2025
Abstract
The Automatic Checkout (ACO) task aims to accurately generate complete shopping lists from checkout images. Severe product occlusions, numerous categories, and cluttered layouts impose high demands on detection models’ robustness and generalization. To address these challenges, we propose the Edge-Embedded Multi-Feature Fusion Network (E2MF2Net), which jointly optimizes synthetic image generation and feature modeling. We introduce the Hierarchical Mask-Guided Composition (HMGC) strategy to select natural product poses based on mask compactness, incorporating geometric priors and occlusion tolerance to produce photorealistic, structurally coherent synthetic images. Mask-structure supervision further enhances boundary and spatial awareness. Architecturally, the Edge-Embedded Enhancement Module (E3) embeds salient structural cues to explicitly capture boundary details and facilitate cross-layer edge propagation, while the Multi-Feature Fusion Module (MFF) integrates multi-scale semantic cues, improving feature discriminability. Experiments on the RPC dataset demonstrate that E2MF2Net outperforms state-of-the-art methods, achieving checkout accuracy (cAcc) of 98.52%, 97.95%, 96.52%, and 97.62% in the Easy, Medium, Hard, and Average modes, respectively. Notably, it improves by 3.63 percentage points in the heavily occluded Hard mode and exhibits strong robustness and adaptability in incremental learning and domain generalization scenarios.
(This article belongs to the Section Computer Vision and Pattern Recognition)

17 pages, 5124 KB  
Article
Self-Attention Diffusion Models for Zero-Shot Biomedical Image Segmentation: Unlocking New Frontiers in Medical Imaging
by Abderrachid Hamrani and Anuradha Godavarty
Bioengineering 2025, 12(10), 1036; https://doi.org/10.3390/bioengineering12101036 - 27 Sep 2025
Abstract
Producing high-quality segmentation masks for medical images is a fundamental challenge in biomedical image analysis. Recent research has investigated the use of supervised learning with large volumes of labeled data to improve segmentation across medical imaging modalities and unsupervised learning with unlabeled data to segment without detailed annotations. However, a significant hurdle remains in constructing a model that can segment diverse medical images in a zero-shot manner without any annotations. In this work, we introduce the attention diffusion zero-shot unsupervised system (ADZUS), a new method that uses self-attention diffusion models to segment biomedical images without needing any prior labels. This method combines self-attention mechanisms, which enable context-aware and detail-sensitive segmentations, with the strengths of a pre-trained diffusion model. The experimental results show that ADZUS outperformed state-of-the-art models on various medical imaging datasets, such as skin lesions, chest X-ray infections, and white blood cell segmentations. The model demonstrated significant improvements by achieving Dice scores ranging from 88.7% to 92.9% and IoU scores from 66.3% to 93.3%. The success of the ADZUS model in zero-shot settings could lower the costs of labeling data and help it adapt to new medical imaging tasks, improving the diagnostic capabilities of AI-based medical imaging technologies.
(This article belongs to the Special Issue Medical Imaging Analysis: Current and Future Trends)
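For reference, the Dice and IoU scores quoted above are computed from binary masks as in this small sketch.

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice coefficient and IoU for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    return dice, iou

# Example: an offset square compared against the ground-truth square.
gt = np.zeros((64, 64)); gt[10:40, 10:40] = 1
pr = np.zeros((64, 64)); pr[14:44, 14:44] = 1
print(tuple(round(v, 3) for v in dice_and_iou(pr, gt)))
```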

40 pages, 9065 KB  
Article
Empirical Evaluation of Invariances in Deep Vision Models
by Konstantinos Keremis, Eleni Vrochidou and George A. Papakostas
J. Imaging 2025, 11(9), 322; https://doi.org/10.3390/jimaging11090322 - 19 Sep 2025
Viewed by 415
Abstract
The ability of deep learning models to maintain consistent performance under image transformations, termed invariances, is critical for reliable deployment across diverse computer vision applications. This study presents a comprehensive empirical evaluation of modern convolutional neural networks (CNNs) and vision transformers (ViTs) concerning four fundamental types of image invariances: blur, noise, rotation, and scale. We analyze a curated selection of thirty models across three common vision tasks, object localization, recognition, and semantic segmentation, using benchmark datasets including COCO, ImageNet, and a custom segmentation dataset. Our experimental protocol introduces controlled perturbations to test model robustness and employs task-specific metrics such as mean Intersection over Union (mIoU) and classification accuracy (Acc) to quantify models’ performance degradation. Results indicate that while ViTs generally outperform CNNs under blur and noise corruption in recognition tasks, both model families exhibit significant vulnerabilities to rotation and extreme scale transformations. Notably, segmentation models demonstrate higher resilience to geometric variations, with SegFormer and Mask2Former emerging as the most robust architectures. These findings challenge prevailing assumptions regarding model robustness and provide actionable insights for designing vision systems capable of withstanding real-world input variability.
(This article belongs to the Special Issue Advances in Machine Learning for Computer Vision Applications)
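A minimal sketch of the kind of controlled-perturbation harness the protocol describes, using standard torchvision transforms; the perturbation strengths and the stand-in classifier are assumptions.

```python
import torch
import torchvision.transforms.functional as TF

def accuracy(model, images, labels):
    with torch.no_grad():
        return (model(images).argmax(dim=1) == labels).float().mean().item()

def invariance_report(model, images, labels):
    """Accuracy under blur, noise, rotation, and scale perturbations vs. clean input."""
    perturbations = {
        "clean":    lambda x: x,
        "blur":     lambda x: TF.gaussian_blur(x, kernel_size=9),
        "noise":    lambda x: (x + 0.1 * torch.randn_like(x)).clamp(0, 1),
        "rotation": lambda x: TF.rotate(x, angle=30.0),
        "scale":    lambda x: TF.resize(TF.center_crop(x, x.shape[-1] // 2),
                                        list(x.shape[-2:])),
    }
    return {name: accuracy(model, fn(images), labels) for name, fn in perturbations.items()}

# Example with a stand-in classifier; swap in a pretrained CNN or ViT in practice.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10)).eval()
images = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 10, (8,))
print(invariance_report(model, images, labels))
```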

23 pages, 5880 KB  
Article
Offline Knowledge Base and Attention-Driven Semantic Communication for Image-Based Applications in ITS Scenarios
by Yan Xiao, Xiumei Fan, Zhixin Xie and Yuanbo Lu
Big Data Cogn. Comput. 2025, 9(9), 240; https://doi.org/10.3390/bdcc9090240 - 18 Sep 2025
Viewed by 210
Abstract
Communications in intelligent transportation systems (ITS) face explosive data growth from applications such as autonomous driving, remote diagnostics, and real-time monitoring, imposing severe challenges on limited spectrum, bandwidth, and latency. Reliable semantic image reconstruction under noisy channel conditions is critical for ITS perception tasks, since noise directly impacts the recognition of both static infrastructure and dynamic obstacles. Unlike traditional approaches that aim to transmit all image data with equal fidelity, effective ITS communication requires prioritizing task-relevant dynamic elements such as vehicles and pedestrians while filtering out largely static background features such as buildings, road signs, and vegetation. To address this, we propose an Offline Knowledge Base and Attention-Driven Semantic Communication (OKBASC) framework for image-based applications in ITS scenarios. The proposed framework performs offline semantic segmentation to build a compact knowledge base of semantic masks, focusing on dynamic task-relevant regions such as vehicles, pedestrians, and traffic signals. At runtime, precomputed masks are adaptively fused with input images via sparse attention to generate semantic-aware representations that selectively preserve essential information while suppressing redundant background. Moreover, we further introduce a Bi-Level Routing Attention (BRA) module that hierarchically refines semantic features through global channel selection and local spatial attention, resulting in improved discriminability and compression efficiency. Experiments on the VOC2012 and nuPlan datasets under varying SNR levels show that OKBASC achieves higher semantic reconstruction quality than baseline methods, both quantitatively via the Structural Similarity Index Metric (SSIM) and qualitatively via visual comparisons. These results highlight the value of OKBASC as a communication-layer enabler that provides reliable perceptual inputs for downstream ITS applications, including cooperative perception, real-time traffic safety, and incident detection.

22 pages, 7476 KB  
Article
Neural Network for Robotic Control and Security in Resistant Settings
by Kubra Kose, Nuri Alperen Kose and Fan Liang
Electronics 2025, 14(18), 3618; https://doi.org/10.3390/electronics14183618 - 12 Sep 2025
Viewed by 391
Abstract
As the industrial automation landscape advances, the integration of sophisticated perception and manipulation technologies into robotic systems has become crucial for enhancing operational efficiency and precision. This paper presents a significant enhancement to a robotic system by incorporating the Mask R-CNN deep learning algorithm and the Intel® RealSense™ D435 camera with the UFactory xArm 5 robotic arm. The Mask R-CNN algorithm, known for its powerful object detection and segmentation capabilities, combined with the depth-sensing features of the D435, enables the robotic system to perform complex tasks with high accuracy. This integration facilitates the detection, manipulation, and precise placement of single objects, achieving 98% detection accuracy, 98% gripping accuracy, and 100% transport accuracy, resulting in a peak manipulation accuracy of 99%. Experimental evaluations demonstrate a 20% improvement in manipulation success rates with the incorporation of depth data, reflecting significant enhancements in operational flexibility and efficiency. Additionally, the system was evaluated under adversarial conditions where structured noise was introduced to test its stability, leading to only a minor reduction in performance. Furthermore, this study delves into cybersecurity concerns pertinent to robotic systems, addressing vulnerabilities such as physical attacks, network breaches, and operating system exploits. The study also addresses specific threats, including sabotage and service disruptions, and emphasizes the importance of implementing comprehensive cybersecurity measures to protect advanced robotic systems in manufacturing environments. To ensure truly robust, secure, and reliable robotic operations in industrial environments, this paper highlights the critical role of international cybersecurity standards and safety standards for the physical protection of industrial robot applications and their human operators.
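A minimal sketch of the perception side of such a pipeline, using torchvision's Mask R-CNN to turn an RGB frame plus a depth map into a grasp target; the RealSense capture and xArm motion commands from the paper are out of scope here, and the helper below is hypothetical.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Load Mask R-CNN; in practice use pretrained weights rather than random ones.
model = maskrcnn_resnet50_fpn(weights=None).eval()

def grasp_point(rgb, depth, score_thresh=0.8):
    """Return the (row, col, depth) of the best detection's mask centroid.

    rgb:   float tensor (3, H, W) in [0, 1]
    depth: float tensor (H, W) in metres (e.g. from a RealSense D435)
    """
    with torch.no_grad():
        out = model([rgb])[0]
    keep = out["scores"] > score_thresh
    if not keep.any():
        return None
    mask = out["masks"][keep][0, 0] > 0.5            # binary mask of the top detection
    ys, xs = torch.nonzero(mask, as_tuple=True)
    if ys.numel() == 0:
        return None
    r, c = int(ys.float().mean()), int(xs.float().mean())
    return r, c, float(depth[r, c])                   # pixel target + range for the arm

print(grasp_point(torch.rand(3, 240, 320), torch.rand(240, 320)))  # likely None untrained
```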

23 pages, 5807 KB  
Article
Numerical Analysis of Mask-Based Phase Reconstruction in Phaseless Spherical Near-Field Antenna Measurements
by Adrien A. Guth, Sakirudeen Abdulsalaam, Holger Rauhut and Dirk Heberling
Sensors 2025, 25(18), 5637; https://doi.org/10.3390/s25185637 - 10 Sep 2025
Viewed by 330
Abstract
Phase-retrieval problems are employed to tackle the challenge of recovering a complex signal from amplitude-only data. In phaseless spherical near-field antenna measurements, the task is to recover the complex coefficients describing the radiation behavior of the antenna under test (AUT) from amplitude near-field measurements. The coefficients refer, for example, to equivalent currents or spherical modes, and from these, the AUT’s far-field characteristic, which is usually of interest, can be obtained. In this article, the concept of a mask-based phase recovery is applied to spherical near-field antenna measurements. First, the theory of the mask approach is described with its mathematical definition. Then, several mask types based on random distributions, ϕ-rotations, or probes are introduced and discussed. Finally, the performances of the different masks are evaluated based on simulations with multiple AUTs and with Wirtinger flow as a phase-retrieval algorithm. The simulation results show that the mask approach can improve the reconstruction error depending on the number of masks, oversampling, and the type of mask.
(This article belongs to the Special Issue Recent Advances in Antenna Measurement Techniques)
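A toy NumPy sketch of a masked-measurement model and the Wirtinger-flow update the abstract refers to; the spectral initialization is omitted and the iterate is started near the solution purely to keep the demo short, so this is an illustration rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 8                                               # signal length, number of masks
x = rng.normal(size=n) + 1j * rng.normal(size=n)           # ground-truth coefficients
masks = np.exp(2j * np.pi * rng.random((m, n)))            # random-phase masks

A  = lambda z: np.fft.fft(masks * z, axis=1)               # masked DFT measurements
AH = lambda v: (np.conj(masks) * np.fft.ifft(v, axis=1) * n).sum(axis=0)  # adjoint

y = np.abs(A(x)) ** 2                                      # amplitude-only data

# Wirtinger flow: gradient descent on (1 / 2mn) * sum (|A z|^2 - y)^2.
z = x + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))   # init near the solution
step = 0.2 / np.linalg.norm(z) ** 2
for _ in range(500):
    Az = A(z)
    grad = AH((np.abs(Az) ** 2 - y) * Az) / (m * n)
    z = z - step * grad

c = np.vdot(x, z)                                          # remove the global phase
err = np.linalg.norm(z - x * c / abs(c)) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2e}")          # should fall well below 0.1
```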

18 pages, 4265 KB  
Article
Hybrid-Recursive-Refinement Network for Camouflaged Object Detection
by Hailong Chen, Xinyi Wang and Haipeng Jin
J. Imaging 2025, 11(9), 299; https://doi.org/10.3390/jimaging11090299 - 2 Sep 2025
Viewed by 447
Abstract
Camouflaged object detection (COD) seeks to precisely detect and delineate objects that are concealed within complex and ambiguous backgrounds. However, due to subtle texture variations and semantic ambiguity, it remains a highly challenging task. Existing methods that rely solely on either convolutional neural network (CNN) or Transformer architectures often suffer from incomplete feature representations and the loss of boundary details. To address the aforementioned challenges, we propose an innovative hybrid architecture that synergistically leverages the strengths of CNNs and Transformers. In particular, we devise a Hybrid Feature Fusion Module (HFFM) that harmonizes hierarchical features extracted from CNN and Transformer pathways, ultimately boosting the representational quality of the combined features. Furthermore, we design a Combined Recursive Decoder (CRD) that adaptively aggregates hierarchical features through recursive pooling/upsampling operators and stage-wise mask-guided refinement, enabling precise structural detail capture across multiple scales. In addition, we propose a Foreground–Background Selection (FBS) module, which alternates attention between foreground objects and background boundary regions, progressively refining object contours while suppressing background interference. Evaluations on four widely used public COD datasets, CHAMELEON, CAMO, COD10K, and NC4K, demonstrate that our method achieves state-of-the-art performance.
(This article belongs to the Section Computer Vision and Pattern Recognition)

13 pages, 856 KB  
Article
Muscular Performance Is Not Significantly Altered Throughout Phases of the Menstrual Cycle or a Hormonal Contraceptive Cycle in Collegiate Softball Players
by Shelby L. Houchlei, Sarah N. Wood, Sarah E. Peters, Shane K. Miller, Taylor K. Dinyer-McNeely and Ryan A. Gordon
Muscles 2025, 4(3), 37; https://doi.org/10.3390/muscles4030037 - 2 Sep 2025
Viewed by 427
Abstract
Potential variability in neuromuscular function or physiology throughout the menstrual cycle (MC) or a cycle of using hormonal contraceptives may affect muscular performance variables that are relevant to exercise, training, or sport. Collegiate softball players (n = 11) who reported using or not using hormonal contraceptives completed three testing sessions during their respective early follicular, ovulatory, and mid luteal phases of the MC or the early, mid, or late phases of their hormonal contraceptive cycle (HCC). Each testing session included a series of performance tests: countermovement jump on a force plate, 15-yard sprints, velocity assessment of the back squat performed at 70% of one-repetition maximum (1-RM), one-repetition maximum bench press, and 70% 1-RM repetitions to failure testing on the bench press. No significant differences were found for any of the performance tests between the three phases, though performance on most tasks peaked during the mid luteal/late phases of the MC/HCC. It is important to note that this study was underpowered, which could have masked real differences between phases. Collectively, muscular performance was not significantly different throughout phases of the MC or HCC in these athletes, indicating that potential hormonal variability throughout the MC or HCC did not seem to have an effect on performance outcomes in this study.
