Search Results (1,854)

Search Parameters:
Keywords = enhanced visual quality

22 pages, 2388 KB  
Article
MAF-GAN: A Multi-Attention Fusion Generative Adversarial Network for Remote Sensing Image Super-Resolution
by Zhaohe Wang, Hai Tan, Zhongwu Wang, Jinlong Ci and Haoran Zhai
Remote Sens. 2025, 17(24), 3959; https://doi.org/10.3390/rs17243959 (registering DOI) - 7 Dec 2025
Abstract
Existing Generative Adversarial Networks (GANs) frequently yield remote sensing images with blurred fine details, distorted textures, and compromised spatial structures when applied to super-resolution (SR) tasks. To address these limitations, this study proposes a Multi-Attention Fusion Generative Adversarial Network (MAF-GAN). The generator of MAF-GAN is built on a U-Net backbone that incorporates Oriented Convolutions (OrientedConv) to enhance the extraction of directional features and textures, while a novel co-calibration mechanism—combining channel, spatial, gating, and spectral attention—is embedded in the encoding path and skip connections, supplemented by an adaptive weighting strategy for effective multi-scale feature fusion. A composite loss function is further designed that integrates adversarial, perceptual, hybrid pixel, total variation, and feature consistency losses. Extensive experiments on the GF7-SR4×-MSD dataset demonstrate that MAF-GAN achieves state-of-the-art performance, with a Peak Signal-to-Noise Ratio (PSNR) of 27.14 dB, Structural Similarity Index (SSIM) of 0.7206, Learned Perceptual Image Patch Similarity (LPIPS) of 0.1017, and Spectral Angle Mapper (SAM) of 1.0871, significantly outperforming mainstream models including SRGAN, ESRGAN, SwinIR, HAT, and ESatSR and exceeding traditional interpolation methods (e.g., Bicubic) by a substantial margin. Notably, MAF-GAN maintains an excellent balance between reconstruction quality and inference efficiency. Ablation studies validate the individual contribution of each proposed component, and the method generates super-resolution remote sensing images with more natural visual perception, clearer spatial structures, and superior spectral fidelity, offering a reliable technical solution for high-precision remote sensing applications. Full article
(This article belongs to the Section Environmental Remote Sensing)
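The composite loss described above lends itself to a compact implementation. The following is a minimal PyTorch sketch, not the authors' code: the loss weights, the shared feature extractor behind the perceptual and feature-consistency terms, and the hybrid pixel mix are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code) of a composite SR loss combining
# adversarial, perceptual, hybrid pixel, total-variation and feature-consistency terms.
import torch
import torch.nn.functional as F

def total_variation(img):
    # Anisotropic TV: mean absolute difference between neighbouring pixels (NCHW input).
    tv_h = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    tv_w = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return tv_h + tv_w

def composite_sr_loss(sr, hr, disc_fake_logits, feat_sr, feat_hr,
                      w_adv=1e-3, w_perc=1.0, w_pix=1.0, w_tv=1e-5, w_feat=0.1):
    # Adversarial term: the generator wants the discriminator to predict "real".
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Perceptual and feature-consistency terms use features from a fixed network
    # (e.g. a VGG backbone); here both are plain distances on the same feature pair.
    perc = F.l1_loss(feat_sr, feat_hr)
    feat = F.mse_loss(feat_sr, feat_hr)
    # Hybrid pixel loss: mix of L1 and MSE on the images themselves.
    pix = 0.5 * F.l1_loss(sr, hr) + 0.5 * F.mse_loss(sr, hr)
    return (w_adv * adv + w_perc * perc + w_pix * pix
            + w_tv * total_variation(sr) + w_feat * feat)
```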
21 pages, 17206 KB  
Article
Mean-Curvature-Regularized Deep Image Prior with Soft Attention for Image Denoising and Deblurring
by Muhammad Israr, Shahbaz Ahmad, Muhammad Nabeel Asghar and Saad Arif
Mathematics 2025, 13(24), 3906; https://doi.org/10.3390/math13243906 (registering DOI) - 6 Dec 2025
Abstract
Sparsity-driven regularization has undergone significant development in single-image restoration, particularly with the transition from handcrafted priors to trainable deep architectures. In this work, a geometric prior-enhanced deep image prior (DIP) framework, termed DIP-MC, is proposed that integrates mean curvature (MC) regularization to promote natural smoothness and structural coherence in reconstructed images. To strengthen the representational capacity of DIP, a self-attention module is incorporated between the encoder and decoder, enabling the network to capture long-range dependencies and preserve fine-scale textures. In contrast to total variation (TV), which frequently produces piecewise-constant artifacts and staircasing, MC regularization leverages curvature information, resulting in smoother transitions while maintaining sharp structural boundaries. DIP-MC is evaluated on standard grayscale and color image denoising and deblurring tasks using benchmark datasets including BSD68, Classic5, LIVE1, Set5, Set12, Set14, and the Levin dataset. Quantitative performance is assessed using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics. Experimental results demonstrate that DIP-MC consistently outperformed the DIP-TV baseline with 26.49 PSNR and 0.9 SSIM. It achieved competitive performance relative to BM3D and EPLL models with 28.6 PSNR and 0.87 SSIM while producing visually more natural reconstructions with improved detail fidelity. Furthermore, the learning dynamics of DIP-MC are analyzed by examining update-cost behavior during optimization, visualizing the best-performing network weights, and monitoring PSNR and SSIM progression across training epochs. These evaluations indicate that DIP-MC exhibits superior stability and convergence characteristics. Overall, DIP-MC establishes itself as a robust, scalable, and geometrically informed framework for high-quality single-image restoration. Full article
(This article belongs to the Special Issue Mathematical Methods for Image Processing and Understanding)
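For readers unfamiliar with the regularizers being compared, the following is a hedged sketch of the kind of objective the abstract describes, with notation assumed rather than taken from the paper: a deep image prior fitted to the observation plus a total-variation or mean-curvature penalty.

```latex
% Sketch of a regularized DIP objective (notation assumed): f_theta is the generator
% network, z a fixed random input, A the degradation operator (identity for denoising,
% blur for deblurring), y the observed image, and lambda a regularization weight.
\begin{aligned}
\min_{\theta}\; & \| A\, f_{\theta}(z) - y \|_2^2 \;+\; \lambda\, R\!\bigl(f_{\theta}(z)\bigr),\\
R_{\mathrm{TV}}(u)  &= \int_{\Omega} |\nabla u| \, dx
  && \text{(total variation; tends to staircase)},\\
R_{\mathrm{MC}}(u)  &= \int_{\Omega} \Bigl|\, \nabla \!\cdot\! \frac{\nabla u}{\sqrt{1 + |\nabla u|^{2}}} \,\Bigr| \, dx
  && \text{(mean curvature of the image surface)}.
\end{aligned}
```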
31 pages, 11595 KB  
Article
PCB-Faster-RCNN: An Improved Object Detection Algorithm for PCB Surface Defects
by Zhige He, Yuezhou Wu, Yang Lv and Yuanqing He
Appl. Sci. 2025, 15(24), 12881; https://doi.org/10.3390/app152412881 - 5 Dec 2025
Abstract
As a fundamental and indispensable component of modern electronic devices, the printed circuit board (PCB) has a complex structure and highly integrated functions, with its manufacturing quality directly affecting the stability and reliability of electronic products. However, during large-scale automated PCB production, its surfaces are prone to various defects and imperfections due to uncontrollable factors, such as diverse manufacturing processes, stringent machining precision requirements, and complex production environments, which not only compromise product functionality but also pose potential safety hazards. At present, PCB defect detection in industry still predominantly relies on manual visual inspection, the efficiency and accuracy of which fall short of the automation and intelligence demands in modern electronics manufacturing. To address this issue, in this paper, we have made improvements based on the classical Faster-RCNN object detection framework. Firstly, ResNet-101 is employed to replace the conventional VGG-16 backbone, thereby enhancing the ability to perceive small objects and complex texture features. Then, we extract features from images by using deformable convolution in the backbone network to improve the model’s adaptive modeling capability for deformed objects and irregular defect regions. Finally, the Convolutional Block Attention Module is incorporated into the backbone, leveraging joint spatial and channel attention mechanisms to improve the effectiveness and discriminative power of feature representations. The experimental results demonstrate that the improved model achieves a 4.5% increase in mean average precision compared with the original Faster-RCNN. Moreover, the proposed method exhibits superior detection accuracy, robustness, and adaptability compared with mainstream object detection models, indicating strong potential for engineering applications and industrial deployment. Full article
(This article belongs to the Special Issue Deep Learning Techniques for Object Detection and Tracking)
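The Convolutional Block Attention Module mentioned above applies channel and spatial attention in sequence. A minimal PyTorch sketch of such a block follows; the reduction ratio, kernel size, and placement in the backbone are assumptions for illustration, not the paper's exact configuration.

```python
# Hedged sketch of a CBAM-style block: channel attention followed by spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP applied to average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        # Spatial attention: convolution over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)      # channel attention
        sa = torch.cat([x.mean(dim=1, keepdim=True),
                        x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sa))            # spatial attention
```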
12 pages, 693 KB  
Article
Fluorescence-Guided Thoracoscopic Surgery Using Indocyanine Green (ICG) in Canine Cadavers: A Descriptive Evaluation of Video-Assisted (VATS) and Robot-Assisted (RATS) Approaches
by Francisco M. Sánchez-Margallo, Lucía Salazar-Carrasco, Manuel J. Pérez-Salazar and Juan A. Sánchez-Margallo
Animals 2025, 15(24), 3519; https://doi.org/10.3390/ani15243519 - 5 Dec 2025
Abstract
Precise intraoperative identification of the canine thoracic duct remains challenging due to anatomical variability and limited visualization. This exploratory cadaveric feasibility study aimed to describe the technical applicability of fluorescence-guided thoracic duct mapping using video-assisted thoracoscopy (VATS) and robot-assisted thoracoscopy (Versius™ system). Four adult Beagle cadavers underwent bilateral thoracoscopic exploration after intranodal injection of indocyanine green (ICG, Verdye®, 0.05 mg/kg; 0.5 mL). Near-infrared (NIR) fluorescence imaging enabled real-time visualization of the thoracic duct and its branches. Fluorescence quality was quantitatively characterized using signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and contrast resolution (CR) calculated from standardized image frames. Both approaches achieved successful duct identification in all cadavers. VATS provided brighter overall fluorescence, whereas the robotic-assisted approach offered stable imaging, enhanced instrument dexterity, and improved duct-to-background discrimination. These findings confirm the feasibility of fluorescence-guided thoracic duct identification using both minimally invasive modalities in canine cadavers. The standardized assessment of optical parameters proposed here may support future in vivo studies to optimize imaging protocols and evaluate the clinical impact of fluorescence-guided thoracic duct surgery in dogs. Full article
(This article belongs to the Section Companion Animals)
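The optical parameters reported above are typically computed from region-of-interest statistics. The definitions below are the commonly used forms, given here for orientation; the paper's exact formulations may differ.

```latex
% Common definitions (assumed for illustration): mu_S and mu_B are the mean pixel
% intensities of the fluorescent duct and background ROIs; sigma_B is the background
% standard deviation.
\begin{aligned}
\mathrm{SNR} &= \frac{\mu_{S}}{\sigma_{B}}, &
\mathrm{CNR} &= \frac{\mu_{S}-\mu_{B}}{\sigma_{B}}, &
\mathrm{CR}  &= \frac{\mu_{S}-\mu_{B}}{\mu_{S}+\mu_{B}}.
\end{aligned}
```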
28 pages, 3650 KB  
Article
Gastrointestinal Lesion Detection Using Ensemble Deep Learning Through Global Contextual Information
by Vikrant Aadiwal, Vishesh Tanwar, Bhisham Sharma and Dhirendra Prasad Yadav
Bioengineering 2025, 12(12), 1329; https://doi.org/10.3390/bioengineering12121329 - 5 Dec 2025
Abstract
The presence of subtle mucosal abnormalities makes small bowel Crohn’s disease (SBCD) and other gastrointestinal lesions difficult to detect, as these features are often very subtle and can closely resemble other disorders. Although the Kvasir and Esophageal Endoscopy datasets offer high-quality visual representations of various parts of the GI tract, their manual interpretation and analysis by clinicians remain labor-intensive, time-consuming, and prone to subjective variability. To address this, we propose a generalizable ensemble deep learning framework for gastrointestinal lesion detection, capable of identifying pathological patterns such as ulcers, polyps, and esophagitis that visually resemble SBCD-associated abnormalities. Further, the classical convolutional neural network (CNN) extracts shallow high-dimensional features; due to this, it may miss the edges and complex patterns of the gastrointestinal lesions. To mitigate these limitations, this study introduces a deep learning ensemble framework that combines the strengths of EfficientNetB5, MobileNetV2, and multi-head self-attention (MHSA). EfficientNetB5 extracts detailed hierarchical features that help distinguish fine-grained mucosal structures, while MobileNetV2 enhances spatial representation with low computational overhead. The MHSA module further improves the model’s global correlation of the spatial features. We evaluated the model on two publicly available DBE datasets and compared the results with four state-of-the-art methods. Our model achieved classification accuracies of 99.25% and 98.86% on the Kvasir and Kaither datasets. Full article
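A rough sketch of how such an ensemble might be wired is given below, purely as an illustration under assumed dimensions and fusion choices (it is not the authors' architecture): two backbones produce feature maps that are projected to a common width, flattened into tokens, fused, and passed through multi-head self-attention before classification.

```python
# Hedged sketch of a two-backbone ensemble with multi-head self-attention fusion.
import torch
import torch.nn as nn
import torchvision.models as tvm

class EnsembleMHSA(nn.Module):
    def __init__(self, num_classes, dim=256, heads=8):
        super().__init__()
        self.backbone_a = tvm.efficientnet_b5(weights=None).features   # detail-oriented branch
        self.backbone_b = tvm.mobilenet_v2(weights=None).features      # lightweight spatial branch
        self.proj_a = nn.Conv2d(2048, dim, 1)
        self.proj_b = nn.Conv2d(1280, dim, 1)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def tokens(self, fmap, proj):
        f = proj(fmap)                          # B x dim x H x W
        return f.flatten(2).transpose(1, 2)     # B x (H*W) x dim

    def forward(self, x):
        t = torch.cat([self.tokens(self.backbone_a(x), self.proj_a),
                       self.tokens(self.backbone_b(x), self.proj_b)], dim=1)
        t, _ = self.attn(t, t, t)               # global correlation across both branches
        return self.head(t.mean(dim=1))         # average-pool tokens, then classify
```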
15 pages, 4297 KB  
Article
Camera-in-the-Loop Realization of Direct Search with Random Trajectory Method for Binary-Phase Computer-Generated Hologram Optimization
by Evgenii Yu. Zlokazov, Rostislav S. Starikov, Pavel A. Cheremkhin and Timur Z. Minikhanov
J. Imaging 2025, 11(12), 434; https://doi.org/10.3390/jimaging11120434 - 5 Dec 2025
Abstract
High-speed realization of computer-generated holograms (CGHs) is a crucial problem in the field of modern 3D visualization and optical image processing system development. Binary CGHs can be realized using high-resolution, high-speed spatial light modulators such as ferroelectric liquid crystals on silicon devices or digital micro-mirror devices providing the high throughput of optoelectronic systems. However, the quality of holographic images restored by binary CGHs often suffers from distortions, background noise, and speckle noise caused by the limitations and imperfections of optical system components. The present manuscript introduces a method based on the optimization of CGH models directly in the optical system with a camera-in-the-loop configuration using effective direct search with a random trajectory algorithm. The method was experimentally verified. The results demonstrate a significant enhancement in the quality of the holographic images optically restored by binary-phase CGH models optimized through this method compared to purely digitally generated models. Full article
(This article belongs to the Section Mixed, Augmented and Virtual Reality)
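Direct search with a random trajectory amounts to visiting hologram pixels in random order, flipping each binary value, and keeping the flip only if the optically captured reconstruction improves. The sketch below illustrates this camera-in-the-loop idea with placeholder capture and quality functions; it is an assumption-laden illustration, not the authors' implementation.

```python
# Hedged sketch of camera-in-the-loop direct search over a random pixel trajectory.
# `hologram` is a 0/1 integer array; `capture_reconstruction` grabs a camera frame and
# `quality` scores it against the target image (both are placeholders).
import numpy as np

def optimize_binary_cgh(hologram, capture_reconstruction, quality, n_passes=3, rng=None):
    rng = rng or np.random.default_rng()
    best = quality(capture_reconstruction(hologram))
    for _ in range(n_passes):
        # Visit every pixel once per pass, in a freshly randomized order.
        for idx in rng.permutation(hologram.size):
            i, j = np.unravel_index(idx, hologram.shape)
            hologram[i, j] ^= 1                     # flip one binary-phase pixel
            score = quality(capture_reconstruction(hologram))
            if score > best:
                best = score                        # keep the flip
            else:
                hologram[i, j] ^= 1                 # revert the flip
    return hologram, best
```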
17 pages, 2859 KB  
Article
Investigation of Processing Conditions and Product Geometry in Out-Mold Decoration and Their Effects on Film Adhesion and Deformation
by Hui-Li Chen, Po-Wei Huang, Sheng-Hsun Hsu and Jhong-Sian Wu
Polymers 2025, 17(24), 3239; https://doi.org/10.3390/polym17243239 - 5 Dec 2025
Abstract
The growing demand for high-quality decorative polymer surfaces has increased interest in Out Mold Decoration (OMD), yet the combined influence of processing conditions and product geometry on film adhesion and deformation remains insufficiently defined. This study establishes an integrated framework that connects OMD process parameters with geometry-dependent deformation behavior using polycarbonate films printed with an ink grid. Adhesion and surface quality were evaluated using 2.5D specimens, while 3D models with varied fillet radii, slopes, and heights enabled quantitative assessment of grid-spacing evolution and thickness distribution. Results show that preheating smooths the film without improving adhesion, whereas increasing the forming environment temperature enhances both bonding and surface quality within the material’s thermal tolerance. Vacuum pressure strengthens film–substrate contact but requires moderation to prevent overstretching. An optimized condition of 100 °C preheating, 90 °C forming temperature, and 2.5 kg vacuum pressure provides a balanced performance. Geometric factors exert strong control over deformation, with small radii, steep slopes, and tall features producing greater strain and nonuniform thinning. These findings establish practical processing windows and geometry guidelines for achieving reliable OMD components that integrate high visual quality with stable adhesion performance. Full article
(This article belongs to the Special Issue Advances in Polymer Processing Technologies: Injection Molding)
26 pages, 3269 KB  
Article
DiagNeXt: A Two-Stage Attention-Guided ConvNeXt Framework for Kidney Pathology Segmentation and Classification
by Hilal Tekin, Şafak Kılıç and Yahya Doğan
J. Imaging 2025, 11(12), 433; https://doi.org/10.3390/jimaging11120433 - 4 Dec 2025
Abstract
Accurate segmentation and classification of kidney pathologies from medical images remain a major challenge in computer-aided diagnosis due to complex morphological variations, small lesion sizes, and severe class imbalance. This study introduces DiagNeXt, a novel two-stage deep learning framework designed to overcome these challenges through an integrated use of attention-enhanced ConvNeXt architectures for both segmentation and classification. In the first stage, DiagNeXt-Seg employs a U-Net-based design incorporating Enhanced Convolutional Blocks (ECBs) with spatial attention gates and Atrous Spatial Pyramid Pooling (ASPP) to achieve precise multi-class kidney segmentation. In the second stage, DiagNeXt-Cls utilizes the segmented regions of interest (ROIs) for pathology classification through a hierarchical multi-resolution strategy enhanced by Context-Aware Feature Fusion (CAFF) and Evidential Deep Learning (EDL) for uncertainty estimation. The main contributions of this work include: (1) enhanced ConvNeXt blocks with large-kernel depthwise convolutions optimized for 3D medical imaging, (2) a boundary-aware compound loss combining Dice, cross-entropy, focal, and distance transform terms to improve segmentation precision, (3) attention-guided skip connections preserving fine-grained spatial details, (4) hierarchical multi-scale feature modeling for robust pathology recognition, and (5) a confidence-modulated classification approach integrating segmentation quality metrics for reliable decision-making. Extensive experiments on a large kidney CT dataset comprising 3847 patients demonstrate that DiagNeXt achieves 98.9% classification accuracy, outperforming state-of-the-art approaches by 6.8%. The framework attains near-perfect AUC scores across all pathology classes (Normal: 1.000, Tumor: 1.000, Cyst: 0.999, Stone: 0.994) while offering clinically interpretable uncertainty maps and attention visualizations. The superior diagnostic accuracy, computational efficiency (6.2× faster inference), and interpretability of DiagNeXt make it a strong candidate for real-world integration into clinical kidney disease diagnosis and treatment planning systems. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
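The boundary-aware compound loss described in contribution (2) could be assembled roughly as follows. This is a hedged sketch only: the term weights and the exact distance-transform formulation are assumptions, not the paper's definitions.

```python
# Hedged sketch of a compound segmentation loss: Dice + cross-entropy + focal + a
# distance-transform/boundary term. `dist_maps` holds precomputed signed distance maps.
import torch

def compound_seg_loss(logits, target_onehot, dist_maps,
                      w_dice=1.0, w_ce=1.0, w_focal=0.5, w_bd=0.5, gamma=2.0, eps=1e-6):
    probs = torch.softmax(logits, dim=1)
    # Dice: overlap between predicted probabilities and one-hot ground truth.
    inter = (probs * target_onehot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + target_onehot.sum(dim=(2, 3))
    dice = 1.0 - ((2 * inter + eps) / (union + eps)).mean()
    # Cross-entropy and focal terms on the same prediction.
    ce_map = -(target_onehot * torch.log(probs + eps)).sum(dim=1)
    ce = ce_map.mean()
    pt = (probs * target_onehot).sum(dim=1)
    focal = ((1 - pt) ** gamma * ce_map).mean()
    # Boundary term: probability mass weighted by distance from the true boundary.
    boundary = (probs * dist_maps).mean()
    return w_dice * dice + w_ce * ce + w_focal * focal + w_bd * boundary
```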
23 pages, 3643 KB  
Article
Daylighting Strategies for Low-Rise Residential Buildings Through Analysis of Architectural Design Parameters
by Kamaraj Kalaimathy, Sudha Gopalakrishnan, Radhakrishnan Shanthi Priya, Chandrasekaran Selvam and Ramalingam Senthil
Architecture 2025, 5(4), 125; https://doi.org/10.3390/architecture5040125 - 4 Dec 2025
Abstract
Daylighting is essential in residential building design because it influences energy efficiency and visual comfort while also supporting occupants’ health and overall well-being. Adequate natural light exposure aids circadian regulation and psychological restoration and enhances indoor environmental quality. This study examines how the window-to-wall ratio, skylight-to-roof ratio, and building orientation in a selected low-rise residential building can be optimized to ensure sufficient daylight in warm-humid climates. Using on-site illuminance measurements and climate-based simulations, the daylight performance is evaluated using metrics such as useful daylight illuminance, spatial daylight autonomy, and annual sunlight exposure. Results indicated that a 5% skylight-to-roof ratio (such as a 1:2 skylight setup), combined with a 22% window-to-wall ratio and glazing with a visible transmittance of 0.45, provides a balanced improvement in daylight availability for the chosen case study. The selected configuration optimizes spatial daylight autonomy and useful daylight illuminance while keeping annual sunlight exposure within recommended levels based on the surrounding building landscape. The findings emphasize the importance of tailoring daylighting strategies to site-specific orientation, glazing options, and design constraints. The approach and insights from this case study can be beneficial for incorporating into similar low-rise residential buildings in warm-humid contexts. Incorporating daylight-responsive design into urban and architectural planning supports several United Nations Sustainable Development Goals (SDG 3, 11, and 13). Full article
(This article belongs to the Special Issue Sustainable Built Environments and Human Wellbeing, 2nd Edition)
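For reference, the climate-based metrics named above are commonly defined as below; standard LM-83-style thresholds are assumed here for illustration, and the study's exact thresholds may differ.

```latex
% E_{p,t} is the illuminance at sensor point p and occupied hour t, over P points and T hours.
\begin{aligned}
\mathrm{sDA}_{300/50\%} &= \frac{1}{P}\sum_{p=1}^{P}
  \mathbf{1}\!\left[\tfrac{1}{T}\sum_{t=1}^{T}\mathbf{1}\bigl[E_{p,t}\ge 300\ \mathrm{lux}\bigr] \ge 0.5\right],\\
\mathrm{UDI}_{100\text{--}2000} &= \frac{1}{P\,T}\sum_{p,t}
  \mathbf{1}\bigl[100 \le E_{p,t} \le 2000\ \mathrm{lux}\bigr],\\
\mathrm{ASE}_{1000,250\mathrm{h}} &= \frac{1}{P}\sum_{p=1}^{P}
  \mathbf{1}\!\left[\sum_{t=1}^{T}\mathbf{1}\bigl[E^{\mathrm{direct}}_{p,t} > 1000\ \mathrm{lux}\bigr] > 250\right].
\end{aligned}
```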
23 pages, 6136 KB  
Article
A Bidirectional Digital Twin System for Adaptive Manufacturing
by Klaas Maximilian Heide, Berend Denkena and Martin Winkler
J. Manuf. Mater. Process. 2025, 9(12), 400; https://doi.org/10.3390/jmmp9120400 - 4 Dec 2025
Abstract
Digital Twin Systems (DTSs) are increasingly recognized as enablers of data-driven manufacturing, yet many implementations remain limited to monitoring or visualization without closed-loop control. This study presents a fully integrated DTS for CNC milling that emphasizes real-time bidirectional coupling between a real machine and a virtual counterpart as well as the use of machine-native signals. The architecture comprises a physical space defined by a five-axis machining center, a virtual space implemented via a dexel-based technological simulation environment, and a digital thread for continuous data exchange between those. A full-factorial simulation study investigated the influence of dexel density and cycle time on engagement accuracy and runtime, yielding an optimal configuration that minimizes discretization errors while maintaining real-time feasibility. Latency measurements confirmed a mean response time of 34.2 ms, supporting process-parallel decision-making. Two application scenarios in orthopedic implant milling validated the DTS: process force monitoring enabled an automatic machine halt within 28 ms of anomaly detection, while adaptive feed rate control reduced predicted form error by 20 µm. These findings demonstrate that the DTS extends beyond passive monitoring by actively intervening in machining processes; enhancing process reliability and part quality; and establishing a foundation for scalable, interpretable digital twins in regulated manufacturing. Full article
(This article belongs to the Special Issue Digital Twinning for Manufacturing)
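The closed-loop behavior described above (halting on force anomalies, adjusting the feed rate otherwise) can be pictured as a simple supervisory loop. The sketch below is illustrative only: every function it calls is a placeholder, and the threshold and cycle time are assumptions rather than values from the paper.

```python
# Hedged sketch of a digital-twin supervisory loop: compare measured machine-native
# force signals with simulated expectations, halt on anomaly, otherwise adjust feed rate.
import time

def monitoring_loop(read_spindle_force, simulated_force, send_halt, send_feed_override,
                    halt_threshold=200.0, cycle_s=0.01):
    t = 0.0
    while True:
        deviation = abs(read_spindle_force() - simulated_force(t))
        if deviation > halt_threshold:
            send_halt()                      # anomaly: stop the machine immediately
            break
        # Scale the feed rate down proportionally to the observed deviation.
        send_feed_override(max(0.5, 1.0 - deviation / halt_threshold))
        time.sleep(cycle_s)
        t += cycle_s
```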
22 pages, 3863 KB  
Article
Enhancing Pedestrian Satisfaction: A Quantitative Study of Visual Perception Elements
by Yi Tian, Dong Sun, Mei Lyu and Shujiao Wang
Buildings 2025, 15(23), 4389; https://doi.org/10.3390/buildings15234389 - 4 Dec 2025
Abstract
The urban street environment strongly influences pedestrian satisfaction, with visual perception elements playing a pivotal role. Historic districts serve not only as carriers of urban culture but also as key tourism resources, where spatial quality directly shapes visitor experience and city image. This study takes the Shenyang Fangcheng historic district as a case, combining field surveys and questionnaires to gather pedestrian satisfaction data, while applying semantic segmentation of street imagery to quantify visual elements. Using correlation analysis and multiple regression models, the research systematically reveals relationships and mechanisms linking visual elements with pedestrian satisfaction. Results show that an increase in landmark buildings and landscape features enhances legibility and attractiveness; optimizing spatial configuration improves openness and walking comfort; and reducing vehicle presence strengthens perceived safety and overall experiential quality. By integrating subjective perceptions with objective visual indicators, this study offers empirical evidence and methodological innovation to support enhancement of walkability and promote human-centered street design in historic districts. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
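The quantification step pairs segmentation-derived element proportions with regression against satisfaction scores. A minimal sketch of that pipeline is shown below; the class IDs and the plain linear model are assumptions for illustration, not the study's exact setup.

```python
# Hedged sketch: per-image visual-element proportions from semantic-segmentation label
# maps, regressed against pedestrian-satisfaction scores.
import numpy as np
from sklearn.linear_model import LinearRegression

def element_proportions(label_map, class_ids):
    # Fraction of pixels per visual element (e.g. building, greenery, sky, vehicle).
    total = label_map.size
    return np.array([(label_map == c).sum() / total for c in class_ids])

def fit_satisfaction_model(label_maps, satisfaction_scores, class_ids=(1, 2, 3, 4)):
    X = np.stack([element_proportions(m, class_ids) for m in label_maps])
    model = LinearRegression().fit(X, np.asarray(satisfaction_scores))
    return model.coef_, model.intercept_   # per-element contribution to satisfaction
```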
19 pages, 2788 KB  
Article
Universal Image Segmentation with Arbitrary Granularity for Efficient Pest Monitoring
by L. Minh Dang, Sufyan Danish, Muhammad Fayaz, Asma Khan, Gul E. Arzu, Lilia Tightiz, Hyoung-Kyu Song and Hyeonjoon Moon
Horticulturae 2025, 11(12), 1462; https://doi.org/10.3390/horticulturae11121462 - 3 Dec 2025
Abstract
Accurate and timely pest monitoring is essential for sustainable agriculture and effective crop protection. While recent deep learning-based pest recognition systems have significantly improved accuracy, they are typically trained for fixed label sets and narrowly defined tasks. In this paper, we present RefPestSeg, a universal, language-promptable segmentation model specifically designed for pest monitoring. RefPestSeg can segment targets at any semantic level, such as species, genus, life stage, or damage type, conditioned on flexible natural language instructions. The model adopts a symmetric architecture with self-attention and cross-attention mechanisms to tightly align visual features with language embeddings in a unified feature space. To further enhance performance in challenging field conditions, we integrate an optimized super-resolution module to improve image quality and employ diverse data augmentation strategies to enrich the training distribution. A lightweight postprocessing step refines segmentation masks by suppressing highly overlapping regions and removing noise blobs introduced by cluttered backgrounds. Extensive experiments on a challenging pest dataset show that RefPestSeg achieves an Intersection over Union (IoU) of 69.08 while maintaining robustness in real-world scenarios. By enabling language-guided pest segmentation, RefPestSeg advances toward more intelligent, adaptable monitoring systems that can respond to real-time agricultural demands without costly model retraining. Full article
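The lightweight postprocessing step can be pictured as follows: drop tiny noise blobs and suppress masks that heavily overlap a higher-scoring mask. The thresholds and scoring in this sketch are illustrative assumptions, not the authors' values.

```python
# Hedged sketch of mask postprocessing: remove small blobs, suppress near-duplicate masks.
import numpy as np

def mask_iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def postprocess(masks, scores, min_area=100, iou_thresh=0.8):
    keep = []
    for idx in np.argsort(scores)[::-1]:             # best-scoring masks first
        m = masks[idx]
        if m.sum() < min_area:                       # drop tiny noise blobs
            continue
        if all(mask_iou(m, masks[k]) < iou_thresh for k in keep):
            keep.append(idx)                         # suppress highly overlapping masks
    return [masks[k] for k in keep]
```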
21 pages, 866 KB  
Review
Using VR and BCI to Improve Communication Between a Cyber-Physical System and an Operator in the Industrial Internet of Things
by Adrianna Piszcz, Izabela Rojek, Nataša Náprstková and Dariusz Mikołajewski
Appl. Sci. 2025, 15(23), 12805; https://doi.org/10.3390/app152312805 - 3 Dec 2025
Abstract
The Industry 5.0 paradigm places humans and the environment at the center. New communication methods based on virtual reality (VR) and brain–computer interfaces (BCIs) can improve system–operator interaction in multimedia communications, providing immersive environments where operators can more intuitively manage complex systems. The study was conducted through a systematic literature review combined with bibliometric and thematic analyses to map the current landscape of VR-BCI communication frameworks in IIoT environments. The methodology employed included structured resource selection, comparative assessment of interaction modalities, and cross-domain synthesis to identify patterns, gaps, and emerging technology trends. Key challenges identified include reliable signal processing, real-time integration of neural data with immersive interfaces, and the scalability of VR-BCI solutions in industrial applications. The study concludes by outlining future research directions focused on hybrid multimodal interfaces, adaptive cognition-based automation, and standardized protocols for evaluating human–cyber-physical system communication. VR interfaces enable operators to visualize and interact with network data in 3D, improving their monitoring and troubleshooting in real time. By integrating BCI technology, operators can control systems using neural signals, reducing the need for physical input devices and streamlining operation (including touchless technology). BCI-based protocols enable touchless control, which can be particularly useful in situations where operators must multitask, bypassing traditional input methods such as keyboards or mice. VR environments can simulate network conditions, allowing operators to practice and refine their responses to potential problems in a controlled, safe environment. Combining VR with BCI allows for the creation of adaptive interfaces that respond to the operator’s cognitive load, adjusting the complexity of the displayed information based on real-time neural feedback. This integration can lead to more personalized and effective training programs for operators, enhancing their skills and decision-making. VR and BCI-based solutions also have the potential to reduce operator fatigue by enabling more natural and intuitive interaction with complex systems. The use of these advanced technologies in multimedia telecommunications can translate into more efficient, precise, and user-friendly system management, ultimately improving service quality. Full article
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)
16 pages, 3361 KB  
Article
PRF Membranes Enhance Postoperative Recovery After Periapical Surgery: A Single-Blind Randomized Pilot Trial Using 3D Imaging
by Martin Major, Melinda Polyák, Tamás Würsching, Gábor Kammerhofer, Éva Kocsis, Zsolt Németh and György Szabó
Oral 2025, 5(4), 98; https://doi.org/10.3390/oral5040098 (registering DOI) - 3 Dec 2025
Abstract
Background: Periapical surgery is indicated for persistent periapical lesions that do not respond to conventional endodontic therapy, yet postoperative recovery is often hindered by pain, swelling, and delayed healing. Platelet-rich fibrin (PRF) membranes are autologous biomaterials with regenerative potential, capable of modulating inflammation and promoting tissue repair. Methods: This preliminary randomized controlled trial evaluated the effectiveness of PRF membranes in improving postoperative outcomes—specifically pain, swelling, and quality of life—after apicoectomy. Twenty patients requiring periapical surgery were randomly allocated to a PRF group (n = 10) or a control group (n = 10). In the PRF group, autologous PRF membranes were applied over the resected root-end and into the osteotomy cavity before flap closure. In the control group, no PRF membranes or any additional biomaterial were applied, apart from the standard root-end filling material (MTA), which was identically used in both groups as part of the routine apicoectomy protocol. All patients were blinded to allocation, and outcomes were assessed by an independent blinded evaluator. Facial swelling was quantified by 3D facial scanning, pain was recorded daily using a visual analog scale (VAS), and quality of life was evaluated with the PROMIS-29+2 Profile. Results: The PRF group showed significantly reduced swelling (mean volume difference, 7.12 cm3; p = 0.025), lower pain scores (VAS: 1.80 ± 1.22 vs. 3.80 ± 2.44; p = 0.034), and improved quality-of-life domains, including higher Physical Function (p = 0.032) and lower Sleep Disturbance (p = 0.008) scores. Conclusions: Within the limitations of this pilot study, PRF membranes enhanced postoperative recovery after periapical surgery by reducing swelling and pain while improving patient-reported outcomes. Larger multicenter trials are needed to confirm these preliminary findings. Full article
20 pages, 3176 KB  
Article
A Compact GPT-Based Multimodal Fake News Detection Model with Context-Aware Fusion
by Zengxiao Chi, Puxin Guo and Fengming Liu
Electronics 2025, 14(23), 4755; https://doi.org/10.3390/electronics14234755 - 3 Dec 2025
Abstract
With the rapid development of social networks, online news has gradually surpassed traditional paper media and become a main channel for information dissemination. However, the proliferation of fake news also poses a serious threat to individuals and society. Since online news often involves multimodal content such as text and images, multimodal fake news detection has become increasingly important. To address the challenges of feature extraction and cross-modal fusion in this task, this study presents a new multimodal fake news detection model. The model uses a GPT-style encoder to extract text semantic features, a ResNet backbone to extract image visual features, and dynamically captures correlations between modalities through a context-aware multimodal fusion module. In addition, a joint optimization strategy combining contrastive loss and cross-entropy loss is designed to enhance modal alignment and feature discrimination while optimizing classification performance. Experimental results on the Weibo and PHEME datasets show that the proposed model outperforms baseline methods in accuracy, precision, recall, and F1-score, effectively captures correlations between modalities, and improves the quality of feature representation and overall model performance. This study suggests that the proposed model may serve as a useful approach for fake news detection on social platforms. Full article
(This article belongs to the Section Artificial Intelligence)
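The joint optimization strategy pairs a contrastive alignment term with standard cross-entropy. The sketch below shows one common way to combine them (a CLIP-style InfoNCE term plus a classification loss); the temperature, weighting, and projection details are assumptions, not the paper's specification.

```python
# Hedged sketch of a joint contrastive + cross-entropy objective for multimodal fusion.
import torch
import torch.nn.functional as F

def joint_loss(text_emb, image_emb, logits, labels, temperature=0.07, w_con=0.5):
    # Normalize embeddings and build the pairwise similarity matrix for a batch;
    # matching text/image pairs sit on the diagonal.
    t = F.normalize(text_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    sim = t @ v.T / temperature
    targets = torch.arange(sim.size(0), device=sim.device)
    contrastive = 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.T, targets))
    classification = F.cross_entropy(logits, labels)   # real/fake prediction
    return classification + w_con * contrastive
```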