Search Results (926)

Search Parameters:
Keywords = ghost

37 pages, 642 KiB  
Article
The Goddess of the Flaming Mouth Between India and Tibet
by Arik Moran and Alexander Zorin
Religions 2025, 16(8), 1002; https://doi.org/10.3390/rel16081002 - 1 Aug 2025
Abstract
This article examines the evolution and potential cross-cultural adaptations of the “Goddess of the Flaming Mouth”, Jvālāmukhī (Skt.) or Kha ‘bar ma (Tib.), in Indic and Tibetan traditions. A minor figure in medieval Hindu Tantras, Jvālāmukhī is today best known through her tangible manifestation as natural flames in a West Himalayan temple complex in the valley of Kangra, Himachal Pradesh, India. The gap between her sparse portrayal in Tantric texts and her enduring presence at this local “seat of power” (śakti pīṭha) raises questions regarding her historical development and sectarian affiliations. To address these questions, we examine mentions of Jvālāmukhī’s Tibetan counterpart, Kha ‘bar ma, across a wide range of textual sources: canonical Buddhist texts, original Tibetan works of the Bön and Buddhist traditions, and texts on sacred geography. While Buddhist sources regard her as a queen of ghost spirits (pretas) and a field protector (kṣetrapāla), her portrayal in Bön texts contains archaic motifs that hint at autochthonous and/or non-Buddhist origins. The assessment of Indic material in conjunction with Tibetan texts points to possible transformations of the goddess across these culturally proximate Himalayan settings. In presenting and contextualizing these transitions, this article contributes critical data to ongoing efforts to map the development, adaptation, and localization of Tantric deities along the Indo-Tibetan interface. Full article
23 pages, 3850 KiB  
Review
Speckle-Correlation Holographic Imaging: Advances, Techniques, and Current Challenges
by Vinu R. V., Ziyang Chen and Jixiong Pu
Photonics 2025, 12(8), 776; https://doi.org/10.3390/photonics12080776 - 31 Jul 2025
Abstract
The imaging modalities of correlation-assisted techniques utilize the inherent information present in the spatial correlation of random intensity patterns for the successful reconstruction of object information. However, most correlation approaches focus only on the reconstruction of amplitude information, as it is a direct byproduct of the correlation, disregarding the phase information. Complex-field reconstruction requires additional experimental or computational schemes, alongside conventional correlation geometry. The resurgence of holography in recent times, with advanced digital techniques and the adoption of the full-field imaging potential of holography in correlation with imaging techniques, has paved the way for the development of various state-of-the-art approaches to correlation optics. This review article provides an in-depth discussion of the recent developments in speckle-correlation-assisted techniques by focusing on various quantitative imaging scenarios. Furthermore, the recent progress and application of correlation-assisted holographic imaging techniques are reviewed, along with its potential challenges. Full article
(This article belongs to the Special Issue Recent Progress in Holography and Its Future Prospects)

24 pages, 14323 KiB  
Article
GTDR-YOLOv12: Optimizing YOLO for Efficient and Accurate Weed Detection in Agriculture
by Zhaofeng Yang, Zohaib Khan, Yue Shen and Hui Liu
Agronomy 2025, 15(8), 1824; https://doi.org/10.3390/agronomy15081824 - 28 Jul 2025
Viewed by 214
Abstract
Weed infestation contributes significantly to global agricultural yield loss and increases the reliance on herbicides, raising both economic and environmental concerns. Effective weed detection in agriculture requires high accuracy and architectural efficiency. This is particularly important under challenging field conditions, including densely clustered targets, small weed instances, and low visual contrast between vegetation and soil. In this study, we propose GTDR-YOLOv12, an improved object detection framework based on YOLOv12, tailored for real-time weed identification in complex agricultural environments. The model is evaluated on the publicly available Weeds Detection dataset, which contains a wide range of weed species and challenging visual scenarios. To achieve better accuracy and efficiency, GTDR-YOLOv12 introduces several targeted structural enhancements. The backbone incorporates GDR-Conv, which integrates Ghost convolution and Dynamic ReLU (DyReLU) to improve early-stage feature representation while reducing redundancy. The GTDR-C3 module combines GDR-Conv with Task-Dependent Attention Mechanisms (TDAMs), allowing the network to adaptively refine spatial features critical for accurate weed identification and localization. In addition, the Lookahead optimizer is employed during training to improve convergence efficiency and reduce computational overhead, thereby contributing to the model’s lightweight design. GTDR-YOLOv12 outperforms several representative detectors, including YOLOv7, YOLOv9, YOLOv10, YOLOv11, YOLOv12, ATSS, RTMDet and Double-Head. Compared with YOLOv12, GTDR-YOLOv12 achieves notable improvements across multiple evaluation metrics. Precision increases from 85.0% to 88.0%, recall from 79.7% to 83.9%, and F1-score from 82.3% to 85.9%. In terms of detection accuracy, mAP:0.5 improves from 87.0% to 90.0%, while mAP:0.5:0.95 rises from 58.0% to 63.8%. Furthermore, the model reduces computational complexity. GFLOPs drop from 5.8 to 4.8, and the number of parameters is reduced from 2.51 M to 2.23 M. These reductions reflect a more efficient network design that not only lowers model complexity but also enhances detection performance. With a throughput of 58 FPS on the NVIDIA Jetson AGX Xavier, GTDR-YOLOv12 proves both resource-efficient and deployable for practical, real-time weeding tasks in agricultural settings. Full article
(This article belongs to the Section Weed Science and Weed Management)
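The precision, recall, and F1 figures quoted in the abstract can be sanity-checked by hand, since F1 is simply the harmonic mean of precision and recall. A minimal check in Python, using only the numbers reported above:

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

baseline = f1_score(85.0, 79.7)   # YOLOv12 figures from the abstract
improved = f1_score(88.0, 83.9)   # GTDR-YOLOv12 figures
```

Both values round to the F1-scores reported in the abstract (82.3% and 85.9%).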

19 pages, 6372 KiB  
Article
Detecting Planting Holes Using Improved YOLO-PH Algorithm with UAV Images
by Kaiyuan Long, Shibo Li, Jiangping Long, Hui Lin and Yang Yin
Remote Sens. 2025, 17(15), 2614; https://doi.org/10.3390/rs17152614 - 28 Jul 2025
Viewed by 232
Abstract
The identification and detection of planting holes, combined with UAV technology, provides an effective solution to the challenges posed by manual counting, high labor costs, and low efficiency in large-scale planting operations. However, existing target detection algorithms face difficulties in identifying planting holes based on their edge features, particularly in complex environments. To address this issue, a target detection network named YOLO-PH was designed to efficiently and rapidly detect planting holes in complex environments. Compared to the YOLOv8 network, the proposed YOLO-PH network incorporates the C2f_DyGhostConv module as a replacement for the original C2f module in both the backbone network and neck network. Furthermore, the ATSS label allocation method is employed to optimize sample allocation and enhance detection effectiveness. Lastly, our proposed Siblings Detection Head reduces computational burden while significantly improving detection performance. Ablation experiments demonstrate that compared to baseline models, YOLO-PH exhibits notable improvements of 1.3% in mAP50 and 1.1% in mAP50:95 while simultaneously achieving a reduction of 48.8% in FLOPs and an impressive increase of 26.8 FPS (frames per second) in detection speed. In practical applications for detecting indistinct boundary planting holes within complex scenarios, our algorithm consistently outperforms other detection networks with exceptional precision (F1-score = 0.95), low computational cost, rapid detection speed, and robustness, thus laying a solid foundation for advancing precision agriculture. Full article

14 pages, 1419 KiB  
Article
GhostBlock-Augmented Lightweight Gaze Tracking via Depthwise Separable Convolution
by Jing-Ming Guo, Yu-Sung Cheng, Yi-Chong Zeng and Zong-Yan Yang
Electronics 2025, 14(15), 2978; https://doi.org/10.3390/electronics14152978 - 25 Jul 2025
Viewed by 155
Abstract
This paper proposes a lightweight gaze-tracking architecture named GhostBlock-Augmented Look to Coordinate Space (L2CS), which integrates GhostNet-based modules and depthwise separable convolution to achieve a better trade-off between model accuracy and computational efficiency. Conventional lightweight gaze-tracking models often suffer from degraded accuracy due to aggressive parameter reduction. To address this issue, we introduce GhostBlocks, a custom-designed convolutional unit that combines intrinsic feature generation with ghost feature recomposition through depthwise operations. Our method enhances the original L2CS architecture by replacing each ResNet block with GhostBlocks, thereby significantly reducing the number of parameters and floating-point operations. The experimental results on the Gaze360 dataset demonstrate that the proposed model reduces FLOPs from 16.527 × 10⁸ to 8.610 × 10⁸ and parameter count from 2.387 × 10⁵ to 1.224 × 10⁵ while maintaining comparable gaze estimation accuracy, with MAE increasing only slightly from 10.70° to 10.87°. This work highlights the potential of GhostNet-augmented designs for real-time gaze tracking on edge devices, providing a practical solution for deployment in resource-constrained environments. Full article
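Several entries in these results rely on GhostNet-style modules. As a rough sketch of why they shrink parameter counts: a ghost module spends a full convolution on only a fraction of the output channels and generates the rest with cheap depthwise operations. The channel sizes and kernel shapes below are hypothetical, not taken from the paper:

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    # parameters of a standard k x k convolution (bias omitted)
    return c_in * c_out * k * k

def ghost_module_params(c_in: int, c_out: int, k: int, s: int = 2, d: int = 3) -> int:
    # GhostNet-style module: a slim "primary" convolution produces
    # c_out / s intrinsic channels, then cheap d x d depthwise ops
    # generate the remaining (s - 1) groups of "ghost" channels.
    m = c_out // s
    primary = c_in * m * k * k
    cheap = (s - 1) * m * d * d   # depthwise: one filter per channel
    return primary + cheap

full = conv_params(128, 256, 3)           # 294,912 parameters
ghost = ghost_module_params(128, 256, 3)  # 148,608 parameters
```

With the default ratio s = 2, the module costs roughly half the parameters of the full convolution, which is the kind of reduction the abstracts above report.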

34 pages, 32238 KiB  
Article
ACLC-Detection: A Network for Remote Sensing Image Detection Based on Attention Mechanism and Lightweight Convolution
by Shaodong Liu, Faming Shao, Chenshan Yang, Juying Dai, Jinhong Xue, Qing Liu and Tao Zhang
Remote Sens. 2025, 17(15), 2572; https://doi.org/10.3390/rs17152572 - 24 Jul 2025
Viewed by 226
Abstract
Detecting small objects using remote sensing technology has consistently posed challenges. To address this issue, a novel detection framework named ACLC-Detection has been introduced. Building upon the Yolov11 architecture, this detector integrates an attention mechanism with lightweight convolution to enhance performance. Specifically, depthwise separable convolution is introduced into both the deep and shallow convolutional layers of the backbone network. Moreover, the designed lightweight convolutional excitation module (CEM) is used to capture contextual information about targets and reduce information loss for small targets. In addition, the C3k2 module in the neck fusion network part, where C3k = True, is replaced by the Convolutional Attention Module with Ghost Module (CAF-GM). This not only reduces model complexity but also extracts more effective information. The Simple Attention Module (SimAM) within it suppresses redundant information while adding no model parameters. Finally, the Inner-Complete Intersection over Union (Inner-CIOU) loss function is employed, which enables better localization and detection of small targets. Extensive experiments conducted on the DOTA and VisDrone2019 datasets have demonstrated the advantages of the proposed enhanced model in dealing with small objects in aerial imagery. Full article

11 pages, 21181 KiB  
Article
Parallel Ghost Imaging with Extra Large Field of View and High Pixel Resolution
by Nixi Zhao, Changzhe Zhao, Jie Tang, Jianwen Wu, Danyang Liu, Han Guo, Haipeng Zhang and Tiqiao Xiao
Appl. Sci. 2025, 15(15), 8137; https://doi.org/10.3390/app15158137 - 22 Jul 2025
Viewed by 181
Abstract
Ghost imaging (GI) facilitates image acquisition under low-light conditions through single-pixel measurements, thus holding tremendous potential across various fields such as biomedical imaging, remote sensing, defense and military applications, and 3D imaging. However, in order to reconstruct high-resolution images, GI typically requires a large number of single-pixel measurements, which imposes practical limitations on its application. Parallel ghost imaging addresses this issue by utilizing each pixel of a position-sensitive detector as a bucket detector to simultaneously perform tens of thousands of ghost imaging measurements in parallel. In this work, we explore the non-local characteristics of ghost imaging in depth, and by constructing a large speckle space, we achieve a reconstruction result in parallel ghost imaging where the field of view surpasses the limitations of the reference arm detector. Using a computational ghost imaging framework, after pre-recording the speckle patterns, we are able to complete X-ray ghost imaging at a speed of 6 min per sample, with image dimensions of 14,000 × 10,000 pixels (4.55 mm × 3.25 mm, millimeter-scale field of view) and a pixel resolution of 0.325 µm (sub-micron pixel resolution). We present this framework to enhance efficiency, extend resolution, and dramatically expand the field of view, with the aim of providing a solution for the practical implementation of ghost imaging. Full article
(This article belongs to the Special Issue Single-Pixel Intelligent Imaging and Recognition)
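The second-order correlation at the heart of computational ghost imaging can be sketched in a few lines: after pre-recording the speckle patterns, the image is recovered as the covariance between the bucket signal and each pattern pixel, G(x) = ⟨B·I(x)⟩ − ⟨B⟩⟨I(x)⟩. Everything below (the object, pattern count, and image size) is a toy assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0                         # hypothetical binary "object"

patterns = rng.random((4000, 8, 8))         # pre-recorded speckle patterns
bucket = (patterns * obj).sum(axis=(1, 2))  # single-pixel (bucket) signals

# Second-order correlation: G(x) = <B * I(x)> - <B> <I(x)>
recon = (bucket[:, None, None] * patterns).mean(axis=0) \
        - bucket.mean() * patterns.mean(axis=0)

# Pixels inside the object correlate more strongly with the bucket
# signal than background pixels, so the object emerges in `recon`.
```

More measurements sharpen the estimate; the parallel scheme in the article runs many such correlations at once, one per detector pixel.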

23 pages, 3578 KiB  
Article
High-Precision Chip Detection Using YOLO-Based Methods
by Ruofei Liu and Junjiang Zhu
Algorithms 2025, 18(7), 448; https://doi.org/10.3390/a18070448 - 21 Jul 2025
Viewed by 212
Abstract
Machining chips are directly related to both the machining quality and tool condition. However, detecting chips from images in industrial settings poses challenges in terms of model accuracy and computational speed. We firstly present a novel framework called GM-YOLOv11-DNMS to track the chips, followed by a video-level post-processing algorithm for chip counting in videos. GM-YOLOv11-DNMS has two main improvements: (1) it replaces the CNN layers with a ghost module in YOLOv11n, significantly reducing the computational cost while maintaining the detection performance, and (2) it uses a new dynamic non-maximum suppression (DNMS) method, which dynamically adjusts the thresholds to improve the detection accuracy. The post-processing method uses a trigger signal from rising edges to improve chip counting in video streams. Experimental results show that the ghost module reduces the FLOPs from 6.48 G to 5.72 G compared to YOLOv11n, with a negligible accuracy loss, while the DNMS algorithm improves the debris detection precision across different YOLO versions. The proposed framework achieves precision, recall, and mAP@0.5 values of 97.04%, 96.38%, and 95.56%, respectively, in image-based detection tasks. In video-based experiments, the proposed video-level post-processing algorithm combined with GM-YOLOv11-DNMS achieves crack–debris counting accuracy of 90.14%. This lightweight and efficient approach is particularly effective in detecting small-scale objects within images and accurately analyzing dynamic debris in video sequences, providing a robust solution for automated debris monitoring in machine tool processing applications. Full article
(This article belongs to the Special Issue Machine Learning Models and Algorithms for Image Processing)
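For context, the DNMS variant described above builds on standard greedy non-maximum suppression, which the following sketch implements in plain Python; the dynamic threshold adjustment itself is paper-specific and not reproduced here:

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh):
    # Greedy NMS: keep the highest-scoring box, drop boxes that
    # overlap it above `thresh`, and repeat on the remainder.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep
```

With boxes [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)] and scores [0.9, 0.8, 0.7], a threshold of 0.5 keeps indices [0, 2]; making the threshold adaptive per scene is the change DNMS introduces.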

28 pages, 43087 KiB  
Article
LWSARDet: A Lightweight SAR Small Ship Target Detection Network Based on a Position–Morphology Matching Mechanism
by Yuliang Zhao, Yang Du, Qiutong Wang, Changhe Li, Yan Miao, Tengfei Wang and Xiangyu Song
Remote Sens. 2025, 17(14), 2514; https://doi.org/10.3390/rs17142514 - 19 Jul 2025
Viewed by 368
Abstract
The all-weather imaging capability of synthetic aperture radar (SAR) confers unique advantages for maritime surveillance. However, ship detection under complex sea conditions still faces challenges, such as high-frequency noise interference and the limited computational power of edge computing platforms. To address these challenges, we propose a lightweight SAR small ship detection network, LWSARDet, which mitigates feature redundancy and reduces computational complexity in existing models. Specifically, based on the YOLOv5 framework, a dual strategy for the lightweight network is adopted as follows: On the one hand, to address the limited nonlinear representation ability of the original network, a global channel attention mechanism is embedded and a feature extraction module, GCCR-GhostNet, is constructed, which can effectively enhance the network’s feature extraction capability and high-frequency noise suppression, while reducing computational cost. On the other hand, to reduce feature dilution and computational redundancy in traditional detection heads when focusing on small targets, we replace conventional convolutions with simple linear transformations and design a lightweight detection head, LSD-Head. Furthermore, we propose a Position–Morphology Matching IoU loss function, P-MIoU, which integrates center distance constraints and morphological penalty mechanisms to more precisely capture the spatial and structural differences between predicted and ground truth bounding boxes. Extensive experiments conducted on the High-Resolution SAR Image Dataset (HRSID) and the SAR Ship Detection Dataset (SSDD) demonstrate that LWSARDet achieves superior overall performance compared to existing state-of-the-art (SOTA) methods. Full article

28 pages, 4068 KiB  
Article
GDFC-YOLO: An Efficient Perception Detection Model for Precise Wheat Disease Recognition
by Jiawei Qian, Chenxu Dai, Zhanlin Ji and Jinyun Liu
Agriculture 2025, 15(14), 1526; https://doi.org/10.3390/agriculture15141526 - 15 Jul 2025
Viewed by 299
Abstract
Wheat disease detection is a crucial component of intelligent agricultural systems in modern agriculture. However, its detection accuracy still has certain limitations. Existing models struggle to capture the irregular, fine-grained texture features of lesions, and standard upsampling operations reconstruct spatial information inaccurately. In this work, the GDFC-YOLO method is proposed to address these limitations and enhance the accuracy of detection. This method is based on YOLOv11 and encompasses three key aspects of improvement: (1) a newly designed Ghost Dynamic Feature Core (GDFC) in the backbone, which improves the efficiency of disease feature extraction and enhances the model’s ability to capture informative representations; (2) a redesigned neck structure, Disease-Focused Neck (DF-Neck), which further strengthens feature expressiveness, to improve multi-scale fusion and refine feature processing pipelines; and (3) the integration of the Powerful Intersection over Union v2 (PIoUv2) loss function to optimize the regression accuracy and convergence speed. The results showed that GDFC-YOLO improved the mean average precision at an intersection-over-union threshold of 0.5 (mAP@0.5) from 0.86 to 0.90, reached a precision of 0.899 and a recall of 0.821, and still maintained a structure with only 9.27 M parameters. These results indicate that GDFC-YOLO offers strong detection performance and practicality, and that it is a solution that can accurately and efficiently detect crop diseases in real agricultural scenarios. Full article

35 pages, 2232 KiB  
Article
The Twisting and Untwisting of Actin and Tropomyosin Filaments Are Involved in the Molecular Mechanisms of Muscle Contraction, and Their Disruption Can Result in Muscle Disorders
by Yurii S. Borovikov, Maria V. Tishkova, Stanislava V. Avrova, Vladimir V. Sirenko and Olga E. Karpicheva
Int. J. Mol. Sci. 2025, 26(14), 6705; https://doi.org/10.3390/ijms26146705 - 12 Jul 2025
Viewed by 390
Abstract
Polarized fluorescence microscopy of “ghost” muscle fibers, containing fluorescently labeled F-actin, tropomyosin, and myosin, has provided new insights into the molecular mechanisms underlying muscle contraction. At low Ca²⁺, the troponin-induced overtwisting of the actin filament alters the configuration of myosin binding sites, preventing actin–myosin interactions. As Ca²⁺ levels rise, the actin filament undergoes untwisting, while tropomyosin becomes overtwisted, facilitating the binding of myosin to actin. In the weakly bound state, myosin heads greatly increase both the internal twist and the bending stiffness of actin filaments, accompanied by the untwisting of tropomyosin. Following phosphate (Pi) release, myosin induces the untwisting of overtwisted actin filaments, driving thin-filament sliding relative to the thick filament during force generation. Point mutations in tropomyosin significantly alter the ability of actin and tropomyosin filaments to respond to Pi release, with coordinated changes in twist and bending stiffness. These structural effects correlate with changes in actomyosin ATPase activity. Together, these findings support a model in which dynamic filament twisting is involved in the molecular mechanisms of muscle contraction together with the active working stroke in the myosin motor, and suggest that impairment of this ability may cause contractile dysfunction. Full article
(This article belongs to the Special Issue Molecular Research on Skeletal Muscle Diseases)

22 pages, 6645 KiB  
Article
Visual Detection on Aircraft Wing Icing Process Using a Lightweight Deep Learning Model
by Yang Yan, Chao Tang, Jirong Huang, Zhixiong Cen and Zonghong Xie
Aerospace 2025, 12(7), 627; https://doi.org/10.3390/aerospace12070627 - 12 Jul 2025
Viewed by 182
Abstract
Aircraft wing icing significantly threatens aviation safety, causing substantial losses to the aviation industry each year. High transparency and blurred edges of icing areas in wing images pose challenges to wing icing detection by machine vision. To address these challenges, this study proposes a detection model, Wing Icing Detection DeeplabV3+ (WID-DeeplabV3+), for efficient and precise aircraft wing leading edge icing detection under natural lighting conditions. WID-DeeplabV3+ adopts the lightweight MobileNetV3 as its backbone network to enhance the extraction of edge features in icing areas. Ghost Convolution and Atrous Spatial Pyramid Pooling modules are incorporated to reduce model parameters and computational complexity. The model is optimized using the transfer learning method, where pre-trained weights are utilized to accelerate convergence and enhance performance. Experimental results show that WID-DeeplabV3+ segments the icing edge in 1920 × 1080 images within 0.03 s. The model achieves an accuracy of 97.15%, an IOU of 94.16%, a precision of 97%, and a recall of 96.96%, representing respective improvements of 1.83%, 3.55%, 1.79%, and 2.04% over DeeplabV3+. The number of parameters and computational complexity are reduced by 92% and 76%, respectively. With high accuracy, superior IOU, and fast inference speed, WID-DeeplabV3+ provides an effective solution for wing-icing detection. Full article
(This article belongs to the Section Aeronautics)

22 pages, 388 KiB  
Article
Gauge-Invariant Slavnov–Taylor Decomposition for Trilinear Vertices
by Andrea Quadri
Universe 2025, 11(7), 228; https://doi.org/10.3390/universe11070228 - 11 Jul 2025
Viewed by 118
Abstract
We continue the analysis of the gauge-invariant decomposition of amplitudes in spontaneously broken massive gauge theories by performing the characterization of separately gauge-invariant subsectors for amplitudes involving trilinear interaction vertices for an Abelian theory with chiral fermions. We show that the use of Fröhlich–Morchio–Strocchi gauge-invariant dynamical (i.e., propagating inside loops) fields yields a very powerful handle on the cancellations among unphysical degrees of freedom (the longitudinal mode of the massive gauge field, the Goldstone scalar and the ghosts). The resulting cancellations are encoded into separate Slavnov–Taylor invariant sectors for 1-PI amplitudes. The construction works to all orders in perturbation theory. This decomposition suggests a novel strategy for the determination of finite counter-terms required to restore the Slavnov–Taylor identities in chiral theories in the absence of an invariant regularization scheme. Full article
(This article belongs to the Section Field Theory)

17 pages, 299 KiB  
Article
Dating Application Use and Its Relationship with Mental Health Outcomes Among Men Who Have Sex with Men in Urban Areas of Thailand: A Nationwide Online Cross-Sectional Survey
by Sarawut Nasahwan, Jadsada Kunno and Parichat Ong-Artborirak
Int. J. Environ. Res. Public Health 2025, 22(7), 1094; https://doi.org/10.3390/ijerph22071094 - 9 Jul 2025
Viewed by 627
Abstract
Dating applications (DAs) are widely used to establish social and sexual connections among men who have sex with men (MSM), particularly in urban areas. In this study, we aimed to examine the associations between DA use and mental health among Thai MSM. An online cross-sectional survey was completed by 442 MSM residing in Bangkok and urban municipalities across all regions of Thailand. Psychological distress (PD) and probable depression were assessed using the General Health Questionnaire (GHQ-12) and the Patient Health Questionnaire (PHQ-9), respectively. Of the participants, 62.7% were current users, with 33.2% experiencing PD and 33.9% having depression. A logistic regression analysis showed that PD was significantly associated with late-night use (AOR = 2.02, 95% CI: 1.08–3.78), matching failure (AOR = 1.95, 95% CI: 1.12–3.38), rejection (AOR = 2.07, 95% CI: 1.18–3.62), and ghosting (AOR = 1.78, 95% CI: 1.02–3.11). Simultaneously, depression was significantly associated with using DAs with the motivation of hooking up (AOR = 2.27, 95% CI: 1.05–4.93), privacy violations (AOR = 2.76, 95% CI: 1.42–5.38), unsolicited sexual images (AOR = 2.04, 95% CI: 1.11–3.74), physical assault (AOR = 2.97, 95% CI: 1.57–5.61), harassment (AOR = 2.54, 95% CI: 1.37–4.70), scams (AOR = 2.59, 95% CI: 1.41–4.77), and extreme disappointment from DA use (AOR = 5.98, 95% CI: 1.84–19.41). These findings highlight how DA usage patterns and negative experiences may contribute to the poorer mental health among MSM in urban areas. Full article
(This article belongs to the Section Behavioral and Mental Health)
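The adjusted odds ratios (AORs) quoted above come from logistic regression, where each fitted coefficient maps to an odds ratio via exponentiation. A minimal illustration in Python; the coefficient value is hypothetical, chosen only to reproduce the reported AOR of about 2.02 for late-night use:

```python
import math

# In logistic regression, the (adjusted) odds ratio for a predictor is
# the exponential of its fitted coefficient: AOR = exp(beta).
def odds_ratio(beta: float) -> float:
    return math.exp(beta)

beta_late_night = 0.703            # hypothetical fitted coefficient
aor = odds_ratio(beta_late_night)  # ~2.02, matching the reported AOR
```

A coefficient of zero corresponds to an odds ratio of 1.0 (no association); the 95% confidence intervals in the abstract exclude 1.0, which is why those associations are reported as significant.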
31 pages, 20469 KiB  
Article
YOLO-SRMX: A Lightweight Model for Real-Time Object Detection on Unmanned Aerial Vehicles
by Shimin Weng, Han Wang, Jiashu Wang, Changming Xu and Ende Zhang
Remote Sens. 2025, 17(13), 2313; https://doi.org/10.3390/rs17132313 - 5 Jul 2025
Cited by 1 | Viewed by 680
Abstract
Unmanned Aerial Vehicles (UAVs) face a significant challenge in balancing high accuracy and high efficiency when performing real-time object detection tasks, especially amidst intricate backgrounds, diverse target scales, and stringent onboard computational resource constraints. To tackle these difficulties, this study introduces YOLO-SRMX, a lightweight real-time object detection framework specifically designed for infrared imagery captured by UAVs. Firstly, the model utilizes ShuffleNetV2 as an efficient lightweight backbone and integrates the novel Multi-Scale Dilated Attention (MSDA) module. This strategy not only facilitates a substantial 46.4% reduction in parameter volume but also, through the flexible adaptation of receptive fields, boosts the model’s robustness and precision in multi-scale object recognition tasks. Secondly, within the neck network, multi-scale feature extraction is facilitated through the design of novel composite convolutions, ConvX and MConv, based on a “split–differentiate–concatenate” paradigm. Furthermore, the lightweight GhostConv is incorporated to reduce model complexity. By synthesizing these principles, a novel composite receptive field lightweight convolution, DRFAConvP, is proposed to further optimize multi-scale feature fusion efficiency and promote model lightweighting. Finally, the Wise-IoU loss function is adopted to replace the traditional bounding box loss. This is coupled with a dynamic non-monotonic focusing mechanism formulated using the concept of outlier degrees. This mechanism intelligently assigns elevated gradient weights to anchor boxes of moderate quality by assessing their relative outlier degree, while concurrently diminishing the gradient contributions from both high-quality and low-quality anchor boxes. Consequently, this approach enhances the model’s localization accuracy for small targets in complex scenes. Experimental evaluations on the HIT-UAV dataset corroborate that YOLO-SRMX achieves an mAP50 of 82.8%, representing a 7.81% improvement over the baseline YOLOv8s model; an F1 score of 80%, marking a 3.9% increase; and a substantial 65.3% reduction in computational cost (GFLOPs). YOLO-SRMX demonstrates an exceptional trade-off between detection accuracy and operational efficiency, thereby underscoring its considerable potential for efficient and precise object detection on resource-constrained UAV platforms. Full article
