Search Results (342)

Search Parameters:
Keywords = star image

17 pages, 2893 KiB  
Article
Insulator Defect Detection Based on Improved YOLO11n Algorithm Under Complex Environmental Conditions
by Shoutian Dong, Yiqi Qin, Benrui Li, Qi Zhang and Yu Zhao
Electronics 2025, 14(14), 2898; https://doi.org/10.3390/electronics14142898 - 20 Jul 2025
Viewed by 226
Abstract
Detecting defects in transmission line insulators is crucial to prevent power grid failures as power systems continue to expand. This study introduces YOLO11n-SSA, an enhanced insulator defect detection method that addresses the challenges of effectively identifying flaws in complex environments. First, this study incorporates the StarNet network into the backbone of the model. By stacking multiple layers of star operations, the model reduces both parameter count and model size, improving its adaptability to real-time object detection tasks. Second, the SOPN feature pyramid network is introduced into the neck of the model. By optimizing the multi-scale fusion of the richer features obtained after expanding the channel dimension, the detection efficiency for low-resolution images and small objects is improved. Third, the ADown module is adopted to improve the backbone and neck of the model. It effectively reduces parameter count and significantly lowers computational cost by implementing downsampling operations between different layers of the feature map, thereby enhancing the practicality of the model. Meanwhile, by introducing the NWD to improve the evaluation index of the loss function, the detection model's capability in assessing the similarities among various small-object defects is enhanced. Experimental results were obtained using an expanded dataset based on a public dataset, incorporating three types of insulator defects under complex environmental conditions. The results demonstrate that the YOLO11n-SSA algorithm achieved an mAP@0.5 of 0.919, an mAP@0.5:0.95 of 70.7%, a precision of 0.95, and a recall of 0.875, representing improvements of 3.9%, 5.5%, 2%, and 5.7%, respectively, when compared to the original YOLO11n method. The detection time per image is 0.0134 s. Compared to other mainstream algorithms, the YOLO11n-SSA algorithm demonstrates superior detection accuracy and real-time performance. Full article
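The NWD mentioned above is, in the tiny-object detection literature, the Normalized Wasserstein Distance, which models each box as a 2D Gaussian and turns the Wasserstein distance between the Gaussians into a similarity score. The sketch below illustrates that general idea; the constant C and the function names are illustrative placeholders, not the YOLO11n-SSA implementation.

```python
# Illustrative sketch of the Normalized Wasserstein Distance (NWD) idea for
# comparing small bounding boxes. Each box (cx, cy, w, h) is modeled as a 2D
# Gaussian N([cx, cy], diag((w/2)^2, (h/2)^2)); C is a dataset-dependent
# scale constant (the value below is arbitrary).
import math

def wasserstein2(box_a, box_b):
    """Squared 2-Wasserstein distance between the Gaussians of two boxes.

    For axis-aligned Gaussians this reduces to a Euclidean distance over
    the vector (cx, cy, w/2, h/2).
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    return ((ax - bx) ** 2 + (ay - by) ** 2
            + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2)

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein Distance similarity in (0, 1]."""
    return math.exp(-math.sqrt(wasserstein2(box_a, box_b)) / c)

if __name__ == "__main__":
    # Two nearly identical 8x8-pixel defects: NWD stays high even though a
    # small shift would collapse IoU for boxes this small.
    print(nwd((100, 100, 8, 8), (102, 101, 8, 8)))
```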
(This article belongs to the Section Artificial Intelligence)

20 pages, 7588 KiB  
Article
Dual-Purpose Star Tracker and Space Debris Detector: Miniature Instrument for Small Satellites
by Beltran N. Arribas, João G. Maia, João P. Castanheira, Joel Filho, Rui Melicio, Hugo Onderwater, Paulo Gordo, R. Policarpo Duarte and André R. R. Silva
J. Sens. Actuator Netw. 2025, 14(4), 75; https://doi.org/10.3390/jsan14040075 - 16 Jul 2025
Viewed by 309
Abstract
This paper presents the conception, design, and real miniature-instrument implementation of a dual-purpose sensor for small satellites that can act as both a star tracker and a space debris detector. In previous research work, the authors conceived, designed, and implemented a breadboard consisting of a laptop computer, a camera interface and camera controller, an image sensor, an optics system, a temperature sensor, and a temperature controller, which showed that the instrument was feasible. In this paper, a new real star tracker miniature instrument is designed, physically realized, and tested. The implementation follows a New Space approach: it is made with Commercial Off-the-Shelf (COTS) components with space heritage. The instrument's development, implementation, and testing are presented. Full article

24 pages, 26359 KiB  
Article
Evaluating the Interferometric Performance of China’s Dual-Star SAR Satellite Constellation in Large Deformation Scenarios: A Case Study in the Jinchuan Mining Area, Gansu
by Zixuan Ge, Wenhao Wu, Jiyuan Hu, Nijiati Muhetaer, Peijie Zhu, Jie Guo, Zhihui Li, Gonghai Zhang, Yuxing Bai and Weijia Ren
Remote Sens. 2025, 17(14), 2451; https://doi.org/10.3390/rs17142451 - 15 Jul 2025
Viewed by 273
Abstract
Mining activities can trigger geological disasters, including slope instability and surface subsidence, posing a serious threat to the surrounding environment and miners' safety. Consequently, the development of reasonable, effective, and rapid deformation monitoring methods for mining areas is essential. Traditional synthetic aperture radar (SAR) satellites are often limited by their revisit period and image resolution, leading to unwrapping errors and decorrelation issues in the central mining area, which pose challenges for deformation monitoring. In this study, persistent scatterer interferometric synthetic aperture radar (PS-InSAR) technology is used to monitor and analyze surface deformation of the Jinchuan mining area in Jinchang City, based on SAR images from the small satellites "Fucheng-1" and "Shenqi", launched by the Tianyi Research Institute in Hunan Province, China. Notably, the dual-star constellation offers high-resolution SAR data with a spatial resolution of up to 3 m and a minimum revisit period of 4 days. We also assessed the stability of the dual-star interferometric capability, imaging quality, and time-series monitoring capability of the "Fucheng-1" and "Shenqi" satellites and performed a comparison with the time-series results from Sentinel-1A. The results show that the phase difference (SPD) and phase standard deviation (PSD) mean values for the "Fucheng-1" and "Shenqi" interferograms show improvements of 21.47% and 35.47%, respectively, compared to Sentinel-1A interferograms. Additionally, the processing results of the dual-satellite constellation exhibit spatial distribution characteristics highly consistent with those of Sentinel-1A, while demonstrating relatively better detail representation at certain measurement points. For rapid deformation monitoring in mining areas, the constellation offers a higher revisit frequency and spatial resolution, demonstrating high practical value. Full article
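As a rough illustration of the kind of quantity behind a phase-standard-deviation (PSD) comparison, the sketch below forms an interferogram from two co-registered complex SAR images and computes a windowed standard deviation of its phase. The window size and synthetic data are placeholders; the paper's PS-InSAR processing chain is far more involved.

```python
# Minimal sketch (not the authors' processing chain): form an interferogram
# from two co-registered single-look-complex (SLC) images and compute a
# windowed phase standard deviation as a crude interferometric-quality proxy.
import numpy as np

def interferogram(slc1: np.ndarray, slc2: np.ndarray) -> np.ndarray:
    """Complex interferogram s1 * conj(s2); its phase is np.angle(result)."""
    return slc1 * np.conj(slc2)

def phase_std(ifg: np.ndarray, win: int = 5) -> np.ndarray:
    """Local standard deviation of the interferometric phase.

    A plain sliding-window statistic, purely for illustration; real PS-InSAR
    quality metrics account for phase wrapping and point-target selection.
    """
    phase = np.angle(ifg)
    pad = win // 2
    padded = np.pad(phase, pad, mode="reflect")
    out = np.empty_like(phase)
    for i in range(phase.shape[0]):
        for j in range(phase.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].std()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s1 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    s2 = s1 * np.exp(1j * 0.1) + 0.05 * (rng.normal(size=(64, 64))
                                         + 1j * rng.normal(size=(64, 64)))
    print(phase_std(interferogram(s1, s2)).mean())
```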

19 pages, 2610 KiB  
Article
Influence of Flow Field on the Imaging Quality of Star Sensors for Hypersonic Vehicles in Near Space
by Siyao Wu, Ting Sun, Fei Xing, Haonan Liu, Kang Yang, Jiahui Song, Shijie Yu and Lianqing Zhu
Sensors 2025, 25(14), 4341; https://doi.org/10.3390/s25144341 - 11 Jul 2025
Viewed by 163
Abstract
When hypersonic vehicles fly in near space, the flow field near the optical window leads to light displacement, jitter, blurring, and energy attenuation of the star sensor. This ultimately affects the imaging quality and navigation accuracy. In order to investigate the impact of aerodynamic optical effects on imaging, the fourth-order Runge–Kutta and the fourth-order Adams–Bashforth–Moulton (ABM) predictor-corrector methods are used for ray tracing on the density data. A comparative analysis of the imaging quality results from the two methods reveals their respective strengths and limitations. The influence of the optical system is included in the image quality calculations to make the results more representative of real data. The effects of altitude, velocity, and angle of attack on the imaging quality are explored when the optical window is located at the tail of the vehicle. The results show that altitude significantly affects imaging results, and higher altitudes reduce the impact of the flow field on imaging quality. When the optical window is located at the tail of the vehicle, the relationship between velocity and offset is no longer simply linear. This research provides theoretical support for analyzing the imaging quality and navigation accuracy of a star sensor when a vehicle is flying at hypersonic speeds in near space. Full article
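For context, the sketch below shows one way fourth-order Runge–Kutta ray tracing through a density field can look, using the Gladstone–Dale relation to convert density to refractive index. The density field, Gladstone–Dale constant, and step sizes are placeholders rather than the paper's CFD data, and the Adams–Bashforth–Moulton predictor–corrector variant is not shown.

```python
# Hedged sketch of aero-optical ray tracing: integrate the geometric ray
# equation through a refractive-index field n(x, y) obtained from a density
# field via the Gladstone-Dale relation n = 1 + K_gd * rho. All numbers
# below are illustrative, not the paper's flow-field data.
import numpy as np

K_GD = 2.27e-4  # m^3/kg, approximate Gladstone-Dale constant for air

def n_and_grad(pos, rho_func, eps=1e-4):
    """Refractive index and its numerical gradient at a 2D point."""
    n = 1.0 + K_GD * rho_func(pos)
    grad = np.array([
        (rho_func(pos + [eps, 0]) - rho_func(pos - [eps, 0])) / (2 * eps),
        (rho_func(pos + [0, eps]) - rho_func(pos - [0, eps])) / (2 * eps),
    ]) * K_GD
    return n, grad

def ray_rhs(state, rho_func):
    """d(state)/ds for state = [x, y, Tx, Ty], with T the unit ray tangent."""
    pos, tangent = state[:2], state[2:]
    n, grad = n_and_grad(pos, rho_func)
    # The transverse component of grad(n)/n bends the ray toward higher index.
    d_tangent = (grad - np.dot(grad, tangent) * tangent) / n
    return np.concatenate([tangent, d_tangent])

def trace_rk4(state, rho_func, ds=1e-3, steps=200):
    """Classical fourth-order Runge-Kutta integration along arc length."""
    for _ in range(steps):
        k1 = ray_rhs(state, rho_func)
        k2 = ray_rhs(state + 0.5 * ds * k1, rho_func)
        k3 = ray_rhs(state + 0.5 * ds * k2, rho_func)
        k4 = ray_rhs(state + ds * k3, rho_func)
        state = state + ds / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

if __name__ == "__main__":
    # Toy density field: a smooth bump standing in for the shock layer.
    rho = lambda p: 1.2 + 0.5 * np.exp(-np.sum((np.asarray(p) - 0.1) ** 2) / 0.01)
    start = np.array([0.0, 0.0, 1.0, 0.0])  # ray starting along +x
    print(trace_rk4(start, rho))
```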

26 pages, 863 KiB  
Systematic Review
Examining the Design Characteristics of Mnemonics Serious Games on the App Stores: A Systematic Heuristic Review
by Kingson Fung and Kiemute Oyibo
Appl. Sci. 2025, 15(14), 7772; https://doi.org/10.3390/app15147772 - 10 Jul 2025
Viewed by 184
Abstract
Research shows mnemonics promote knowledge retention in different contexts; hence, they are increasingly being used in serious games aimed at supporting long-term learning while providing "edutainment." However, there is limited research on their effectiveness. As such, we conducted a systematic review of 32 mnemonics mobile apps and evaluated them using two established frameworks from the literature. Our analysis revealed that most of the games teach language or medicine, take the form of puzzles or quizzes, and feature acronyms and/or images, with players rating them at least three out of five stars on average. All 32 apps supported feedback, interactivity, and challenge. A few supported agency, identity, and self-presence, while many did not support key characteristics such as social and spatial presence. The overall finding indicates a need for a mnemonics-based, tailored framework to guide the design of future mnemonics games and make them more effective. Full article
(This article belongs to the Special Issue Virtual Reality and Serious Games: Developments and Applications)

23 pages, 17655 KiB  
Article
Colony-YOLO: A Lightweight Micro-Colony Detection Network Based on Improved YOLOv8n
by Meihua Wang, Junhui Luo, Kai Lin, Yuankai Chen, Xinpeng Huang, Jiping Liu, Anbang Wang and Deqin Xiao
Microorganisms 2025, 13(7), 1617; https://doi.org/10.3390/microorganisms13071617 - 9 Jul 2025
Viewed by 240
Abstract
The detection of colony-forming units (CFUs) is a time-consuming but essential task in mulberry bacterial blight research. To overcome the problems of inaccurate small-target detection and high computational consumption in the mulberry bacterial blight colony detection task, a mulberry bacterial blight colony dataset (MBCD) consisting of 310 images and 23,524 colonies is presented. Based on the MBCD, a colony detection model named Colony-YOLO is proposed. Firstly, the lightweight backbone network StarNet is employed, aiming to enhance feature extraction capabilities while reducing computational complexity. Next, C2f-MLCA is designed by embedding MLCA (Mixed Local Channel Attention) into the C2f module of YOLOv8 to integrate local and global feature information, thereby enhancing feature representation capabilities. Furthermore, the Shape-IoU loss function is implemented to prioritize geometric consistency between predicted and ground truth bounding boxes. Experimental results show that Colony-YOLO achieves an mAP of 96.1% on the MBCD, which is 4.8% higher than the baseline YOLOv8n, with FLOPs and Params reduced by 1.8 G and 0.8 M, respectively. Comprehensive evaluations demonstrate that our method excels in detection accuracy while maintaining lower complexity, making it effective for colony detection in practical applications. Full article
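The "star operation" underlying StarNet is, in broad terms, an element-wise product of two linear projections of the same feature map. The PyTorch sketch below shows a simplified block in that spirit; the channel sizes and surrounding block structure are illustrative and not the exact Colony-YOLO backbone.

```python
# Simplified sketch of a StarNet-style block: the input is projected with two
# parallel 1x1 convolutions whose outputs are multiplied element-wise (the
# "star" operation), then projected back and added to the residual path.
import torch
import torch.nn as nn

class StarBlock(nn.Module):
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.dw = nn.Conv2d(channels, channels, 7, padding=3, groups=channels)
        self.f1 = nn.Conv2d(channels, hidden, 1)   # first linear branch
        self.f2 = nn.Conv2d(channels, hidden, 1)   # second linear branch
        self.g = nn.Conv2d(hidden, channels, 1)    # project back down
        self.act = nn.ReLU6()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        x = self.dw(x)
        # Star operation: element-wise product of two linear projections,
        # implicitly mixing features in a higher-dimensional space.
        x = self.act(self.f1(x)) * self.f2(x)
        return identity + self.g(x)

if __name__ == "__main__":
    block = StarBlock(32)
    print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```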
(This article belongs to the Section Microbial Biotechnology)

22 pages, 2200 KiB  
Article
Spherical Polar Pattern Matching for Star Identification
by Jingneng Fu, Ling Lin and Qiang Li
Sensors 2025, 25(13), 4201; https://doi.org/10.3390/s25134201 - 5 Jul 2025
Viewed by 288
Abstract
To endow a star sensor with strong robustness, low algorithm complexity, and a small database, this paper proposes an all-sky star identification algorithm based on spherical polar pattern matching. The proposed algorithm consists of three main steps. First, the guide star is rotated to the pole, and the polar and azimuth angles of its neighboring stars are used as the polar pattern elements of the guide star. Then, the relative azimuth histogram is applied to the spherical polar pattern matching, and the star pair obtained from spherical polar pattern matching is confirmed through angular-distance cross-verification. Finally, a reference star image is generated from the identified star pair to complete the matching of all guide stars in the field of view. The proposed algorithm is verified by simulation experiments. The simulation results show that for a star sensor with a medium field of view (15° × 15°, 1024 × 1024 pixels) and a limiting magnitude of 6.0 Mv, the required database size is 161 KB. When false and missing star spots account for 50% of the guide stars and the star spot extraction error is 1.0 pixel, the average star identification time is 0.35 ms (@i7-4790), and the identification probability is 99.9%. However, when false and missing star spots account for 100% of the guide stars and the star spot extraction error is 5.0 pixels, the average star identification time is less than 2.0 ms, and the identification probability is 97.1%. Full article
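To make the geometry concrete, the sketch below builds a simple polar pattern: it rotates catalog unit vectors so the guide star sits at the pole and records each neighbor's polar angle and azimuth. The rotation construction and feature layout are only illustrative; the paper's relative-azimuth histogram matching and angular-distance cross-verification are not reproduced.

```python
# Hedged sketch of building a "polar pattern" for a guide star: rotate the
# celestial unit vectors so the guide star sits at the +z pole, then describe
# each neighboring star by its polar angle (angular distance from the guide
# star) and azimuth about the pole.
import numpy as np

def rotation_to_pole(v):
    """Rodrigues rotation matrix taking unit vector v to the +z axis."""
    v = v / np.linalg.norm(v)
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(v, z)
    s, c = np.linalg.norm(axis), np.dot(v, z)
    if s < 1e-12:                       # already (anti)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    axis /= s
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

def polar_pattern(guide, neighbors):
    """(polar angle, azimuth) pairs of neighbors after moving the guide star to the pole."""
    R = rotation_to_pole(guide)
    features = []
    for nb in neighbors:
        x, y, z = R @ (nb / np.linalg.norm(nb))
        features.append((np.arccos(np.clip(z, -1.0, 1.0)),     # polar angle
                         np.arctan2(y, x) % (2 * np.pi)))       # azimuth
    return features

if __name__ == "__main__":
    guide = np.array([0.3, 0.4, 0.866])
    neighbors = [np.array([0.32, 0.38, 0.87]), np.array([0.25, 0.45, 0.86])]
    print(polar_pattern(guide, neighbors))
```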
(This article belongs to the Special Issue Advanced Optical Sensors Based on Machine Learning: 2nd Edition)

25 pages, 7590 KiB  
Article
A Lightweight Method for Road Defect Detection in UAV Remote Sensing Images with Complex Backgrounds and Cross-Scale Fusion
by Wenya Zhang, Xiang Li, Lina Wang, Danfei Zhang, Pengfei Lu, Lei Wang and Chuanxiang Cheng
Remote Sens. 2025, 17(13), 2248; https://doi.org/10.3390/rs17132248 - 30 Jun 2025
Viewed by 286
Abstract
The accuracy of road damage detection models based on UAV remote sensing images is generally low, mainly due to the complex backgrounds of road damage, its diverse forms, and the computational requirements involved. To tackle these issues, this paper presents CSGEH-YOLO, a lightweight model tailored for UAV-based road damage detection in intricate environments. (1) The star operation from StarNet is integrated into the C2f backbone network, enhancing its capacity to capture intricate details in complex scenes, and the CAA attention mechanism is employed to strengthen the model's global feature extraction abilities; (2) a cross-scale feature fusion strategy known as GFPN is developed to tackle the problem of diverse target scales in road damage detection; (3) to reduce computational resource consumption, a lightweight detection head called EP-Detect is designed to decrease the model's computational complexity and number of parameters; and (4) the model's localization capability for road damage targets is enhanced by integrating an optimized regression loss function, WiseIoUv3. Experimental findings indicate that CSGEH-YOLO surpasses the baseline YOLOv8s, achieving a 3.1% improvement in mAP while reducing model parameters by 4% and computational complexity to 78% of the baseline. In contrast to alternative methods, the proposed model significantly reduces computational complexity while improving accuracy, offering robust support for deploying UAV-based road damage detection models. Full article

22 pages, 20537 KiB  
Article
Er:YAG Laser Applications for Debonding Different Ceramic Restorations: An In Vitro Study
by Ruxandra Elena Luca, Anișoara Giumancă-Borozan, Iosif Hulka, Ioana-Roxana Munteanu, Carmen Darinca Todea and Mariana Ioana Miron
Medicina 2025, 61(7), 1189; https://doi.org/10.3390/medicina61071189 - 30 Jun 2025
Viewed by 329
Abstract
Background and Objectives: Conventional methods for removing cemented fixed prosthetic restorations (FPRs) are unreliable and lead to unsatisfactory outcomes. At their best, they allow the tooth to be saved at the expense of a laborious process that also wears down rotating tools and handpieces and occasionally results in abutment fractures. Restorations are nearly never reusable in any of these situations. According to scientific studies, erbium-doped yttrium-aluminum-garnet (Er:YAG) and erbium-chromium yttrium-scandium-gallium-garnet (Er,Cr:YSGG) lasers can safely and effectively remove FPRs. This study sets out to examine the impact of Er:YAG laser radiation on the debonding of different ceramic restorations, comparing the behavior of various ceramic prosthetic restoration types under laser radiation and evaluating the integrity of prosthetic restorations and dental surfaces exposed to laser radiation. Materials and Methods: The study included a total of 16 extracted teeth, each prepared on opposite surfaces as abutments. Based on the previously defined groups, four types of ceramic restorations were included in the study: feldspathic (F), lithium disilicate (LD), layered zirconia (LZ), and monolithic zirconia (MZ). The thickness of the prosthetic restorations was measured at three points, and two different materials were used for cementation. The Er:YAG Fotona StarWalker MaQX laser was used to debond the ceramic FPRs at a distance of 10 mm using an R14 sapphire tip at 275 mJ, 20 Hz, and 5.5 W, with air cooling (setting 1 of 9) and water. After debonding, the debonded surfaces were examined under electron microscopy. Results: A total of 23 ceramic FPRs were debonded, of which 12 were intact and the others fractured into two or three pieces. The electron microscopy images showed that debonding took place without causing any harm to the tooth structure. The various restoration types had the following success rates: 100% for the LZ and F groups, 87% for the LD group, and 0% for the MZ group. In terms of cement type, debonding of ceramic FPRs cemented with RELYX was successful 75% of the time, compared to Variolink DC's 69% success rate. Conclusions: In summary, the majority of ceramic prosthetic restorations can be successfully and conservatively debonded with Er:YAG radiation. Full article
(This article belongs to the Special Issue Advancements in Dental Medicine, Oral Anesthesiology and Surgery)

8 pages, 2549 KiB  
Communication
Blinkverse 2.0: Updated Host Galaxies for Fast Radio Bursts
by Jiaying Xu, Chao-Wei Tsai, Sean E. Lake, Yi Feng, Xiang-Lei Chen, Di Li, Han Wang, Xuerong Guo, Jingjing Hu and Xiaodong Ge
Universe 2025, 11(7), 206; https://doi.org/10.3390/universe11070206 - 24 Jun 2025
Viewed by 192
Abstract
Studying the host galaxies of fast radio bursts (FRBs) is critical to understanding the formation processes of their sources and, hence, the mechanisms by which they radiate. Toward this end, we have extended the Blinkverse database version 1.0, which already included burst information about FRBs observed by various telescopes, by adding information about 92 published FRB host galaxies to make version 2.0. Each FRB host has 18 parameters describing it, including redshift, stellar mass, star-formation rate, emission line fluxes, etc. In particular, each FRB host includes images collated by FASTView, streamlining the process of looking for clues to understanding the origin of FRBs. FASTView is a tool and API for quickly exploring astronomical sources using archival imaging, photometric, and spectral data. This effort represents the first step in building Blinkverse into a comprehensive tool for facilitating source observation and analysis. Full article
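As a purely hypothetical illustration of what a host-galaxy record with such parameters might look like in code, the sketch below defines a small data structure around the fields the abstract names (redshift, stellar mass, star-formation rate, emission-line fluxes). The field names and example values are placeholders, not the actual Blinkverse schema or API.

```python
# Hypothetical sketch of one host-galaxy record; the real Blinkverse 2.0
# database stores 18 parameters per host, only a few of which are named in
# the abstract. Every field and value below is an illustrative placeholder.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class FRBHostRecord:
    frb_name: str                                   # placeholder identifier
    redshift: Optional[float] = None
    stellar_mass_msun: Optional[float] = None
    star_formation_rate_msun_yr: Optional[float] = None
    emission_line_fluxes: Dict[str, float] = field(default_factory=dict)
    image_urls: Dict[str, str] = field(default_factory=dict)  # e.g. FASTView cutouts

if __name__ == "__main__":
    rec = FRBHostRecord("FRB-example", redshift=0.1,           # illustrative numbers only
                        stellar_mass_msun=1e10,
                        emission_line_fluxes={"Halpha": 1.0e-15})
    print(rec)
```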
(This article belongs to the Special Issue Planetary Radar Astronomy)

13 pages, 1093 KiB  
Article
A Hybrid Deep Learning Framework for Accurate Cell Segmentation in Whole Slide Images Using YOLOv11, StarDist, and SAM2
by Julius Bamwenda, Mehmet Siraç Özerdem, Orhan Ayyıldız and Veysı Akpolat
Bioengineering 2025, 12(6), 674; https://doi.org/10.3390/bioengineering12060674 - 19 Jun 2025
Viewed by 630
Abstract
Accurate segmentation of cellular structures in whole slide images (WSIs) is essential for quantitative analysis in computational pathology. However, the complexity and scale of WSIs present significant challenges for conventional segmentation methods. In this study, we propose a novel hybrid deep learning framework that integrates three complementary approaches, YOLOv11, StarDist, and Segment Anything Model v2 (SAM2), to achieve robust and precise cell segmentation. The proposed pipeline utilizes YOLOv11 as an object detector to localize regions of interest, generating bounding boxes or preliminary masks that are subsequently used either as prompts to guide SAM2 or to filter segmentation outputs. StarDist is employed to model cell and nuclear boundaries with high geometric precision using star-convex polygon representations, which are particularly effective in densely packed cellular regions. The framework was evaluated on a unique WSI dataset comprising 256 × 256 image tiles annotated with high-resolution cell-level masks. Quantitative evaluations using the Dice coefficient, intersection over union (IoU), F1-score, precision, and recall demonstrated that the proposed method significantly outperformed individual baseline models. The integration of object detection and prompt-based segmentation led to enhanced boundary accuracy, improved localization, and greater robustness across varied tissue types. This work contributes a scalable and modular solution for advancing automated histopathological image analysis. Full article
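The evaluation relies on standard mask-overlap metrics; the sketch below gives generic Dice and IoU implementations on binary masks, not the authors' evaluation pipeline.

```python
# Generic implementations of the Dice coefficient and intersection over
# union (IoU) on binary segmentation masks, for illustration only.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (intersection + eps) / (union + eps)

if __name__ == "__main__":
    # Two overlapping square masks on a 256x256 tile, matching the tile size
    # mentioned in the abstract.
    pred = np.zeros((256, 256), dtype=bool); pred[50:150, 50:150] = True
    gt = np.zeros((256, 256), dtype=bool);   gt[60:160, 60:160] = True
    print(f"Dice={dice_coefficient(pred, gt):.3f}, IoU={iou(pred, gt):.3f}")
```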
(This article belongs to the Special Issue Machine Learning and Deep Learning Applications in Healthcare)

21 pages, 4507 KiB  
Article
GSTD-DETR: A Detection Algorithm for Small Space Targets Based on RT-DETR
by Yijian Zhang, Huichao Guo, Yang Zhao, Laixian Zhang, Chenglong Luan, Yingchun Li and Xiaoyu Zhang
Electronics 2025, 14(12), 2488; https://doi.org/10.3390/electronics14122488 - 19 Jun 2025
Viewed by 505
Abstract
Ground-based optical equipment for detecting geostationary orbit space targets typically involves long-exposure imaging, facing challenges such as small and blurred target images, complex backgrounds, and star streaks obstructing the view. To address these issues, this study proposes the GSTD-DETR model based on the Real-Time Detection Transformer (RT-DETR), which aims to balance model efficiency and detection accuracy. First, we introduce a Dynamic Cross-Stage Partial (DynCSP) backbone network for feature extraction and fusion, which enhances the network's representational capability by reducing convolutional parameters and improving information exchange between channels. This effectively reduces the model's parameter count and computational complexity. Second, we propose a ResFine model with a feature pyramid designed for small target detection, enhancing the model's ability to perceive small targets. Additionally, we improve the detection head and incorporate a Dynamic Multi-Channel Attention mechanism, which strengthens the focus on critical regions. Finally, we design an Area-Weighted NWD loss function to improve detection accuracy. The experimental results show that, compared to RT-DETR-r18, the GSTD-DETR model reduces the parameter count by 29.74% on the SpotGEO dataset. Its AP50 and AP50:95 improve by 1.3% and 4.9%, reaching 88.6% and 49.9%, respectively. The GSTD-DETR model demonstrates superior performance in the detection accuracy of faint and small space targets. Full article

20 pages, 1482 KiB  
Article
Research on Person Pose Estimation Based on Parameter Inverted Pyramid and High-Dimensional Feature Enhancement
by Guofeng Ma and Qianyi Zhang
Symmetry 2025, 17(6), 941; https://doi.org/10.3390/sym17060941 - 13 Jun 2025
Viewed by 616
Abstract
Heating, Ventilation and Air Conditioning (HVAC) systems are significant carbon emitters in buildings, and precise regulation is crucial for achieving carbon neutrality. Computer vision-based occupant behavior prediction provides vital data for demand-driven control strategies. Real-time multi-person pose estimation faces challenges in balancing speed and accuracy, especially in complex environments. Traditional top-down methods become computationally expensive as the number of people increases, while bottom-up methods struggle with key point mismatches in dense crowds. This paper introduces the Efficient-RTMO model, which leverages the Parameter Inverted Image Pyramid (PIIP) with hierarchical multi-scale symmetry for lightweight processing of high-resolution images and a deeper network for low-resolution images. This approach reduces computational complexity, particularly in dense crowd scenarios, and incorporates a dynamic sparse connectivity mechanism via the star-shaped dynamic feed-forward network (StarFFN). By optimizing the symmetry structure, it improves inference efficiency and ensures effective feature fusion. Experimental results on the COCO dataset show that Efficient-RTMO outperforms the baseline RTMO model, achieving more than 2× speed improvement and a 0.3 AP increase. Ablation studies confirm that PIIP and StarFFN enhance robustness against occlusions and scale variations, demonstrating their synergistic effectiveness. Full article

12 pages, 3124 KiB  
Article
Imaging Features and Clinical Characteristics of Granular Cell Tumors: A Single-Center Investigation
by Hui Gu, Lan Yu and Yu Wu
Diagnostics 2025, 15(11), 1336; https://doi.org/10.3390/diagnostics15111336 - 26 May 2025
Viewed by 498
Abstract
Background/Objectives: Granular cell tumors (GCTs) are rare neurogenic tumors with Schwann cell differentiation. Although most are benign, 1–2% exhibit malignant behavior. The imaging features of GCTs remain poorly characterized due to their rarity and anatomic variability. This study aims to elucidate the manifestations of GCTs in multimodal imaging across different anatomic locations. Methods: We retrospectively analyzed 66 histopathologically confirmed GCT cases (2011–2024), assessing their clinical presentations, pathological characteristics, and imaging findings from ultrasound (n = 31), CT (n = 14), MRI (n = 8), and endoscopy (n = 15). Two radiologists independently reviewed the imaging features (location, size, morphology, signal/density, and enhancement). Results: The cohort (mean age: 42 ± 12 years; 72.7% female) showed a predilection for soft tissue (48.4%), the digestive tract (30.3%), the respiratory system (7.6%), the breasts (7.6%), and the sellar region (6.1%). Six cases (9.1%) were malignant. The key imaging findings by modality were as follows: Ultrasound: well-circumscribed hypoechoic masses in soft tissue (96.1%) and irregular margins in the breasts (80%, BI-RADS 4B). MRI: the sellar GCTs exhibited T1 isointensity, variable T2 signals (with 50% showing "star-like crack signs"), and heterogeneous enhancement, while the soft tissue GCTs were T1-hypointense (75%) with variable T2 signals. CT: pulmonary/laryngeal GCTs appeared as well-defined hypodense masses with mild/moderate enhancement. Endoscopy: submucosal/muscularis hypoechoic nodules with smooth surfaces. Malignant GCTs were larger (mean: 93 mm vs. 30 mm) but lacked pathognomonic imaging features. Three malignant cases demonstrated metastases. Conclusions: GCTs exhibit distinct imaging patterns based on their anatomical location. While certain features (e.g., star-like crack signs) are suggestive, imaging cannot reliably differentiate benign from malignant variants. Histopathological confirmation remains essential for diagnosis, particularly given the potential for malignant transformation (9.1% in our series). Multimodal imaging guides localization and biopsy planning, but clinical–radiological–pathological correlation is crucial for optimal management. Full article
(This article belongs to the Section Medical Imaging and Theranostics)

15 pages, 1201 KiB  
Article
Perspective Transformation and Viewpoint Attention Enhancement for Generative Adversarial Networks in Endoscopic Image Augmentation
by Laimonas Janutėnas and Dmitrij Šešok
Appl. Sci. 2025, 15(10), 5655; https://doi.org/10.3390/app15105655 - 19 May 2025
Viewed by 402
Abstract
This study presents an enhanced version of the StarGAN model, with a focus on medical applications, particularly endoscopic image augmentation. Our model incorporates novel Perspective Transformation and Viewpoint Attention Modules for StarGAN that improve image classification accuracy in a multiclass classification task. The Perspective Transformation Module enables the generation of more diverse viewing angles, while the Viewpoint Attention Module helps focus on diagnostically significant regions. We evaluate the performance of our enhanced architecture using the Kvasir v2 dataset, which contains 8000 images across eight gastrointestinal disease classes, comparing it against baseline models including VGG-16, ResNet-50, DenseNet-121, InceptionNet-V3, and EfficientNet-B7. Experimental results demonstrate that our approach improves performance across all models in this eight-class classification problem, increasing accuracy on average by 0.7% for VGG-16 and 0.63% for EfficientNet-B7. The addition of perspective transformation capabilities enables more diverse examples to be generated, augmenting the dataset and providing more samples of specific illnesses. Our approach offers a promising solution for medical image generation, enabling effective training with fewer data samples, which is particularly valuable in medical model development, where data are often scarce due to acquisition challenges. These improvements demonstrate significant potential for advancing machine-learning disease classification systems in gastroenterology and medical image augmentation as a whole. Full article
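For a sense of what perspective-style augmentation looks like in practice, the sketch below applies a plain geometric perspective transform with torchvision. This is a generic augmentation, not the paper's learned Perspective Transformation Module inside StarGAN; the distortion scale and input size are arbitrary.

```python
# Generic perspective-style augmentation for image tensors using torchvision;
# a stand-in illustrating the idea of synthesizing new viewing angles, not
# the paper's GAN-based method.
import torch
from torchvision import transforms

perspective_aug = transforms.Compose([
    transforms.RandomPerspective(distortion_scale=0.4, p=1.0),  # new viewpoint
    transforms.ColorJitter(brightness=0.1, contrast=0.1),       # mild photometric variation
])

if __name__ == "__main__":
    # Stand-in for an endoscopic frame (3 x 224 x 224); real use would load
    # Kvasir v2 images instead of random noise.
    frame = torch.rand(3, 224, 224)
    augmented = perspective_aug(frame)
    print(augmented.shape)
```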
(This article belongs to the Special Issue Deep Learning in Medical Image Processing and Analysis)
