Search Results (487)

Search Parameters:
Keywords = texture metrics

28 pages, 3038 KiB  
Article
BCA-MVSNet: Integrating BIFPN and CA for Enhanced Detail Texture in Multi-View Stereo Reconstruction
by Ning Long, Zhengxu Duan, Xiao Hu and Mingju Chen
Electronics 2025, 14(15), 2958; https://doi.org/10.3390/electronics14152958 - 24 Jul 2025
Abstract
The 3D point cloud generated by MVSNet has good scene integrity but lacks sensitivity to detail, causing holes and non-dense areas in flat and weak-texture regions. To address this problem and enrich the point cloud in weak-texture areas, this paper proposes the BCA-MVSNet network. The network integrates the Bidirectional Feature Pyramid Network (BIFPN) into the feature processing of the MVSNet backbone to accurately extract the features of weak-texture regions. In the feature map fusion stage, the Coordinate Attention (CA) mechanism is introduced into the 3D U-Net to capture direction-aware positional information along the channel dimension, improving detail feature extraction, optimizing the depth map, and raising depth accuracy. The experimental results show that BCA-MVSNet not only improves the accuracy of detail texture reconstruction but also keeps the computational overhead under control. On the DTU dataset, the Overall and Comp metrics of BCA-MVSNet are reduced by 10.2% and 2.6%, respectively; on the Tanks and Temples dataset, the Mean metric over the eight scenarios is improved by 6.51%. Three scenes captured with a binocular camera are reconstructed by combining the camera parameters with the BCA-MVSNet model, yielding excellent quality in weak-texture areas. Full article
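
As an illustration of the Coordinate Attention idea referenced above, the following is a minimal PyTorch sketch of a generic 2D CA block (directional pooling along height and width, a shared 1×1 bottleneck, and two direction-specific gates). It is not the BCA-MVSNet implementation, which applies the mechanism inside a 3D U-Net over cost volumes; the tensor shapes and reduction ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Generic 2D Coordinate Attention: directional pooling + two gating maps."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                          # pool over width  -> (N, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)      # pool over height -> (N, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # height-wise gate (N, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # width-wise gate  (N, C, 1, W)
        return x * a_h * a_w

feat = torch.randn(2, 32, 64, 80)           # illustrative feature map
print(CoordinateAttention(32)(feat).shape)  # torch.Size([2, 32, 64, 80])
```
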
25 pages, 6911 KiB  
Article
Image Inpainting Algorithm Based on Structure-Guided Generative Adversarial Network
by Li Zhao, Tongyang Zhu, Chuang Wang, Feng Tian and Hongge Yao
Mathematics 2025, 13(15), 2370; https://doi.org/10.3390/math13152370 - 24 Jul 2025
Abstract
To address the challenges of image inpainting in scenarios with extensive or irregular missing regions—particularly detail oversmoothing, structural ambiguity, and textural incoherence—this paper proposes an Image Structure-Guided (ISG) framework that hierarchically integrates structural priors with semantic-aware texture synthesis. The proposed methodology advances a two-stage restoration paradigm: (1) Structural Prior Extraction, where adaptive edge detection algorithms identify residual contours in corrupted regions, and a transformer-enhanced network reconstructs globally consistent structural maps through contextual feature propagation; (2) Structure-Constrained Texture Synthesis, wherein a multi-scale generator with hybrid dilated convolutions and channel attention mechanisms iteratively refines high-fidelity textures under explicit structural guidance. The framework introduces three innovations: (1) a hierarchical feature fusion architecture that synergizes multi-scale receptive fields with spatial-channel attention to preserve long-range dependencies and local details simultaneously; (2) spectral-normalized Markovian discriminator with gradient-penalty regularization, enabling adversarial training stability while enforcing patch-level structural consistency; and (3) dual-branch loss formulation combining perceptual similarity metrics with edge-aware constraints to align synthesized content with both semantic coherence and geometric fidelity. Our experiments on the two benchmark datasets (Places2 and CelebA) have demonstrated that our framework achieves more unified textures and structures, bringing the restored images closer to their original semantic content. Full article
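
The abstract's second innovation, a spectral-normalized Markovian discriminator trained with a gradient penalty, can be sketched as below. This is a generic PatchGAN-style discriminator with a common WGAN-GP-style penalty under assumed layer widths, not the authors' exact network.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

def patch_discriminator(in_ch=3):
    """Markovian (PatchGAN) discriminator: every conv is spectrally normalised."""
    def block(cin, cout, stride):
        return nn.Sequential(spectral_norm(nn.Conv2d(cin, cout, 4, stride, 1)),
                             nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(block(in_ch, 64, 2), block(64, 128, 2), block(128, 256, 2),
                         spectral_norm(nn.Conv2d(256, 1, 4, 1, 1)))  # patch-level logits

def gradient_penalty(d, real, fake):
    """Gradient penalty on random interpolates between real and generated images."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x = (eps * real + (1 - eps) * fake).requires_grad_(True)
    out = d(x)
    grads = torch.autograd.grad(out, x, grad_outputs=torch.ones_like(out),
                                create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

d = patch_discriminator()
real, fake = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)
print(gradient_penalty(d, real, fake).item())
```
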
24 pages, 4796 KiB  
Article
Comprehensive Experimental Optimization and Image-Driven Machine Learning Prediction of Tribological Performance in MWCNT-Reinforced Bio-Based Epoxy Nanocomposites
by Pavan Hiremath, Srinivas Shenoy Heckadka, Gajanan Anne, Ranjan Kumar Ghadai, G. Divya Deepak and R. C. Shivamurthy
J. Compos. Sci. 2025, 9(8), 385; https://doi.org/10.3390/jcs9080385 - 22 Jul 2025
Abstract
This study presents a multi-modal investigation into the wear behavior of bio-based epoxy composites reinforced with multi-walled carbon nanotubes (MWCNTs) at 0–0.75 wt%. A Taguchi L16 orthogonal array was employed to systematically assess the influence of MWCNT content, load (20–50 N), and sliding speed (1–2.5 m/s) on wear rate (WR), coefficient of friction (COF), and surface roughness (Ra). Statistical analysis revealed that MWCNT content contributed up to 85.35% to wear reduction, with 0.5 wt% identified as the optimal reinforcement level, achieving the lowest WR (3.1 mm³/N·m) and Ra (0.7 µm). Complementary morphological characterization via SEM and AFM confirmed microstructural improvements at optimal loading and identified degradation features (ploughing, agglomeration) at 0 wt% and 0.75 wt%. Regression models (R² > 0.95) effectively captured the nonlinear wear response, while a Random Forest model trained on GLCM-derived image features (e.g., correlation, entropy) yielded WR prediction accuracy of R² ≈ 0.93. Key image-based predictors were found to correlate strongly with measured tribological metrics, validating the integration of surface texture analysis into predictive modeling. This integrated framework combining experimental design, mathematical modeling, and image-based machine learning offers a robust pathway for designing high-performance, sustainable nanocomposites with data-driven diagnostics for wear prediction. Full article
(This article belongs to the Special Issue Bio-Abio Nanocomposites)
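
The image-driven prediction step pairs GLCM texture descriptors with a Random Forest regressor. A minimal sketch using scikit-image (0.19 or later, for graycomatrix/graycoprops) and scikit-learn is given below; the micrographs and wear rates are random placeholders, and the chosen distances, angles, and grey levels are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestRegressor

def glcm_features(gray_img, levels=64):
    """Texture descriptors from a grey-level co-occurrence matrix (GLCM)."""
    img = np.uint8(gray_img / gray_img.max() * (levels - 1))
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).mean()
             for p in ("correlation", "contrast", "homogeneity", "energy")]
    p = glcm.mean(axis=(2, 3))                                   # averaged co-occurrence distribution
    feats.append(float(-(p[p > 0] * np.log2(p[p > 0])).sum()))   # Shannon entropy
    return feats

# Placeholder micrographs and wear rates stand in for the worn-surface images
X = np.array([glcm_features(np.random.rand(128, 128)) for _ in range(40)])
y = np.random.rand(40)                                           # placeholder wear rates
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict(X[:3]))
```
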
27 pages, 1868 KiB  
Article
SAM2-DFBCNet: A Camouflaged Object Detection Network Based on the Heira Architecture of SAM2
by Cao Yuan, Libang Liu, Yaqin Li and Jianxiang Li
Sensors 2025, 25(14), 4509; https://doi.org/10.3390/s25144509 - 21 Jul 2025
Abstract
Camouflaged Object Detection (COD) aims to segment objects that are highly integrated with their background, presenting significant challenges such as low contrast, complex textures, and blurred boundaries. Existing deep learning methods often struggle to achieve robust segmentation under these conditions. To address these limitations, this paper proposes a novel COD network, SAM2-DFBCNet, built upon the SAM2 Hiera architecture. Our network incorporates three key modules: (1) the Camouflage-Aware Context Enhancement Module (CACEM), which fuses local and global features through an attention mechanism to enhance contextual awareness in low-contrast scenes; (2) the Cross-Scale Feature Interaction Bridge (CSFIB), which employs a bidirectional convolutional GRU for the dynamic fusion of multi-scale features, effectively mitigating representation inconsistencies caused by complex textures and deformations; and (3) the Dynamic Boundary Refinement Module (DBRM), which combines channel and spatial attention mechanisms to optimize boundary localization accuracy and enhance segmentation details. Extensive experiments on three public datasets—CAMO, COD10K, and NC4K—demonstrate that SAM2-DFBCNet outperforms twenty state-of-the-art methods, achieving maximum improvements of 7.4%, 5.78%, and 4.78% in key metrics such as S-measure (Sα), F-measure (Fβ), and mean E-measure (Eϕ), respectively, while reducing the Mean Absolute Error (M) by 37.8%. These results validate the superior performance and robustness of our approach in complex camouflage scenarios. Full article
(This article belongs to the Special Issue Transformer Applications in Target Tracking)
22 pages, 16125 KiB  
Article
Toward an Efficient and Robust Process–Structure Prediction Framework for Filigree L-PBF 316L Stainless Steel Structures
by Yu Qiao, Marius Grad and Aida Nonn
Metals 2025, 15(7), 812; https://doi.org/10.3390/met15070812 - 20 Jul 2025
Abstract
Additive manufacturing (AM), particularly laser powder bed fusion (L-PBF), provides unmatched design flexibility for creating intricate steel structures with minimal post-processing. However, adopting L-PBF for high-performance applications is difficult due to the challenge of predicting microstructure evolution. This is because the process is sensitive to many parameters and has a complex thermal history. Thin-walled geometries present an added challenge because their dimensions often approach the scale of individual grains. Thus, microstructure becomes a critical factor in the overall integrity of the component. This study focuses on applying cellular automata (CA) modeling to establish robust and efficient process–structure relationships in L-PBF of 316L stainless steel. The CA framework simulates solidification-driven grain evolution and texture development across various processing conditions. Model predictions are evaluated against experimental electron backscatter diffraction (EBSD) data, with additional quantitative comparisons based on texture and morphology metrics. The results demonstrate that CA simulations calibrated with relevant process parameters can effectively reproduce key microstructural features, including grain size distributions, aspect ratios, and texture components, observed in thin-walled L-PBF structures. This work highlights the strengths and limitations of CA-based modeling and supports its role in reliably designing and optimizing complex L-PBF components. Full article
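
For readers unfamiliar with cellular-automaton (CA) grain modeling, the toy sketch below grows grains from random nuclei by letting liquid cells capture the ID of an already solidified neighbour. It is a deliberately simplified capture-growth CA on a periodic grid, far removed from the thermally driven, process-calibrated solidification CA used in the study; the grid size and nucleus count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.zeros((100, 100), dtype=int)            # 0 = liquid, >0 = grain ID
seeds = rng.integers(0, 100, size=(30, 2))        # random nucleation sites
grid[seeds[:, 0], seeds[:, 1]] = np.arange(1, 31)

while (grid == 0).any():                          # grow until fully solidified
    new = grid.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # von Neumann neighbourhood
        nb = np.roll(grid, shift, axis=(0, 1))
        capture = (new == 0) & (nb > 0)           # liquid cell next to a solid grain
        new[capture] = nb[capture]
    grid = new

print(np.unique(grid).size, "grains on a", grid.shape, "grid")
```
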
23 pages, 2695 KiB  
Article
Estimation of Subtropical Forest Aboveground Biomass Using Active and Passive Sentinel Data with Canopy Height
by Yi Wu, Yu Chen, Chunhong Tian, Ting Yun and Mingyang Li
Remote Sens. 2025, 17(14), 2509; https://doi.org/10.3390/rs17142509 - 18 Jul 2025
Abstract
Forest biomass is closely related to carbon sequestration capacity and can reflect the level of forest management. This study utilizes four machine learning algorithms, namely Multivariate Stepwise Regression (MSR), K-Nearest Neighbors (k-NN), Artificial Neural Network (ANN), and Random Forest (RF), to estimate forest aboveground biomass (AGB) in Chenzhou City, Hunan Province, China. In addition, a canopy height model, constructed from a digital surface model (DSM) derived from Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) and an ICESat-2-corrected SRTM DEM, is incorporated to quantify its impact on the accuracy of AGB estimation. The results indicate the following: (1) The incorporation of multi-source remote sensing data significantly improves the accuracy of AGB estimation, among which the RF model performs the best (R² = 0.69, RMSE = 24.26 t·ha⁻¹) compared with the single-source model. (2) The canopy height model (CHM) obtained from InSAR-LiDAR effectively alleviates the signal saturation effect of optical and SAR data in high-biomass areas (>200 t·ha⁻¹). When FCH is added to the RF model combined with multi-source remote sensing data, the R² of the AGB estimation model is improved to 0.74. (3) In 2018, AGB in Chenzhou City shows clear spatial heterogeneity, with a mean of 51.87 t·ha⁻¹. Biomass increases from the western hilly part (32.15–68.43 t·ha⁻¹) to the eastern mountainous area (89.72–256.41 t·ha⁻¹), peaking in Dongjiang Lake National Forest Park (256.41 t·ha⁻¹). This study proposes a comprehensive feature integration framework that combines red-edge spectral indices for capturing vegetation physiological status, SAR-derived texture metrics for assessing canopy structural heterogeneity, and canopy height metrics to characterize forest three-dimensional structure. This integrated approach enables the robust and accurate monitoring of carbon storage in subtropical forests. Full article
(This article belongs to the Collection Feature Paper Special Issue on Forest Remote Sensing)
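
A minimal scikit-learn sketch of a Random Forest AGB regression and the reported accuracy metrics (R², RMSE) follows. The predictor matrix stands in for red-edge indices, SAR texture metrics, and canopy height, and the synthetic response is a placeholder; it does not reproduce the paper's data or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))          # placeholder plot-level predictors
y = 50 + 30 * X[:, 0] + 10 * X[:, -1] + rng.normal(scale=15, size=300)  # placeholder AGB (t/ha)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} t/ha")
```
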
18 pages, 11724 KiB  
Article
Hydrogen–Rock Interactions in Carbonate and Siliceous Reservoirs: A Petrophysical Perspective
by Rami Doukeh, Iuliana Veronica Ghețiu, Timur Vasile Chiș, Doru Bogdan Stoica, Gheorghe Brănoiu, Ibrahim Naim Ramadan, Ștefan Alexandru Gavrilă, Marius Gabriel Petrescu and Rami Harkouss
Appl. Sci. 2025, 15(14), 7957; https://doi.org/10.3390/app15147957 - 17 Jul 2025
Abstract
Underground hydrogen storage (UHS) in carbonate and siliceous formations presents a promising solution for managing intermittent renewable energy. However, experimental data on hydrogen–rock interactions under representative subsurface conditions remain limited. This study systematically investigates mineralogical and petrophysical alterations in dolomite, calcite-rich limestone, and quartz-dominant siliceous cores subjected to high-pressure hydrogen (100 bar, 70 °C, 100 days). Distinct from prior research focused on diffraction peak shifts, our analysis prioritizes quantitative changes in mineral concentration (%) as a direct metric of reactivity and structural integrity, offering more robust insights into long-term storage viability. Hydrogen exposure induced significant dolomite dissolution, evidenced by reduced crystalline content (from 12.20% to 10.53%) and accessory phase loss, indicative of partial decarbonation and ankerite-like formation via cation exchange. Conversely, limestone exhibited more pronounced carbonate reduction (vaterite from 6.05% to 4.82% and calcite from 2.35% to 0%), signaling high reactivity, mineral instability, and potential pore clogging from secondary precipitation. In contrast, quartz-rich cores demonstrated exceptional chemical inertness, maintaining consistent mineral concentrations. Furthermore, Brunauer–Emmett–Teller (BET) surface area and Barrett–Joyner–Halenda (BJH) pore distribution analyses revealed enhanced porosity and permeability in dolomite (pore volume increased >10×), while calcite showed declining properties and quartz showed negligible changes. SEM-EDS supported these trends, detailing Fe migration and textural evolution in dolomite, microfissuring in calcite, and structural preservation in quartz. This research establishes a unique experimental framework for understanding hydrogen–rock interactions under reservoir-relevant conditions. It provides crucial insights into mineralogical compatibility and structural resilience for UHS, identifying dolomite as a highly promising host and highlighting calcitic rocks’ limitations for long-term hydrogen containment. Full article
(This article belongs to the Topic Exploitation and Underground Storage of Oil and Gas)
24 pages, 20337 KiB  
Article
MEAC: A Multi-Scale Edge-Aware Convolution Module for Robust Infrared Small-Target Detection
by Jinlong Hu, Tian Zhang and Ming Zhao
Sensors 2025, 25(14), 4442; https://doi.org/10.3390/s25144442 - 16 Jul 2025
Abstract
Infrared small-target detection remains a critical challenge in military reconnaissance, environmental monitoring, forest-fire prevention, and search-and-rescue operations, owing to the targets' extremely small size, sparse texture, low signal-to-noise ratio, and complex background interference. Traditional convolutional neural networks (CNNs) struggle to detect such weak, low-contrast objects due to their limited receptive fields and insufficient feature extraction capabilities. To overcome these limitations, we propose a Multi-Scale Edge-Aware Convolution (MEAC) module that enhances feature representation for small infrared targets without increasing parameter count or computational cost. Specifically, MEAC fuses (1) original local features, (2) multi-scale context captured via dilated convolutions, and (3) high-contrast edge cues derived from differential Gaussian filters. After fusing these branches, channel and spatial attention mechanisms are applied to adaptively emphasize critical regions, further improving feature discrimination. The MEAC module is fully compatible with standard convolutional layers and can be seamlessly embedded into various network architectures. Extensive experiments on three public infrared small-target datasets (SIRSTD-UAVB, IRSTDv1, and IRSTD-1K) demonstrate that networks augmented with MEAC significantly outperform baseline models using standard convolutions. When compared to eleven mainstream convolution modules (ACmix, AKConv, DRConv, DSConv, LSKConv, MixConv, PConv, ODConv, GConv, and Involution), our method consistently achieves the highest detection accuracy and robustness. Experiments conducted across multiple versions, including YOLOv10, YOLOv11, and YOLOv12, as well as various network levels, demonstrate that the MEAC module achieves stable improvements in performance metrics while only slightly increasing computational and parameter complexity. These results validate MEAC's effectiveness in enhancing the detection of small, weak targets and suppressing interference from complex backgrounds, highlighting its strong generalization ability and practical application potential. Full article
(This article belongs to the Section Sensing and Imaging)
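
To make the branch structure described above concrete, here is a minimal PyTorch sketch of a MEAC-like block that fuses a local 3×3 branch, a dilated-context branch, and a difference-of-Gaussians edge branch. The published module additionally applies channel and spatial attention after fusion; the kernel sizes, sigmas, and dilation rate here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEdgeConv(nn.Module):
    """MEAC-like block: local, dilated-context and difference-of-Gaussians edge branches."""
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.ctx = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    @staticmethod
    def _gauss_kernel(sigma, size=5):
        ax = torch.arange(size, dtype=torch.float32) - size // 2
        g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
        k = torch.outer(g, g)
        return (k / k.sum()).view(1, 1, size, size)

    def forward(self, x):
        c = x.size(1)
        k1 = self._gauss_kernel(1.0).to(x).repeat(c, 1, 1, 1)
        k2 = self._gauss_kernel(2.0).to(x).repeat(c, 1, 1, 1)
        # Difference of Gaussians as a cheap high-contrast edge cue
        edges = F.conv2d(x, k1, padding=2, groups=c) - F.conv2d(x, k2, padding=2, groups=c)
        return self.fuse(torch.cat([self.local(x), self.ctx(x), edges], dim=1))

print(MultiScaleEdgeConv(16)(torch.randn(1, 16, 64, 64)).shape)  # (1, 16, 64, 64)
```
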
27 pages, 14879 KiB  
Article
Research on AI-Driven Classification Possibilities of Ball-Burnished Regular Relief Patterns Using Mixed Symmetrical 2D Image Datasets Derived from 3D-Scanned Topography and Photo Camera
by Stoyan Dimitrov Slavov, Lyubomir Si Bao Van, Marek Vozár, Peter Gogola and Diyan Minkov Dimitrov
Symmetry 2025, 17(7), 1131; https://doi.org/10.3390/sym17071131 - 15 Jul 2025
Abstract
The present research concerns the application of artificial intelligence (AI) approaches to classifying surface textures, specifically regular relief patterns formed by ball burnishing operations. A two-stage methodology is employed, starting with the creation of regular reliefs (RRs) on test parts by ball burnishing, followed by 3D topography scanning with an Alicona device and data preprocessing with Gwyddion and Blender software, where the acquired 3D topographies are converted into a set of 2D images using various virtual camera movements and lighting to simulate the symmetrical fluctuations of the real camera around the tool path. Four pre-trained convolutional neural networks (DenseNet121, EfficientNetB0, MobileNetV2, and VGG16) are used as a base for transfer learning and tested for their generalization performance on different combinations of synthetic and real image datasets. The models were evaluated using confusion matrices and four additional metrics. The results show that the pre-trained VGG16 model generalizes best to regular relief textures (96%) compared with the other models when subjected to transfer learning via feature extraction on a mixed dataset consisting of 34,037 images in the following proportions: non-textured synthetic (87%), textured synthetic (8%), and real captured (5%) images of such a regular relief. Full article
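
A minimal sketch of transfer learning via feature extraction with a pre-trained VGG16, as evaluated above, is shown below using torchvision (0.13 or later, for the weights API); the abstract does not state the framework, and the class count, optimizer, and dummy batch are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4                                    # hypothetical number of relief-pattern classes
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)  # downloads ImageNet weights
for p in vgg.features.parameters():
    p.requires_grad = False                        # freeze the convolutional feature extractor
vgg.classifier[6] = nn.Linear(4096, num_classes)   # replace only the final classification layer

optimizer = torch.optim.Adam([p for p in vgg.parameters() if p.requires_grad], lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                    # dummy batch of 224x224 RGB crops
y = torch.randint(0, num_classes, (8,))
loss = criterion(vgg(x), y)                        # one illustrative training step
loss.backward()
optimizer.step()
print(float(loss))
```
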
25 pages, 16927 KiB  
Article
Improving Individual Tree Crown Detection and Species Classification in a Complex Mixed Conifer–Broadleaf Forest Using Two Machine Learning Models with Different Combinations of Metrics Derived from UAV Imagery
by Jeyavanan Karthigesu, Toshiaki Owari, Satoshi Tsuyuki and Takuya Hiroshima
Geomatics 2025, 5(3), 32; https://doi.org/10.3390/geomatics5030032 - 13 Jul 2025
Abstract
Individual tree crown detection (ITCD) and tree species classification are critical for forest inventory, species-specific monitoring, and ecological studies. However, accurately detecting tree crowns and identifying species in structurally complex forests with overlapping canopies remains challenging. This study was conducted in a complex mixed conifer–broadleaf forest in northern Japan, aiming to improve ITCD and species classification by employing two machine learning models and different combinations of metrics derived from very high-resolution (2.5 cm) UAV red–green–blue (RGB) and multispectral (MS) imagery. We first enhanced ITCD by integrating different combinations of metrics into multiresolution segmentation (MRS) and DeepForest (DF) models. ITCD accuracy was evaluated across dominant forest types and tree density classes. Next, nine tree species were classified using the ITCD outputs from both MRS and DF approaches, applying Random Forest and DF models, respectively. Incorporating structural, textural, and spectral metrics improved MRS-based ITCD, achieving F-scores of 0.44–0.58. The DF model, which used only structural and spectral metrics, achieved higher F-scores of 0.62–0.79. For species classification, the Random Forest model achieved a Kappa value of 0.81, while the DF model attained a higher Kappa value of 0.91. These findings demonstrate the effectiveness of integrating UAV-derived metrics and advanced modeling approaches for accurate ITCD and species classification in heterogeneous forest environments. The proposed methodology offers a scalable and cost-efficient solution for detailed forest monitoring and species-level assessment. Full article
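
A minimal scikit-learn sketch of the Random Forest species-classification stage and its Kappa evaluation follows; the per-crown feature table (structural, textural, and spectral metrics) and the nine species labels are random placeholders, so the printed value will not match the reported 0.81.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
X = rng.normal(size=(450, 20))                 # placeholder per-crown metrics
y = rng.integers(0, 9, size=450)               # placeholder labels for nine species

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("Cohen's kappa:", round(cohen_kappa_score(y_te, clf.predict(X_te)), 3))
```
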
36 pages, 25361 KiB  
Article
Remote Sensing Image Compression via Wavelet-Guided Local Structure Decoupling and Channel–Spatial State Modeling
by Jiahui Liu, Lili Zhang and Xianjun Wang
Remote Sens. 2025, 17(14), 2419; https://doi.org/10.3390/rs17142419 - 12 Jul 2025
Abstract
As the resolution and data volume of remote sensing imagery continue to grow, achieving efficient compression without sacrificing reconstruction quality remains a major challenge: traditional handcrafted codecs often fail to balance rate-distortion performance and computational complexity, while deep learning-based approaches offer superior representational capacity yet still struggle to balance fine-detail adaptation and computational efficiency. Mamba, a state-space model (SSM)-based architecture, offers linear-time complexity and excels at capturing long-range dependencies in sequences, and it has been adopted in remote sensing compression tasks to model long-distance dependencies between pixels. However, despite its effectiveness in global context aggregation, Mamba's uniform bidirectional scanning is insufficient for capturing high-frequency structures such as edges and textures. Moreover, existing visual state-space (VSS) models built upon Mamba typically treat all channels equally and lack mechanisms to dynamically focus on semantically salient spatial regions. To address these issues, we present an innovative architecture for remote sensing image compression, called the Multi-scale Channel Global Mamba Network (MGMNet). MGMNet integrates a spatial–channel dynamic weighting mechanism into the Mamba architecture, enhancing global semantic modeling while selectively emphasizing informative features. It comprises two key modules. The Wavelet Transform-guided Local Structure Decoupling (WTLS) module applies multi-scale wavelet decomposition to disentangle and separately encode low- and high-frequency components, enabling efficient parallel modeling of global contours and local textures. The Channel–Global Information Modeling (CGIM) module enhances conventional VSS by introducing a dual-path attention strategy that reweights spatial and channel information, improving the modeling of long-range dependencies and edge structures. We conducted extensive evaluations on three distinct remote sensing datasets to assess MGMNet. The results show that MGMNet outperforms current SOTA models across various performance metrics. Full article
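
The WTLS module builds on multi-scale wavelet decomposition into low- and high-frequency sub-bands. The snippet below shows a single-level 2D DWT with PyWavelets as a minimal illustration of that decomposition, not the MGMNet encoder; the wavelet family and image are placeholders.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)                  # placeholder remote sensing band
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")       # low-frequency approximation + 3 detail sub-bands
print(cA.shape, cH.shape)                       # (128, 128) (128, 128)

rec = pywt.idwt2((cA, (cH, cV, cD)), "haar")    # perfect-reconstruction check
print(np.allclose(rec, img))
```
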
14 pages, 1106 KiB  
Article
Metastatic Melanoma Prognosis Prediction Using a TC Radiomic-Based Machine Learning Model: A Preliminary Study
by Antonino Guerrisi, Maria Teresa Maccallini, Italia Falcone, Alessandro Valenti, Ludovica Miseo, Sara Ungania, Vincenzo Dolcetti, Fabio Valenti, Marianna Cerro, Flora Desiderio, Fabio Calabrò, Virginia Ferraresi and Michelangelo Russillo
Cancers 2025, 17(14), 2304; https://doi.org/10.3390/cancers17142304 - 10 Jul 2025
Abstract
Background/Objective: The approach to the clinical management of metastatic melanoma patients is undergoing a significant transformation. The availability of a large amount of data from medical images has made Artificial Intelligence (AI) applications an innovative and cutting-edge solution that could revolutionize the surveillance and management of these patients. In this study, we develop and validate a machine-learning model based on radiomic data extracted from computed tomography (CT) analysis of patients with metastatic melanoma (MM). This approach was designed to accurately predict prognosis and identify the potential key factors associated with prognosis. Methods: To achieve this goal, we used radiomic pipelines to extract quantitative features related to lesion texture, morphology, and intensity from high-quality CT images. We retrospectively collected a cohort of 58 patients with metastatic melanoma, from which a total of 60 CT series were used for model training, and 70 independent CT series were employed for external testing. Model performance was evaluated using metrics such as sensitivity, specificity, and AUC (area under the curve), demonstrating particularly favorable results compared to traditional methods. Results: The model achieved an ROC-AUC of 82% in the internal test and, in combination with AI, showed good predictive ability regarding lesion outcome. Conclusions: Although the cohort size was limited and the data were collected retrospectively from a single institution, the findings provide a promising basis for further validation in larger and more diverse patient populations. This approach could directly support clinical decision-making by providing accurate and personalized prognostic information. Full article
(This article belongs to the Special Issue Radiomics and Imaging in Cancer Analysis)
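
A minimal scikit-learn sketch of evaluating a radiomics classifier with the metrics named above (AUC, sensitivity, specificity) follows; the feature matrix, labels, and classifier choice are placeholders rather than the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(2)
X = rng.normal(size=(130, 40))                  # placeholder radiomic features per CT series
y = rng.integers(0, 2, size=130)                # placeholder binary prognosis labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, (prob > 0.5).astype(int)).ravel()
print(f"AUC={roc_auc_score(y_te, prob):.2f}  sensitivity={tp / (tp + fn):.2f}  "
      f"specificity={tn / (tn + fp):.2f}")
```
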
24 pages, 10538 KiB  
Article
Effects of Refrigerated Storage on the Physicochemical, Color and Rheological Properties of Selected Honey
by Joanna Piepiórka-Stepuk, Monika Sterczyńska, Marta Stachnik and Piotr Pawłowski
Agriculture 2025, 15(14), 1476; https://doi.org/10.3390/agriculture15141476 - 10 Jul 2025
Abstract
The paper presents a study of changes in selected physicochemical properties of honeys during their refrigerated storage at 8 ± 1 °C for 24 weeks. On the basis of the study of primary pollen, the botanical identification of the variety of honeys was made—rapeseed, multiflower and buckwheat honey. The samples were stored for 24 weeks in dark, hermetically sealed glass containers in a refrigerated chamber (8 ± 1 °C, 73 ± 2% relative humidity). The comprehensive suite of analyses comprised sugar profiling (ion chromatography), moisture content determination (refractometry), pH and acidity measurement (titration), electrical conductivity, color assessment in the CIELab system (ΔE and BI indices), texture parameters (penetration testing), rheological properties (rheometry), and microscopic evaluation of crystal morphology; all data were subjected to statistical treatment (ANOVA, Tukey’s test, Pearson correlations). The changes in these parameters were examined at 1, 2, 3, 6, 12, and 24 weeks of storage. A slight but significant increase in moisture content was observed (most pronounced in rapeseed honey), while all parameters remained within the prescribed limits and showed no signs of fermentation. The honeys’ color became markedly lighter. Already in the first weeks of storage, an increase in the L* value and elevated ΔE indices were recorded. The crystallization process proceeded in two distinct phases—initial nucleation (occurring fastest in rapeseed honey) followed by the formation of crystal agglomerates—which resulted in rising hardness and cohesion up to weeks 6–12, after which these metrics gradually declined; simultaneously, a rheological shift was noted, with viscosity increasing and the flow behavior changing from Newtonian to pseudoplastic, especially in rapeseed honey. Studies show that refrigerated storage accelerates honey crystallization, as lower temperatures promote the formation of glucose crystals. This accelerated crystallization may have practical applications in the production of creamed honey, where controlled crystal formation is essential for achieving a smooth, spreadable texture. Full article
(This article belongs to the Section Agricultural Product Quality and Safety)
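
The ΔE color index reported above is the Euclidean distance between CIELab coordinates. A small worked example (CIE76 formula, hypothetical L*a*b* readings) is given below.

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 colour difference between two CIELab triplets (L*, a*, b*)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Hypothetical readings for one honey sample before and after storage
print(round(delta_e((45.2, 8.1, 40.3), (52.7, 6.4, 38.9)), 2))  # ~7.82
```
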
21 pages, 7528 KiB  
Article
A Fine-Tuning Method via Adaptive Symmetric Fusion and Multi-Graph Aggregation for Human Pose Estimation
by Yinliang Shi, Zhaonian Liu, Bin Jiang, Tianqi Dai and Yuanfeng Lian
Symmetry 2025, 17(7), 1098; https://doi.org/10.3390/sym17071098 - 9 Jul 2025
Abstract
Human Pose Estimation (HPE) aims to accurately locate the positions of human key points in images or videos. However, the performance of HPE is often significantly reduced in practical application scenarios due to environmental interference. To address this challenge, we propose a ladder side-tuning method for the Vision Transformer (ViT) pre-trained model based on multi-path feature fusion to improve the accuracy of HPE in highly interfering environments. First, we extract the global features, frequency features and multi-scale spatial features through the ViT pre-trained model, the discrete wavelet convolutional network and the atrous spatial pyramid pooling network (ASPP). By comprehensively capturing the information of the human body and the environment, the ability of the model to analyze local details, textures, and spatial information is enhanced. In order to efficiently fuse these features, we devise an adaptive symmetric feature fusion strategy, which dynamically adjusts the intensity of feature fusion according to the similarity among features to achieve the optimal fusion effect. Finally, a multi-graph feature aggregation method is developed. We construct graph structures of different features and deeply explore the subtle differences among the features based on the dual fusion mechanism of points and edges to ensure the information integrity. The experimental results demonstrate that our method achieves 4.3% and 4.2% improvements in the AP metric on the MS COCO dataset and a custom high-interference dataset, respectively, compared with the HRNet. This highlights its superiority for human pose estimation tasks in both general and interfering environments. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Computer Vision and Graphics)
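
One of the three feature paths described above is an atrous spatial pyramid pooling (ASPP) network. A minimal PyTorch sketch of a generic ASPP block follows; the dilation rates and channel widths are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal atrous spatial pyramid pooling: parallel dilated 3x3 convs + 1x1 fusion."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

print(ASPP(64, 128)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 128, 32, 32])
```
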
19 pages, 51503 KiB  
Article
LSANet: Lightweight Super Resolution via Large Separable Kernel Attention for Edge Remote Sensing
by Tingting Yong and Xiaofang Liu
Appl. Sci. 2025, 15(13), 7497; https://doi.org/10.3390/app15137497 - 3 Jul 2025
Abstract
In recent years, remote sensing imagery has become indispensable for applications such as environmental monitoring, land use classification, and urban planning. However, the physical constraints of satellite imaging systems frequently limit the spatial resolution of these images, impeding the extraction of fine-grained information critical to downstream tasks. Super-resolution (SR) techniques thus emerge as a pivotal solution to enhance the spatial fidelity of remote sensing images via computational approaches. While deep learning-based SR methods have advanced reconstruction accuracy, their high computational complexity and large parameter counts restrict practical deployment in real-world remote sensing scenarios, particularly on edge or low-power devices. To address this gap, we propose LSANet, a lightweight SR network customized for remote sensing imagery. The core of LSANet is the large separable kernel attention mechanism, which efficiently expands the receptive field while retaining low computational overhead. By integrating this mechanism into an enhanced residual feature distillation module, the network captures long-range dependencies more effectively than traditional shallow residual blocks. Additionally, a residual feature enhancement module, leveraging contrast-aware channel attention and hierarchical skip connections, strengthens the extraction and integration of multi-level discriminative features. This design preserves fine textures and ensures smooth information propagation across the network. Extensive experiments on public datasets such as UC Merced Land Use and NWPU-RESISC45 demonstrate LSANet's competitive or superior performance compared to state-of-the-art methods. On the UC Merced Land Use dataset, LSANet achieves a PSNR of 34.33, outperforming the best baseline, HSENet (PSNR 34.23), by 0.1. For SSIM, LSANet reaches 0.9328, closely matching HSENet's 0.9332 while maintaining a good balance across metrics. On the NWPU-RESISC45 dataset, LSANet attains a PSNR of 35.02, marking a significant improvement over prior methods, and an SSIM of 0.9305, maintaining strong competitiveness. These results, combined with the notable reduction in parameters and floating-point operations, highlight the superiority of LSANet in remote sensing image super-resolution tasks. Full article
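
The PSNR and SSIM figures quoted above can be computed with scikit-image as sketched below; the reference and super-resolved patches here are synthetic placeholders.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref = np.random.rand(128, 128)                       # placeholder ground-truth patch
sr = np.clip(ref + np.random.normal(scale=0.02, size=ref.shape), 0, 1)  # placeholder SR output
print("PSNR:", round(peak_signal_noise_ratio(ref, sr, data_range=1.0), 2))
print("SSIM:", round(structural_similarity(ref, sr, data_range=1.0), 4))
```
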