Search Results (891)

Search Parameters:
Keywords = histogram difference

21 pages, 10613 KB  
Article
Dehazing of Panchromatic Remote Sensing Images Based on Histogram Features
by Hao Wang, Yalin Ding, Xiaoqin Zhou, Guoqin Yuan and Chao Sun
Remote Sens. 2025, 17(20), 3479; https://doi.org/10.3390/rs17203479 (registering DOI) - 18 Oct 2025
Abstract
During long-range imaging, the turbid medium in the atmosphere absorbs and scatters light, resulting in reduced contrast, a narrowed dynamic range, and obscured detail in remote sensing images. Prior-based methods offer good real-time performance and a wide application range; however, few existing prior-based methods are applicable to the dehazing of panchromatic images. In this paper, we propose a prior-based dehazing method for panchromatic remote sensing images built on statistical histogram features. First, the hazy image is divided into plain and mixed image patches according to their histogram features. The average occurrence differences between adjacent gray levels (AODAGs) of the plain patches and the average distance to the gray-level gravity center (ADGG) of the mixed patches are then calculated, the transmission map is obtained from the statistical relation equation, and the atmospheric light of each patch is estimated separately from its maximum gray level using threshold segmentation. Finally, the dehazed image is recovered through the physical model. Extensive experiments on synthetic and real-world panchromatic hazy remote sensing images show that the proposed algorithm outperforms state-of-the-art dehazing methods in both efficiency and dehazing effect. Full article
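
As a rough illustration of the final recovery step described in this abstract, the sketch below inverts the standard atmospheric scattering model I = J·t + A·(1 − t) for a single-band (panchromatic) image, given a transmission map and an atmospheric light estimate. The function name, the clamping threshold t0, and the uniform atmospheric light in the toy example are assumptions for illustration; the paper's AODAG/ADGG priors are not reproduced.

```python
import numpy as np

def recover_scene(hazy, transmission, atmospheric_light, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy             : 2-D float array, panchromatic hazy image in [0, 1]
    transmission     : 2-D float array, per-pixel transmission estimate
    atmospheric_light: scalar or array of per-patch atmospheric light
    t0               : lower bound on transmission to avoid amplifying noise
    """
    t = np.clip(transmission, t0, 1.0)
    dehazed = (hazy - atmospheric_light) / t + atmospheric_light
    return np.clip(dehazed, 0.0, 1.0)

# Toy usage with a synthetic hazy patch (illustrative only).
rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(64, 64))
t_map = np.full((64, 64), 0.6)
A = 0.9
hazy = scene * t_map + A * (1.0 - t_map)
restored = recover_scene(hazy, t_map, A)
print(float(np.abs(restored - scene).max()))  # ~0 when t and A are exact
```
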
29 pages, 10629 KB  
Article
Content-Adaptive Reversible Data Hiding with Multi-Stage Prediction Schemes
by Hsiang-Cheh Huang, Feng-Cheng Chang and Hong-Yi Li
Sensors 2025, 25(19), 6228; https://doi.org/10.3390/s25196228 - 8 Oct 2025
Viewed by 296
Abstract
With the proliferation of image-capturing and display-enabled IoT devices, ensuring the authenticity and integrity of visual data has become increasingly critical, especially in light of emerging cybersecurity threats and powerful generative AI tools. One of the major challenges in such sensor-based systems is the ability to protect privacy while maintaining data usability. Reversible data hiding has attracted growing attention due to its reversibility and ease of implementation, making it a viable solution for secure image communication in IoT environments. In this paper, we propose reversible data hiding techniques tailored to the content characteristics of images. Our approach leverages subsampling and quadtree partitioning, combined with multi-stage prediction schemes, to generate a predicted image aligned with the original. Secret information is embedded by analyzing the difference histogram between the original and predicted images, and enhanced through multi-round rotation techniques and a multi-level embedding strategy to boost capacity. By employing both subsampling and quadtree decomposition, the embedding strategy dynamically adapts to the inherent characteristics of the input image. Furthermore, we investigate the trade-off between embedding capacity and marked image quality. Experimental results demonstrate improved embedding performance, high visual fidelity, and low implementation complexity, highlighting the method’s suitability for resource-constrained IoT applications. Full article
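
A minimal sketch of the difference-histogram idea used in this abstract: compute the histogram of prediction errors between an original and a predicted image, then embed bits at the histogram peak by shifting. This is generic one-sided histogram shifting, not the authors' multi-stage subsampling/quadtree scheme; the function name is hypothetical and overflow/underflow handling is omitted.

```python
import numpy as np

def embed_difference_histogram(original, predicted, bits):
    """One-sided histogram-shifting embedding in the prediction-error histogram.

    Capacity equals the number of peak-valued differences; the decoder, which
    knows the peak, reads bits from values {peak, peak+1} and shifts back.
    """
    diff = original.astype(np.int32) - predicted.astype(np.int32)
    values, counts = np.unique(diff, return_counts=True)
    peak = values[np.argmax(counts)]                   # most frequent difference
    shifted = np.where(diff > peak, diff + 1, diff)    # open a gap at peak + 1
    flat = shifted.ravel()
    carriers = np.flatnonzero(flat == peak)            # pixels that can carry one bit
    payload = np.asarray(bits[:carriers.size], dtype=np.int32)
    flat[carriers[:payload.size]] += payload           # peak -> peak (0) or peak+1 (1)
    marked = predicted.astype(np.int32) + flat.reshape(diff.shape)
    return marked, peak
```
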
23 pages, 4556 KB  
Article
Radiomics-Based Detection of Germ Cell Neoplasia In Situ Using Volumetric ADC and FA Histogram Features: A Retrospective Study
by Maria-Veatriki Christodoulou, Ourania Pappa, Loukas Astrakas, Evangeli Lampri, Thanos Paliouras, Nikolaos Sofikitis, Maria I. Argyropoulou and Athina C. Tsili
Cancers 2025, 17(19), 3220; https://doi.org/10.3390/cancers17193220 - 2 Oct 2025
Viewed by 383
Abstract
Background/Objectives: Germ Cell Neoplasia In Situ (GCNIS) is considered the precursor lesion for the majority of testicular germ cell tumors (TGCTs). The aim of this study was to evaluate whether first-order radiomics features derived from volumetric diffusion tensor imaging (DTI) metrics—specifically apparent diffusion coefficient (ADC) and fractional anisotropy (FA) histogram parameters—can detect GCNIS. Methods: This study included 15 men with TGCTs and 10 controls. All participants underwent scrotal MRI, including DTI. Volumetric ADC and FA histogram metrics were calculated for the following tissues: group 1, TGCT; group 2, testicular parenchyma adjacent to tumor, histologically positive for GCNIS; and group 3, normal testis. Non-parametric statistics were used to assess differences in ADC and FA histogram parameters among the three groups. Pearson’s correlation analysis was followed by ordinal regression analysis to identify key predictive histogram parameters. Results: Widespread distributional differences (p < 0.05) were observed for many ADC and FA variables, with both TGCTs and GCNIS showing significant divergence from normal testes. Among the ADC statistics, the 10th percentile and skewness (p = 0.042), range (p = 0.023), interquartile range (p = 0.021), total energy (p = 0.033), entropy and kurtosis (p = 0.027) proved the most significant predictors for tissue classification. FA_energy (p = 0.039) was the most significant fingerprint of the carcinogenesis among the FA metrics. These parameters correctly characterized 88.8% of TGCTs, 87.5% of GCNIS tissues and 100% of normal testes. Conclusion: Radiomics features derived from volumetric ADC and FA histograms have promising potential to differentiate TGCTs, GCNIS, and normal testicular tissue, aiding early detection and characterization of pre-cancerous lesions. Full article
(This article belongs to the Special Issue Updates on Imaging of Common Urogenital Neoplasms 2nd Edition)
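
The first-order histogram descriptors named in this abstract (percentiles, skewness, kurtosis, range, interquartile range, total energy, entropy) can be computed directly from a voxel-value array. The sketch below, using NumPy and SciPy, is a generic illustration under assumed bin counts and feature names, not the study's radiomics pipeline.

```python
import numpy as np
from scipy import stats

def first_order_histogram_features(voxels, n_bins=64):
    """First-order (histogram) statistics of a volumetric ROI, e.g. ADC or FA values."""
    x = np.asarray(voxels, dtype=float).ravel()
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                                   # drop empty bins for the entropy term
    return {
        "p10": np.percentile(x, 10),
        "skewness": stats.skew(x),
        "kurtosis": stats.kurtosis(x),
        "range": x.max() - x.min(),
        "iqr": np.percentile(x, 75) - np.percentile(x, 25),
        "total_energy": np.sum(x ** 2),
        "entropy": -np.sum(p * np.log2(p)),
    }

# Example on synthetic "ADC-like" values (illustrative only).
rng = np.random.default_rng(1)
print(first_order_histogram_features(rng.normal(1.1e-3, 2e-4, size=5000)))
```
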
12 pages, 4050 KB  
Article
Low Radiation Doses to Gross Tumor Volume in Metabolism Guided Lattice Irradiation Based on Lattice-01 Study: Dosimetric Evaluation and Potential Clinical Research Implication
by Giuseppe Iatì, Giacomo Ferrantelli, Stefano Pergolizzi, Gianluca Ferini, Valeria Venuti, Federico Chillari, Miriam Sciacca, Valentina Zagardo, Carmelo Siragusa, Anna Santacaterina, Anna Brogna and Silvana Parisi
J. Pers. Med. 2025, 15(10), 470; https://doi.org/10.3390/jpm15100470 - 2 Oct 2025
Viewed by 284
Abstract
Purpose: This paper aims to quantify the portion of the gross tumor volume (GTV) receiving low radiation doses in patients undergoing “metabolism-guided” lattice radiation therapy and to discuss the possible implications for clinical outcomes. Material and Methods: We reviewed plans for treating voluminous masses via “metabolism-guided” LATTICE-01 irradiation. The aim was to deliver high-dose radiation to spherical deposits (vertices) within a bulky tumor mass, placed at tumor areas with differing PET metabolism. We evaluated the relationships between GTV volumes and dose-volume histogram metrics (mean, maximum, minimum, and the percentage of GTV receiving 0.5, 1, 2, and 3 Gy). Results: Sixty-two treatment plans met the established inclusion criteria. The median GTV volume was 315.9 cc (range = 10.54–2605.9 cc). A median of two vertices was allocated within the GTVs (range 1–9), and these were planned to receive a dose of ≥10 Gy in one fraction (median 12 Gy, range 10–15 Gy). The median V3Gy was 51.58% (range 2–100%), median V2Gy was 67.80% (range 1.60–100%), median V1Gy was 83.70% (range 0.80–100%), and median V0.5Gy was 88.49% (range 17.60–100%). Conclusions: In the present series, we performed a dosimetric evaluation of the GTV volume exposed to low doses during metabolism-guided lattice irradiation. Combining high- and low-dose radiotherapy via a spatially fractionated (LATTICE) approach could reactivate the immune system against cancer cells. These observations could be useful for planning prospective studies on immunotherapy combined with the lattice technique. Full article
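
The VxGy percentages reported in this abstract come from a cumulative dose-volume histogram: the fraction of GTV voxels receiving at least x Gy. Below is a minimal sketch under the assumption of equal voxel volumes; it is not the treatment planning system's calculation, and the toy dose distribution is invented for illustration.

```python
import numpy as np

def v_at_dose(doses_gy, thresholds_gy=(0.5, 1.0, 2.0, 3.0)):
    """Percentage of a structure (e.g. the GTV) receiving at least each dose level.

    doses_gy: 1-D array of per-voxel doses inside the structure, in Gy.
    Assumes equal voxel volumes, so volume fraction == voxel fraction.
    """
    d = np.asarray(doses_gy, dtype=float)
    return {f"V{t:g}Gy": 100.0 * np.mean(d >= t) for t in thresholds_gy}

# Toy example (illustrative only): a GTV with a low-dose bath and high-dose vertices.
rng = np.random.default_rng(2)
gtv_doses = np.concatenate([rng.uniform(0.0, 4.0, 9000),    # low-dose region
                            rng.uniform(10.0, 15.0, 1000)])  # vertex region
print(v_at_dose(gtv_doses))
```
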
25 pages, 14971 KB  
Article
Targeting Anti-Apoptotic Bcl-2 Proteins with Triterpene-Heterocyclic Derivatives: A Combined Dual Docking and Molecular Dynamics Study
by Marius Mioc, Silvia Gruin, Armand Gogulescu, Oana Bătrîna, Mihaela Jorgovan, Bogdan-Ionuț Mara and Codruța Șoica
Molecules 2025, 30(19), 3919; https://doi.org/10.3390/molecules30193919 - 29 Sep 2025
Viewed by 343
Abstract
Anti-apoptotic Bcl-2 family proteins (Bcl-2, Bcl-xL, and Mcl-1) are often overexpressed in cancer, which aids tumor growth and treatment resistance. As a result, these proteins are excellent candidates for novel anticancer drugs. Within this study, a virtual library of betuline derivatives was built and screened for possible Bcl-2, Bcl-xL, and Mcl-1 inhibitors. For every target, molecular docking simulations were performed using two different engines (AutoDock Vina and Glide). The ligands that most frequently appeared among the top candidates were shortlisted after comparing the top-20 hits from both docking scoring functions. To assess binding stability, five of these promising compounds were chosen and run through 100 ns molecular dynamics (MD) simulations in complex with every target protein. Key persistent intermolecular contacts were identified from MD contact frequency histograms, and stability was evaluated using root-mean-square deviation (RMSD) profiles of protein–ligand complexes following equilibration. Additionally, Prime MM-GBSA binding energies (ΔG_bind) for the 15 docked complexes were computed, and ligand efficiency was reported. Two substances, BOxNaf1 and BT3, stood out among the screened derivatives as the most stable binders to all three Bcl-2 family targets according to the dual docking and MD analysis approach. When the MM-GBSA and RMSF/rGyr data are considered alongside docking and MD stability, BOxNaf1 and BOxPhCl1 emerge as the most compelling dual/multi-target candidates, whereas BT3, though MD stable, shows weaker MM-GBSA energetics and is retained as a lower-priority backup chemotype. Full article
(This article belongs to the Special Issue Molecular Docking in Drug Discovery, 2nd Edition)
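
A bare-bones illustration of one analysis step mentioned in this abstract: computing an RMSD profile of complex coordinates against a post-equilibration reference frame. Coordinates are assumed to be already superposed onto the reference; this is not the authors' docking/MD workflow, and the array shapes and toy data are assumptions.

```python
import numpy as np

def rmsd_profile(trajectory, reference):
    """RMSD of each trajectory frame against a reference structure.

    trajectory: array of shape (n_frames, n_atoms, 3), already superposed
    reference : array of shape (n_atoms, 3)
    """
    diff = trajectory - reference[None, :, :]
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=2), axis=1))

# Toy example (illustrative only): small random fluctuations around a reference.
rng = np.random.default_rng(3)
ref = rng.normal(size=(50, 3))
traj = ref[None, :, :] + rng.normal(scale=0.2, size=(200, 50, 3))
print(rmsd_profile(traj, ref)[:5])   # values near 0.2 * sqrt(3)
```
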
14 pages, 2563 KB  
Article
Attempting to Determine the Relationship of Mandibular Third Molars to the Mandibular Canal on Digital Panoramic Radiography; Using CBCT as Gold Standard
by Hilal Isra Erkan, Osman Yalcin, Umut Pamukcu and Kahraman Gungor
Fractal Fract. 2025, 9(9), 612; https://doi.org/10.3390/fractalfract9090612 - 22 Sep 2025
Viewed by 411
Abstract
(1) Background: Knowing the radiological relationship of mandibular third molars (M3s) to the mandibular canal is important for minimizing postoperative complications caused by damage to the inferior alveolar vessels and nerve during extraction. This study aimed to evaluate the usability of various image analyses and high-risk radiographic findings in determining the relationship of M3s to the mandibular canal on Digital Panoramic Radiography (DPR). (2) Methods: DPRs of 60 patients with bilateral mandibular M3s in the dental arch, in whom one M3 was determined by Cone Beam Computed Tomography (CBCT) to be related to the mandibular canal unilaterally, were included. The high-risk radiological signs of M3s and the Fractal Analysis (FA) and Histogram Analysis (HA) measurements of the trabecular bone around the M3s’ roots were compared. The independent t-test, Kolmogorov–Smirnov, Mann–Whitney U, and chi-square tests were used for statistical analyses. (3) Results: DPR signs such as radiolucency and bifurcation at the root apex, discontinuity of the mandibular canal cortex, and superimposition of the tooth root and mandibular canal were observed significantly more frequently for mandibular canal-related M3s (p < 0.05). As an objective image analysis, lacunarity showed a statistically significant difference between related and unrelated M3s for measurements made inside and outside the mandibular canal (p < 0.05). (4) Conclusions: This study demonstrated that discontinuity of the mandibular canal cortex and lacunarity measured on DPR could help determine the relationship of mandibular M3s to the mandibular canal. Full article
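
Of the objective analyses compared in this abstract, lacunarity is typically computed with a gliding-box approach on a binarized trabecular pattern. The sketch below shows that generic computation; the binarization method, box sizes, and ROI choice are assumptions and do not reproduce the authors' protocol.

```python
import numpy as np

def gliding_box_lacunarity(binary_roi, box_size):
    """Gliding-box lacunarity: E[M^2] / E[M]^2 over the masses M of all r x r boxes."""
    img = np.asarray(binary_roi, dtype=float)
    # Summed-area table with a leading row/column of zeros for O(1) box sums.
    ii = np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    r = box_size
    masses = (ii[r:, r:] - ii[:-r, r:] - ii[r:, :-r] + ii[:-r, :-r]).ravel()
    mean_mass = masses.mean()
    return np.mean(masses ** 2) / mean_mass ** 2 if mean_mass > 0 else np.nan

# Toy example (illustrative only): binarize a synthetic ROI at its mean gray level.
rng = np.random.default_rng(4)
roi = rng.integers(60, 200, size=(64, 64))
binary = (roi > roi.mean()).astype(float)
print([round(gliding_box_lacunarity(binary, r), 3) for r in (2, 4, 8, 16)])
```
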
26 pages, 18433 KB  
Article
Integrating Elevation Frequency Histogram and Multi-Feature Gaussian Mixture Model for Ground Filtering of UAV LiDAR Point Clouds in Densely Vegetated Areas
by Chuanxin Liu, Hongtao Wang, Baokun Feng, Cheng Wang, Xiangda Lei and Jianyang Chang
Remote Sens. 2025, 17(18), 3261; https://doi.org/10.3390/rs17183261 - 21 Sep 2025
Viewed by 461
Abstract
Unmanned aerial vehicle (UAV)-based light detection and ranging (LiDAR) technology enables the acquisition of high-precision three-dimensional point clouds of the Earth’s surface. These data serve as a fundamental input for applications such as digital terrain model (DTM) construction and terrain analysis. Nevertheless, accurately extracting ground points in densely vegetated areas remains challenging. This study proposes a point cloud filtering method for the separation of ground points by integrating elevation frequency histograms and a multi-feature Gaussian mixture model (GMM). First, local elevation frequency histograms are employed to estimate the elevation range for the coarse identification of ground points. Then, a GMM is applied to refine the ground segmentation by integrating geometric features, intensity, and spectral information represented by the green leaf index (GLI). Finally, the Mahalanobis distance is introduced to optimize the segmentation result, thereby improving the overall stability and robustness of the method in complex terrain and vegetated environments. The proposed method was validated on three study areas with different vegetation cover and terrain conditions, achieving an average overall accuracy (OA) of 94.14%, a ground-point IoU (IoUg) of 88.45%, a non-ground-point IoU (IoUng) of 88.35%, and an F1-score of 93.85%. Compared to existing ground filtering algorithms (e.g., CSF, SBF, and PMF), the proposed method performs well in all study areas, highlighting its robustness and effectiveness in complex environments, especially in areas densely covered by low vegetation. Full article
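
A highly simplified sketch of the two stages outlined in this abstract: a per-cell elevation-frequency histogram to derive a coarse ground height, followed by a two-component Gaussian mixture over simple per-point features to split ground from non-ground. The grid size, feature choice, and two-component assumption are placeholders; the paper's GLI feature construction and Mahalanobis-distance optimization are not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ground_filter(xyz, extra_features=None, cell=5.0, bins=20):
    """Coarse ground height per grid cell from an elevation-frequency histogram,
    then a 2-component GMM on per-point features for ground / non-ground labels."""
    ix = np.floor(xyz[:, 0] / cell).astype(int)
    iy = np.floor(xyz[:, 1] / cell).astype(int)
    height_above_ground = np.zeros(len(xyz))
    for key in set(zip(ix, iy)):                   # loop over occupied cells (sketch only)
        sel = (ix == key[0]) & (iy == key[1])
        z = xyz[sel, 2]
        counts, edges = np.histogram(z, bins=bins)
        k = np.argmax(counts)                      # dominant elevation bin, taken as ground here
        z_ground = 0.5 * (edges[k] + edges[k + 1])
        height_above_ground[sel] = z - z_ground
    feats = height_above_ground[:, None]
    if extra_features is not None:                 # e.g. intensity or a greenness index
        feats = np.hstack([feats, extra_features])
    gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
    labels = gmm.predict(feats)
    ground_label = np.argmin(gmm.means_[:, 0])     # component with the lower relative height
    return labels == ground_label
```
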
16 pages, 1247 KB  
Article
Non-Invasive Retinal Pathology Assessment Using Haralick-Based Vascular Texture and Global Fundus Color Distribution Analysis
by Ouafa Sijilmassi
J. Imaging 2025, 11(9), 321; https://doi.org/10.3390/jimaging11090321 - 19 Sep 2025
Viewed by 361
Abstract
This study analyzes retinal fundus images to distinguish healthy retinas from those affected by diabetic retinopathy (DR) and glaucoma using a dual-framework approach: vascular texture analysis and global color distribution analysis. The texture-based approach involved segmenting the retinal vasculature and extracting eight Haralick texture features from the Gray-Level Co-occurrence Matrix. Significant differences in features such as energy, contrast, correlation, and entropy were found between healthy and pathological retinas. Pathological retinas exhibited lower textural complexity and higher uniformity, which correlates with vascular thinning and structural changes observed in DR and glaucoma. In parallel, the global color distribution of the full fundus area was analyzed without segmentation. RGB intensity histograms were calculated for each channel and averaged across groups. Statistical tests revealed significant differences, particularly in the green and blue channels. The Mahalanobis distance quantified the separability of the groups per channel. These results indicate that pathological changes in retinal tissue can also lead to detectable chromatic shifts in the fundus. The findings underscore the potential of both vascular texture and color features as non-invasive biomarkers for early retinal disease detection and classification. Full article
(This article belongs to the Special Issue Emerging Technologies for Less Invasive Diagnostic Imaging)
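
The texture half of the analysis in this abstract rests on Gray-Level Co-occurrence Matrix (GLCM) statistics, and the color half on per-channel intensity histograms. A compact sketch with scikit-image and NumPy follows; the distance/angle choices, the feature subset, and the toy input are assumptions, and the vessel segmentation step is omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8, distances=(1,), angles=(0.0, np.pi / 2)):
    """A few Haralick-type GLCM features from an 8-bit grayscale region."""
    glcm = graycomatrix(gray_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {prop: float(graycoprops(glcm, prop).mean())
             for prop in ("energy", "contrast", "correlation", "homogeneity")}
    # Entropy is averaged over the (distance, angle) planes by hand.
    entropies = []
    for d in range(glcm.shape[2]):
        for a in range(glcm.shape[3]):
            p = glcm[:, :, d, a]
            p = p[p > 0]
            entropies.append(-np.sum(p * np.log2(p)))
    feats["entropy"] = float(np.mean(entropies))
    return feats

def channel_histograms(rgb_u8, bins=256):
    """Per-channel intensity histograms of a fundus image, normalized to densities."""
    return {c: np.histogram(rgb_u8[..., i], bins=bins, range=(0, 256), density=True)[0]
            for i, c in enumerate("RGB")}

# Toy example (illustrative only).
rng = np.random.default_rng(8)
print(glcm_features(rng.integers(0, 256, size=(64, 64), dtype=np.uint8)))
```
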
25 pages, 12760 KB  
Article
Intelligent Face Recognition: Comprehensive Feature Extraction Methods for Holistic Face Analysis and Modalities
by Thoalfeqar G. Jarullah, Ahmad Saeed Mohammad, Musab T. S. Al-Kaltakchi and Jabir Alshehabi Al-Ani
Signals 2025, 6(3), 49; https://doi.org/10.3390/signals6030049 - 19 Sep 2025
Viewed by 786
Abstract
Face recognition technology utilizes unique facial features to analyze and compare individuals for identification and verification purposes. This technology is crucial for several reasons, such as improving security and authentication, effectively verifying identities, providing personalized user experiences, and automating various operations, including attendance monitoring, access management, and law enforcement activities. In this paper, comprehensive evaluations are conducted using different face detection and modality segmentation methods, feature extraction methods, and classifiers to improve system performance. As for face detection, four methods are proposed: OpenCV’s Haar Cascade classifier, Dlib’s HOG + SVM frontal face detector, Dlib’s CNN face detector, and Mediapipe’s face detector. Additionally, two types of feature extraction techniques are proposed: hand-crafted features (traditional methods: global and local features) and deep learning features. Three global features were extracted, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Global Image Structure (GIST). Likewise, the following local feature methods are utilized: Local Binary Pattern (LBP), Weber local descriptor (WLD), and Histogram of Oriented Gradients (HOG). On the other hand, the deep learning-based features fall into two categories: convolutional neural networks (CNNs), including VGG16, VGG19, and VGG-Face, and Siamese neural networks (SNNs), which generate face embeddings. For classification, three methods are employed: Support Vector Machine (SVM), a one-class SVM variant, and Multilayer Perceptron (MLP). The system is evaluated on three datasets: in-house, Labelled Faces in the Wild (LFW), and the Pins dataset (sourced from Pinterest), providing comprehensive benchmark comparisons for facial recognition research. The best performance accuracy for the proposed ten-feature extraction methods applied to the in-house database in the context of the facial recognition task achieved 99.8% accuracy by using the VGG16 model combined with the SVM classifier. Full article
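
Two of the hand-crafted local descriptors listed in this abstract, LBP histograms and HOG, can be extracted with scikit-image as in the sketch below. Parameter values and the feature-concatenation strategy are assumptions; the detection, deep-feature, and classifier stages are not shown.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def lbp_histogram(gray, n_points=8, radius=1):
    """Uniform LBP codes summarized as a normalized histogram feature vector."""
    codes = local_binary_pattern(gray, n_points, radius, method="uniform")
    n_bins = n_points + 2                      # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def hog_descriptor(gray):
    """Histogram of Oriented Gradients descriptor of a face crop."""
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def handcrafted_features(gray_face):
    """Concatenated hand-crafted feature vector for one already-detected face crop."""
    return np.concatenate([lbp_histogram(gray_face), hog_descriptor(gray_face)])

# Toy usage on a random 128x128 "face crop" (illustrative only).
rng = np.random.default_rng(7)
face = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
print(handcrafted_features(face).shape)
```
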
31 pages, 8445 KB  
Article
HIRD-Net: An Explainable CNN-Based Framework with Attention Mechanism for Diabetic Retinopathy Diagnosis Using CLAHE-D-DoG Enhanced Fundus Images
by Muhammad Hassaan Ashraf, Muhammad Nabeel Mehmood, Musharif Ahmed, Dildar Hussain, Jawad Khan, Younhyun Jung, Mohammed Zakariah and Deema Mohammed AlSekait
Life 2025, 15(9), 1411; https://doi.org/10.3390/life15091411 - 8 Sep 2025
Viewed by 886
Abstract
Diabetic Retinopathy (DR) is a leading cause of vision impairment globally, underscoring the need for accurate and early diagnosis to prevent disease progression. Although fundus imaging serves as a cornerstone of Computer-Aided Diagnosis (CAD) systems, several challenges persist, including lesion scale variability, blurry morphological patterns, inter-class imbalance, limited labeled datasets, and computational inefficiencies. To address these issues, this study proposes an end-to-end diagnostic framework that integrates an enhanced preprocessing pipeline with a novel deep learning architecture, Hierarchical-Inception-Residual-Dense Network (HIRD-Net). The preprocessing stage combines Contrast Limited Adaptive Histogram Equalization (CLAHE) with Dilated Difference of Gaussian (D-DoG) filtering to improve image contrast and highlight fine-grained retinal structures. HIRD-Net features a hierarchical feature fusion stem alongside multiscale, multilevel inception-residual-dense blocks for robust representation learning. The Squeeze-and-Excitation Channel Attention (SECA) is introduced before each Global Average Pooling (GAP) layer to refine the Feature Maps (FMs). It further incorporates four GAP layers for multi-scale semantic aggregation, employs the Hard-Swish activation to enhance gradient flow, and utilizes the Focal Loss function to mitigate class imbalance issues. Experimental results on the IDRiD-APTOS2019, DDR, and EyePACS datasets demonstrate that the proposed framework achieves 93.46%, 82.45% and 79.94% overall classification accuracy using only 4.8 million parameters, highlighting its strong generalization capability and computational efficiency. Furthermore, to ensure transparent predictions, an Explainable AI (XAI) approach known as Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to visualize HIRD-Net’s decision-making process. Full article
(This article belongs to the Special Issue Advanced Machine Learning for Disease Prediction and Prevention)
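
As a rough approximation of the preprocessing stage described in this abstract, the sketch below applies OpenCV's CLAHE and then a plain difference-of-Gaussians band-pass. The dilated D-DoG variant, the channel choice, the clip limit, and the sigma values used in the paper are not reproduced, so treat all parameters here as placeholders.

```python
import cv2
import numpy as np

def clahe_dog_preprocess(fundus_bgr, clip_limit=2.0, tiles=8,
                         sigma_fine=1.0, sigma_coarse=2.0):
    """CLAHE contrast enhancement followed by a difference-of-Gaussians band-pass.

    Operates on the green channel, which usually carries the most retinal detail.
    """
    green = fundus_bgr[:, :, 1]
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tiles, tiles))
    enhanced = clahe.apply(green).astype(np.float32)
    fine = cv2.GaussianBlur(enhanced, (0, 0), sigma_fine)
    coarse = cv2.GaussianBlur(enhanced, (0, 0), sigma_coarse)
    dog = fine - coarse                         # band-pass highlighting fine structures
    dog = cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX)
    return dog.astype(np.uint8)

# e.g. out = clahe_dog_preprocess(cv2.imread("fundus.jpg"))  # any 8-bit BGR fundus image
```
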
19 pages, 7987 KB  
Article
A Local Thresholding Algorithm for Image Segmentation by Using Gradient Orientation Histogram
by Lijie Dong, Kailong Zhang, Mingyue He, Shenxin Zhong and Congjie Ou
Appl. Sci. 2025, 15(17), 9808; https://doi.org/10.3390/app15179808 - 7 Sep 2025
Viewed by 718
Abstract
This paper proposes a new local thresholding method to further explore the relationship between gradients and image patterns. In most studies, the gradient orientation histogram is simply divided into K bins of equal width in angular space. Such empirical approaches may not fully capture the correlation information between pixels. In this paper, a variance-based criterion is applied to the gradient orientation histogram to cluster pixels into subsets spanning different angular intervals. Analyzing each subset, whose pixels share similar common patterns, helps achieve the optimal thresholds for image segmentation. For assessment, the proposed algorithm is compared with other 1-D and 2-D histogram-based thresholding methods, as well as hybrid local–global thresholding methods. The results show that the proposed algorithm effectively recognizes the common features of images belonging to the same category and maintains stable performance as the number of thresholds increases. Furthermore, its processing time is competitive with those of other algorithms, indicating potential for application in real-time scenes. Full article
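
The starting point of the method above, a gradient orientation histogram, can be computed in a few lines. The sketch below shows the conventional equal-width K-bin version that the paper sets out to improve on; the magnitude weighting and bin count are assumptions.

```python
import numpy as np

def gradient_orientation_histogram(gray, k_bins=36, weight_by_magnitude=True):
    """Histogram of gradient orientations over [0, 2*pi), with K equal-width bins."""
    gray = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(gray)
    theta = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)        # orientation of each pixel
    magnitude = np.hypot(gx, gy)
    weights = magnitude.ravel() if weight_by_magnitude else None
    hist, edges = np.histogram(theta.ravel(), bins=k_bins,
                               range=(0.0, 2.0 * np.pi), weights=weights)
    return hist, edges

# Toy example (illustrative only): a vertical step edge yields a dominant orientation bin.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
hist, _ = gradient_orientation_histogram(img)
print(int(np.argmax(hist)))
```
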
22 pages, 3492 KB  
Article
Comparison and Competition of Traditional and Visualized Secondary Mathematics Education Approaches: Random Sampling and Mathematical Models Under Neural Network Approach
by Lei Zhang
Mathematics 2025, 13(17), 2793; https://doi.org/10.3390/math13172793 - 30 Aug 2025
Viewed by 439
Abstract
Graphic design and image processing play a vital role in information technologies and in safe, memorable learning activities, meeting the need for modern visual aids in education. In this article, the concepts of comparison and competition are examined using the grades obtained by two classes of students with different intelligence quotient (IQ) levels. The two classes are categorized as learners using textual (un-visualized) aids and learners using visualized aids. The random sampling results of the two classes are used as parameters in four different, competitive, two-compartmental mathematical models, with one compartment for students who learn only through textual material and the other for students who have access to visualized text resources. The four mathematical models were solved numerically, and grades were obtained over different iterations using the means of random sampling tests taken over thirty months, with each sampling involving thirty students. The same data were also fitted with a neural network approach, yielding fitting curves for all the data, the training data, the validation data, and the testing data, together with histogram, regression, mean square error, and absolute error analyses. The obtained dynamics are also compared with the neural network dynamics. The results indicate that the best outcomes (reflected in higher grades) were obtained by the visual-aid learners, as compared to textual, conventional learners. Visualized resources constructed within the mathematics syllabus domain may help to upgrade multidimensional mathematical education and the learning activities of intermediate-level students. The findings of the present study are therefore helpful for education policymakers, pointing toward a focus on visual-based learning informed by data from various surveys, profile checks, and questionnaires. Furthermore, the techniques presented in this article will be beneficial for those seeking to build a better understanding of the various methods and ideas related to mathematics education. Full article
(This article belongs to the Special Issue Advances in Nonlinear Analysis: Theory, Methods and Applications)
25 pages, 9720 KB  
Article
ICESat-2 Water Photon Denoising and Water Level Extraction Method Combining Elevation Difference Exponential Attenuation Model with Hough Transform
by Xilai Ju, Yongjian Li, Song Ji, Danchao Gong, Hao Liu, Zhen Yan, Xining Liu and Hao Niu
Remote Sens. 2025, 17(16), 2885; https://doi.org/10.3390/rs17162885 - 19 Aug 2025
Viewed by 650
Abstract
To address the technical challenges of photon denoising and water level extraction in ICESat-2 satellite-based water monitoring applications, this paper proposes an innovative solution integrating Gaussian function fitting with the Hough transform. The method first employs histogram Gaussian fitting to achieve coarse denoising of water body regions. Subsequently, a probability attenuation model based on elevation differences between adjacent photons is constructed to accomplish refined denoising through iterative optimization of adaptive thresholds. Building upon this foundation, the Hough transform technique from image processing is introduced into photon cloud processing, enabling robust water level extraction from ICESat-2 data. Through rasterization, discrete photon distributions are converted into image space, where straight lines conforming to the photon distribution are then mapped as intersection points of sinusoidal curves in Hough space. Leveraging the noise-resistant characteristics of the Hough space accumulator, the interference from residual noise photons is effectively eliminated, thereby achieving high-precision water level line extraction. Experiments were conducted across five typical water bodies (Qinghai Lake, Long Land, Ganquan Island, Qilian Yu Islands, and Miyun Reservoir). The results demonstrate that the proposed denoising method outperforms the DBSCAN and OPTICS algorithms in terms of accuracy, precision, recall, F1-score, and computational efficiency. In water level estimation, the absolute error of the Hough transform-based line detection method remains below 2 cm, significantly surpassing the performance of mean value, median value, and RANSAC algorithms. This study provides a novel technical framework for effective global water level monitoring. Full article
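
A stripped-down sketch of the Hough step described in this abstract: rasterize (along-track distance, elevation) photon pairs into a binary grid and detect the dominant near-horizontal line with scikit-image's Hough transform. The grid resolution, peak selection, and toy photon cloud are assumptions; the Gaussian-fit coarse denoising and the elevation-difference attenuation model are not reproduced.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def water_level_from_photons(along_track_m, elevation_m, cell=(1.0, 0.02)):
    """Rasterize photons and return the elevation of the strongest line.

    Assumes the dominant line is near-horizontal (a flat water surface).
    """
    x, z = np.asarray(along_track_m), np.asarray(elevation_m)
    xi = ((x - x.min()) / cell[0]).astype(int)
    zi = ((z - z.min()) / cell[1]).astype(int)
    grid = np.zeros((zi.max() + 1, xi.max() + 1), dtype=bool)
    grid[zi, xi] = True
    hspace, angles, dists = hough_line(grid)
    _, best_angles, best_dists = hough_line_peaks(hspace, angles, dists, num_peaks=1)
    # For a near-horizontal line, rho / sin(theta) is the row of the water surface.
    row = best_dists[0] / np.sin(best_angles[0])
    return z.min() + row * cell[1]

# Toy example (illustrative only): a flat water surface plus sparse noise photons.
rng = np.random.default_rng(5)
x = np.concatenate([np.linspace(0, 500, 2000), rng.uniform(0, 500, 200)])
z = np.concatenate([np.full(2000, 12.34) + rng.normal(0, 0.01, 2000),
                    rng.uniform(10, 15, 200)])
print(round(water_level_from_photons(x, z), 2))
```
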
16 pages, 1734 KB  
Article
Image Encryption Using Chaotic Maps: Development, Application, and Analysis
by Alexandru Dinu and Madalin Frunzete
Mathematics 2025, 13(16), 2588; https://doi.org/10.3390/math13162588 - 13 Aug 2025
Cited by 2 | Viewed by 991
Abstract
Image encryption plays a critical role in ensuring the confidentiality and integrity of visual information, particularly in applications involving secure transmission and storage. While traditional cryptographic algorithms like AES are widely used, they may not fully exploit the properties of image data, such as high redundancy and spatial correlation. In recent years, chaotic systems have emerged as promising candidates for lightweight and secure encryption schemes, but comprehensive comparisons between different chaotic maps and standardized methods are still lacking. This study investigates the use of three classical chaotic systems—Henon, tent, and logistic maps—for image encryption, and evaluates their performance both visually and statistically. The research is motivated by the need to assess whether these well-known chaotic systems, when used with proper statistical sampling, can match or surpass conventional methods in terms of encryption robustness and complexity. We propose a key generation method based on chaotic iterations, statistically filtered for independence, and apply it to a one-time-pad-like encryption scheme. The encryption quality is validated over a dataset of 100 JPEG images of size 512×512, using multiple evaluation metrics, including MSE, PSNR, NPCR, EQ, and UACI. Results are benchmarked against the AES algorithm to ensure interpretability and reproducibility. Our findings reveal that while the AES algorithm remains the fastest and most uniform in histogram flattening, certain chaotic systems, such as the tent and logistic maps, offer comparable or superior results in visual encryption quality and pixel-level unpredictability. The analysis highlights that visual encryption performance does not always align with statistical metrics, underlining the importance of multi-faceted evaluation. These results contribute to the growing body of research in chaos-based image encryption and provide practical guidelines for selecting encryption schemes tailored to specific application requirements, such as efficiency, visual secrecy, or implementation simplicity. Full article
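
A minimal sketch of the kind of chaos-based stream cipher evaluated in this abstract: a logistic-map keystream XORed with the image bytes in a one-time-pad-like fashion. The parameter values, burn-in length, and byte quantization are illustrative, and the paper's statistical-independence filtering of the key stream is not reproduced.

```python
import numpy as np

def logistic_keystream(length, x0=0.4123, r=3.99, burn_in=1000):
    """Byte keystream from logistic-map iterations x_{n+1} = r * x_n * (1 - x_n)."""
    x = x0
    for _ in range(burn_in):                     # discard the transient iterations
        x = r * x * (1.0 - x)
    ks = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256               # quantize the chaotic state to a byte
    return ks

def xor_encrypt(image_u8, x0=0.4123, r=3.99):
    """XOR the flattened image with the keystream; applying it twice decrypts."""
    flat = image_u8.ravel()
    ks = logistic_keystream(flat.size, x0=x0, r=r)
    return np.bitwise_xor(flat, ks).reshape(image_u8.shape)

# Round-trip check on a random 8-bit "image" (illustrative only).
rng = np.random.default_rng(6)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cipher = xor_encrypt(img)
assert np.array_equal(xor_encrypt(cipher), img)
```
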
21 pages, 1837 KB  
Article
Learning Data Heterogeneity with Dirichlet Diffusion Trees
by Shuning Huo and Hongxiao Zhu
Mathematics 2025, 13(16), 2568; https://doi.org/10.3390/math13162568 - 11 Aug 2025
Viewed by 371
Abstract
Characterizing complex heterogeneous structures in high-dimensional data remains a significant challenge. Traditional approaches often rely on summary statistics such as histograms, skewness, or kurtosis, which—despite their simplicity—are insufficient for capturing nuanced patterns of heterogeneity. Motivated by a brain tumor study, we consider data in the form of point clouds, where each observation consists of a variable number of points. Our goal is to detect differences in the heterogeneity structures across distinct groups of observations. To this end, we employ the Dirichlet Diffusion Tree (DDT) to characterize the latent heterogeneity structure of each observation. We further extend the DDT framework by introducing a regression component that links covariates to the hyperparameters of the latent trees. We develop a Markov chain Monte Carlo algorithm for posterior inference, which alternately updates the latent tree structures and the regression coefficients. The effectiveness of our proposed method is evaluated through a simulation study and a real-world application in brain tumor imaging. Full article
(This article belongs to the Special Issue Statistical Theory and Application, 2nd Edition)