Search Results (1,928)

Search Parameters:
Keywords = graphic applications

28 pages, 10243 KB  
Article
Development of a Predictive Tool for Real Estate Analysis Using Machine Learning Techniques
by Ricardo Francisco Reier Forradellas and Gregorio Acedo Benítez
Int. J. Financial Stud. 2026, 14(5), 130; https://doi.org/10.3390/ijfs14050130 - 11 May 2026
Abstract
The real estate market is a complex and dynamic sector that plays a key role in economic stability and wealth generation. In many regions, real estate assets represent around 80% of household wealth, while rising housing prices have turned access to housing into a major social and economic challenge. In this context, the availability of accurate and accessible information is essential for decision-making by buyers, investors, and public administrations. This study proposes the development of an advanced technological tool based on Artificial Intelligence and Machine Learning techniques to predict and analyze real estate market dynamics within a specific geographic area. Using the city of Madrid as a case study, the research presents a digital application capable of estimating the market value of a property by analyzing comparable recently sold properties and incorporating key housing characteristics. By entering an address and a set of property features, the system generates a precise and data-driven valuation. The results demonstrate that AI-based approaches can significantly improve the accuracy and accessibility of real estate valuation processes. The proposed methodology enables real-time price estimation, graphical comparisons, and dynamic market analysis. Furthermore, the framework is scalable and can be extended to other geographic areas where relevant data are available, providing valuable insights for both academic research and practical decision-making in the real estate sector.
(This article belongs to the Special Issue Machine Learning Applications in Computational Finance)
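The comparable-sales estimation the abstract describes can be sketched as follows. The feature set, normalization constants, and choice of k below are hypothetical illustrations, not the authors' implementation:

```python
import math

def estimate_price(target, comparables, k=3):
    """Estimate a property's market value as the average price per square
    meter of its k most similar recently sold comparables (hypothetical
    feature set; a sketch, not the paper's trained model)."""
    def distance(a, b):
        # Euclidean distance over crudely normalized features: size (m2),
        # room count, and straight-line distance to the target (km).
        return math.sqrt(
            ((a["size_m2"] - b["size_m2"]) / 50.0) ** 2
            + (a["rooms"] - b["rooms"]) ** 2
            + (a["dist_km"] - b["dist_km"]) ** 2
        )
    nearest = sorted(comparables, key=lambda c: distance(target, c))[:k]
    eur_per_m2 = sum(c["price"] / c["size_m2"] for c in nearest) / len(nearest)
    return eur_per_m2 * target["size_m2"]
```

A production system would replace the hand-tuned distance with a learned regressor, but the comparable-lookup structure is the same.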

32 pages, 1357 KB  
Article
Solving Geometry Problems: A Text–Formula–Image Multimodal Parsing and Fusion Model
by Pengpeng Jian, Zongxiang Song, Ting Song and Yanli Wang
Symmetry 2026, 18(5), 821; https://doi.org/10.3390/sym18050821 (registering DOI) - 10 May 2026
Abstract
Solving geometry problems is a critical challenge in education, for it demands the integration of textual semantic descriptions, mathematical formula logic and spatial graphical information, as well as rigorous geometric theorem application and stepwise logical deduction. These are core capabilities that underpin the realization of personalized intelligent tutoring and efficient educational resource allocation. Traditional geometry problem solving methods often suffer from deficiencies in accuracy and the fusion of text, formula and image features. Hence, this paper proposes a method of solving geometry problems based on a text–formula–image (TFI) multimodal parsing and fusion model. The TFI parser employs a self-attention multilayer Transformer to enhance the extraction of logical relations among geometric text expressions. Meanwhile, it parses formulas into tree structures to overcome the loss of formula structural features, which utilizes symbolic embedding and tree-structured encoding to preserve hierarchical logical information and yields unified formula representations via a multi-granularity fusion module. The TFI parser also leverages a Feature Pyramid Network (FPN) for the accurate detection of geometric and non-geometric instances, resolves the issues of blurred segmentation for slender geometric elements and the inaccurate localization of small-sized symbols through mask averaging and RoIAlign, and generates high-dimensional image features using DenseNet-121. The TFI multimodal fusion model integrates a contrastive learning mechanism and constructs fused feature representations by stacking self-attention and cross-attention layers. This design effectively narrows the semantic gap between text, formula, and image features, addressing the inadequacy of traditional fusion approaches in deep cross-modal feature alignment. An attention-augmented Gated Recurrent Unit (GRU) network processes the fused TFI features to produce target operation trees and geometry solutions, ensuring interpretable and precise reasoning performance. The proposed method is evaluated on the PGDP5K and GeoEval datasets, and it achieves an average accuracy of 59.63% in geometry problem solving, which validates its effectiveness. This paradigm offers a viable technical approach for uniformly modeling complex educational tasks, including geometry problem solving and timetable scheduling.
(This article belongs to the Special Issue Symmetry and Asymmetry in Human-Computer Interaction)
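The formula-to-tree parsing step described above can be illustrated generically with Python's `ast` module. This is not the TFI parser's symbolic embedding, only a minimal demonstration of how a tree encoding preserves the operator hierarchy that a flat token sequence loses:

```python
import ast

def formula_tree(expr: str):
    """Parse an arithmetic formula into a nested-tuple tree. Operator
    precedence becomes explicit tree structure (generic sketch)."""
    ops = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/", ast.Pow: "^"}
    def walk(node):
        if isinstance(node, ast.BinOp):
            return (ops[type(node.op)], walk(node.left), walk(node.right))
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError(f"unsupported node: {ast.dump(node)}")
    return walk(ast.parse(expr, mode="eval").body)
```

For example, `formula_tree("a + b * c")` yields `("+", "a", ("*", "b", "c"))`: the multiplication subtree sits below the addition, mirroring its precedence.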

46 pages, 13555 KB  
Article
Adjustable Half-Hyperbolic Convolution (HHC)-Type Operators with Symmetrized Kernel for Multivariate Approximation
by George A. Anastassiou, Seda Karateke and Metin Zontul
Symmetry 2026, 18(5), 813; https://doi.org/10.3390/sym18050813 (registering DOI) - 9 May 2026
Abstract
In this study, we develop a class of multivariate HHC-type operators generated by adjustable half-hyperbolic tangent activation functions and a symmetrized kernel structure. The analysis is carried out within the classical framework of positive linear operators (PLOs), which allows a deeper investigation of their approximation behavior. By means of the modulus of continuity, we obtain quantitative convergence estimates toward the identity operator together with explicit bounds for the approximation error. The proposed construction is developed in a multivariate setting, where both simultaneous approximation with respect to several variables and iterated applications of the operators are examined. It is shown that the convergence properties remain stable under iteration, which further strengthens the analytical framework. The proposed operators also preserve important structural features of the approximated functions, including convexity and differentiability, and therefore provide a mathematically controlled approach to multivariate approximation. In contrast to earlier univariate HHC-type studies, the present work introduces a symmetry-enhanced multivariate operator structure together with explicit multivariate error analysis and a broader comparative numerical investigation. The numerical study, supported by Python 3.13 computations, combines regression-based error metrics with graphical analysis in order to illustrate convergence and compare the behavior of the classical, Kantorovich-type, and quadrature-type forms of the operators. Overall, the results contribute to the theory of convolution-type PLOs and provide a meticulous approximation framework with potential relevance to computational mathematics and learning-oriented operator design.
(This article belongs to the Special Issue Applications Based on AI in Mathematics and Asymmetry/Symmetry)
30 pages, 1059 KB  
Article
Formalizing the Implicit Mechanisms in UAV Energy Model Selection Through Decision Tree and Analytic Hierarchy Process
by Israel Kolaïgué Bayaola, Jean Louis Ebongué Kedieng Fendji, Blaise Omer Yenke, Marcellin Atemkeng and Christiana Ibidun Obagbuwa
Drones 2026, 10(5), 358; https://doi.org/10.3390/drones10050358 - 8 May 2026
Abstract
The growing deployment of unmanned aerial vehicles (UAVs) in energy-constrained applications has highlighted the need for appropriate energy consumption models. However, selecting between physics-based (white-box) and data-driven (black-box) modeling paradigms remains a largely implicit process. Researchers often navigate undocumented trade-offs among required predictive accuracy, empirical data availability, and access to aerodynamic testing infrastructure without a formalized structure. This study proposes a two-stage decision-making framework to formalize UAV energy model selection. In the first stage, a qualitative decision tree is inductively derived from a corpus of 23 recent studies, explicitly mapping infrastructural and informational constraints to five distinct modeling regimes. In the second stage, the Analytic Hierarchy Process (AHP) is applied to quantitatively evaluate the feasible alternatives based on context-specific criteria: accuracy, interpretability, development cost, and customization adaptability. The structural logic of the framework is evaluated against an independent set of 24 holdout studies, demonstrating a high degree of consistency between the framework’s recommendations and the methodologies employed in the literature. Furthermore, the quantitative AHP scoring introduces “fallback flexibility,” enabling researchers to mathematically identify alternative modeling strategies when primary experimental conditions are compromised. Supported by an open-source Python graphical interface, this framework aims to reduce methodological ambiguity and support more structured, reproducible model selection in UAV energy research.
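The AHP stage can be sketched with the geometric-mean weighting method over the four criteria the abstract names. The pairwise comparison values below are hypothetical, and the paper may use Saaty's eigenvector method rather than the geometric-mean approximation:

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive AHP priority weights from a reciprocal pairwise comparison
    matrix via row geometric means, plus a consistency ratio using
    Saaty's random indices (sketch; values below are hypothetical)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)   # row geometric means
    w /= w.sum()                          # normalize to priorities
    lam = np.mean((A @ w) / w)            # principal-eigenvalue estimate
    ci = (lam - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
    return w, ci / ri                     # priorities, consistency ratio

# Hypothetical judgments; criteria order: accuracy, interpretability,
# development cost, customization adaptability.
A = [[1,   3,   5,   3],
     [1/3, 1,   3,   1],
     [1/5, 1/3, 1,   1/3],
     [1/3, 1,   3,   1]]
weights, cr = ahp_weights(A)
```

A consistency ratio below 0.1 is the conventional threshold for accepting the judgments.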
16 pages, 35426 KB  
Article
JefiFast: Accelerating Jefimenko’s Equations with Memory-Centric Optimizations and Multi-GPU Parallelism
by Bing He, Shengyu Peng, Nan Sun, Guoliang Li, Xiaofei Zhu, Peng Xu and Xiaowei Shen
Physics 2026, 8(2), 43; https://doi.org/10.3390/physics8020043 - 7 May 2026
Abstract
As a foundation for numerical solvers in computational electromagnetics, particularly for multiphysics and electromagnetic compatibility applications, Jefimenko’s equations offer a generalized solution to Maxwell’s equations, enabling the direct computation of electromagnetic fields from time-dependent source distributions without the boundary-condition artifacts inherent to grid-based methods. However, the numerical integration of these equations is computationally intensive, typically scaling as O(NsNo) for Ns source points and No observation points. In this paper, we present JefiFast, a highly optimized graphics processing unit (GPU) implementation that significantly outperforms the state-of-the-art JefiGPU algorithm. We identify that previous implementations are strictly memory-bound due to inefficient global memory transactions and a lack of data reuse. JefiFast addresses these bottlenecks through four key optimizations: (i) a packed memory layout (PML) using an array-of-structures approach to ensure coalesced memory access for source densities and their derivatives; (ii) geometry-aware shared memory tiling strategies that maximize L2 (level-2) cache hit rates and on-chip data reuse; (iii) pre-computation of time derivatives to minimize redundant arithmetic operations; and (iv) a robust observation domain decomposition strategy that enables linear scaling across multiple GPUs. Benchmarks demonstrate that JefiFast achieves speedups ranging from 4.08 times (for 30³ grids on a single NVIDIA V100 graphics processor) to 84.51 times (for 50³ grids on 4 NVIDIA V100 processors) compared to the baseline. Notably, for a 50³ grid on a single GPU, JefiFast reduces execution time from about 51 min to just about 2.6 min (19.54 times speedup). These performance advances make high-resolution relativistic heavy-ion collision simulations feasible in near real-time.
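The O(NsNo) pairwise structure driving the cost can be illustrated with a simplified static analogue: a bare Coulomb-style potential sum rather than Jefimenko's retarded-time integrands, in units where 1/(4πε₀) = 1:

```python
import numpy as np

def potential_loop(sources, charges, observers):
    """Naive O(Ns*No) double loop: every observation point accumulates a
    contribution from every source point (static stand-in for the
    pairwise structure of Jefimenko's integrals)."""
    out = np.zeros(len(observers))
    for i, r_obs in enumerate(observers):
        for q, r_src in zip(charges, sources):
            out[i] += q / np.linalg.norm(r_obs - r_src)
    return out

def potential_vectorized(sources, charges, observers):
    """Same sum via the full Ns*No distance matrix; this is the kind of
    memory-bound kernel that layout and tiling optimizations must feed."""
    d = np.linalg.norm(observers[:, None, :] - sources[None, :, :], axis=-1)
    return (charges[None, :] / d).sum(axis=1)
```

Both forms do the same Ns·No work; JefiFast's optimizations target how that work touches memory, not its asymptotic count.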
33 pages, 3735 KB  
Article
Artificial Neural Network-Based Classification of Industrial Sustainability Profiles for Differentiated Fiscal Policy Design in Remanufacturing Processes
by Marta Lilia Eraña-Díaz, Juana Enríquez-Urbano, Beatriz Martínez-Bahena, Jazmin Yanel Juárez-Chávez, Alfonso D’Granda-Trejo and Javier De-la-Rosa-Mondragon
Processes 2026, 14(9), 1501; https://doi.org/10.3390/pr14091501 - 6 May 2026
Abstract
The design of differentiated fiscal instruments for industrial sustainability requires robust, data-driven tools capable of capturing the heterogeneity of environmental performance across manufacturing units—a challenge that conventional econometric approaches address only partially, given the non-linear nature of operational–environmental interactions in reconfigurable production systems. This study introduces a two-phase computational framework that integrates unsupervised machine learning and supervised classification to generate evidence-based sustainability profiles for fiscal policy targeting. Its principal contribution is the combination of K-Means clustering with a binary artificial neural network (ANN) classifier, operationalized through an accessible decision-support interface that enables differentiated incentive allocation without requiring programming expertise from policymakers. A dataset of 1000 manufacturing records comprising seven operational and technological input variables—material usage, production capacity, reconfiguration time, downtime, AI optimization, IoT connectivity, and predictive maintenance—and three environmental output indicators—energy consumption, carbon emissions, and waste generation—was analyzed. In Phase One, K-Means segmentation with k = 6, selected through multi-criteria convergence (Silhouette = 0.102; Elbow, Davies–Bouldin, and Calinski–Harabasz indices), identified six distinct sustainability profiles with marked environmental differentiation. In Phase Two, a binary ANN classifier (architecture: 7 → 64 → 32 → 1 neurons; ReLU and sigmoid activations) was trained to distinguish the reference cluster C0 (low environmental impact: energy 145.1 kWh, emissions 45.2 CO2-eq) from the high-impact cluster C1 (emissions 67.8 CO2-eq, waste 41.5 kg). The trained classifier achieved an overall accuracy of 75.4% and an AUC-ROC of 0.774 on the held-out test set, with a macro-averaged F1-score of 0.753 and a Cohen’s kappa coefficient of 0.508, indicating moderate-to-substantial agreement beyond chance. Class C1 (high-impact establishments) achieved a precision of 0.794 and a recall of 0.730, supporting reliable identification of manufacturing units that would most benefit from targeted fiscal support. The framework is deployed through a Gradio-based graphical interface incorporating a traffic-light sustainability classification (green/yellow/red), enabling direct and interactive application by tax authorities and industrial policymakers. The modular architecture supports adaptation to larger or sector-specific datasets, making it transferable across industrial policy contexts.
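A minimal sketch of the stated 7 → 64 → 32 → 1 classifier's forward pass. He initialization is an assumption for illustration; the paper's framework, training procedure, and learned weights are not reproduced here:

```python
import numpy as np

def forward(x, params):
    """Forward pass of the 7 -> 64 -> 32 -> 1 binary classifier described
    above: two ReLU hidden layers and a sigmoid output giving the
    probability that a record belongs to the high-impact cluster C1."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = np.maximum(0.0, x @ W1 + b1)              # ReLU, 7 -> 64
    h2 = np.maximum(0.0, h1 @ W2 + b2)             # ReLU, 64 -> 32
    return 1.0 / (1.0 + np.exp(-(h2 @ W3 + b3)))   # sigmoid, 32 -> 1

def init_params(rng, sizes=(7, 64, 32, 1)):
    """He-initialized random weights (assumed; the paper's trained
    parameters are not available)."""
    return [(rng.normal(0.0, np.sqrt(2.0 / m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes, sizes[1:])]
```

Thresholding the sigmoid output at 0.5 gives the binary C0/C1 decision that feeds the traffic-light interface.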

35 pages, 27039 KB  
Article
A Complete Grocery Pick-and-Pack Application Using a Computationally Lightweight Vision-Based Mobile Manipulator
by Thanavin Mansakul, Gilbert Tang, Phil Webb, Jamie Rice, Daniel Oakley and James Fowler
Sensors 2026, 26(9), 2860; https://doi.org/10.3390/s26092860 - 3 May 2026
Abstract
Mobile manipulators have become essential platforms for autonomous tasks that demand high-quality performance and efficient operational processes. This paper presents a complete grocery pick-and-pack system for a mobile manipulator, integrating a graphical user interface (GUI) with an end-to-end vision-based grasp detection pipeline designed for lightweight computation. The system is evaluated on the Grocery Pick-and-Pack Benchmark (Level-3), the most challenging level due to deformable objects, dimensional constraints, and strict grasp-point requirements. Experimental results demonstrate an average success rate of 92% across five item classes, with the deformable sweet bag the most challenging at 60% and an average execution time of 7.5 s on an edge device. The system achieves strong computational efficiency, reflected by a compute-to-speed ratio (CSR) of 0.008, with a total model size of only 30.9 MB. Performance is further validated across multiple hardware platforms and under real competition scenarios in the European Robotics League 2025. The findings highlight the practical impact of lightweight, vision-based mobile manipulation and provide insights into current challenges and future research directions for autonomous robotic applications.
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)

40 pages, 42122 KB  
Article
Artificial Intelligence for Learning 2D Debris-Flow Dynamics: Application of Fourier Neural Operators and Synthetic Data to a Case Study in Central Italy
by Mauricio Secchi, Antonio Pasculli and Nicola Sciarra
Land 2026, 15(5), 759; https://doi.org/10.3390/land15050759 (registering DOI) - 29 Apr 2026
Abstract
Physics-based simulation of debris flows over complex terrain is essential for hazard assessment, but repeated numerical integration is costly when many scenarios must be explored. We develop a general deep-learning surrogate modelling framework for two-dimensional (2D) debris-flow propagation, here applied to the Morino–Rendinara area (central Italy) using a three-dimensional (3D) Fourier Neural Operator (FNO) trained on synthetic simulations generated by a validated in-house finite-volume shallow-water solver. The solver reproduces debris-flow propagation over complex terrain and is specifically developed for artificial intelligence (AI) applications. It is based on a depth-averaged 2D formulation using the Harten–Lax–van Leer–Contact (HLLC) approximate Riemann solver, hydrostatic reconstruction, positivity-preserving wet–dry treatment, and Voellmy-type basal friction, and was verified through analytical benchmarks, numerical tests, and back-analyses of real events. The dataset was built from four site-specific release settings derived from real topography, combining different released volumes and bulk densities while preserving local geomorphological and rheological characteristics. Each simulation was stored as a full spatio-temporal tensor and used to train an FNO conditioned on coordinates, topography, friction parameters, bulk density, and initial release thickness. Training used a novel loss to emphasize active-flow areas and improve velocity reconstruction, and was performed using a graphics processing unit (GPU). The surrogate shows effective generalization to within-distribution validation samples, with global relative mean squared errors of 5.49% for flow thickness, 5.34% for velocity component u, and 2.60% for v, and mean R² values of 0.95, 0.94, and 0.97. For a representative sample, the surrogate predicts the full spatio-temporal solution in 0.52 s, versus about 47 s for the first-order finite-volume solver, corresponding to a speed-up of about 91×, with an even larger gap expected for higher-order solvers, since, whilst the computation time of the solver increases as its complexity increases, the computation time of the FNO remains essentially unchanged. These results indicate that the proposed FNO is a reliable site-specific surrogate for rapid approximation of 2D debris-flow dynamics over real terrain, with potential for uncertainty propagation, Monte Carlo analysis, large-ensemble simulation, and hazard-oriented scenario assessment.
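The core of a Fourier Neural Operator layer, reduced to 1D for brevity (the paper's 3D FNO applies the same idea over two spatial dimensions plus time): transform to frequency space, keep and linearly mix only the lowest modes, and transform back. This is a minimal sketch, not the authors' architecture:

```python
import numpy as np

def fourier_layer(u, weights, modes):
    """Spectral convolution at the heart of an FNO: FFT the input,
    multiply the lowest `modes` frequencies by learned complex weights,
    zero the rest, and inverse-FFT back to physical space."""
    u_hat = np.fft.rfft(u)                     # complex spectrum, length n//2 + 1
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights * u_hat[:modes]  # learned per-mode multipliers
    return np.fft.irfft(out_hat, n=len(u))
```

Because the learned parameters live on a fixed number of Fourier modes, the layer's cost is dominated by the FFT and stays essentially flat as the underlying physics gets harder to solve numerically, which is the source of the speed-up the abstract reports.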

21 pages, 550 KB  
Article
Sheffer-Type General-λ-Matrix Polynomials and Their Structural Properties
by Ghazala Yasmin, Aditi Sharma, Georgia Irina Oros and Shahid Ahmad Wani
Symmetry 2026, 18(5), 760; https://doi.org/10.3390/sym18050760 (registering DOI) - 28 Apr 2026
Abstract
In this paper, a new class of special polynomials, called the Sheffer-type general-λ-matrix polynomials, is introduced within the framework of the monomiality principle. This family is obtained by combining the structure of Sheffer sequences with the theory of general-λ matrix polynomials, which leads to a unified formulation encompassing several polynomial families. Fundamental properties of the proposed polynomials are established, including their generating function, explicit series representation, summation formulas, quasi-monomial structure, differential relations, and determinant representation. The proposed framework addresses an important problem in the theory of special functions: the systematic construction of matrix-valued polynomial families that simultaneously generalize both classical scalar polynomials and existing matrix polynomial hierarchies. Such a unified structure is of broad significance, with applications in quantum mechanics (wave function expansions), mathematical physics (matrix differential equations and spectral problems), approximation theory, and the study of special functions in the matrix domain. Several hybrid forms of the proposed family are derived through appropriate choices of the defining functions, which yield polynomial subclasses related to classical families such as Hermite, Laguerre, Bessel, and Poisson–Charlier polynomials. These subclasses illustrate how the proposed framework provides a systematic approach for constructing and studying generalized polynomial structures. In each case, the matrix parameter L introduces a new layer of structural richness not present in the scalar setting, enabling the modelling of phenomena governed by matrix-valued spectral data. Furthermore, a numerical and graphical investigation of selected hybrid forms is carried out using Mathematica (version 14.3, 2025; Wolfram Research, Inc.). Surface plots, distributions of complex zeros, and real-zero patterns are presented for different parameter values, highlighting the influence of the parameters on the behavior and structural characteristics of the polynomials.

17 pages, 1325 KB  
Review
Advances and Challenges in Pharmacokinetic Modeling for PET Imaging: Compartment Models, Input Functions, and Quantitative Techniques
by James Hao Wang, Meltem Uyanik, Xue Li, Weijie Chen, Zhijin He, Caitlin Randell and Alan McMillan
Tomography 2026, 12(5), 63; https://doi.org/10.3390/tomography12050063 - 28 Apr 2026
Abstract
Pharmacokinetic modeling in Positron Emission Tomography (PET) imaging has become a cornerstone in cancer research, offering insights into tumor development and progression. These models facilitate the quantification of radiotracer distribution and metabolism, enabling precise measurement of physiological parameters essential for cancer diagnosis, staging, and treatment monitoring. However, accurate pharmacokinetic modeling depends on reliable input function acquisition and partial volume correction techniques to minimize biases in quantitative PET metrics. This review provides a comprehensive overview of current methodologies and advancements in pharmacokinetic modeling for PET oncology imaging. We discuss techniques for acquiring input functions, including arterial, venous, and image-derived input functions (IDIFs), along with population-based input functions (PBIFs). Their strengths, limitations, and clinical applications are critically evaluated. Additionally, we examine quantitative methods such as partial volume correction (PVC) that mitigate the spatial resolution limitations of PET, improving radiotracer quantification in small or heterogeneous tumors. Furthermore, we explore advanced kinetic modeling techniques, including compartmental models, graphical approaches, and data-driven methods, highlighting recent innovations such as machine learning and Bayesian modeling. Key areas for future research in PET pharmacokinetic modeling include integrating hybrid imaging modalities, developing robust patient-specific input functions, and leveraging machine learning to streamline modeling processes. These advancements aim to enhance the precision and clinical utility of PET imaging in oncology, leading to more personalized cancer treatment strategies.
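One of the graphical approaches the review mentions, Patlak analysis, can be sketched on simulated data. For an irreversibly trapped tracer, the plot of tissue-to-plasma ratio against "Patlak time" (the time-integral of the plasma input divided by its current value) becomes linear after an equilibration time, and its slope is the net influx rate Ki. The rate constants used here are hypothetical illustration values:

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_star=10.0):
    """Patlak graphical analysis: fit a line to C_t/C_p versus
    integral(C_p)/C_p for t >= t_star; the slope estimates Ki and the
    intercept the initial distribution volume (sketch)."""
    # Cumulative trapezoidal integral of the plasma input function.
    cumint = np.concatenate(
        ([0.0], np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t)))
    )
    x = cumint / c_plasma          # "Patlak time"
    y = c_tissue / c_plasma        # tissue-to-plasma ratio
    late = t >= t_star
    ki, intercept = np.polyfit(x[late], y[late], 1)
    return ki, intercept
```

Compartmental fitting would estimate the individual rate constants instead; the graphical method trades that detail for robustness and simplicity.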

18 pages, 303 KB  
Article
Graphical Homotopy Theory for Intersection Graphs of Semigroups via Path Spaces and Uniform Structures with Applications to Graphical Total Semigroups
by Maryam F. Alshammari, Fozaiyah Alhubairah and Amin Saif
Mathematics 2026, 14(9), 1472; https://doi.org/10.3390/math14091472 - 27 Apr 2026
Abstract
In this article, we study the homotopy aspects of intersection graphs of topological semigroups. We begin by defining the top intersection graph TGX and investigating how the algebraic and topological properties of a topological semigroup are reflected in the global structure of this graph. In particular, we characterize when TGX is totally disconnected, bipartite, or planar in terms of the order and factorization of the underlying semigroup. We then introduce the notions of HTG-semigroups, graphical homomorphisms, and graphical homotopy relations, thereby developing a graphical homotopy framework. Within this setting, we study Gr-homotopy equivalences, Gr-contractible spaces, and retraction phenomena, including DGr-retracts and homotopy extension properties. Finally, we introduce graphical total semigroups and equip the set of Gr-path homotopy classes [Xpe] with a natural Δ-uniform topology. We show that this topology is compatible with the induced semigroup operation, yielding a topological semigroup structure. Overall, this work provides a unified algebraic, topological, and graph-theoretic perspective, and opens the door to further applications of homotopy theory in the study of intersection graphs of topological semigroups.

13 pages, 353 KB  
Article
On Uniformly δ-Geometric Convex Functions
by Yamin Sayyari, Hasan Barsam and Loredana Ciurdariu
Fractal Fract. 2026, 10(5), 289; https://doi.org/10.3390/fractalfract10050289 - 24 Apr 2026
Abstract
In this paper, we give some new Jensen, Jensen–Mercer, and Hermite–Hadamard inequalities for uniformly δ-geometric convex functions. In addition, some limit bounds for Caputo–Fabrizio fractional integral operators are established as an application in the case of uniformly δ-geometric convex functions. Some new examples and graphical representations are provided in order to illustrate the validity of our results.
(This article belongs to the Section General Mathematics, Analysis)
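For reference, the classical Hermite–Hadamard inequality that the paper generalizes states that for a convex f on [a, b], f((a+b)/2) ≤ (1/(b−a))∫ₐᵇ f ≤ (f(a)+f(b))/2. Only this ordinary convex case is sketched below; the δ-geometric convex versions are the paper's contribution:

```python
import math

def hermite_hadamard_bounds(f, a, b, n=100000):
    """Return the three quantities in the classical Hermite-Hadamard
    chain: midpoint value, mean value of f over [a, b] (composite
    trapezoidal rule), and endpoint average."""
    h = (b - a) / n
    integral = (f(a) + f(b)) / 2.0 + sum(f(a + i * h) for i in range(1, n))
    mean_value = integral * h / (b - a)
    return f((a + b) / 2.0), mean_value, (f(a) + f(b)) / 2.0
```

For f = exp on [0, 2] the chain is strict: e ≈ 2.718 < (e² − 1)/2 ≈ 3.195 < (1 + e²)/2 ≈ 4.195.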

19 pages, 9670 KB  
Article
The Comparison of Selected Approaches to 3D Reconstruction of Anatomical Structures Based on Synthetic Data for Use in Medical Diagnostics
by Miłosz Komada, Zbigniew Omiotek, Piotr Lichograj, Magda Konieczna and Natalia Krukar
Electronics 2026, 15(9), 1812; https://doi.org/10.3390/electronics15091812 - 24 Apr 2026
Abstract
There are numerous benefits associated with creating digital copies of anatomical structures, which can be used during patient diagnosis. Such models can be used not only for visualization, but also in order to assess the condition of the patient. As advances in both medical imaging and 3D graphics are made, it is necessary to determine areas of application of the known reconstruction algorithms. Specifically, it is crucial to find advantages and disadvantages of known approaches to mesh generation, depending on the properties of the object and compare the quality of their results. In order to provide reliable ground-truth data, three 3D models with features resembling those identified in anatomical structures have been created. Based on these meshes, sets of CT-like DICOM images have been generated. Five different reconstruction approaches were proposed: using 3D occupancy information directly, two ways of obtaining point clouds and two methods that utilize Signed Distance Field. A neural network architecture for the SDF upsampling has also been presented. The obtained results justify the popularity of the Marching Cubes algorithm, as it produced accurate reconstructions most reliably. However, for certain scenarios, promising alternatives have been found. The presented outcomes make it clear that the approach to reconstruction must be tailored to the specific problem. Full article
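The Signed Distance Field representation compared above can be illustrated on an analytic sphere, a toy stand-in for the paper's synthetic anatomical meshes. Thresholding the SDF at zero recovers the binary occupancy volume that direct methods consume, while the continuous distances are what SDF-based reconstruction (and Marching Cubes interpolation) exploits:

```python
import numpy as np

def sphere_sdf_grid(n=32, radius=0.6):
    """Sample the signed distance field of a sphere on an n^3 grid over
    [-1, 1]^3: negative inside the surface, positive outside. The
    occupancy volume is simply the sign of the SDF."""
    axis = np.linspace(-1.0, 1.0, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    sdf = np.sqrt(x**2 + y**2 + z**2) - radius
    occupancy = sdf <= 0.0
    return sdf, occupancy
```

Real CT-derived SDFs are computed from segmented DICOM stacks rather than an analytic formula, but the downstream reconstruction algorithms see the same kind of grid.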
18 pages, 2014 KB  
Article
Emotional Distress Symptom Networks in Patients with Gynecological Malignancies: A Cross-Sectional Study
by Haowen Huang, Ting Liu, La Pan, Shuo Man, Ling Xia and Yuan Wang
Healthcare 2026, 14(9), 1136; https://doi.org/10.3390/healthcare14091136 - 23 Apr 2026
Viewed by 233
Abstract
Background: Emotional distress (ED) is common among patients with gynecological malignancies and is associated with reduced quality of life and suboptimal health outcomes. Total-score approaches may overlook the complex interrelationships among individual emotional symptoms. Objective: This study provides a theory-informed contextual application and empirical boundary test of symptom network analysis, organized by the Stress Process Model (SPM), to examine not only how ED symptoms cluster and connect with psychosocial correlates and quality-of-life domains, but also whether psychosocial stratification is reflected in altered symptom topology or primarily in differences in distress burden. Methods: A cross-sectional study was conducted among 415 patients with gynecological malignancies recruited from a tertiary hospital in China. ED was assessed using the Brief Profile of Mood States-Short Form (BPOMS-SF30). An exploratory three-track screening strategy was used to derive a focused 16-node set of frequent negative mood symptoms. Gaussian graphical models with EBICglasso regularization were estimated for the symptom network and for extended networks including demographic/clinical variables, SPM-related psychosocial variables, and quality-of-life indicators. Results: The ED network showed dense positive connectivity, with strong within-domain clustering and several cross-domain associations. Exhaustion, restlessness, and irritability were relatively more relationally prominent in the primary network, although centrality stability was low to moderate across models. Fatigue-related symptoms were closely connected with anxiety, depressive symptoms, and impaired quality of life. Among psychosocial variables, self-perceived burden showed the strongest conditional association with fatigue. Adjusting for demographic and clinical variables did not materially alter the core symptom network, and no significant subgroup differences in global strength or overall structure were observed across psychosocial strata. Conclusions: In this sample, psychosocial risk stratification appeared to relate more to the overall severity and burden of distress than to major reorganization of symptom topology. The study therefore contributes primarily as a theory-informed contextual application of network methods and as an empirical boundary test showing that several psychosocial strata did not exhibit major topological differences. Because the retained nodes were selected for prevalence, association strength, and selection stability, the observed prominence of fatigue- and activation-related symptoms should be interpreted as conditional on this focused symptom subset. Overall, the findings are correlational, exploratory, and hypothesis-generating. Full article
(This article belongs to the Special Issue Coping with Emotional Distress)
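The Gaussian graphical model machinery behind such symptom networks can be sketched in a few lines. The synthetic three-variable data below are hypothetical (not the study's symptoms), and the sketch omits the EBICglasso penalty, so it only illustrates how conditional (partial) associations are read off the precision matrix: edges correspond to nonzero off-diagonal entries.

```python
import numpy as np

# Toy data: three "symptom" variables, where A and B are directly related
# and C depends only on B (so A and C are conditionally independent given B).
rng = np.random.default_rng(0)
n = 5000
b = rng.normal(size=n)
a = 0.8 * b + 0.6 * rng.normal(size=n)
c = 0.8 * b + 0.6 * rng.normal(size=n)
X = np.column_stack([a, b, c])

# In a Gaussian graphical model, edges are nonzero entries of the precision
# matrix (inverse covariance); EBICglasso additionally shrinks small entries
# to exact zero, which this unregularized sketch omits.
theta = np.linalg.inv(np.cov(X, rowvar=False))

# Partial correlation between i and j, conditioning on all other variables.
d = np.sqrt(np.diag(theta))
pcorr = -theta / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)

# Expect A-B and B-C strongly connected, A-C near zero given B.
print(np.round(pcorr, 2))
```

In a regularized fit, the near-zero A-C entry would be set to exactly zero, which is what produces the sparse, interpretable network graphs reported in studies like this one.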
31 pages, 24728 KB  
Article
Interpretable Machine Learning for Predicting Splitting Strength of Asphalt Concrete: Insights from SHAP Analysis
by Jianglei Xing, Xiao Tan, Yihao Li, Dongzhao Jin, Pengwei Guo, Yuhuan Wang and Huiya Niu
Materials 2026, 19(8), 1636; https://doi.org/10.3390/ma19081636 - 19 Apr 2026
Viewed by 595
Abstract
This paper proposes an interpretable machine learning approach for predicting the splitting strength of asphalt concrete and supporting data-driven mixture design. A database of 296 samples was constructed, and 14 input variables related to asphalt properties, aggregate gradation, and fiber characteristics were selected for modeling. Eight machine learning models, namely TabPFN, ANN, SVR, RF, XGBoost, LightGBM, FLAML, and FT-Transformer, were developed and compared. The results show that all eight models achieved satisfactory predictive capability, while TabPFN achieved the best overall performance in the Monte Carlo cross-validation, with the lowest average RMSE of 0.34 ± 0.10, the lowest average MAE of 0.21 ± 0.02, a relatively low average MAD of 0.14 ± 0.03, the highest average R2 of 0.85 ± 0.08, and the highest composite score of 0.81. SHAP analysis further indicated that splitting strength prediction was mainly governed by a limited number of dominant variables, among which Ag9.5, AC, Du, FT, and Ag4.75 were the most effective parameters. In addition, favorable parameter ranges for improving splitting strength were quantified, such as Ag9.5 < 66.8%, AC < 5.4 wt.%, Du > 134.7 cm, and Ag4.75 < 45.0%. Finally, a graphical user interface platform integrating prediction and SHapley Additive exPlanations (SHAP)-based interpretation was developed to improve the accessibility and practical applicability of the proposed framework. Full article
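The SHAP attributions used in this paper can be illustrated with an exact Shapley-value computation on a tiny surrogate. The two-feature toy model below (asphalt content `ac`, fiber length `fl`) and its coefficients are hypothetical, not the paper's trained models; real SHAP implementations approximate the same quantity efficiently for large feature sets.

```python
from itertools import combinations
from math import factorial

# Hypothetical reference ("background") feature values.
BASELINE = {"ac": 5.0, "fl": 6.0}

def model(ac, fl):
    # Toy surrogate for predicted splitting strength; the interaction
    # term makes the attributions non-trivial.
    return 0.3 * ac + 0.1 * fl + 0.05 * ac * fl

def shapley_values(x):
    """Exact Shapley values: weighted average marginal contribution of
    each feature over all coalitions, with absent features set to
    their baseline values."""
    features = list(x)
    n = len(features)

    def value(subset):
        args = {f: (x[f] if f in subset else BASELINE[f]) for f in features}
        return model(**args)

    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(s) | {f}) - value(set(s)))
        phi[f] = total
    return phi

x = {"ac": 4.8, "fl": 9.0}
phi = shapley_values(x)
# Efficiency property: attributions sum to model(x) - model(baseline).
print(phi, model(**x) - model(**BASELINE))
```

The efficiency property printed at the end is what lets SHAP decompose each individual prediction into per-feature contributions, which is also the basis for the feature-importance rankings and favorable-range analysis reported in the abstract.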