Search Results (837)

Search Parameters:
Keywords = analytical representation

37 pages, 6437 KB  
Article
A Novel Methodology for Identifying the Top 1% Scientists Using a Composite Performance Index
by Alexey Remizov, Shazim Ali Memon and Saule Sadykova
Publications 2025, 13(4), 55; https://doi.org/10.3390/publications13040055 - 2 Nov 2025
Abstract
There is a growing need for comprehensive and transparent frameworks in bibliometric evaluation that support fairer assessments and capture the multifaceted nature of research performance. This study proposes a novel methodology for identifying top-performing researchers based on a composite performance index (CPI). Unlike existing rankings, this framework presents a multidimensional approach by integrating sixteen weighted bibliometric metrics, spanning research productivity, citation impact, publications in top journal percentiles, authorship roles, and international collaboration, into a single CPI, enabling a more nuanced and equitable evaluation of researcher performance. Data were retrieved from SciVal for 1996–2025. Two ranking exercises were conducted with Kazakhstan as the analytical unit. Subject-specific rankings identified the top 1% of authors within different research areas, while subject-independent rankings highlighted the overall top 1%. CPI distributions varied markedly across disciplines. A comparative analysis with the Stanford/Elsevier global top 2% list was conducted as additional benchmarking. The results highlight that academic excellence depends on a broad spectrum of strengths beyond productivity alone, particularly in competitive disciplines. The CPI provides a consistent and adaptable tool for assessing and recognizing research performance; however, future refinements should enhance data coverage, improve representation of early-career researchers, and integrate qualitative aspects.
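
As a toy illustration of how such a composite index can be assembled, a weighted sum of min–max-normalized indicators might look like the sketch below. The metric names, values, and weights are made-up placeholders, not the sixteen metrics or weights used in the study.

```python
import numpy as np

# Hypothetical researcher -> raw indicator values (illustrative only).
metrics = {
    "A": {"pubs": 40, "citations": 900,  "top10pct_pubs": 8,  "intl_collab": 0.35},
    "B": {"pubs": 15, "citations": 1200, "top10pct_pubs": 6,  "intl_collab": 0.60},
    "C": {"pubs": 25, "citations": 700,  "top10pct_pubs": 10, "intl_collab": 0.20},
}
weights = {"pubs": 0.2, "citations": 0.4, "top10pct_pubs": 0.3, "intl_collab": 0.1}

names = list(metrics)
keys = list(weights)
X = np.array([[metrics[n][k] for k in keys] for n in names], dtype=float)
X = (X - X.min(axis=0)) / np.ptp(X, axis=0)        # min-max normalize per metric
cpi = X @ np.array([weights[k] for k in keys])     # weighted aggregation
for n, s in sorted(zip(names, cpi), key=lambda t: -t[1]):
    print(f"{n}: CPI = {s:.3f}")
```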

89 pages, 1735 KB  
Article
Quantum Field Theory of 3+1 Dimensional BTZ Gravity: Graviton Self-Energy, Axion Interactions, and Dark Matter in the Ultrahyperfunction Framework
by Hameeda Mir, Angelo Plastino, Behnam Pourhassan and Mario Carlos Rocca
Axioms 2025, 14(11), 810; https://doi.org/10.3390/axioms14110810 - 31 Oct 2025
Abstract
We present a comprehensive quantum field theoretical analysis of graviton self-energy and mass generation in 3+1 dimensional BTZ black hole spacetime, incorporating axion interactions within the framework of dark matter theory. Using a novel mathematical approach based on ultrahyperfunctions, generalizations of Schwartz tempered distributions to the complex plane, we derive exact quantum relativistic expressions for graviton and axion self-energies without requiring ad hoc regularization procedures. Our approach extends the Gupta–Feynman quantization framework to BTZ gravity while introducing a new constraint that eliminates unitarity violations inherent in previous formulations, thereby avoiding the need for ghost fields. Through systematic application of generalized Feynman parameters, we evaluate both bradyonic and tachyonic graviton modes, revealing distinct quantum correction patterns that depend critically on momentum, energy, and mass parameters. Key findings include (1) natural graviton mass generation through cosmological constant interactions, yielding m² = 2|Λ|/κ(1 − κ); (2) qualitatively different quantum behaviors between bradyonic and tachyonic modes, with bradyonic corrections reaching amplitudes 6 times larger than their tachyonic counterparts; (3) the discovery of momentum-dependent quantum dissipation effects that provide natural ultraviolet regulation; and (4) the first explicit analytical expressions and graphical representations for 17 distinct graviton self-energy contributions. The ultrahyperfunction formalism proves essential for handling the non-renormalizable nature of the theory, providing mathematically rigorous treatment of highly singular integrals while maintaining Lorentz invariance. Our results suggest observable consequences in gravitational wave propagation through frequency-dependent dispersive effects and modifications to black hole thermodynamics, potentially bridging theoretical quantum gravity with experimental constraints.
14 pages, 871 KB  
Article
SMAD: Semi-Supervised Android Malware Detection via Consistency on Fine-Grained Spatial Representations
by Suchul Lee and Seokmin Han
Electronics 2025, 14(21), 4246; https://doi.org/10.3390/electronics14214246 - 30 Oct 2025
Abstract
Malware analytics suffer from scarce, delayed, and privacy-constrained labels, limiting fully supervised detection and hampering responsiveness to zero-day threats. We propose SMAD, a Semi-supervised Android Malicious App Detector that integrates a segmentation-oriented backbone—to extract pixel-level, multi-scale features from APK imagery—with a dual-branch consistency objective that enforces predictive agreement between two parallel branches on the same image. We evaluate SMAD on CICMalDroid2020 under label budgets of 0.5, 0.25, and 0.125 and show that it achieves higher accuracy, macro-precision, macro-recall, and macro-F1 with smoother learning curves than supervised training, a recursive pseudo-labeling baseline, a FixMatch baseline, and a confidence-thresholded consistency ablation. A backbone ablation (replacing the dense encoder with WideResNet) indicates that pixel-level, multi-scale features under agreement contribute substantially to these gains. We observe a coverage–precision trade-off: hard confidence gating filters noise but lowers early-training performance, whereas enforcing consistency on dense, pixel-level representations yields sustained label-efficiency gains for image-based malware detection. Consequently, SMAD offers a practical path to high-utility detection under tight labeling budgets—a setting common in real-world security applications.
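
A minimal sketch of a dual-branch consistency objective of the general kind the abstract describes, assuming a PyTorch setting; SMAD's actual branch architecture and loss weighting may differ.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a, logits_b):
    """Symmetric agreement term between two parallel branches that saw the
    same image: mean squared error between their predicted distributions."""
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    return F.mse_loss(p_a, p_b)

# Toy check with random logits for a batch of 4 images and 5 malware classes.
la, lb = torch.randn(4, 5), torch.randn(4, 5)
print(consistency_loss(la, lb))   # > 0 when the branches disagree
print(consistency_loss(la, la))   # 0 when the branches agree exactly
```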

32 pages, 2280 KB  
Article
Symmetry-Aware Feature Representations and Model Optimization for Interpretable Machine Learning
by Mehtab Alam, Abdullah Alourani, Ashraf Ali and Firoj Ahamad
Symmetry 2025, 17(11), 1821; https://doi.org/10.3390/sym17111821 - 29 Oct 2025
Abstract
This paper investigates the role of symmetry and asymmetry in the learning process of modern machine learning models, with a specific focus on feature representation and optimization. We introduce a novel symmetry-aware learning framework that identifies and preserves symmetric properties within high-dimensional datasets, while allowing model asymmetries to capture essential discriminative cues. Through analytical modeling and empirical evaluations on benchmark datasets, we demonstrate how symmetrical transformations of features (e.g., rotation, mirroring, permutation invariance) impact learning efficiency, interpretability, and generalization. Furthermore, we explore asymmetric regularization techniques that prioritize informative deviations from symmetry in model parameters, thereby improving classification and clustering performance. The proposed approach is validated using a variety of classifiers including neural networks and tested across domains such as image recognition, biomedical data, and social networks. Our findings highlight the critical importance of leveraging domain-specific symmetries to enhance both the performance and explainability of machine learning systems.
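
A small illustration of the general principle (not the paper's framework): averaging a feature over a transformation group, here horizontal mirroring, yields an invariant component, while the residual isolates the sign-flipping asymmetric part.

```python
import numpy as np

def descriptor(img):
    return img.mean(axis=0)            # toy column-profile feature

def symmetrize(img):
    mirrored = img[:, ::-1]
    f, f_m = descriptor(img), descriptor(mirrored)
    invariant = 0.5 * (f + f_m)        # unchanged under mirroring
    asymmetric = 0.5 * (f - f_m)       # flips sign under mirroring
    return invariant, asymmetric

img = np.arange(12.0).reshape(3, 4)
inv, asym = symmetrize(img)
inv2, asym2 = symmetrize(img[:, ::-1])
print(np.allclose(inv, inv2), np.allclose(asym, -asym2))  # True True
```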
(This article belongs to the Special Issue Symmetry/Asymmetry in Data Mining & Machine Learning)

15 pages, 294 KB  
Article
Conics and Transformations Defined by the Parallelians of a Triangle
by Helena Koncul, Boris Odehnal and Ivana Božić Dragun
Mathematics 2025, 13(21), 3424; https://doi.org/10.3390/math13213424 - 27 Oct 2025
Abstract
For any point P in the Euclidean plane of a triangle Δ, the six parallelians of P lie on a single conic, which shall be called the parallelian conic of P with respect to Δ. We provide a synthetic and an analytic proof of this fact. Then, we study the shape of this particular conic, depending on the choice of the pivot point P. This leads to the finding that the only circular parallelian conic is the first Lemoine circle. Points on the Steiner inellipse produce parabolae, and those on a certain central line yield equilateral hyperbolae. The hexagon built by the parallelians has an inconic I, and the tangents of P at the parallelians define some triangles and hexagons with several circum- and inconics. Certain pairings of conics, together with in- and circumscribed polygons, give rise to different kinds of porisms. Further, the inconics and circumconics of the triangles and hexagons span exponential pencils of conics in which any pair of subsequent conics defines a new conic as the polar image of the inconic with regard to the circumconic. This allows us to construct chains of nested porisms. The trilinear representations of the centers of the appearing conics, as well as the perspectors of some deduced triangles, depending on the indeterminate coordinates of P, define some algebraic transformations that establish algebraic relations between well- and lesser-known triangle centers. We complete our studies by compiling a list of possible porisms between any pair of conics. Further, we describe the possible loci of pivot points so that the mentioned conics allow for porisms of polygons with arbitrary numbers of vertices.
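
The stated concurrence on one conic is easy to check numerically: six points lie on a common conic exactly when the 6×6 matrix of monomials (x², xy, y², x, y, 1) is singular. A quick NumPy check on an arbitrary triangle and pivot point (illustrative, not part of the paper):

```python
import numpy as np

# Triangle vertices and an arbitrary interior pivot point P.
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
P = np.array([1.5, 1.0])

def line_inter(p1, d1, p2, d2):
    """Intersection of lines p1 + t*d1 and p2 + s*d2."""
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t * d1

# The parallel to each side through P meets the other two sides in two points.
pts = []
for U, V, W in [(A, B, C), (B, C, A), (C, A, B)]:   # side UV, opposite vertex W
    d = V - U
    pts.append(line_inter(P, d, U, W - U))           # meets side UW
    pts.append(line_inter(P, d, V, W - V))           # meets side VW

# Six points lie on a common conic iff this 6x6 matrix is singular.
M = np.array([[x*x, x*y, y*y, x, y, 1.0] for x, y in pts])
print("det =", np.linalg.det(M))   # ~0 up to floating-point error
```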
(This article belongs to the Section B: Geometry and Topology)

19 pages, 3386 KB  
Article
Wellbore Stability in Interbedded Weak Formations Utilizing a Shear-Based Method: Numerical Realization and Analysis
by Yuanhong Han, Qian Gao, Deliang Fu, Desheng Zhou, Ahmad Ghassemi, Zhiyu Zhou, Hongyong Guo and Haiyang Wang
Processes 2025, 13(11), 3389; https://doi.org/10.3390/pr13113389 - 23 Oct 2025
Abstract
This study employs a finite element approach to investigate wellbore stability in interbedded weak formations, such as unconsolidated layers, with a focus on the failure-tendency method, which is derived from Mohr–Coulomb theory. The numerical model is verified against analytical solutions for stress distributions around a borehole. Through finite element modeling, the method captures critical shear failure thresholds, showing how variations in horizontal stress anisotropy, the orientation of interbedded weak layers, and the mechanical properties of layered geological formations affect wellbore stability in stratified formations. Results indicate that potential unstable regions align in the direction of the minimum principal stress and gradually enlarge as the internal cohesive strength decreases. By modeling heterogeneous rock sequences with explicit representation of interbedded weak layers and stress anisotropy, the analysis reveals that interbedded weak layers are prone to shear-driven borehole breakouts due to stress redistribution and relatively lower internal cohesive strength. As compressive stresses concentrate at interfaces between stiff and compliant layers, breakouts are induced at those weak layers along the interfaces; this type of failure is also manifested in a field borehole breakout observation. Simulation results reveal the significant influence of the mechanical properties of layered formations and in situ stress on the distribution of instability regions around a borehole. The study underscores the necessity of layer-specific geomechanical models to predict shear failure in complex layered geological formations and offers insights for optimizing drilling parameters to enhance wellbore stability in anisotropic, stratified subsurface environments.
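
As background, analytical verification of borehole stresses of the kind mentioned here classically uses the Kirsch equations (stated as context; the paper's exact verification case is not reproduced). For a circular hole of radius a under far-field stresses σ_H ≥ σ_h and wellbore pressure p_w, at distance r and angle θ from the σ_H direction,

$$
\sigma_{rr} = \frac{\sigma_H+\sigma_h}{2}\left(1-\frac{a^2}{r^2}\right) + \frac{\sigma_H-\sigma_h}{2}\left(1-\frac{4a^2}{r^2}+\frac{3a^4}{r^4}\right)\cos 2\theta + p_w\,\frac{a^2}{r^2},
$$

$$
\sigma_{\theta\theta} = \frac{\sigma_H+\sigma_h}{2}\left(1+\frac{a^2}{r^2}\right) - \frac{\sigma_H-\sigma_h}{2}\left(1+\frac{3a^4}{r^4}\right)\cos 2\theta - p_w\,\frac{a^2}{r^2}.
$$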

28 pages, 1211 KB  
Article
Information-Theoretic Reliability Analysis of Consecutive r-out-of-n:G Systems via Residual Extropy
by Anfal A. Alqefari, Ghadah Alomani, Faten Alrewely and Mohamed Kayid
Entropy 2025, 27(11), 1090; https://doi.org/10.3390/e27111090 - 22 Oct 2025
Abstract
This paper develops an information-theoretic reliability inference framework for consecutive r-out-of-n:G systems by employing the concept of residual extropy, a dual measure to entropy. Explicit analytical representations are established in tractable cases, while novel bounds are derived for more complex lifetime models, providing effective tools when closed-form expressions are unavailable. Preservation properties under classical stochastic orders and aging notions are examined, together with monotonicity and characterization results that offer deeper insights into system uncertainty. A conditional formulation, in which all components are assumed operational at a given time, is also investigated, yielding new theoretical findings. From an inferential perspective, we propose a maximum likelihood estimator of residual extropy under exponential lifetimes, supported by simulation studies and real-world reliability data. These contributions highlight residual extropy as a powerful information-theoretic tool for modeling, estimation, and decision-making in multicomponent reliability systems, thereby aligning with the objectives of statistical inference through entropy-like measures.
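
For context, the residual extropy of a lifetime X with density f and survival function F̄ is standardly defined as

$$
J(X;t) = -\frac{1}{2\,\bar F(t)^{2}} \int_{t}^{\infty} f(x)^{2}\, dx .
$$

For an exponential lifetime with rate λ, ∫_t^∞ λ² e^{−2λx} dx = (λ/2) e^{−2λt} while F̄(t)² = e^{−2λt}, so J(X;t) = −λ/4 for all t: a constant, reflecting the memoryless property and making the exponential case (the setting of the paper's maximum likelihood estimator) especially tractable.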
(This article belongs to the Special Issue Recent Progress in Uncertainty Measures)

21 pages, 416 KB  
Article
On Generalized Wirtinger Inequalities for (k,ψ)-Caputo Fractional Derivatives and Applications
by Muhammad Samraiz, Humaira Javaid and Ishtiaq Ali
Fractal Fract. 2025, 9(11), 678; https://doi.org/10.3390/fractalfract9110678 - 22 Oct 2025
Abstract
The primary aim of this study is to establish new Wirtinger-type inequalities involving fractional derivatives, which are essential tools in analysis and applied mathematics. We derive generalized Wirtinger-type inequalities incorporating the (k,ψ)-Caputo fractional derivatives using Taylor's expansion. The inequalities are derived in L^p spaces (p > 1) through Hölder's inequality. A detailed analytical discussion is provided to further examine the derived inequalities. The theoretical findings are validated through numerical examples and graphical representations. Furthermore, the novelty and applicability of the proposed technique are demonstrated through the applications of the resulting inequalities to derive new results related to the arithmetic–geometric mean inequality.
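
For orientation, the classical integer-order Wirtinger inequality that such results generalize states that for an absolutely continuous f on [a, b] with f(a) = f(b) = 0 and f′ ∈ L²,

$$
\int_a^b f(t)^2 \, dt \;\le\; \frac{(b-a)^2}{\pi^2} \int_a^b f'(t)^2 \, dt ,
$$

with equality for f(t) = sin(π(t − a)/(b − a)).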

17 pages, 954 KB  
Article
Transportation Link Risk Analysis Through Stochastic Link Fundamental Flow Diagram
by Orlando Giannattasio and Antonino Vitetta
Future Transp. 2025, 5(4), 150; https://doi.org/10.3390/futuretransp5040150 - 21 Oct 2025
Abstract
This paper proposes a method for assessing societal risk along a traffic link by integrating a stochastic formulation of the fundamental diagram. The approach accounts for uncertainty in vehicle speed due to user heterogeneity, vehicle characteristics, and environmental conditions. The risk index is decomposed into occurrence, vulnerability, and exposure components, with the occurrence probability modeled as a function of stochastic speed. The inverse gamma distribution is adopted to represent speed variability, enabling analytical tractability and control over dispersion. Numerical results show that urban and suburban environments exhibit distinct sensitivity to model parameters, particularly the gamma shape parameter η and the composite parameter c = β · v0, obtained as the product of the occurrence parameter β and the free-flow speed v0. Graphical representations illustrate the impact of uncertainty on risk estimation. The proposed framework enhances existing deterministic methods by incorporating probabilistic elements, offering a foundation for future applications in traffic safety management and infrastructure design.
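
A small sketch of one building block involved, with hypothetical parameter values (the paper's exact occurrence function is not reproduced here): representing speed variability with an inverse gamma distribution in SciPy and computing exceedance probabilities.

```python
from scipy.stats import invgamma

# Hypothetical parameters: shape eta controls dispersion; the scale is set so
# the mean speed is ~50 km/h (the invgamma mean is scale/(eta - 1) for eta > 1).
eta = 4.0
scale = 50.0 * (eta - 1.0)
speed = invgamma(a=eta, scale=scale)

# Exceedance probabilities P(V > v): a simple ingredient for an occurrence
# term that grows with the chance of unsafe speeds on the link.
for v in (60.0, 80.0, 100.0):
    print(f"P(V > {v:.0f} km/h) = {speed.sf(v):.4f}")
print("mean =", speed.mean(), " std =", speed.std())
```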

25 pages, 2044 KB  
Article
South African Industry Practitioners on Building Energy Simulation Software: Implementation Challenges and Opportunities
by Henry Odiri Igugu, Jacques Laubscher and Tariené Gaum
Buildings 2025, 15(20), 3789; https://doi.org/10.3390/buildings15203789 - 21 Oct 2025
Abstract
Building Energy Modelling (BEM) practitioners play a crucial role in delivering energy-efficient buildings by analysing building performance using simulation tools. However, their experiences while using BEM software to predict building energy performance are understudied. In addition, research that directly engages with practitioners and stakeholders is particularly lacking in the Global South (GS), where the bulk of new building construction takes place. This study explores the implementation challenges and opportunities associated with BEM software among South African industry practitioners, focusing on their experiences in utilising BEM tools. Structured interviews were conducted with 19 South African industry specialists, supplemented by quantitative data collected through a questionnaire. Qualitative data from the interviews were analysed using MAXQDA 24 Analytics Pro to identify key themes, while quantitative data were visualised to compare software preferences. The analysis indicated that DesignBuilder is widely used, followed by BSIMAC. These tools highlight the largest opportunities for supporting active South African practitioners. The respondents highlighted the need for user-friendly interfaces, standardised methodologies, and improved training to address entry barriers and inconsistent simulation outcomes. Mixed opinions exist regarding the preference for tools with visual representations of 3D geometry, primarily influenced by the field of specialisation and how it impacts client engagement. The research concludes that while BEM software is critical for advancing sustainable design, its effective implementation is hindered in South Africa and potentially in the GS. Recommendations include developing more intuitive software interfaces, establishing standardised modelling approaches, and creating structured training programmes and professional forums to enhance practitioner proficiency, knowledge transfer across contexts, and industry-wide adoption.
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

23 pages, 1089 KB  
Article
On the Qualitative Stability Analysis of Fractional-Order Corruption Dynamics via Equilibrium Points
by Qiliang Chen, Kariyanna Naveen, Doddabhadrappla Gowda Prakasha and Haci Mehmet Baskonus
Fractal Fract. 2025, 9(10), 666; https://doi.org/10.3390/fractalfract9100666 - 16 Oct 2025
Abstract
The primary objective of this study is to provide a more precise and useful mathematical model for assessing corruption dynamics by utilizing non-local derivatives. This research aims to provide solutions that accurately capture the complexities and practical behaviors of corruption. To illustrate how corruption levels within a community change over time, a non-linear deterministic mathematical model has been developed. The authors present a non-integer-order model that divides the population into five subgroups: susceptible, exposed, corrupted, recovered, and honest individuals. To study these corruption dynamics, we employ a new method for solving a time-fractional corruption model, which we term the q-homotopy analysis transform approach. This approach produces an effective approximate solution for the investigated equations, and the data are shown as 3D plots and graphs, which give a clear physical representation. The stability and existence of the equilibrium points in the considered model are mathematically proven, and we examine the stability of the model and its equilibrium points, clarifying the conditions required for a stable solution. The resulting solutions, given in series form, show rapid convergence and accurately describe the model's behaviour with minimal error. Furthermore, the solution's uniqueness and convergence have been demonstrated using fixed-point theory. The proposed technique compares favorably with purely numerical approaches: it requires little computational work and time, and it removes the need for linearization, perturbation, and discretization. Compared with previous approaches, the proposed technique is a competent tool for examining analytical outcomes of the projected model, and the methodology used herein proves to be both efficient and reliable, indicating substantial progress in the field.
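
As background, time-fractional models of this kind are most often posed with the Caputo derivative (an assumption here, since the abstract names only "non-local derivatives"); for order 0 < α < 1 it reads

$$
{}^{C}\!D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-\tau)^{-\alpha} f'(\tau)\, d\tau .
$$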

24 pages, 985 KB  
Article
Artificial Intelligence-Driven Diagnostics in Eye Care: A Random Forest Approach for Data Classification and Predictive Modeling
by Luís F. F. M. Santos, Miguel Ángel Sánchez-Tena, Cristina Alvarez-Peregrina and Clara Martinez-Perez
Algorithms 2025, 18(10), 647; https://doi.org/10.3390/a18100647 - 15 Oct 2025
Abstract
Artificial intelligence and machine learning have increasingly transformed optometry, enabling automated classification and predictive modeling of eye conditions. In this study, we introduce Optometry Random Forest, an artificial intelligence-based system for automated classification and forecasting of optometric data. The proposed methodology leverages Random Forest models, trained on academic optometric datasets, to classify key diagnostic categories, including Contactology, Dry Eye, Low Vision, Myopia, Pediatrics, and Refractive Surgery. Additionally, an autoregressive integrated moving average (ARIMA)-based forecasting model is incorporated to predict research trends in optometry through 2030. Comparing the one-shot and epoch-trained Optometry Random Forest, the findings indicate that the epoch-trained model consistently outperforms the one-shot model, achieving superior classification accuracy (97.17%), precision (97.28%), and specificity (100%). Moreover, a comparative analysis with Optometry Bidirectional Encoder Representations from Transformers demonstrates that the Optometry Random Forest excels in classification reliability and predictive analytics, positioning it as a robust artificial intelligence tool for clinical decision-making and resource allocation. This research highlights the potential of Random Forest models in medical artificial intelligence, offering a scalable and interpretable solution for automated diagnosis, predictive analytics, and artificial intelligence-enhanced decision support in optometry. Future work should focus on integrating real-world clinical datasets to further refine classification performance and enhance the potential for artificial intelligence-driven patient care.
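
A minimal scikit-learn sketch of the classification stage on stand-in data; the paper's datasets, features, and tuning are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical stand-in data: rows are text-derived feature vectors, labels
# index the six diagnostic categories named in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))
y = rng.integers(0, 6, size=600)   # 0..5 -> Contactology ... Refractive Surgery

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```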
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)

51 pages, 9631 KB  
Review
Review of Physics-Informed Neural Networks: Challenges in Loss Function Design and Geometric Integration
by Sergiy Plankovskyy, Yevgen Tsegelnyk, Nataliia Shyshko, Igor Litvinchev, Tetyana Romanova and José Manuel Velarde Cantú
Mathematics 2025, 13(20), 3289; https://doi.org/10.3390/math13203289 - 15 Oct 2025
Abstract
Physics-Informed Neural Networks (PINNs) represent a transformative approach to solving partial differential equation (PDE)-based boundary value problems by embedding physical laws into the learning process, addressing challenges such as non-physical solutions and data scarcity, which are inherent in traditional neural networks. This review analyzes critical challenges in PINN development, focusing on loss function design, geometric information integration, and their application in engineering modeling. We explore advanced strategies for constructing loss functions—including adaptive weighting, energy-based, and variational formulations—that enhance optimization stability and ensure physical consistency across multiscale and multiphysics problems. We emphasize geometry-aware learning through analytical representations—signed distance functions (SDFs), phi-functions, and R-functions—with complementary strengths: SDFs enable precise local boundary enforcement, whereas phi/R capture global multi-body constraints in irregular domains; in practice, hybrid use is effective for engineering problems. We also examine adaptive collocation sampling, domain decomposition, and hard-constraint mechanisms for boundary conditions to improve convergence and accuracy and discuss integration with commercial CAE via hybrid schemes that couple PINNs with classical solvers (e.g., FEM) to boost efficiency and reliability. Finally, we consider emerging paradigms—Physics-Informed Kolmogorov–Arnold Networks (PIKANs) and operator-learning frameworks (DeepONet, Fourier Neural Operator)—and outline open directions in standardized benchmarks, computational scalability, and multiphysics/multi-fidelity modeling for digital twins and design optimization.
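
A tiny sketch of the hard-constraint idea mentioned above, on an assumed 1D toy problem: multiplying the network output by a function that vanishes on the boundary (the role an SDF plays in higher dimensions) makes the Dirichlet data hold exactly by construction, regardless of what the network outputs.

```python
import numpy as np

# Hard enforcement of Dirichlet data via the ansatz u_hat = g + phi * N,
# where phi vanishes on the boundary. Toy domain [0, 1] with u(0)=0, u(1)=1.

def g(x):            # any smooth extension of the boundary data
    return x         # g(0) = 0, g(1) = 1

def phi(x):          # vanishes at x = 0 and x = 1 (SDF-like cutoff)
    return x * (1.0 - x)

def N(x):            # stand-in for a neural network's raw output
    return np.sin(7.0 * x)

x = np.linspace(0.0, 1.0, 5)
u_hat = g(x) + phi(x) * N(x)
print(u_hat[0], u_hat[-1])   # exactly 0.0 and 1.0: BCs hold by construction
```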

18 pages, 6196 KB  
Article
MSIMG: A Density-Aware Multi-Channel Image Representation Method for Mass Spectrometry
by Fengyi Zhang, Boyong Gao, Yinchu Wang, Lin Guo, Wei Zhang and Xingchuang Xiong
Sensors 2025, 25(20), 6363; https://doi.org/10.3390/s25206363 - 15 Oct 2025
Abstract
Extracting key features for phenotype classification from high-dimensional and complex mass spectrometry (MS) data presents a significant challenge. Conventional data representation methods, such as traditional peak lists or grid-based imaging strategies, are often hampered by information loss and compromised signal integrity, thereby limiting the performance of downstream deep learning models. To address this issue, we propose a novel data representation framework named MSIMG. Inspired by object detection in computer vision, MSIMG introduces a data-driven, “density-peak-centric” patch selection strategy. This strategy employs density map estimation and non-maximum suppression algorithms to locate the centers of signal-dense regions, which serve as anchors for dynamic, content-aware patch extraction. This process transforms raw mass spectrometry data into a multi-channel image representation with higher information fidelity. Extensive experiments conducted on two public clinical mass spectrometry datasets demonstrate that MSIMG significantly outperforms both the traditional peak list method and the grid-based MetImage approach. This study confirms that the MSIMG framework, through its content-aware patch selection, provides a more information-dense and discriminative data representation paradigm for deep learning models. Our findings highlight the decisive impact of data representation on model performance and successfully demonstrate the immense potential of applying computer vision strategies to analytical chemistry data, paving the way for the development of more robust and precise clinical diagnostic models. Full article
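
A compact sketch of density-peak selection via greedy non-maximum suppression on a toy density map; this illustrates the general technique only, as MSIMG's actual density estimation and patch geometry are more involved.

```python
import numpy as np

def select_patch_centers(density, k=5, radius=8):
    """Greedy non-maximum suppression on a 2D density map: repeatedly take the
    highest remaining peak and zero out a square neighborhood around it."""
    d = density.copy()
    centers = []
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(d), d.shape)
        if d[i, j] <= 0:
            break
        centers.append((int(i), int(j)))
        d[max(0, i - radius):i + radius + 1,
          max(0, j - radius):j + radius + 1] = 0
    return centers

# Hypothetical density map standing in for a smoothed m/z x retention-time grid.
rng = np.random.default_rng(1)
density = rng.random((64, 64))
print(select_patch_centers(density, k=3))
```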
(This article belongs to the Section Chemical Sensors)

11 pages, 978 KB  
Article
An Analytical Solution to the 1D Drainage Problem
by Konstantinos Kalimeris and Leonidas Mindrinos
Mathematics 2025, 13(20), 3279; https://doi.org/10.3390/math13203279 - 14 Oct 2025
Abstract
We derive an analytical solution to the one-dimensional linearized Boussinesq equation with mixed boundary conditions (Dirichlet–Neumann), formulated to describe drainage in porous media. The solution is obtained via the unified transform method (Fokas method), extending its previous applications in infiltration problems and illustrating its utility in soil hydrology. An explicit integral representation is constructed, considering different types of initial conditions. Numerical examples are presented to demonstrate the accuracy of the solution, with direct comparisons to the classical Fourier series approach.
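
For context, a linearized Boussinesq equation for the water-table height h(x, t) over a drained layer of length L typically takes the diffusion form below; the parameterization and boundary data shown are assumptions for illustration, not necessarily those of the paper:

$$
\frac{\partial h}{\partial t} = a\, \frac{\partial^2 h}{\partial x^2}, \qquad a = \frac{K \bar h}{S},
$$

with, e.g., a Dirichlet condition h(0, t) = 0 at the drain and a no-flow Neumann condition ∂h/∂x(L, t) = 0 at the divide, where K is the hydraulic conductivity, h̄ a reference saturated thickness, and S the drainable porosity.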
(This article belongs to the Special Issue Soliton Theory and Integrable Systems in Mathematical Physics)
