Search Results (2,479)

Search Parameters:
Keywords = hierarchical structure model

18 pages, 6586 KB  
Article
Automatic Grade Classification in Prostate Histopathological Images Using EfficientNet and Ordinal Focal Loss
by Woshington Valdeci de Sousa Rodrigues, Armando Luz, José Denes Lima Araújo, João Diniz and Antonio Oseas
Bioengineering 2026, 13(5), 503; https://doi.org/10.3390/bioengineering13050503 (registering DOI) - 26 Apr 2026
Abstract
The automatic classification of ISUP (International Society of Urological Pathology) grade groups in prostate histopathological images remains challenging due to the high similarity between adjacent classes, class imbalance, and label noise. In this work, we propose a deep learning pipeline based on EfficientNet convolutional neural networks combined with a hybrid loss function that integrates ordinal regression and Focal Loss to better capture the ordered nature of ISUP grades. A noise-filtering strategy based on the entropy of predictions from multiple EfficientNet models was first applied to identify and remove high-uncertainty samples from the training set. The problem was then reformulated as an ordinal regression task to explicitly model the hierarchical relationship among grades. Experiments conducted on the PANDA dataset demonstrate that removing noisy samples improved performance from κ=0.826 to κ=0.833. Incorporating ordinal loss further increased performance to κ=0.851. The best configuration, combining ordinal regression and Focal Loss, achieved κ=0.857 and an accuracy of 0.669, while reducing severe misclassifications and concentrating errors among adjacent classes. These results indicate that explicitly modeling ordinal structure and mitigating label noise are effective strategies for improving prostate cancer grading systems. Full article
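The ordinal-regression-plus-Focal-Loss idea described in the abstract above can be illustrated with a short sketch. This is not the paper's implementation: the cumulative-threshold encoding, the choice of γ, and the numpy formulation are assumptions made here for a minimal, self-contained example.

```python
import numpy as np

def ordinal_focal_loss(logits, grade, num_classes=6, gamma=2.0):
    """Focal-weighted binary cross-entropy over cumulative ordinal thresholds.

    A grade g in {0..num_classes-1} is encoded as num_classes-1 binary
    targets t_k = 1 if g > k (the standard cumulative-link ordinal encoding);
    each threshold gets a sigmoid output and a focal-modulated BCE term, so
    errors between distant grades incur more violated thresholds.
    """
    k = np.arange(num_classes - 1)
    targets = (grade > k).astype(float)        # cumulative encoding of the label
    p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid per threshold
    pt = np.where(targets == 1, p, 1 - p)      # probability on the true side
    pt = np.clip(pt, 1e-7, 1.0)
    return float(np.sum(-((1 - pt) ** gamma) * np.log(pt)))

def decode_grade(logits):
    """Predicted grade = number of cumulative thresholds passed."""
    p = 1.0 / (1.0 + np.exp(-logits))
    return int(np.sum(p > 0.5))
```

With this encoding, a prediction one grade off violates a single threshold, while a prediction several grades off violates several, which is one way ordinal structure concentrates errors among adjacent classes.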
24 pages, 6533 KB  
Article
Deep Basis Non-Negative Matrix Factorization with Multi-Centroid Contrastive Learning
by Guoqing Luo, Yuan Wan, Hubo Tan and Zaichun Sun
Mathematics 2026, 14(9), 1452; https://doi.org/10.3390/math14091452 (registering DOI) - 26 Apr 2026
Abstract
Non-negative Matrix Factorization (NMF) is a fundamental technique in unsupervised learning for data representation and clustering tasks. Although deep NMF methods have been developed to uncover hierarchical latent features, many existing approaches primarily rely on coefficient-matrix-based decomposition or single-centroid representations. This often limits the integration of intra-class structural features during deep decomposition, resulting in ambiguous and incomplete local feature representations. Moreover, these frameworks often exhibit feature blurring and inadequate discriminability across hierarchical levels. This paper introduces a novel Deep Basis Non-negative Matrix Factorization with Multi-Centroid Contrastive Learning (DBMCNMF) algorithm that addresses these limitations through innovative architectural design. The proposed method integrates multi-centroid representation learning with contrastive regularization constraints within a deep basis matrix factorization framework. The algorithm uses Gaussian similarity measures to establish attractive and repulsive regularization terms that preserve manifold topology while promoting discriminative clustering. DBMCNMF uses multiple centroids instead of single-centroid methods to comprehensively cover complex data distributions and capture local geometric structures that are typically inaccessible to conventional methods. The proposed model is evaluated on several benchmark image datasets. The results indicate that DBMCNMF consistently outperforms traditional single-centroid methods by achieving higher clustering accuracy and preserving the underlying manifold structure more effectively. Full article
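For readers unfamiliar with the NMF building block that DBMCNMF extends, a minimal sketch of the classic multiplicative-update algorithm follows. The paper's multi-centroid contrastive machinery is not reproduced here; this is only the plain Lee–Seung baseline on which deep NMF variants are layered.

```python
import numpy as np

def nmf(X, rank, iters=200, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||X - W @ H||_F^2.

    X (non-negative, m x n) is factorized as W (m x rank) @ H (rank x n).
    Multiplicative updates keep both factors non-negative by construction,
    which is the interpretability property NMF methods build on.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Deep NMF methods factorize W or H further into a product of layer-wise matrices; the update loop above is the single-layer case.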
35 pages, 5864 KB  
Review
The State of Practice in Application of Natural Language Processing in Transportation Safety Analysis
by Mohammadjavad Bazdar, Hyun Kim, Branislav Dimitrijevic and Joyoung Lee
Appl. Sci. 2026, 16(9), 4223; https://doi.org/10.3390/app16094223 (registering DOI) - 25 Apr 2026
Abstract
This paper provides a systematic review of recent applications of NLP methods for analyzing traffic crash reports, with a focus on estimating crash severity, crash duration, and crash causation. The review covers prior research using probabilistic topic modeling methods such as LDA, STM, and hierarchical Dirichlet processes in addition to research using transformer-based language models, which include encoder-based models like BERT and PubMedBERT as well as decoder-based models like GPT, GPT2, ChatGPT, GPT-3, and LLaMA. The review starts with a systematic literature selection process with predefined inclusion criteria. We categorize the reviewed studies into the following application areas: crash severity prediction, risk factor identification in crashes, and road safety analysis. The results show several complementary advantages of using different NLP techniques to achieve different analytical goals. Topic models allow for interpretable and exploratory pattern discovery, while encoder models are well-suited for structured prediction problems. Decoder models have the additional flexibility to perform zero-shot and few-shot reasoning, which makes them useful for reasoning about under-sampled or under-reported data. Across the literature, hybrid methods that combine text and structured data outperform individual methods in terms of prediction accuracy and broad applicability. Challenges across the literature include class imbalance, lack of standardization in preprocessing and evaluation methods, and the tradeoff between prediction accuracy and interpretability of prediction models. These findings highlight the importance of aligning model selection with data availability and operational constraints, pointing toward future research directions in hybrid modeling frameworks, standardized evaluation protocols, and real-world deployment of NLP-driven traffic safety systems. Full article
(This article belongs to the Special Issue Traffic Safety Measures and Assessment: 2nd Edition)
22 pages, 5563 KB  
Article
A Spectrum-Driven Hierarchical Learning Network for Aero-Engine Defect Segmentation
by Yining Xie, Aoqi Shen, Haochen Qi, Jing Zhao, Jianpeng Li, Xichun Pan and Anlong Zhang
Computation 2026, 14(5), 99; https://doi.org/10.3390/computation14050099 (registering DOI) - 25 Apr 2026
Abstract
Aero-engine defects often exhibit micro-scale and high-frequency characteristics under complex metallic textures, which makes precise segmentation difficult. Most existing pixel-level methods rely on spatial-domain modeling and lack frequency-domain decoupling. As a result, high-frequency details are easily hidden by low-frequency background information. In addition, repeated downsampling weakens the representation of fine-grained structures, leading to inaccurate boundary localization and limited robustness. To address these issues, a spectrum-driven hierarchical learning network is proposed for aero-engine defect segmentation. First, a dual-band spectral module is constructed using the discrete cosine transform to separate high-frequency and low-frequency components, providing stable and physically meaningful frequency-domain priors for the network. Second, a detail-guided module is designed in which high-frequency features adaptively guide skip connections, compensating for information loss during encoding and improving boundary recovery. Furthermore, a low-frequency-driven region-aware modeling module is developed. The internal defect regions, boundary areas, and background regions are modeled hierarchically. A dynamic hyper-kernel generation mechanism performs region-sensitive convolutional modeling, improving adaptation to complex structural variations. Extensive experiments on the Turbo19 and NEU-Seg datasets demonstrate that the proposed method produces accurate defect boundaries and achieves mIoU scores of 89.82% and 91.44%, improving over the second-best method by 5.22% and 4.42%, respectively. Full article
(This article belongs to the Section Computational Engineering)
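The dual-band spectral decomposition described above can be illustrated with a toy frequency split. Note the assumption: the paper uses the discrete cosine transform, whereas this sketch substitutes a radial low-pass mask in numpy's FFT domain purely to stay self-contained.

```python
import numpy as np

def band_split(img, cutoff=0.1):
    """Split an image into low- and high-frequency components.

    A radial mask of radius cutoff * max(h, w) in the centered spectrum
    keeps the low band; the high band is the residual, so low + high
    reconstructs the input exactly. Small, sharp defects concentrate in
    the high band, smooth background in the low band.
    """
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    mask = r <= cutoff * max(h, w)
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    high = img - low
    return low, high
```

A DCT-based variant would apply the same masking logic to DCT coefficients, which avoids the FFT's implicit periodic boundary.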
16 pages, 4351 KB  
Article
Representation-Centric Deep Learning for Multi-Class, Multi-Organ Histopathology Image Classification
by Li Hao and Ma Ning
Algorithms 2026, 19(5), 336; https://doi.org/10.3390/a19050336 (registering DOI) - 25 Apr 2026
Abstract
Imaging-based multi-omics derived from digital histopathology provides a valuable approach for characterizing tumor heterogeneity from routine clinical specimens. However, robust multi-cancer histopathological analysis remains challenging due to pronounced intra-tumor variability, inter-organ morphological overlap, and sensitivity to staining and acquisition variations, which can limit the generalizability of deep learning models. These limitations are largely driven by insufficient representation learning, particularly in multi-organ and multi-class diagnostic settings. In this study, we propose a hierarchically regularized representation learning framework for multi-cancer histopathological image analysis that models imaging-based features across multiple organs and diagnostic categories. The framework integrates complementary mechanisms to capture fine-grained cellular morphology, long-range tissue architecture, and organ-aware diagnostic semantics within a unified computational model. A hierarchical supervision strategy guides the network to reduce entanglement between organ-level structural characteristics and disease-specific diagnostic patterns in the learned representations. The method operates without pixel-level annotations or handcrafted morphological priors, supporting scalable experimental evaluation. We demonstrate the approach on balanced lung and colon cancer histopathology cohorts, achieving 96.5% accuracy on lung cancer classification and 96.8% accuracy on colon cancer classification. Ablation and robustness analyses further validate the contributions of hierarchical regularization and consistency learning. Overall, this work provides a demonstrated proof-of-concept framework for representation-centric imaging-based analysis in multi-organ histopathology under the evaluated dataset conditions. Full article
22 pages, 3438 KB  
Article
Beyond Byte-Level Modeling: Structure-Aware and Adaptive Traffic Classification for Encrypted Networks
by Gyeong-Min Yu, Yoon-Seong Jang, Ju-Sung Kim, Seung-Woo Nam, Ji-Min Kim, Yang-Seo Choi and Myung-Sup Kim
Electronics 2026, 15(9), 1828; https://doi.org/10.3390/electronics15091828 (registering DOI) - 25 Apr 2026
Abstract
The widespread adoption of encryption protocols such as TLS 1.3 has significantly reduced the visibility of packet payloads, limiting the effectiveness of traditional traffic analysis methods. Recent deep learning approaches attempt to learn representations directly from raw byte sequences; however, in encrypted environments, byte-level patterns often exhibit high entropy and unstable ordering, raising concerns about their reliability. In this work, we revisit the roles of content and structural information in traffic classification and argue that effective modeling should move beyond content-only representations. We propose a structure-aware framework that models hierarchical relationships across fields, layers, and sessions while representing byte information using compact, permutation-invariant summaries. In addition, we introduce a hierarchical shuffle pretraining strategy to capture relational dependencies and an adaptive inter-level gating mechanism to dynamically integrate multi-level representations. Extensive experiments on multiple datasets with varying levels of encryption demonstrate that byte-level sequential patterns are not always essential, while structural information provides consistent complementary cues. Furthermore, the importance of different structural levels varies across datasets, highlighting the need for adaptive multi-level modeling. The proposed method achieves strong performance across diverse datasets, including highly encrypted traffic, while maintaining robustness under domain shifts and limited data scenarios. These results suggest that combining compact content representations with structural context and adaptive integration is a promising direction for encrypted traffic analysis. Full article
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 3rd Edition)
17 pages, 2710 KB  
Article
DPA-HiVQA: Enhancing Structured Radiology Reporting with Dual-Path Cross-Attention
by Ngoc Tuyen Do, Minh Nguyen Quang and Hai Van Pham
Mach. Learn. Knowl. Extr. 2026, 8(5), 113; https://doi.org/10.3390/make8050113 (registering DOI) - 24 Apr 2026
Abstract
Structured radiology reporting can improve clinical decision support by standardizing clinical findings into hierarchical formats. However, answering the thousands of questions about clinical findings in structured report templates is prohibitively time-consuming, which can limit clinical adoption. Furthermore, early medical VQA datasets primarily focused on free-text, independent question–answer pairs. A recent dataset, Rad-ReStruct, introduced hierarchical VQA, but the accompanying model still relies heavily on flattened embedding representations and single-path text–image fusion mechanisms that inadequately handle complex hierarchical dependencies in responses. In this paper, we propose DPA-HiVQA (Dual-Path Cross-Attention for Hierarchical VQA), addressing these limitations through two key contributions: (1) a multi-scale image embedding that combines global semantic embeddings with patch-level spatial features from a domain-specific BioViL encoder; (2) a dual-path cross-attention mechanism enabling simultaneous holistic semantic understanding and fine-grained spatial reasoning. Evaluated on the Rad-ReStruct benchmark, the model substantially outperforms the established benchmark baseline, with overall F1-score and Level 3 F1-score improvements of 21.2% and 31.9%, respectively. The proposed model demonstrates that dual-path cross-attention architectures can effectively connect holistic semantic understanding and fine-grained spatial detail, paving the way for practical AI-assisted structured reporting systems that reduce radiologist burden while maintaining diagnostic accuracy. Full article
34 pages, 2661 KB  
Article
Predictive Mamba-Enhanced Multi-Agent Reinforcement Learning Control for Virtual Coupling of High-Speed Trains
by Han Hu, Qingsheng Feng, Zhun Han, Wangyang Liu and Hong Li
Electronics 2026, 15(9), 1823; https://doi.org/10.3390/electronics15091823 (registering DOI) - 24 Apr 2026
Abstract
Virtual coupling control of trains is a promising technology for improving railway capacity and operational efficiency. However, existing multi-agent reinforcement learning (MARL) approaches struggle to capture long-sequence temporal dependencies among train states in complex multi-train interaction scenarios, resulting in limited robustness and coordination stability. To address this issue, this paper proposes a Predictive Mamba-based Multi-Agent Soft Actor–Critic (PM-MASAC) framework. A Mamba-based state prediction module is embedded into the centralized Critic network to model historical state sequences and generate predictive state representations, thereby enhancing value estimation accuracy. In addition, a multi-agent aggregated prioritized experience replay (PER) mechanism is introduced to improve the utilization of critical cooperative samples and stabilize training. A hierarchical local–global reward structure is further designed to ensure individual tracking performance while promoting overall formation coordination. Experimental results under realistic railway operating conditions demonstrate that PM-MASAC achieves superior robustness compared with baseline MARL methods. Velocity and spacing tracking errors are maintained within 3% and 1%, respectively, and the steady-state formation success rate exceeds 95.7% in the training environment. Full article
31 pages, 2203 KB  
Article
Hierarchical Multi-View Representation Learning via Generalized Deep Non-Negative Matrix Factorization
by Hubo Tan, Yuan Wan, Guoqing Luo and Zaichun Sun
Mathematics 2026, 14(9), 1442; https://doi.org/10.3390/math14091442 - 24 Apr 2026
Abstract
Multi-view clustering aims to exploit complementary information from multiple views to uncover intrinsic grouping structures in data, where effective representation learning plays a critical role. Non-negative matrix factorization (NMF) has been widely used for multi-view representation learning due to its inherent interpretability; however, most existing NMF-based methods rely on shallow architectures and are therefore insufficient for capturing hierarchical characteristics. Although recent deep NMF models introduce multi-layer structures by factorizing either feature matrices or basis matrices, their performance may degrade when the data are limited or exhibit relatively simple structures. To address these issues, this paper proposes a generalized deep non-negative matrix factorization framework for multi-view representation learning, termed GDNMF-MRL, which jointly decomposes feature and basis matrices to learn hierarchical representations. By integrating shallow linear components with deep nonlinear structures, the proposed method enhances representation capability and yields more discriminative latent subspaces. Furthermore, a one-step variant, termed OS-GDNMF-MRL, is developed to simultaneously learn latent representations and clustering assignments within a unified optimization framework, enabling direct interaction between representation learning and clustering without requiring separate post-processing. Two efficient alternating optimization algorithms with guaranteed convergence of the objective function are derived, and extensive experiments on benchmark datasets demonstrate that the proposed methods consistently outperform several state-of-the-art multi-view clustering approaches. Full article
(This article belongs to the Section E: Applied Mathematics)
22 pages, 402 KB  
Article
Validation of a Scale to Measure Career Concerns Related to Perceived Environmental Challenges (the CC-PEC Scale)
by Andrea Zammitti, Angela Russo, Jenny Marcionetti and Anna Parola
Behav. Sci. 2026, 16(5), 636; https://doi.org/10.3390/bs16050636 - 24 Apr 2026
Abstract
Choosing a future career represents a complex developmental task, often accompanied by multiple concerns and anxieties. The Social Cognitive Career Theory and Life Design paradigm emphasize the importance of supporting individuals in managing career-related challenges. However, global stressors—such as the COVID-19 pandemic, the war in Ukraine, and increasing awareness of the climate emergency—have introduced new and multifaceted sources of uncertainty that are not adequately captured by existing instruments. This gap highlights the need for a psychometrically sound measure to assess emerging career-related concerns in the contemporary context. Accordingly, the study aimed to develop and validate the Career Concerns related to Perceived Environmental Challenges (CC-PEC Scale). Four studies were conducted. Study 1 employed exploratory factor analysis, supporting a three-factor structure (Career-related COVID-19 pandemic concern, Career-related war concern, and Career-related climate emergency concern). Study 2 confirmed this structure using confirmatory factor analysis and demonstrated measurement invariance across gender, supporting a hierarchical factorial model. Study 3 provided evidence of concurrent and discriminant validity through associations with related constructs. Study 4 offered preliminary evidence of stability and predictive validity using life satisfaction and flourishing as outcome variables. Overall, the findings support the CC-PEC Scale as a reliable and valid instrument for assessing career-related concerns linked to global environmental challenges. These results have important implications for research and career guidance interventions aimed at supporting young people’s career development in increasingly uncertain contexts. Full article
(This article belongs to the Special Issue External Influences in Adolescents’ Career Development: 2nd Edition)
20 pages, 5677 KB  
Article
Robust Image Watermarking via Clustered Visual State-Space Modeling
by Bo Liu and Jianhua Ren
Appl. Sci. 2026, 16(9), 4166; https://doi.org/10.3390/app16094166 - 24 Apr 2026
Abstract
Most existing DNN-based image watermarking methods adopt an “encoder–noise–decoder” paradigm, where the watermark is typically replicated and expanded in a straightforward manner and then directly fused with image features, which limits robustness under complex distortions. Although Transformers improve fusion via attention mechanisms, their quadratic computational complexity makes high-resolution processing prohibitively expensive. To address these issues, we propose CCViM, a robust watermarking framework built on Vision Mamba, which leverages the linear-complexity property of state-space models (SSMs) to enable efficient global interactions. We design a Watermark Representation Learning Module (WRLM) that performs hierarchical feature extraction and structured expansion of the watermark through cascaded VSS blocks, yielding semantically rich and perturbation-resistant watermark representations. In addition, we introduce an Interwoven Fusion Enhancement Module (IFEM), which employs a CCS6 structure to treat the watermark as a dynamic guidance signal. By combining contextual clustering with the Mamba mechanism, IFEM deeply interweaves the watermark into host features at both local and global levels. Experiments on COCO, DIV2K, and ImageNet demonstrate that CCViM consistently improves imperceptibility, robustness, and efficiency to varying degrees, and remains stable and of high quality under attacks such as JPEG compression, cropping, and Gaussian blur. Full article
(This article belongs to the Special Issue Advanced Pattern Recognition & Computer Vision, 2nd Edition)
23 pages, 2175 KB  
Article
Semantic Segmentation of Sparse Array-SAR 3D Point Clouds Using an Enhanced PointNet++ Framework
by Ya Shu, Lei Pang and Miao Li
Appl. Sci. 2026, 16(9), 4149; https://doi.org/10.3390/app16094149 - 23 Apr 2026
Abstract
The semantic segmentation of sparse array synthetic aperture radar (SAR) 3D point clouds remains a significant challenge. These datasets are characterized by extreme sparsity, irregular distribution, and structural discontinuity, factors that diminish the reliability of local neighborhoods and impede the performance of traditional segmentation algorithms. This study introduces an enhanced PointNet++ framework specifically tailored for the semantic segmentation of sparse array-SAR 3D point clouds. Utilizing PointNet++ as a hierarchical backbone, the proposed architecture incorporates three geometry-oriented modifications: a feature enhancement strategy integrating normalized height, surface normals, and local density; an EdgeConv module positioned at an intermediate abstraction stage to reinforce local geometric modeling; and an FP-Refine module designed to optimize cross-scale feature propagation and recovery within sparse regions. Rather than proposing a fundamentally distinct universal architecture, this research focuses on a task-oriented adaptation of PointNet++ to address the neighborhood instability and structural gaps inherent in sparse array-SAR data. Experimental evaluations using the SARMV3D-1.0 dataset indicate that the proposed method consistently outperforms the PointNet++ baseline, maintaining stable performance across various random seeds with an mIoU between 55% and 58%. Further validation through ablation studies, parameter sensitivity analyses, and perturbation-based robustness assessments confirms the utility of the integrated components. Additionally, cross-dataset experiments on S3DIS and Toronto3D suggest that the framework generalizes effectively to point clouds with varying densities and spatial configurations. The findings demonstrate that the method is particularly successful for categories defined by distinct vertical geometry and structural continuity, such as trees, roofs, and facades, though performance remains limited for weakly structured classes like roads. Full article
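Two of the handcrafted features mentioned in the abstract above, normalized height and local density, are straightforward to compute. The sketch below is one plausible formulation (brute-force neighbor search, surface normals omitted), not the paper's actual code.

```python
import numpy as np

def point_features(pts, k=8):
    """Per-point normalized height and local density for a 3D point cloud.

    Normalized height rescales z into [0, 1]; local density is the inverse
    of the mean distance to the k nearest neighbours. The pairwise-distance
    matrix here is O(n^2) and only suitable for small clouds; at scale a
    k-d tree would replace it.
    """
    z = pts[:, 2]
    height = (z - z.min()) / (np.ptp(z) + 1e-9)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point from its own kNN
    knn = np.sort(d, axis=1)[:, :k]
    density = 1.0 / (knn.mean(axis=1) + 1e-9)
    return height, density
```

Such per-point scalars are simply concatenated to the xyz coordinates before entering the backbone, which is what "feature enhancement" typically means in PointNet++-style pipelines.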
47 pages, 5277 KB  
Article
A Probabilistic–Statistical Approach to Mass Transfer in Randomly Nonhomogeneous Layered Media Based on Boundary Experimental Data
by Olha Chernukha, Petro Pukach, Halyna Bilushchak, Yurii Bilushchak and Myroslava Vovk
Mathematics 2026, 14(9), 1413; https://doi.org/10.3390/math14091413 - 23 Apr 2026
Abstract
This paper presents a probabilistic–statistical approach to the analysis of diffusion processes in randomly nonhomogeneous multilayered bodies under conditions of incomplete experimental information on the boundary. The boundary condition is reconstructed from experimental data using linear regression, while the solution of the corresponding contact initial-boundary value problem is obtained in the form of a Neumann series and averaged over an ensemble of phase configurations. A system of statistical estimates for the solution is developed, including confidence intervals and two-sided critical regions, which provide complementary characteristics of uncertainty. Numerical experiments are performed for six representative samples differing in sample size, variance, and observation interval. It is shown that, despite significant differences in the statistical properties of the input data, the averaged concentration field preserves a qualitatively stable spatio-temporal structure. The results of the article address gaps in existing research by applying a probabilistic–statistical approach that consistently integrates two key elements for the analysis of diffusion processes in multilayer media. The first of these is the reconstruction of boundary conditions using linear regression to recover the conditions at the body boundary based on incomplete experimental data. The second is the analysis of uncertainty propagation by combining the regression model with a probabilistic analysis of the corresponding contact initial-boundary value problem, which allows a quantitative assessment of how errors in the experimental data affect the final solution. From the point of view of mathematical modeling methods, the novelty of the approach lies in the creation of a structural-hierarchical scheme that synthesizes the approaches of mathematical statistics and the theory of random fields. The developed method provides an innovative theoretical and computational basis for the analysis of specific physical and technological processes. Full article
(This article belongs to the Special Issue Theory and Applications of Probability Theory and Stochastic Analysis)
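The boundary-reconstruction step described in this abstract can be illustrated with a minimal least-squares sketch. This is not the authors' implementation: the linear trend model, the synthetic data, and the names `reconstruct_boundary`, `t_obs`, `c_obs` are all illustrative assumptions; the paper's actual regression model and confidence-interval construction may differ.

```python
import numpy as np

def reconstruct_boundary(t_obs, c_obs):
    """Fit a linear trend c(t) = a + b*t to noisy boundary measurements
    by least squares; return the coefficients and a rough 95% confidence
    half-width for the mean response (Gaussian-error assumption)."""
    t_obs = np.asarray(t_obs, dtype=float)
    c_obs = np.asarray(c_obs, dtype=float)
    n = len(t_obs)
    A = np.column_stack([np.ones(n), t_obs])        # design matrix [1, t]
    coef, *_ = np.linalg.lstsq(A, c_obs, rcond=None)
    fitted = A @ coef
    sigma2 = np.sum((c_obs - fitted) ** 2) / (n - 2)  # residual variance
    half_width = 1.96 * np.sqrt(sigma2 / n)           # crude 95% half-width
    return coef, half_width

# Synthetic boundary data: true trend 1.0 + 0.5*t plus small Gaussian noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
c = 1.0 + 0.5 * t + rng.normal(0.0, 0.05, size=t.size)
(a, b), hw = reconstruct_boundary(t, c)
```

With dense, low-noise observations the recovered intercept and slope land close to the generating values, and the half-width shrinks as the sample size grows, mirroring the abstract's point that the statistical properties of the input data feed directly into the uncertainty of the reconstructed boundary condition.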
49 pages, 14696 KB  
Review
Recent Advances in Additively Manufactured Polymeric Structures for Mechanical Energy Absorption
by Alin Bustihan and Ioan Botiz
Polymers 2026, 18(9), 1019; https://doi.org/10.3390/polym18091019 - 23 Apr 2026
Viewed by 362
Abstract
Additive manufacturing has emerged as a powerful approach for producing architected materials with tailored mechanical properties and enhanced energy absorption capabilities. By enabling precise control over geometry, relative density, and hierarchical topology, additive manufacturing facilitates the design of lightweight cellular structures with superior crashworthiness compared to conventional energy-absorbing materials. This review provides a comprehensive overview of recent advances in additively manufactured energy-absorbing structures, with particular emphasis on the interplay between structural architecture, fabrication technologies, and mechanical performance. Key additive manufacturing processes, including fused deposition modeling, stereolithography, selective laser sintering, and multi-jet fusion, are evaluated in terms of their fabrication capabilities, material compatibility, and inherent limitations. Special attention is given to the mechanical behavior of representative architectures, including two-dimensional cellular structures, three-dimensional lattice geometries, sandwich systems, and emerging four-dimensional programmable materials. Depending on topology and material system, additively manufactured lattices can achieve specific energy absorption values exceeding 20–40 J·g⁻¹, significantly outperforming many conventional foams. Finally, current challenges, such as process-induced defects, anisotropic mechanical behavior, and the lack of standardized testing methodologies, are discussed, along with future research directions, including multi-material printing, functionally graded architectures, and adaptive metamaterials for next-generation impact mitigation systems. Full article
(This article belongs to the Special Issue Additive Manufacturing of Polymer Based Materials)
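The specific energy absorption (SEA) figures quoted in this abstract come from integrating a compressive stress–strain curve and normalizing by density. A minimal sketch of that arithmetic, under assumed idealized data (the plateau stress, densification strain, and density below are hypothetical, not values from the review):

```python
import numpy as np

def specific_energy_absorption(strain, stress_mpa, density_kg_m3):
    """SEA in J/g: trapezoidal integral of stress over strain
    (1 MPa over unit strain = 1 MJ/m^3), divided by density."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress_mpa, dtype=float)
    # Volumetric absorbed energy in MJ/m^3 (trapezoid rule)
    w_mj_m3 = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
    # MJ/m^3 -> J/g:  (w * 1e6 J/m^3) / (rho kg/m^3) / (1000 g/kg)
    return w_mj_m3 * 1000.0 / density_kg_m3

# Idealized foam: elastic rise to a 10 MPa plateau, densification at 60% strain
strain = [0.0, 0.05, 0.60]
stress = [0.0, 10.0, 10.0]
sea = specific_energy_absorption(strain, stress, density_kg_m3=300.0)
```

For these assumed numbers the result is roughly 19 J·g⁻¹, which shows how a modest plateau stress at low density can approach the 20–40 J·g⁻¹ range the review attributes to well-designed printed lattices.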
21 pages, 340 KB  
Article
Pareto-Optimal Explainable Diagnosis Under Cost-Aware Parallel Reasoning
by Ana Chacón-Luna, Miguel Tupac-Yupanqui, Nicolás Márquez and Cristian Vidal-Silva
Computers 2026, 15(5), 265; https://doi.org/10.3390/computers15050265 - 23 Apr 2026
Viewed by 169
Abstract
Model-Based Diagnosis (MBD) is widely used to identify minimal conflicts and repair actions in constraint-based systems. Recent advances in parallel reasoning have significantly reduced runtime in large-scale models through speculative and multicore execution strategies. However, existing approaches primarily focus on computational efficiency and implicitly assume that minimal diagnoses are inherently suitable explanations for human decision makers. In complex configuration environments, minimality does not necessarily imply interpretability, as diagnoses may involve structurally dispersed or semantically heterogeneous constraints. To address this limitation, this paper introduces a multi-objective explainability-aware framework for parallel MBD. Diagnosis selection is formulated as a Pareto optimization problem balancing total computational cost against a formally defined interpretability penalty. Interpretability is quantified using graph-based structural dispersion, semantic entropy, hierarchical complexity, and ambiguity metrics. The proposed E-ParetoDiag algorithm computes non-dominated diagnoses and identifies balanced knee-point solutions without modifying the correctness guarantees of the underlying diagnosis algorithms. Experimental evaluation on large-scale benchmark datasets demonstrates a measurable trade-off between runtime and interpretability, particularly in dense constraint systems. Comparative analysis against classical selection strategies shows that the proposed approach reduces structural dispersion by up to 18% while increasing computational cost by only 7%. Statistical validation confirms that these improvements are significant (p < 0.01) in medium- and high-density scenarios. The results indicate that aggressive parallelism may improve computational efficiency while increasing explanation complexity, highlighting the need for multi-objective selection strategies. Overall, the proposed framework extends scalable symbolic reasoning toward a human-centered diagnosis paradigm and establishes a principled foundation for explainability-aware optimization in constraint-based systems. Full article
(This article belongs to the Special Issue Operations Research: Trends and Applications)
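The Pareto selection described in this abstract, filtering non-dominated diagnoses and picking a balanced knee point, can be sketched in a few lines. This is a generic sketch, not the E-ParetoDiag algorithm: the dominance test, the min-max-scaled distance-to-ideal knee heuristic, and the sample `(cost, penalty)` pairs are all assumptions for illustration.

```python
def pareto_front(points):
    """Indices of non-dominated points when minimizing both objectives
    (computational cost, interpretability penalty)."""
    front = []
    for i, (c_i, p_i) in enumerate(points):
        dominated = any(
            c_j <= c_i and p_j <= p_i and (c_j < c_i or p_j < p_i)
            for j, (c_j, p_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

def knee_point(points, front):
    """Front member closest (Euclidean, after min-max scaling of each
    objective over the front) to the ideal point (0, 0)."""
    costs = [points[i][0] for i in front]
    pens = [points[i][1] for i in front]
    c_lo, c_hi = min(costs), max(costs)
    p_lo, p_hi = min(pens), max(pens)
    def scale(v, lo, hi):
        return 0.0 if hi == lo else (v - lo) / (hi - lo)
    return min(front, key=lambda i: scale(points[i][0], c_lo, c_hi) ** 2
                                    + scale(points[i][1], p_lo, p_hi) ** 2)

# Hypothetical diagnoses as (computational cost, interpretability penalty)
diagnoses = [(1.0, 9.0), (2.0, 4.0), (3.0, 3.5), (6.0, 1.0), (5.0, 5.0)]
front = pareto_front(diagnoses)   # (5.0, 5.0) is dominated by (2.0, 4.0)
knee = knee_point(diagnoses, front)
```

On this toy set the knee heuristic selects the diagnosis that is cheap without being the most dispersed, which is exactly the cost-versus-interpretability compromise the abstract argues a purely minimality-driven selector would miss.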
