Search Results (2,234)

Search Parameters:
Keywords = regularity of solutions

19 pages, 264 KB  
Article
Short-Stay Sedentarism: The Local Battle over Migrant Workers’ Housing in The Netherlands
by Tesseltje de Lange and Masja van Meeteren
Soc. Sci. 2026, 15(4), 245; https://doi.org/10.3390/socsci15040245 - 10 Apr 2026
Abstract
This article investigates the housing precarity of EU migrant workers in the Dutch–German border region, focusing on the Venlo Greenport area. Drawing on documentary analysis, 28 interviews, field observations, and stakeholder engagement, it explores how local governance, market dynamics, and framing practices shape housing outcomes. While EU law guarantees free movement, housing remains excluded from the EU rights frameworks, leaving workers dependent on employer-linked or agency-controlled short-stay facilities. These arrangements—often overcrowded, surveilled, and formally temporary—become long-term solutions, producing what we term short-stay sedentarism: prolonged residence in housing designed to deny permanence. The study conceptualises the local “battleground” where municipalities, employers, housing providers, NGOs, and residents negotiate competing interests. Seven interpretive frames—nuisance/disorder, cowboys, human rights, NIMBY, shadow power, integration, and unwanted accumulation—structure these debates, legitimising certain strategies while obscuring structural deficiencies. Findings reveal that certification and enforcement, while intended to improve standards, often entrench precariousness by sustaining the short-stay model. Emerging integration-oriented policies signal a shift but remain fragile amid economic imperatives and spatial constraints. The paper argues that addressing housing precarity requires structural reforms: expanding access to regular housing, reducing employer dependency, and recognising migrant workers as long-term residents rather than temporary labour inputs. Full article
(This article belongs to the Special Issue Migration and Housing)
22 pages, 2127 KB  
Article
Interfacial and Bulk Properties of Volatile Amphiphiles and Sodium Dodecyl Sulfate Mixtures
by Ralitsa Uzunova, Rumyana Stanimirova and Krassimir Danov
Molecules 2026, 31(8), 1256; https://doi.org/10.3390/molecules31081256 - 10 Apr 2026
Abstract
Volatile amphiphiles and surfactant mixtures have gained wide application in diverse areas of industry, cosmetics, and medicine. Surface tension isotherms measured at different solute ratios, processed with appropriate theoretical models, provide quantitative information on their bulk and interfacial properties. Here, this approach is applied to mixtures of a volatile amphiphile (benzyl acetate, linalool, geraniol, menthol, or citronellol) and sodium dodecyl sulfate (SDS). All surface tension isotherms are described by the van der Waals model for a two-component adsorption layer, taking into account counterion binding in the Stern layer, by varying only one adjustable parameter (the interfacial pair interaction energy between adsorbed molecules). Knowing the model parameters, we computed various properties of the adsorption layers (adsorptions of the different components, occupancy of the Stern layer, and interfacial electrostatic potential). The experimental aqueous solubilities of the mixtures are fitted using regular solution theory to obtain the pair bulk interaction parameter. Mixing SDS with (i) benzyl acetate and citronellol is antagonistic; with (ii) linalool and geraniol, synergistic; and with (iii) menthol, ideal. The reported properties of the volatile amphiphile and SDS mixtures could be of interest for extending the range of their practical applicability. Full article
(This article belongs to the Section Physical Chemistry)
15 pages, 34130 KB  
Article
Experimental Evaluation of Precision Positioning in Unmanned Aerial Systems Using Fiducial Markers
by Krzysztof Andrzejewski, Bartłomiej Dziewoński, Artur Kierzkowski, Michał Szewczyk and Krzysztof Kaliszuk
Electronics 2026, 15(8), 1582; https://doi.org/10.3390/electronics15081582 - 10 Apr 2026
Abstract
This paper presents a concept for optical navigation in unmanned aerial systems based on fiducial markers, introducing the ArUco and AprilTag marker families. It provides an overview of existing solutions and a comparison with alternative navigation methods. The ideas presented in this article were tested on real unmanned platforms running ArduPilot software. The paper describes and compares landing touchdown precision using regular GPS and the optical navigation approach, with the previously introduced markers serving as the desired touchdown positions. Full article
(This article belongs to the Special Issue Unmanned Aircraft Systems with Autonomous Navigation: Third Edition)
24 pages, 6226 KB  
Article
Enhanced IMERG SPE Using LSTM with a Novel Adaptive Regularization Method
by Seng Choon Toh, Wan Zurina Wan Jaafar, Cia Yik Ng, Eugene Zhen Xiang Soo, Majid Mirzaei, Fang Yenn Teo and Sai Hin Lai
Water 2026, 18(8), 905; https://doi.org/10.3390/w18080905 - 10 Apr 2026
Abstract
Satellite-based precipitation estimates (SPE) provide essential spatial coverage and near real-time availability for hydrological applications but often exhibit systematic biases in regions characterized by complex terrain and strong climatic variability, limiting their reliability for flood-related studies. To address these limitations, this study proposes an Adaptive Regularization framework integrated within a Long Short-Term Memory (LSTM) model to enhance satellite–gauge rainfall fusion beyond conventional optimization strategies. The framework dynamically adjusts learning rate and weight decay during training based on validation performance and overfitting indicators, improving training stability, data efficiency, and model generalization across diverse precipitation regimes. The proposed approach was applied to refine Integrated Multi-satellite Retrievals for Global Precipitation Measurement (IMERG-Final) daily rainfall estimates over the flood-prone east coast of Peninsular Malaysia. Model performance was assessed against ten optimization algorithms using correlation coefficient (CC), mean absolute error (MAE), normalized root mean squared error (NRMSE), percentage bias (PBias), and Kling–Gupta efficiency (KGE). Results show that the Adaptive Regularization framework consistently outperforms all benchmark optimizers, achieving an MAE of 6.87, CC of 0.68, NRMSE of 1.84, and KGE of 0.56. Overall, the proposed framework enhances spatial consistency and robustness across monsoon seasons, offering a scalable solution for improving SPE in flood-prone regions. Full article
(This article belongs to the Special Issue Water and Environment for Sustainability)
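The adaptive-regularization idea described above (adjusting learning rate and weight decay during training from validation performance and overfitting indicators) can be illustrated with a minimal sketch. The update rule, thresholds, and factors below are invented for illustration; they are not the paper's actual framework.

```python
# Hypothetical sketch: after each epoch, shrink the learning rate when the
# validation loss stops improving, and raise the weight decay when the
# train/validation gap (an overfitting indicator) widens. All thresholds
# and factors here are illustrative assumptions, not the paper's values.

def adapt_hyperparams(lr, weight_decay, train_loss, val_loss, prev_val_loss,
                      lr_factor=0.5, wd_factor=2.0, gap_threshold=0.2):
    """Return updated (lr, weight_decay) from one epoch's losses."""
    if val_loss >= prev_val_loss:              # validation stopped improving
        lr *= lr_factor                        # decay the learning rate
    if val_loss - train_loss > gap_threshold:  # overfitting indicator
        weight_decay *= wd_factor              # strengthen regularization
    return lr, weight_decay

lr, wd = 1e-3, 1e-4
# Simulated epoch history: (train_loss, val_loss) pairs
history = [(0.9, 1.0), (0.6, 0.9), (0.4, 0.95), (0.2, 0.7)]
prev_val = float("inf")
for train_loss, val_loss in history:
    lr, wd = adapt_hyperparams(lr, wd, train_loss, val_loss, prev_val)
    prev_val = val_loss
```

In this toy trace the learning rate is halved once (epoch 3, when validation worsens) and the weight decay is doubled three times (epochs 2-4, when the train/validation gap exceeds the threshold).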
35 pages, 856 KB  
Article
Stock Forecasting Based on Informational Complexity Representation: A Framework of Wavelet Entropy, Multiscale Entropy, and Dual-Branch Network
by Guisheng Tian, Chengjun Xu and Yiwen Yang
Entropy 2026, 28(4), 424; https://doi.org/10.3390/e28040424 - 10 Apr 2026
Abstract
Stock price sequences are characterized by pronounced nonlinearity, non-stationarity, and multi-scale volatility. They are further influenced by complex, multi-source factors, such as macroeconomic conditions and market behavior, making high-precision forecasting highly challenging. Existing approaches are limited by noise and multi-dimensional market features, as well as difficulties in balancing prediction accuracy with model complexity. To address these challenges, we propose Wavelet Entropy and Cross-Attention Network (WECA-Net), which combines wavelet decomposition with a multimodal cross-attention mechanism. From an information-theoretic perspective, stock price dynamics reflect the time-varying uncertainty and informational complexity of the market. We employ wavelet entropy to quantify the dispersion and uncertainty of energy distribution across frequency bands, and multiscale entropy to measure the scale-dependent complexity and regularity of the time series. These entropy-derived descriptors provide an interpretable prior of “information content” for cross-modal attention fusion, thereby improving robustness and generalization under non-stationary market conditions. Experiments on Chinese stock indices, A-Share, and CSI 300 component stock datasets demonstrate that WECA-Net consistently outperforms mainstream models in Mean Absolute Error (MAE) and R2 across all datasets. Notably, on the CSI 300 dataset, WECA-Net achieves an R2 of 0.9895, underscoring its strong predictive accuracy and practical applicability. This framework is also well aligned with sensor data fusion and intelligent perception paradigms, offering a robust solution for financial signal processing and real-time market state awareness. Full article
(This article belongs to the Section Complexity)
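The multiscale-entropy descriptor mentioned above is conventionally built from sample entropy computed on coarse-grained copies of the series. A minimal pure-Python sketch of those two building blocks follows; the tolerance handling and parameters are illustrative, not the paper's.

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy SampEn(m, r): negative log of the conditional
    probability that sequences matching for m points (Chebyshev distance
    <= r) also match for m + 1 points. r is an absolute tolerance here."""
    def count_matches(mm):
        templates = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def coarse_grain(series, scale):
    """Non-overlapping averaging used by multiscale entropy, e.g.
    coarse_grain([1, 2, 3, 4, 5, 6], 2) -> [1.5, 3.5, 5.5]."""
    return [sum(series[i:i + scale]) / scale
            for i in range(0, len(series) - scale + 1, scale)]

# A perfectly regular series should score lower than a chaotic one.
regular = [0.0, 1.0] * 50
x, chaos = 0.3, []
for _ in range(100):
    x = 4.0 * x * (1.0 - x)   # logistic map, chaotic regime
    chaos.append(x)
se_reg, se_chaos = sample_entropy(regular), sample_entropy(chaos)
```

The contrast between the two series shows why the paper can use such entropy measures as an interpretable "regularity" prior: low values flag predictable structure, high values flag complexity.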
15 pages, 3191 KB  
Article
High-Uniformity Core-Shell Nanofibers for Semiconductor Packaging: Process Optimization and Performance Study of Airflow-Assisted Coaxial Electrospinning
by Xun Chen, Shize Huang, Rongguang Zhang, Xuanzhi Zhang, Jiecai Long and Guohuai Lin
Micromachines 2026, 17(4), 463; https://doi.org/10.3390/mi17040463 - 10 Apr 2026
Abstract
Semiconductor miniaturization demands stricter material uniformity. Core-shell nanofibers, promising for semiconductor packaging and flexible circuits, face application limits due to traditional coaxial electrospinning’s electric field instability—causing poor fiber diameter uniformity and challenges with high-viscosity and low-conductivity solutions. To address this, airflow-assisted coaxial electrospinning leveraged airflow-electric field synergy to enhance fiber stretching. COMSOL Multiphysics 6.4 simulated the influence of different inner diameters of the air flow nozzles on the air flow field, while the response surface method optimized parameters. At 10 kPa air pressure, 16.71 kV voltage, and a gas nozzle inner diameter of 3.42 mm, nanofibers showed regular morphology with a diameter coefficient of variation as low as 9.2%. This study enables stable preparation of highly uniform core-shell nanofibers, providing key process support for their large-scale semiconductor application and advancing flexible electronics and photodetection. Full article
(This article belongs to the Special Issue Emerging Technologies and Applications for Semiconductor Industry)
27 pages, 6134 KB  
Article
SHAP-Based Insights into Environmental and Economic Performance of a Shower Heat Exchanger Under Unbalanced Flow Conditions: A Feasibility Study
by Sabina Kordana-Obuch and Mariusz Starzec
Energies 2026, 19(8), 1845; https://doi.org/10.3390/en19081845 - 9 Apr 2026
Abstract
Heat recovery from greywater is one solution for improving the energy efficiency of buildings and reducing greenhouse gas emissions. Particular attention is paid to systems utilizing heat from shower water, which, due to its high temperature and regularity, represents a promising energy source. However, the interplay of parameters determining the financial and environmental effectiveness of such a solution has not yet been fully explored. Therefore, the aim of this paper was to identify key variables influencing the feasibility of using a shower heat exchanger operating under unbalanced flow conditions and to assess the consistency between financial and environmental effects. The analyzed net present values ranged from −€1381 to €52,168. Greenhouse gas emission reduction values ranged between 61 kgCO2e and 37,207 kgCO2e. The analysis was conducted using predictive modeling and the SHAP (SHapley Additive exPlanations) method, which allows for the interpretation of the impact of individual variables on the forecasted net present value and potential greenhouse gas emission reduction. A global analysis was carried out to determine the relative importance of variables, as well as a local analysis for selected cases. The results showed that operational variables related to shower use, particularly shower length and mixed water flow rate, significantly influenced the prediction results of both models. In the case of emission reduction, greenhouse gas emission intensity and its change over time also had a significant impact, whilst the financial effects were determined by the energy price from the perspective of the subsequent years of the system’s operation. Full article
37 pages, 1897 KB  
Article
A Bayesian Feature Weighting Model with Simplex-Constrained Dirichlet and Contamination-Aware Priors for Noisy Medical Data
by Mehmet Ali Cengiz, Zeynep Öztürk and Abdulmohsen Alharthi
Mathematics 2026, 14(8), 1243; https://doi.org/10.3390/math14081243 - 8 Apr 2026
Abstract
Feature weighting plays a central role in medical classification by enhancing predictive accuracy, interpretability, and clinical trust through the explicit quantification of variable relevance. Despite their widespread use, existing filter-, wrapper-, and embedded-based feature weighting methods are predominantly deterministic and exhibit pronounced sensitivity to label noise and outliers, which are pervasive in real-world medical data. This often results in unstable importance estimates and unreliable clinical interpretations. In this work, we introduce a novel Bayesian feature weighting model that fundamentally departs from existing approaches by jointly integrating simplex-constrained Dirichlet priors for global feature weights, hierarchical shrinkage priors for coefficient regularization, and contamination-aware priors for explicit modeling of label noise within a single coherent probabilistic framework. Unlike conventional Bayesian feature selection or robust classification models, the proposed formulation yields globally interpretable feature weights defined on the probability simplex, while simultaneously providing full posterior uncertainty quantification and robustness to both mislabeled observations and aberrant feature values through principled influence control. Comprehensive simulation studies across diverse contamination scenarios, together with applications to multiple real-world medical datasets, demonstrate that the proposed model consistently outperforms classical and state-of-the-art baselines in terms of discrimination, probabilistic calibration, and stability of feature-importance estimates. These results highlight the practical and methodological significance of the proposed framework as a robust, uncertainty-aware, and interpretable solution for medical decision making under noisy data conditions. Full article
(This article belongs to the Special Issue Statistical Machine Learning: Models and Its Applications)
18 pages, 894 KB  
Article
A Generative Approach to Enhancing Forums Through SVM-Based Spam Detection
by Jose Antonio Rivera-Hernandez, Liliana Ibeth Barbosa-Santillán and Juan Jaime Sánchez-Escobar
Data 2026, 11(4), 78; https://doi.org/10.3390/data11040078 - 8 Apr 2026
Abstract
Spam consists of unsolicited messages, and the posting of such irrelevant messages often presents significant challenges in technical forums. Two particular challenges are the dynamic nature of spamming tactics and the inadequacy of adaptable spam databases for automated classifiers. Our work addresses the need for a robust spam classification solution that can be seamlessly integrated with database, SQL, and APEX applications. We developed a labeled spam database by asking experts to categorize 1916 posts as spam or regular posts to ensure accurate classification and then created an SVM-based spam classification model that achieves an average validation accuracy of 90%. Our research enhances the current understanding of spam in technical forums and represents a solution for embedding spam classifiers into widely used platforms with an accuracy of 98.1%. Furthermore, we explore the incorporation of generative topics into our approach by integrating generative topic modeling techniques, such as latent Dirichlet allocation. In our work, the spam classifier is dynamically updated to account for emerging spam patterns and topics based on a generative approach that improves the robustness of the classifier against new spamming tactics and enables nuanced, context-aware filtering of messages. In addition, our experiments highlight the potential of text SVM classifiers for real-time applications through the fine-tuning of text features. Full article
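As a rough illustration of the SVM-based classification step, the sketch below trains a linear SVM with the Pegasos subgradient method on tiny bag-of-words vectors. The vocabulary, posts, and hyperparameters are invented for illustration and are unrelated to the authors' dataset and feature pipeline.

```python
import random

# Toy linear SVM (Pegasos subgradient descent) over a hypothetical
# six-word vocabulary; labels: +1 = spam, -1 = regular forum post.
VOCAB = ["free", "winner", "click", "sql", "index", "query"]

def featurize(text):
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def train_svm(data, lam=0.01, epochs=200, seed=0):
    rng = random.Random(seed)
    w, t = [0.0] * len(VOCAB), 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):   # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)                  # Pegasos step size
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # Shrink w toward zero; add a hinge-loss subgradient step
            # only when the example violates the margin.
            w = [(1 - eta * lam) * wi + (eta * y * xi if margin < 1 else 0.0)
                 for wi, xi in zip(w, x)]
    return w

posts = [("free winner click click", +1),   # spam
         ("click free free winner", +1),
         ("sql query index tuning", -1),    # regular post
         ("index my sql query", -1)]
w = train_svm([(featurize(t), y) for t, y in posts])

def predict(text):
    return 1 if sum(wi * xi for wi, xi in zip(w, featurize(text))) > 0 else -1
```

A production classifier would of course use a richer feature extractor (e.g. TF-IDF) and the expert-labeled corpus the abstract describes; the point here is only the hinge-loss update that makes it an SVM.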
28 pages, 816 KB  
Article
A Two-Stage Mixed-Integer Nonlinear Framework for Assessing Load-Redistribution False Data Injection Effects in AC-OPF-Based Power System Operation
by Dheeraj Verma, Praveen Kumar Agrawal, K. R. Niazi and Nikhil Gupta
Energies 2026, 19(7), 1806; https://doi.org/10.3390/en19071806 - 7 Apr 2026
Abstract
Load-redistribution false-data-injection (LR-FDI) attacks can degrade power-system operation by reshaping the perceived nodal demand pattern, thereby inducing congestion-aware redispatch and economic inefficiency while preserving the net system load. Prior LR-FDI studies commonly adopt bilevel/Stackelberg formulations with a continuous attack vector and an embedded operator response; however, these formulations often (i) do not represent explicit compromised-load selection, (ii) become computationally restrictive when combinatorial target sets are considered, and (iii) offer limited transparency for structured, stage-wise attack planning. This paper proposes a sequential two-stage attacker–operator framework for LR-FDI vulnerability assessment that integrates sparse load compromise decisions with screening-regularized attack synthesis and post-attack operational evaluation. In Stage-1, a mixed-integer nonlinear program identifies economically influential load buses via binary selection and determines admissible perturbation magnitudes under total-load conservation and proportional shift bounds. To confine the attacker-side search region and avoid economically exaggerated solutions, a screening-derived conservative operating-cost ceiling is first estimated through a parametric load-sensitivity analysis and then used to regularize the attack-synthesis step. In Stage-2, the system operator’s corrective redispatch is evaluated by solving an active-power-oriented economic dispatch model with nonlinear network-consistent assessment of operational outcomes. Using the IEEE 24-bus RTS, results show that the hourly operating-cost deviation reaches ≈0.2% in the most adverse feasible cases, and the cumulative daily impact approaches ≈5% only under selectively realizable compromised-load patterns, accompanied by a nearly 80% increase in total active-power transmission losses relative to the base case. Overall, the framework yields a practically grounded quantification of conditionally severe economic and network stress under coordinated LR-FDI scenarios and provides actionable insight for prioritizing vulnerable load locations for protection and monitoring. Full article
(This article belongs to the Special Issue Nonlinear Control Design for Power Systems)
34 pages, 8819 KB  
Article
Mitigating Overfitting and Physical Inconsistency in Flood Susceptibility Mapping: A Physics-Constrained Evolutionary Machine Learning Framework for Ungauged Alpine Basins
by Chuanjie Yan, Lingling Wu, Peng Huang, Jiajia Yue, Haowen Li, Chun Zhou, Congxiang Fan, Yinan Guo and Li Zhou
Water 2026, 18(7), 882; https://doi.org/10.3390/w18070882 - 7 Apr 2026
Abstract
Flood susceptibility mapping in high-altitude ungauged basins faces a structural dichotomy: physically based models often suffer from systematic biases due to uncertain satellite precipitation, whereas data-driven models are prone to overfitting and lack physical consistency in data-scarce regions. To resolve this, this study proposes a Physically constrained Particle Swarm Optimization–Random Forest (P-PDRF) framework, validated in the Lhasa River Basin. The core innovation lies in coupling a hydrological model with statistical learning by utilizing the maximum daily runoff depth as a “Relative Hydraulic Intensity Index.” This approach leverages the topological correctness of physical simulations to circumvent absolute forcing errors. Furthermore, a Physiographically Constrained Negative Sampling (PCNS) strategy and a PSO-optimized “Shallow Tree” configuration are introduced to enforce structural regularization against stochastic noise. Empirical results demonstrate that P-PDRF achieves superior generalization (AUC = 0.942), significantly outperforming standard Random Forest, Support Vector Machine, and Analytic Hierarchy Process models. Ablation studies confirm that the dynamic index outweighs the static Topographic Wetness Index in feature importance, effectively correcting topographic artifacts where static models misclassify arid depressions as high-risk zones. This study offers a scalable Physics-Informed Machine Learning solution for the global “Prediction in Ungauged Basins” initiative. Full article
(This article belongs to the Special Issue Urban Flood Risk Assessment and Management)
12 pages, 3798 KB  
Article
Mathematical Model and Application of Areal Sweep Efficiency for Irregular Well Patterns
by Jiqiang Wu, Shijun Huang, Miaomiao Liu, Wenxuan Gao, Mengting Zuo, Shuang Zhang and Yang Wang
Processes 2026, 14(7), 1181; https://doi.org/10.3390/pr14071181 - 7 Apr 2026
Abstract
Areal sweep efficiency is a critical indicator in reservoir development. Accurate calculation of the waterflood areal sweep efficiency of a well pattern provides a theoretical basis for optimizing injection-production strategies and enhancing effective field development. However, in calculating the areal sweep efficiency of irregular well patterns, the inclusion of streamlines with excessively low corresponding flow rates can lead to an overestimation of the swept area. To address this issue, the concepts of critical flow velocity and critical streamlines were introduced, leading to the derivation of the parametric equation for critical streamlines. By considering the boundary-curve equations of the swept region for each well pair, an analytical solution for the areal sweep efficiency was obtained, thereby proposing a calculation method for the areal sweep efficiency of irregular well patterns. Compared with theoretical results for regular well patterns, the relative error of the calculated areal sweep efficiency is less than 5%, with the critical flow velocity corresponding to a pressure gradient magnitude of 0.05 times the average pressure gradient along the main streamline of the well pair. When applied to an actual irregular well pattern, the method yields a swept area of 0.119 km2, corresponding to an areal sweep efficiency of 27.2%. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
14 pages, 537 KB  
Article
An Improved Sample-Aggregation Method for Weibull Estimation of Bushing Maximum Friction Torque Under Small-Sample Conditions
by Shenglei Liu, Liqiang Zhang and Liyang Xie
Aerospace 2026, 13(4), 342; https://doi.org/10.3390/aerospace13040342 - 6 Apr 2026
Abstract
This study addresses the instability of statistical modeling for small-sample maximum friction torque data under multiple temperature conditions. Within the Weibull distribution framework, a sample-aggregation method is proposed, and a unified modeling scheme separating central tendency from dispersion structure is established. This approach enables equivalent aggregation of data across different temperature levels while preserving structural consistency, thereby improving parameter estimation stability and statistical efficiency. To overcome the tendency of single-criterion optimization to fall into local optima under small-sample conditions, a secondary identification criterion combining residual minimization with a Levene-based statistical consistency test is introduced, and a dual-level search strategy is used to obtain a more robust global optimal solution. The parameter estimation results indicate that direct estimation based on small samples produces unstable parameters, with the coefficient of variation of the shape parameter reaching approximately 7.4%. In contrast, the sample-aggregation method shows that the scale parameter increases with temperature, while the location parameter first decreases and then increases due to the combined influence of central tendency and dispersion. The parameters obtained by the aggregation method exhibit more stable and regular variation trends with temperature. The results demonstrate that the proposed method significantly improves parameter stability and statistical efficiency for small-sample maximum friction torque data and provides a practical statistical modeling approach for multi-condition small-sample engineering data. Full article
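For context, the classical small-sample baseline that such aggregation methods improve on is the Weibull probability plot: linearize the two-parameter CDF, ln(-ln(1-F)) = k ln x - k ln λ, and fit shape k and scale λ by least squares. The sketch below is that standard baseline only, not the paper's three-parameter (with location) sample-aggregation scheme; it uses Benard's median-rank approximation for the plotting positions.

```python
import math

def weibull_fit(samples):
    """Probability-plot least-squares estimate of the two-parameter Weibull
    shape k and scale lam, using Benard's median-rank approximation
    F_i = (i - 0.3) / (n + 0.4) for the i-th order statistic."""
    xs = sorted(samples)
    n = len(xs)
    pts = []
    for rank, x in enumerate(xs, start=1):
        f = (rank - 0.3) / (n + 0.4)
        # Linearized CDF: y = k * ln(x) - k * ln(lam)
        pts.append((math.log(x), math.log(-math.log(1.0 - f))))
    mx = sum(p for p, _ in pts) / n
    my = sum(q for _, q in pts) / n
    k = (sum((p - mx) * (q - my) for p, q in pts)
         / sum((p - mx) ** 2 for p, _ in pts))   # regression slope
    lam = math.exp(mx - my / k)                  # from the intercept
    return k, lam

# Sanity check on synthetic order statistics from a known Weibull(k=2, lam=10):
true_k, true_lam, n = 2.0, 10.0, 12
synthetic = [true_lam * (-math.log(1.0 - (i - 0.3) / (n + 0.4))) ** (1.0 / true_k)
             for i in range(1, n + 1)]
k_hat, lam_hat = weibull_fit(synthetic)
```

With only a dozen points per condition, estimates from this baseline scatter noticeably from sample to sample, which is exactly the instability the paper's aggregation across temperature levels is designed to reduce.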
26 pages, 429 KB  
Article
Modified Asymptotic Solutions and Application to Asymptotic Expansions of Indicator Functions in Mixed-Type Media
by Mishio Kawashita and Wakako Kawashita
Mathematics 2026, 14(7), 1210; https://doi.org/10.3390/math14071210 - 3 Apr 2026
Abstract
Asymptotic solutions that can describe the incidence and reflection of waves have been used in various situations. They can also be applied to inverse problems and provide useful information in situations where a precise evaluation is required. However, the construction of standard asymptotic solutions requires higher regularity with respect to the boundaries of the observation target. This article proposes a “modified asymptotic solution” to overcome this weakness. To demonstrate its usefulness, it is applied to the analysis of the indicator function in the enclosure method for the inverse problem of the wave equation in a mixed-type medium. Full article
(This article belongs to the Section C: Mathematical Analysis)
25 pages, 4371 KB  
Article
GTS-SLAM: A Tightly-Coupled GICP and 3D Gaussian Splatting Framework for Robust Dense SLAM in Underground Mines
by Yi Liu, Changxin Li and Meng Jiang
Vehicles 2026, 8(4), 79; https://doi.org/10.3390/vehicles8040079 - 3 Apr 2026
Abstract
To address unstable localization and sparse mapping for autonomous vehicles operating in GPS-denied and low-visibility environments, this paper proposes GTS-SLAM, a tightly coupled dense visual SLAM framework integrating Generalized Iterative Closest Point (GICP) and 3D Gaussian Splatting (3DGS). The system is designed for intelligent driving platforms such as underground mining vehicles, inspection robots, and tunnel autonomous navigation systems. The front-end performs covariance-aware point-cloud registration using GICP to achieve robust pose estimation under low texture, dust interference, and dynamic disturbances. The back-end employs probabilistic dense mapping based on 3DGS, combined with scale regularization, scale alignment, and keyframe factor-graph optimization, enabling synchronized optimization of localization and mapping. A Compact-3DGS compression strategy further reduces memory usage while maintaining real-time performance. Experiments on public datasets and real underground-like scenarios demonstrate centimeter-level trajectory accuracy, high-quality dense reconstruction, and real-time rendering. The system provides reliable perception capability for vehicle autonomous navigation, obstacle avoidance, and path planning in confined and weak-light environments. Overall, the proposed framework offers a deployable solution for autonomous driving and mobile robots requiring accurate localization and dense environmental understanding in challenging conditions. Full article
(This article belongs to the Special Issue AI-Empowered Assisted and Autonomous Driving)