Search Results (134)

Search Parameters:
Keywords = partial autocorrelation

24 pages, 6464 KiB  
Article
A Hybrid Model for Carbon Price Forecasting Based on Secondary Decomposition and Weight Optimization
by Yongfa Chen, Yingjie Zhu, Jie Wang and Meng Li
Mathematics 2025, 13(14), 2323; https://doi.org/10.3390/math13142323 - 21 Jul 2025
Viewed by 165
Abstract
Accurate carbon price forecasting is essential for market stability, risk management, and policy-making. To address the nonlinear, non-stationary, and multiscale nature of carbon prices, this paper proposes a forecasting framework integrating secondary decomposition, two-stage feature selection, and dynamic ensemble learning. Firstly, the original price series is decomposed into intrinsic mode functions (IMFs) using complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). The IMFs are then grouped into low- and high-frequency components based on multiscale entropy (MSE) and K-Means clustering. To further alleviate mode mixing in the high-frequency components, an improved variational mode decomposition (VMD) optimized by particle swarm optimization (PSO) is applied for secondary decomposition. Secondly, a two-stage feature-selection method is employed, in which the partial autocorrelation function (PACF) is used to select relevant lagged features, while the maximal information coefficient (MIC) is applied to identify key variables from both historical and external data. Finally, this paper introduces a dynamic integration module based on sliding windows and sequential least squares programming (SLSQP), which not only adaptively adjusts the weights of four base learners but also effectively leverages the complementary advantages of each model and tracks the dynamic trends of carbon prices. The empirical results for the carbon markets in Hubei and Guangdong indicate that the proposed method outperforms the benchmark models in prediction accuracy and robustness, as confirmed by the Diebold–Mariano (DM) test. The main contributions are the improved feature-extraction process and the innovative use of a sliding window-based SLSQP method for dynamic ensemble weight optimization. Full article
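As an illustration of the PACF-based lag selection this abstract describes (a minimal sketch, not the authors' code), partial autocorrelations can be computed with the Durbin–Levinson recursion and a lag kept when its coefficient exceeds the usual 95% confidence band of 1.96/√n; the AR(1) toy series below is a stand-in for a carbon price series:

```python
import math
import random

def acf(x, k):
    """Sample autocorrelation of series x at lag k."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    ck = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k))
    return ck / c0

def pacf(x, nlags):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    r = [acf(x, k) for k in range(nlags + 1)]
    phi = {(1, 1): r[1]}
    pac = [1.0, r[1]]
    for k in range(2, nlags + 1):
        num = r[k] - sum(phi[(k - 1, j)] * r[k - j] for j in range(1, k))
        den = 1.0 - sum(phi[(k - 1, j)] * r[j] for j in range(1, k))
        phi[(k, k)] = num / den
        for j in range(1, k):
            phi[(k, j)] = phi[(k - 1, j)] - phi[(k, k)] * phi[(k - 1, k - j)]
        pac.append(phi[(k, k)])
    return pac

# Toy AR(1) "price" series; lag 1 should be the only significant lag.
random.seed(42)
x = [0.0]
for _ in range(999):
    x.append(0.7 * x[-1] + random.gauss(0, 1))
pac = pacf(x, 10)
band = 1.96 / math.sqrt(len(x))
selected = [k for k in range(1, 11) if abs(pac[k]) > band]
```

Lags surviving the cutoff become the lagged features handed to the downstream learners.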

23 pages, 8957 KiB  
Article
Geometallurgical Cluster Creation in a Niobium Deposit Using Dual-Space Clustering and Hierarchical Indicator Kriging with Trends
by João Felipe C. L. Costa, Fernanda G. F. Niquini, Claudio L. Schneider, Rodrigo M. Alcântara, Luciano N. Capponi and Rafael S. Rodrigues
Minerals 2025, 15(7), 755; https://doi.org/10.3390/min15070755 - 19 Jul 2025
Viewed by 178
Abstract
Alkaline carbonatite complexes are formed by magmatic, hydrothermal, and weathering geological events, which modify the minerals present in the rocks, resulting in ores with varied metallurgical behavior. To better spatially distinguish ores with distinct plant responses, creating a 3D geometallurgical block model was necessary. To establish the clusters, four different algorithms were tested: K-Means, Hierarchical Agglomerative Clustering, dual-space clustering (DSC), and clustering by autocorrelation statistics. The chosen method was DSC, which can consider the multivariate and spatial aspects of data simultaneously. To better understand each cluster’s mineralogy, an XRD analysis was conducted, shedding light on why each cluster performs differently in the plant: cluster 0 contains high magnetite content, explaining its strong magnetic yield; cluster 3 has low pyrochlore, resulting in reduced flotation yield; cluster 2 shows high pyrochlore and low gangue minerals, leading to the best overall performance; cluster 1 contains significant quartz and monazite, indicating relevance for rare earth elements. A hierarchical indicator kriging workflow incorporating a stochastic partial differential equation (SPDE) trend model was applied to spatially map these domains. This improved the deposit’s circular geometry reproduction and better represented the lithological distribution. The elaborated model allowed the identification of four geometallurgical zones with distinct mineralogical profiles and processing behaviors, leading to a more robust model for operational decision-making. Full article
(This article belongs to the Special Issue Geostatistical Methods and Practices for Specific Ore Deposits)
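K-Means is one of the four clustering algorithms the study compared (the chosen DSC method is not reimplemented here). A plain Lloyd's K-Means on toy 2-D data — hypothetical stand-ins for geochemical sample coordinates — looks like this:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's K-Means on 2-D points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: (p[0] - centers[c][0]) ** 2
                                        + (p[1] - centers[c][1]) ** 2)
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return labels, centers

# Two well-separated toy groups (e.g. high- vs low-magnetite samples).
grp_a = [(0.1 * i, 0.1 * j) for i in range(5) for j in range(5)]
grp_b = [(10 + 0.1 * i, 10 + 0.1 * j) for i in range(5) for j in range(5)]
labels, centers = kmeans(grp_a + grp_b, 2)
```

DSC differs in that it scores candidate partitions in both attribute space and geographic space simultaneously, which plain K-Means does not.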

8 pages, 758 KiB  
Article
Role of Diffuser Autocorrelation and Spatial Translation in Computational Ghost Imaging
by Yishai Albeck, Shimon Sukholuski, Orit Herman, Talya Arusi-Parpar, Sharon Shwartz and Eliahu Cohen
Photonics 2025, 12(7), 650; https://doi.org/10.3390/photonics12070650 - 26 Jun 2025
Viewed by 251
Abstract
Ghost imaging (GI) is an imaging modality typically based on correlations between a single-pixel (bucket) detector, which collects the electromagnetic field transmitted through or reflected from an object, and a high-resolution detector, which measures the field that did not interact with the object. When using partially coherent sources, fluctuations can be introduced into a beam by rotating or translating a diffuser; the beam is then split into two beams with identical intensity fluctuations. In computational GI, the diffuser with an unknown scatter distribution is replaced by a diffuser with a known scatter distribution, so that the reference beam and high-resolution detector can be discarded. In this work, we examine how the relation between the diffuser’s autocorrelation length and its spatial displacement affects the quality of image reconstruction obtained with these methods. We first analyze this general question theoretically and through simulations, and then present specific, proof-of-principle results obtained in an optical setup. Finally, we discuss the relation between theory and experiment, suggesting some general conclusions regarding the preferred working points. Full article
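The computational-GI reconstruction principle the abstract relies on — correlating known illumination patterns with a bucket signal — can be sketched in a toy 1-D simulation (the scene, pattern count, and pixel values are all assumptions for illustration):

```python
import random

random.seed(1)
N = 16            # pixels in the toy 1-D scene
M = 4000          # number of known "speckle" patterns
obj = [1.0 if i in (3, 8, 12) else 0.0 for i in range(N)]  # transmissive object

patterns = [[random.random() for _ in range(N)] for _ in range(M)]
# Bucket detector: total light transmitted through the object per pattern.
bucket = [sum(p[i] * obj[i] for i in range(N)) for p in patterns]

# GI reconstruction: covariance of each pixel's pattern value with the bucket
# signal; pixels the object transmits show a large covariance.
b_mean = sum(bucket) / M
recon = []
for i in range(N):
    p_mean = sum(p[i] for p in patterns) / M
    recon.append(sum((p[i] - p_mean) * (b - b_mean)
                     for p, b in zip(patterns, bucket)) / M)
```

The paper's question — how diffuser autocorrelation length interacts with translation step — amounts to how statistically independent successive `patterns` rows really are.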

18 pages, 1063 KiB  
Article
Multi-Model and Variable Combination Approaches for Improved Prediction of Soil Heavy Metal Content
by Xiaolong Chen, Hongfeng Zhang, Cora Un In Wong and Zhengchun Song
Processes 2025, 13(7), 2008; https://doi.org/10.3390/pr13072008 - 25 Jun 2025
Viewed by 309
Abstract
Soil heavy metal contamination poses significant risks to ecosystems and human health, necessitating accurate prediction methods for effective monitoring and remediation. We propose a multi-model and variable combination framework to improve the prediction of soil heavy metal content by integrating diverse environmental and spatial features. The methodology incorporates environmental variables (e.g., soil properties, remote sensing indices), spatial autocorrelation measures based on nearest-neighbor distances, and spatial regionalization variables derived from interpolation techniques such as ordinary kriging, inverse distance weighting, and trend surface analysis. These variables are systematically combined into six distinct sets to evaluate their predictive performance. Three advanced models—Partial Least Squares Regression, Random Forest, and a Deep Forest variant (DF21)—are employed to assess the robustness of the approach across different variable combinations. Experimental results demonstrate that the inclusion of spatial autocorrelation and regionalization variables consistently enhances prediction accuracy compared to using environmental variables alone. Furthermore, the proposed framework exhibits strong generalizability, as validated through subset analyses with reduced training data. The study highlights the importance of integrating spatial dependencies and multi-source data for reliable heavy metal prediction, offering practical insights for environmental management and policy-making. Compared to using environmental variables alone, the full framework incorporating spatial features achieved relative improvements of 18–23% in prediction accuracy (R2) across all models, with the Deep Forest variant (DF21) showing the most substantial enhancement. The findings advance the field by providing a flexible and scalable methodology adaptable to diverse geographical contexts and data availability scenarios. Full article
(This article belongs to the Special Issue Environmental Protection and Remediation Processes)
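Inverse distance weighting is one of the interpolation techniques the framework uses to build spatial regionalization variables. A minimal IDW sketch (sample locations and values are hypothetical):

```python
def idw(known, query, power=2):
    """Inverse distance weighting; known = [(x, y, value), ...]."""
    num = den = 0.0
    for x, y, v in known:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return v  # query coincides with a sample point
        w = 1.0 / d2 ** (power / 2)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Two toy soil samples with heavy-metal concentrations 10 and 20.
samples = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
mid = idw(samples, (0.5, 0.0))   # equidistant -> simple average
```

Ordinary kriging and trend surfaces, the other two techniques named, replace these distance weights with weights derived from a fitted variogram or polynomial trend.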

33 pages, 2441 KiB  
Article
Kernel Ridge-Type Shrinkage Estimators in Partially Linear Regression Models with Correlated Errors
by Syed Ejaz Ahmed, Ersin Yilmaz and Dursun Aydın
Mathematics 2025, 13(12), 1959; https://doi.org/10.3390/math13121959 - 13 Jun 2025
Viewed by 249
Abstract
Partially linear time series models often suffer from multicollinearity among regressors and autocorrelated errors, both of which can inflate estimation risk. This study introduces a generalized ridge-type kernel (GRTK) framework that combines kernel smoothing with ridge shrinkage and augments it through ordinary and positive-part Stein adjustments. Closed-form expressions and large-sample properties are established, and data-driven criteria—including GCV, AICc, BIC, and RECP—are used to tune the bandwidth and shrinkage penalties. Monte-Carlo simulations indicate that the proposed procedures usually reduce risk relative to existing semiparametric alternatives, particularly when the predictors are strongly correlated and the error process is dependent. An empirical study of US airline-delay data further demonstrates that GRTK produces a stable, interpretable fit, captures a nonlinear air-time effect overlooked by conventional approaches, and leaves only a modest residual autocorrelation. By tackling multicollinearity and autocorrelation within a single, flexible estimator, the GRTK family offers practitioners a practical avenue for more reliable inference in partially linear time series settings. Full article
(This article belongs to the Special Issue Statistical Forecasting: Theories, Methods and Applications)
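The ridge-shrinkage idea at the core of the GRTK estimator is easiest to see in the single-predictor closed form, where the penalty λ simply inflates the denominator and pulls the slope toward zero (the kernel-smoothing and Stein-adjustment layers of the paper are not reproduced here):

```python
def ridge_slope(x, y, lam):
    """Closed-form ridge estimate for one centered predictor:
    beta(lam) = S_xy / (S_xx + lam)."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / (sxx + lam)

x = [float(i) for i in range(10)]
y = [2.0 * v + 1.0 for v in x]        # exact linear relation, slope 2
b_ols = ridge_slope(x, y, 0.0)        # lam = 0 recovers OLS
b_ridge = ridge_slope(x, y, 50.0)     # positive lam shrinks toward zero
```

Under multicollinearity, S_xx is effectively near-singular in the multivariate case, and this same shrinkage is what stabilizes the estimates.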

24 pages, 3545 KiB  
Article
Leveraging Advanced Data-Driven Approaches to Forecast Daily Floods Based on Rainfall for Proactive Prevention Strategies in Saudi Arabia
by Anwar Ali Aldhafiri, Mumtaz Ali and Abdulhaleem H. Labban
Water 2025, 17(11), 1699; https://doi.org/10.3390/w17111699 - 3 Jun 2025
Viewed by 458
Abstract
Accurate flood forecasts are imperative for supervising and preparing for extreme events, assessing risks, and developing proactive prevention strategies. Flood time-series data exhibit both spatial and temporal structures, and their complex stochastic nature makes it challenging for models to fully capture the embedded features. This paper proposes, for the first time, an approach that hybridizes variational mode decomposition (VMD) with Gaussian process regression (GPR) to design a VMD-GPR model for daily flood forecasting. First, the partial autocorrelation function (PACF) was applied to identify the significant lag (t − 1). Then, the VMD model decomposed the (t − 1) lag into several signals called intrinsic mode functions (IMFs); VMD improves noise robustness and mode separation while reducing mode aliasing and end effects. Finally, the decomposed IMFs were fed into the GPR to forecast the daily flood index for the Jeddah and Jazan stations in Saudi Arabia. Long short-term memory (LSTM), boosted regression tree (BRT), and cascaded forward neural network (CFNN) models were combined with VMD for comparison, along with their standalone versions. The proposed VMD-GPR outperformed the comparison models for both stations across a set of performance metrics, achieving R = 0.9825, RMSE = 0.0745, MAE = 0.0088, ENS = 0.9651, KGE = 0.9802, IA = 0.9911, and U95% = 0.2065 for the Jeddah station, and R = 0.9891, RMSE = 0.0945, MAE = 0.0189, ENS = 0.9781, KGE = 0.9849, IA = 0.9945, and U95% = 0.2621 for the Jazan station. The proposed method facilitates flood forecasting for disaster mitigation and the efficient use of water resources at these two stations, and can help policymakers undertake mandatory risk mitigation measures in strategic flood-management planning. Full article
(This article belongs to the Section Hydrology)

32 pages, 14609 KiB  
Article
How Does the Platform Economy Affect Urban System: Evidence from Business-to-Business (B2B) E-Commerce Enterprises in China
by Pengfei Fang, Xiaojin Cao, Yuhao Huang and Yile Chen
Buildings 2025, 15(10), 1687; https://doi.org/10.3390/buildings15101687 - 16 May 2025
Viewed by 651
Abstract
In the new paradigm where the digital economy is profoundly reshaping urban spatial organization, how the platform economy transcends traditional geographical constraints to restructure the urban system has become a strategic issue in urban geography and regional economics. This study develops an innovative measurement framework based on Business-to-Business (B2B) e-commerce enterprises to analyze platform-driven urban systems across 337 Chinese cities. Using spatial autocorrelation, rank-size distributions, and urban scaling laws, we reveal spatial differentiation patterns of cities’ B2B platforms. Combining Ordinary Least Squares (OLS) and random forest models with Partial Dependence Plots (PDP), Individual Conditional Expectations (ICE), and Locally Weighted Scatterplot Smoothing (LOWESS), we uncover non-linear mechanisms between platform development and urban attributes. Results indicate that (1) B2B platforms exhibit “superlinear agglomeration” and “gradient locking”, reinforcing advantages in top-tier cities; (2) platform effects are non-linear, with Gross Domestic Product (GDP), Information Technology (IT) employment, and service sector shares showing threshold-enhanced marginal effects, while manufacturing bases display saturation effects; and (3) regional divergence exists, with eastern consumer-oriented platforms forming digital synergies, while western manufacturing platforms face path dependence. The findings highlight that platform economy evolution is shaped by a “threshold–adaptation–differentiation” mechanism rather than neutral diffusion. This study provides new insights into urban system restructuring under digital transformation. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
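A rank-size analysis like the one used here boils down to regressing log size on log rank; a slope of −1 is the classical Zipf benchmark, with flatter or steeper slopes indicating weaker or stronger top-city dominance. A minimal sketch on synthetic city sizes (the data are assumed, not the study's):

```python
import math

def ranksize_exponent(sizes):
    """Slope of log(size) on log(rank) via simple OLS."""
    s = sorted(sizes, reverse=True)
    xs = [math.log(r + 1) for r in range(len(s))]  # rank 1, 2, ...
    ys = [math.log(v) for v in s]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

# A system obeying Zipf's law exactly: size proportional to 1/rank.
zipf_cities = [1000.0 / r for r in range(1, 101)]
q = ranksize_exponent(zipf_cities)
```

In the platform context, the same slope computed on per-city B2B enterprise counts quantifies how strongly activity concentrates in top-tier cities.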

10 pages, 1175 KiB  
Data Descriptor
A Dataset for Examining the Problem of the Use of Accounting Semi-Identity-Based Models in Econometrics
by Francisco Javier Sánchez-Vidal
Data 2025, 10(5), 62; https://doi.org/10.3390/data10050062 - 28 Apr 2025
Viewed by 343
Abstract
The problem of using accounting semi-identity-based (ASI) models in Econometrics can be severe in certain circumstances, and estimations from OLS regressions in such models may not accurately reflect causal relationships. This dataset was generated through Monte Carlo simulations, which allowed for the precise control of a causal relationship. The problem of an ASI cannot be directly demonstrated in real samples, as researchers lack insight into the specific factors driving each company’s investment policy. Consequently, it is impossible to distinguish whether regression results in such datasets stem from actual causality or are merely a byproduct of arithmetic distortions introduced by the ASI. The strategy of addressing this issue through simulations allows researchers to determine the true value of any estimator with certainty. The selected model for testing the influence of the ASI problem is the investment-cash flow sensitivity model (Fazzari, Hubbard and Petersen (FHP hereinafter) (1988)), which seeks to establish a relationship between a company’s investments and its cash flows and which is an ASI as well. The dataset included randomly generated independent variables (cash flows and Tobin’s Q) to analyze how they influence the dependent variable (investment). The Monte Carlo methodology in Stata enabled repeated sampling to assess how ASIs affect regression models, highlighting their impact on variable relationships and the unreliability of estimated coefficients. The purpose of this paper is twofold: its first goal is to provide a deeper explanation of the syntax in the related article, offering more insights into the ASI problem. The openly available dataset supports replication and further research on ASIs’ effects in economic models and can be adapted for other ASI-based analyses, as the reusability examples show.
Second, our aim is to encourage research supported by Monte Carlo simulations, as they enable the modeling of a comprehensive ecosystem of economic relationships between variables. This allows researchers to address a variety of issues, such as partial correlations, heteroskedasticity, multicollinearity, autocorrelation, endogeneity, and more, while testing their impact on the true value of coefficients. Full article
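The core Monte Carlo logic — fix a causal coefficient by construction, simulate many samples, and check what OLS recovers — can be sketched in a few lines (the dataset itself was built in Stata; this Python toy, with an assumed true coefficient of 0.3, only illustrates the principle):

```python
import random

def ols_slope(x, y):
    """Simple-regression OLS slope."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

random.seed(7)
TRUE_BETA = 0.3                 # causal effect we control by construction
estimates = []
for _ in range(200):            # 200 Monte Carlo replications
    cf = [random.gauss(0, 1) for _ in range(500)]            # cash flows
    inv = [TRUE_BETA * c + random.gauss(0, 1) for c in cf]   # investment
    estimates.append(ols_slope(cf, inv))
mean_beta = sum(estimates) / len(estimates)
```

Because the true coefficient is known here, any systematic gap between `mean_beta` and `TRUE_BETA` under an ASI-style specification can be attributed to the arithmetic distortion rather than to causality — exactly the comparison the dataset enables.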

19 pages, 14693 KiB  
Article
Community-Level Urban Vitality Intensity and Diversity Analysis Supported by Multisource Remote Sensing Data
by Zhiran Zhang, Jiping Liu, Yangyang Zhao, Qing Zhou, Lijun Song and Shenghua Xu
Remote Sens. 2025, 17(6), 1056; https://doi.org/10.3390/rs17061056 - 17 Mar 2025
Cited by 1 | Viewed by 944
Abstract
Urban vitality serves as a crucial metric for evaluating sustainable urban development and the well-being of residents. Existing studies have predominantly focused on analyzing the direct effects of urban vitality intensity (VI) and its influencing factors, while paying less attention to urban vitality diversity (VD) and its indirect impact mechanisms. Supported by multisource remote sensing data, this study establishes a five-dimensional urban vitality evaluation system and employs the Partial Least Squares Structural Equation Model (PLS-SEM) to quantify direct and indirect interrelationships between these multidimensional factors and VI/VD. The findings are as follows: (1) Spatial divergence between VI and VD: VI exhibited stronger clustering (I = 1.12), predominantly aggregating in central urban areas, whereas VD demonstrated moderate autocorrelation (I = 0.45) concentrated in mixed-use central or suburban zones. (2) Drivers of vitality intensity: VI is strongly associated with commercial density (β = 0.344) and transportation accessibility (β = 0.253), but negatively correlated with natural environment quality (r = −0.166). (3) Mechanisms of vitality diversity: VD is closely linked to public service (β = 0.228). This research provides valuable insights for city development and decision-making, particularly in strengthening urban vitality and optimizing urban functional layouts. Full article
(This article belongs to the Section Urban Remote Sensing)
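The clustering statistics quoted above are global Moran's I values; a minimal pure-Python version on a toy 1-D chain of zones (the adjacency weights and values are assumptions for illustration) shows why clustered patterns score positive and dispersed ones negative:

```python
def morans_i(values, weights):
    """Global Moran's I; weights is a dict {(i, j): w_ij}."""
    n = len(values)
    m = sum(values) / n
    dev = [v - m for v in values]
    s0 = sum(weights.values())
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    return (n / s0) * num / den

# Rook adjacency on a 1-D chain of 8 zones (symmetric binary weights).
W = {}
for i in range(7):
    W[(i, i + 1)] = 1.0
    W[(i + 1, i)] = 1.0

clustered = [1, 1, 1, 1, 0, 0, 0, 0]   # high-high / low-low blocks
dispersed = [1, 0, 1, 0, 1, 0, 1, 0]   # checkerboard
i_clu = morans_i(clustered, W)
i_dis = morans_i(dispersed, W)
```

Values near +1 indicate strong spatial clustering (neighbors alike), values near −1 a checkerboard-like dispersion, and values near 0 spatial randomness.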

17 pages, 28541 KiB  
Article
Utilizing Deep Learning Models to Predict Streamflow
by Habtamu Alemu Workneh and Manoj K. Jha
Water 2025, 17(5), 756; https://doi.org/10.3390/w17050756 - 5 Mar 2025
Cited by 3 | Viewed by 1828
Abstract
This study employs convolutional neural network (CNN), long short-term memory (LSTM), bidirectional long short-term memory (BiLSTM), and gated recurrent unit (GRU) deep learning models to simulate daily streamflow using precipitation data. Two approaches were explored: one without dimension reduction and another incorporating a dimensionality reduction technique. Principal component analysis (PCA) was employed for dimensionality reduction, and the partial autocorrelation function (PACF) was used to determine time lags. An augmented Dickey–Fuller (ADF) test was utilized to ascertain the stationarity of the data, ensuring optimal model performance. The data were normalized and then partitioned into features and target variables before being split into training, validation, and test sets. The developed models were tested for their performance, robustness, and stability at three locations along the Neuse River, which is in the Neuse River Basin, North Carolina, USA, covering an area of about 14,500 km2. Furthermore, the models’ performance was tested during peak flood events to assess their ability to capture the temporal resolution of streamflow. The results revealed that the CNN model could capture the variability in daily streamflow prediction, as evidenced by excellent statistical measures, including mean absolute error, root mean square error, and Nash–Sutcliffe efficiency. The study also found that incorporating dimensionality reduction significantly improved model performance. Full article
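The data-preparation steps described here — building lagged feature/target pairs from the PACF-chosen lags, then splitting chronologically — can be sketched as follows (lag orders and split fractions are illustrative assumptions, not the paper's settings):

```python
def make_supervised(series, lags):
    """Turn a univariate series into (features, target) pairs
    using the lag orders chosen (e.g. via PACF)."""
    max_lag = max(lags)
    X, y = [], []
    for t in range(max_lag, len(series)):
        X.append([series[t - l] for l in lags])
        y.append(series[t])
    return X, y

def chrono_split(X, y, train=0.7, val=0.15):
    """Chronological train/validation/test split (no shuffling,
    so the test set is strictly in the future)."""
    n = len(X)
    a, b = int(n * train), int(n * (train + val))
    return (X[:a], y[:a]), (X[a:b], y[a:b]), (X[b:], y[b:])

flow = [float(i) for i in range(20)]          # toy streamflow record
X, y = make_supervised(flow, lags=[1, 2])
(tr, _), (va, _), (te, _) = chrono_split(X, y)
```

Shuffled splits would leak future information into training, which is why time-series pipelines like this one split by position.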

27 pages, 1455 KiB  
Article
Neutral Delayed Fractional Models in Financial Time Series: Insights into Borsa Istanbul Sectors Affected by the Kahramanmaraş Earthquake
by Ömer Akgüller, Mehmet Ali Balcı, Larissa Margareta Batrancea, Dilara Altan Koç and Anca Nichita
Fractal Fract. 2025, 9(3), 141; https://doi.org/10.3390/fractalfract9030141 - 24 Feb 2025
Viewed by 541
Abstract
This study examines the impact of the Kahramanmaraş Earthquake on four key sectors of Borsa Istanbul: Basic Metal, Insurance, Non-Metallic Mineral Products, and Wholesale and Retail Trade using neutral delayed fractional differential equations. Employing the Chebyshev collocation method, we numerically solved the neutral delayed fractional differential equations with initial conditions scaled by each sector’s log difference standard deviation to accurately reflect market volatility. Fractional orders were derived from the Hurst exponent, and time delays were identified using average mutual information, autocorrelation function, and partial autocorrelation function methods. The results reveal significant changes post-earthquake, including reduced market persistence and increased volatility in the Basic Metal and Insurance sectors, contrasted by enhanced stability in the Non-Metallic Mineral Products sector. Neutral delayed fractional differential equations demonstrated superior performance over traditional models by effectively capturing memory and delay effects. This work underscores the efficacy of neutral delayed fractional differential equations in modeling financial resilience amid external shocks. Full article
(This article belongs to the Special Issue Applications of Fractional Calculus in Modern Mathematical Modeling)
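The "fractional" part of the models above rests on fractional differencing; the standard Grünwald–Letnikov weights, built by a one-line recursion, give a concrete feel for it (this is a generic textbook construction, not the paper's neutral delayed solver):

```python
def gl_weights(d, n):
    """First n Grunwald-Letnikov fractional-difference weights
    w_k = (-1)^k * C(d, k), built by the standard recursion."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(x, d):
    """Apply the order-d fractional difference to a series."""
    w = gl_weights(d, len(x))
    return [sum(w[k] * x[t - k] for k in range(t + 1)) for t in range(len(x))]

w_half = gl_weights(0.5, 5)   # long-memory weights decay slowly
w_one = gl_weights(1.0, 5)    # d = 1 recovers the ordinary first difference
```

For integer d the weights truncate (d = 1 gives [1, −1, 0, …]), while fractional d produces slowly decaying weights — the memory effect that a Hurst-exponent-derived order is meant to capture.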

19 pages, 13346 KiB  
Article
Study on Fluctuating Wind Characteristics and Non-Stationarity at U-Shaped Canyon Bridge Site
by Zhe Sun, Zhuoyi Zou, Jiaying Wang, Xue Zhao and Feng Wang
Appl. Sci. 2025, 15(3), 1482; https://doi.org/10.3390/app15031482 - 31 Jan 2025
Viewed by 793
Abstract
To investigate the non-stationary characteristics of the wind field at the U-shaped canyon bridge site and its impact on fluctuating wind characteristics, a wind observation tower was installed near a cable-stayed bridge. The Augmented Dickey–Fuller (ADF) test was employed to assess the stationarity of wind speed series, while the discrete wavelet transform (DWT) was applied to reconstruct the time-varying mean wind and analyze its effect on fluctuating wind characteristics. Results indicate that wind speeds in this region exhibit bimodal distribution characteristics, with the Weibull-Gamma mixed distribution model providing the best fit. The proportion of non-stationary samples increases with height. Autocorrelation function (ACF), partial autocorrelation function (PACF) tests, and power spectral density (PSD) analysis determined the optimal wavelet decomposition level for wind speed in this region. Analysis of non-stationary samples using db10 wavelet reconstruction reveals that the stationary wind speed model overestimates turbulence intensity but underestimates the turbulence integral scale. The downwind spectrum deviates from the Kaimal spectrum in both low- and high-frequency bands, whereas the vertical spectrum aligns well with the Panofsky spectrum. The findings demonstrate that the wavelet reconstruction method more accurately captures fluctuating wind characteristics under the complex terrain conditions of this canyon area. Full article
(This article belongs to the Section Civil Engineering)
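The finding that a stationary wind-speed model overestimates turbulence intensity can be illustrated directly: measure fluctuations about a constant mean versus about a time-varying mean. Here a moving average is a crude stand-in for the paper's wavelet-reconstructed mean, and the trending record is synthetic:

```python
import math

def turbulence_intensity(u, mean):
    """TI = std of fluctuations about `mean`, divided by the mean speed.
    `mean` is either a constant or a per-sample time-varying mean."""
    n = len(u)
    if isinstance(mean, (int, float)):
        fluct = [v - mean for v in u]
        base = mean
    else:
        fluct = [v - m for v, m in zip(u, mean)]
        base = sum(mean) / n
    sd = math.sqrt(sum(f * f for f in fluct) / n)
    return sd / base

def moving_average(u, w):
    """Centered moving average (stand-in for a wavelet-based mean)."""
    half = w // 2
    return [sum(u[max(0, t - half):t + half + 1])
            / len(u[max(0, t - half):t + half + 1]) for t in range(len(u))]

# Wind record with a slow linear trend: a non-stationary mean.
u = [8.0 + 0.02 * t for t in range(300)]
ti_const = turbulence_intensity(u, sum(u) / len(u))        # stationary model
ti_varying = turbulence_intensity(u, moving_average(u, 31))
```

The stationary model folds the slow trend into the "fluctuations", inflating TI — the same bias the db10 wavelet reconstruction removes in the study.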

54 pages, 5783 KiB  
Article
Characterization of RAP Signal Patterns, Temporal Relationships, and Artifact Profiles Derived from Intracranial Pressure Sensors in Acute Traumatic Neural Injury
by Abrar Islam, Amanjyot Singh Sainbhi, Kevin Y. Stein, Nuray Vakitbilir, Alwyn Gomez, Noah Silvaggio, Tobias Bergmann, Mansoor Hayat, Logan Froese and Frederick A. Zeiler
Sensors 2025, 25(2), 586; https://doi.org/10.3390/s25020586 - 20 Jan 2025
Viewed by 1226
Abstract
Goal: Current methodologies for assessing cerebral compliance using pressure sensor technologies are prone to errors and issues with inter- and intra-observer consistency. RAP, a metric for measuring intracranial compensatory reserve (and therefore compliance), holds promise. It is derived using the moving correlation between intracranial pressure (ICP) and the pulse amplitude of ICP (AMP). RAP remains largely unexplored in cases of moderate to severe acute traumatic neural injury (also known as traumatic brain injury (TBI)). The goal of this work is to explore the general description of (a) RAP signal patterns and behaviors derived from ICP pressure transducers, (b) temporal statistical relationships, and (c) the characterization of the artifact profile. Methods: Different summary and statistical measurements were used to describe RAP’s pattern and behaviors, along with performing sub-group analyses. The autoregressive integrated moving average (ARIMA) model was employed to outline the time-series structure of RAP across different temporal resolutions using the autoregressive (p-order) and moving average orders (q-order). After leveraging the time-series structure of RAP, similar methods were applied to ICP and AMP for comparison with RAP. Finally, key features were identified to distinguish artifacts in RAP, potentially leveraging the ICP/AMP signals and their statistical structures. Results: The mean and time spent within the RAP threshold ranges ([0.4, 1], (0, 0.4), and [−1, 0]) indicate that RAP exhibited high positive values, suggesting an impaired compensatory reserve in TBI patients. The median optimal ARIMA model for each resolution and each signal was determined. Autocorrelation function (ACF) and partial ACF (PACF) plots of residuals verified the adequacy of these median optimal ARIMA models. The median of residuals indicates that ARIMA performed better with the higher-resolution data.
To identify artifacts, (a) ICP q-order, AMP p-order, and RAP p-order and q-order, (b) residuals of ICP, AMP, and RAP, and (c) cross-correlation between residuals of RAP and AMP proved to be useful at the minute-by-minute resolution, whereas, for the 10-min-by-10-min data resolution, only the q-order of the optimal ARIMA model of ICP and AMP served as a distinguishing factor. Conclusions: RAP signals derived from ICP pressure sensor technology displayed reproducible behaviors across this population of TBI patients. ARIMA modeling at the higher resolution provided comparatively strong accuracy, and key features were identified leveraging these models that could identify RAP artifacts. Further research is needed to enhance artifact management and broaden applicability across varied datasets. Full article
(This article belongs to the Special Issue Sensing Signals for Biomedical Monitoring)
17 pages, 11078 KiB  
Article
Variations in Water Stress and Its Driving Factors in the Yellow River Basin
by Haodong Lyu, Jianmin Qiao, Gonghuan Fang, Wenting Liang, Zidong Tang, Furong Lv, Qin Zhang, Zewei Qiu and Gengning Huang
Land 2025, 14(1), 53; https://doi.org/10.3390/land14010053 - 30 Dec 2024
Cited by 2 | Viewed by 895
Abstract
As one of the areas most sensitive to climate change in China, the Yellow River Basin faces a significant water resource shortage, which severely restricts sustainable economic development in the region and has become the basin's most prominent issue. In response to the national strategy of ecological protection and high-quality development of the Yellow River Basin, as well as Sustainable Development Goal 6.4 (SDG 6.4), we applied the water stress index (WSI) to measure water stress in the basin. This analysis used land use, socio-economic, irrigation, water withdrawal/consumption, and runoff datasets from 2000 to 2020. We also identified the driving factors of the WSI using partial least squares regression (PLSR) and assessed spatial clustering with the global and local Moran's indices. The results indicate that water stress in the Yellow River Basin has eased, as reflected in a decreasing WSI driven by increased precipitation. However, rising domestic water withdrawals have increased total water withdrawal, with agricultural water use accounting for the largest share of total water consumption. Precipitation is the most significant factor influencing water stress, affecting 46.25% of the basin area, followed by air temperature, which affects 12.64% of the area; other factors each account for less than 10%. Furthermore, the global Moran's index values for 2000, 2005, 2010, 2015, and 2020 were 0.172, 0.280, 0.284, 0.305, and 0.302, respectively, indicating a strong positive spatial autocorrelation within the basin. The local Moran's index revealed that the WSI of the 446 catchments was predominantly characterized by high–high and low–low clusters, suggesting a strong positive correlation in the WSI among these catchments.
This study provides a reference framework for developing a water resources assessment index system in the Yellow River Basin and supports regional water resources management and industrial structure planning. Full article
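The global Moran's I statistic the abstract relies on can be computed directly from a values vector and a spatial weights matrix. The sketch below is illustrative only: the 4×4 grid, rook-contiguity weights, and "WSI" values are hypothetical stand-ins, not the study's 446 catchments.

```python
import numpy as np

def morans_i(y, w):
    """Global Moran's I for values y under a symmetric spatial weights matrix w."""
    y = np.asarray(y, dtype=float)
    z = y - y.mean()
    n = len(y)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Toy example: a 4x4 grid of "catchments" with rook contiguity,
# high hypothetical WSI values clustered on the left half.
side = 4
n = side * side
w = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        if j + 1 < side:
            w[k, k + 1] = w[k + 1, k] = 1     # east-west neighbours
        if i + 1 < side:
            w[k, k + side] = w[k + side, k] = 1  # north-south neighbours

wsi = np.array([[1.0] * 2 + [0.0] * 2 for _ in range(side)]).ravel()
print(f"Moran's I = {morans_i(wsi, w):.3f}")  # prints "Moran's I = 0.667"
```

Positive values of I (as in the study's 0.17–0.31 range) indicate that similar WSI values cluster in space, i.e. the high–high and low–low patterns the local Moran's index then localizes.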
28 pages, 7331 KiB  
Article
Investigation of VOC Series Collected in a Refinery and Their Classification Based on Statistical Features
by Alina Bărbulescu, Sebastian-Barbu Barbeş, Lucica Barbeş and Cristian Ștefan Dumitriu
Appl. Sci. 2024, 14(24), 11921; https://doi.org/10.3390/app142411921 - 19 Dec 2024
Viewed by 1049
Abstract
In the context of increased pollution from different sources and its significant negative effects on population health and the environment, this article presents a comprehensive analysis of the data series formed by the concentrations of volatile organic compounds (VOCs) collected in three zones—storage areas in the reservoir park—of a refinery complex in Romania during its maintenance period. Statistical analyses, including parametric and nonparametric tests, were performed to assess the correlation between the studied series and to group them based on common features. The series were clustered both using the raw data and using the features extracted after the statistical analysis. The results indicate that the series are not correlated and do not follow the same distribution, even though the study zone is not large. The sites' classification based on statistical features is shown to be more relevant, from the viewpoint of emission levels, than that provided by the raw series. The Principal Component Analysis (PCA) indicates that the features with the highest contributions to the first two components are the maximum, standard deviation, autocorrelation, and partial autocorrelation for Zone 1; the average, maximum, minimum, and partial autocorrelation for Zone 2; and the skewness, average, maximum, and standard deviation for Zone 3. The study's novelty is twofold. First, it presents results obtained during the maintenance period of the storage tanks, a setting insufficiently investigated in the literature. Second, since complete data series are generally unavailable to the public, clustering them based on their features provides a clear picture of pollution levels and of the sites where actions should be taken to reduce it.
This investigation offers insights that can serve as a basis for developing effective air pollutant monitoring strategies and mitigation measures, by clarifying the emission patterns and identifying the factors that influence VOC levels during the maintenance of storage tanks for highly volatile petroleum products. Full article
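The feature-then-PCA workflow described above can be sketched as follows. Everything below is a hypothetical stand-in, not the paper's data or code: the "VOC series" are simulated gamma-distributed samples, and the feature set (mean, max, min, standard deviation, skewness, lag-1 autocorrelation) is a simplified version of the statistics the abstract names.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 sample autocorrelation of a series."""
    z = x - x.mean()
    return (z[:-1] @ z[1:]) / (z @ z)

def features(x):
    """Statistical feature vector of one concentration series."""
    z = x - x.mean()
    skew = np.mean(z**3) / np.std(x) ** 3
    return np.array([x.mean(), x.max(), x.min(), x.std(),
                     skew, lag1_autocorr(x)])

rng = np.random.default_rng(1)
# Hypothetical VOC concentration series for six monitoring sites in one zone
series = [rng.gamma(shape=2 + s, scale=1.0, size=500) for s in range(6)]
X = np.array([features(s) for s in series])  # sites x features matrix

# PCA on standardized features via SVD; vt rows are component loadings
Xs = (X - X.mean(0)) / X.std(0)
_, _, vt = np.linalg.svd(Xs, full_matrices=False)
names = ["mean", "max", "min", "std", "skew", "acf1"]
top = sorted(zip(names, np.abs(vt[0])), key=lambda t: -t[1])
print("PC1 dominant features:", [name for name, _ in top[:3]])
```

Ranking the absolute loadings of the first components, as in the last two lines, is what identifies which features "contribute most" to each zone's principal components in the study.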
(This article belongs to the Special Issue Air Pollution and Its Impact on the Atmospheric Environment)