
Search Results (11)

Search Parameters:
Keywords = singular stochastic vectors

25 pages, 10489 KB  
Article
An SSA-SARIMA-GSVR Hybrid Model Based on Singular Spectrum Analysis for O3-CPM Prediction
by Chaoli Tang, Wenlong Liu, Yuanyuan Wei and Yue Pan
Remote Sens. 2025, 17(23), 3826; https://doi.org/10.3390/rs17233826 - 26 Nov 2025
Viewed by 640
Abstract
Ozone density at the cold-point mesopause (O3-CPM) provides information on long-term atmospheric trends. Unlike ground-level ozone, O3-CPM is not only adversely affected by chemical substances emitted by human activities but is also regulated by solar radiation; accurate prediction of O3-CPM is therefore necessary. However, traditional forecasting methods struggle to predict the main trends and seasonal characteristics of ozone time series while also capturing the random components and noise of O3-CPM. To improve prediction accuracy, this paper proposes a hybrid SSA-SARIMA-GSVR model based on Singular Spectrum Analysis (SSA), which combines a Seasonal Autoregressive Integrated Moving Average (SARIMA) model with Grey Wolf Optimizer-tuned Support Vector Regression (GSVR). First, the O3-CPM sequence is decomposed using SSA, and a reconstruction threshold (RT) is introduced to divide the decomposed singular values into two classes. The RT-reconstructed sequence, containing periodic features and the main trend, is fed into the SARIMA model, while the N-RT residual sequence (the original sequence N minus the RT-reconstructed sequence), containing stochastic components and nonlinear features, is fed into the GSVR model. The final prediction is obtained by superimposing the outputs of the two models. The results confirm that, compared with commonly used time series forecasting models such as Long Short-Term Memory (LSTM), Informer, SVR, SARIMA, GSVR, SSA-GSVR, and SSA-SARIMA, the proposed SSA-SARIMA-GSVR hybrid model has the lowest error metrics, enabling accurate and efficient prediction of the O3-CPM time series. Specifically, it achieved an RMSE of 0.26, an MAE of 0.212, and an R² of 0.987 on the test set, outperforming the best baseline (SARIMA) by 45.8%, 42.1%, and 3.1%, respectively.
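The decompose-and-split scheme this abstract describes can be sketched in a few lines. This is an illustrative reading of SSA with a reconstruction threshold, not the authors' code; the window length L and threshold RT below are arbitrary choices.

```python
import numpy as np

def ssa_decompose(series, L):
    """Basic Singular Spectrum Analysis: embed the series in a Hankel
    (trajectory) matrix, SVD it, and diagonal-average each rank-1 term
    back into an elementary component series."""
    N = len(series)
    K = N - L + 1
    # L x K trajectory matrix: column j holds series[j:j+L]
    X = np.column_stack([series[j:j + L] for j in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])   # rank-1 elementary matrix
        comp, counts = np.zeros(N), np.zeros(N)
        for row in range(L):                   # Hankelization: average anti-diagonals
            comp[row:row + K] += Xi[row]
            counts[row:row + K] += 1
        comps.append(comp / counts)
    return np.array(comps)

# Toy monthly-like series: seasonal cycle plus noise (illustrative data)
t = np.arange(240)
y = np.sin(2 * np.pi * t / 12) + 0.1 * np.random.default_rng(0).standard_normal(240)
comps = ssa_decompose(y, L=24)
RT = 2                                   # hypothetical reconstruction threshold
trend_seasonal = comps[:RT].sum(axis=0)  # would feed SARIMA in the paper's scheme
residual = y - trend_seasonal            # would feed GSVR in the paper's scheme
```

Because diagonal averaging is linear, summing all elementary components reconstructs the original series exactly; the RT split merely partitions that sum into a smooth part and a stochastic remainder.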

28 pages, 1237 KB  
Article
Counting Cosmic Cycles: Past Big Crunches, Future Recurrence Limits, and the Age of the Quantum Memory Matrix Universe
by Florian Neukart, Eike Marx and Valerii Vinokur
Entropy 2025, 27(10), 1043; https://doi.org/10.3390/e27101043 - 7 Oct 2025
Cited by 3 | Viewed by 2169
Abstract
We present a quantitative theory of contraction and expansion cycles within the Quantum Memory Matrix (QMM) cosmology. In this framework, spacetime consists of finite-capacity Hilbert cells that store quantum information. Each non-singular bounce adds a fixed increment of imprint entropy, defined as the cumulative quantum information written irreversibly into the matrix and distinct from coarse-grained thermodynamic entropy, thereby providing an intrinsic, monotonic cycle counter. By calibrating the geometry-information duality, inferring today's cumulative imprint from CMB, BAO, chronometer, and large-scale-structure constraints, and integrating the modified Friedmann equations with imprint back-reaction, we find that the Universe has already completed N_past = 3.6 ± 0.4 cycles. The finite Hilbert capacity enforces an absolute ceiling: propagating the holographic write rate and accounting for instability channels implies only N_future = 7.8 ± 1.6 additional cycles before saturation halts further bounces. Integrating Kodama-vector proper time across all completed cycles yields a total cumulative age t_QMM = 62.0 ± 2.5 Gyr, compared with the 13.8 ± 0.2 Gyr of the current expansion usually described by ΛCDM. The framework makes concrete, testable predictions: an enhanced faint-end UV luminosity function at z ≳ 12 observable with JWST, a stochastic gravitational-wave background with f^(2/3) scaling in the LISA band from primordial black-hole mergers, and a nanohertz background with slope α = −2/3 accessible to pulsar-timing arrays. These signatures provide near-term opportunities to confirm, refine, or falsify the cyclical QMM chronology.

28 pages, 5051 KB  
Article
Comparative Analysis of Load Profile Forecasting: LSTM, SVR, and Ensemble Approaches for Singular and Cumulative Load Categories
by Ahmad Fayyazbakhsh, Thomas Kienberger and Julia Vopava-Wrienz
Smart Cities 2025, 8(2), 65; https://doi.org/10.3390/smartcities8020065 - 10 Apr 2025
Cited by 9 | Viewed by 3656
Abstract
Accurately forecasting load profiles, especially capturing peaks, is challenging due to the stochastic nature of consumption. In this paper, we applied three models to forecast 24 h-ahead load profiles: Long Short-Term Memory (LSTM); Support Vector Regression (SVR); and a blended model combining SVR, Gated Recurrent Units (GRU), and Linear Regression (LR). Household (HH), heat pump (HP), and electric vehicle (EV) loads were considered both singularly and cumulatively, using one-year load profiles. This study tackles the issue of accurately forecasting load profiles by evaluating LSTM, SVR, and the ensemble model for predicting energy consumption in HH, HP, and EV loads. A novel forecast correction mechanism is introduced that adjusts forecasts every eight hours to increase reliability. The findings highlight the potential of deep learning to enhance energy demand forecasting, especially in identifying peak loads, which contributes to more stable and efficient grid operations. Model performance was examined visually and on validation data at different levels: off-peak, on-peak, and overall. Among all models, LSTM performed slightly better on most metrics, particularly in peak capturing. However, the blended model slightly outperformed LSTM for EV load forecasting, with an on-peak mean absolute percentage error (MAPE) of 21.45%, compared with 29.24% and 22.02% for SVR and LSTM, respectively. Nevertheless, visual analysis clearly showed LSTM's strong ability to capture peaks, which was also reflected in its MAPE and symmetric mean absolute percentage error (SMAPE) during the on-peak period, an improvement of around 3-5% over SVR and the blended model. Finally, LSTM was employed to predict day-ahead load profiles using measured data from four grids and showed high potential in capturing peaks, with MAPE values below 10% for most grids.
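For reference, the MAPE and SMAPE metrics quoted in this abstract are computed as below. This is the standard textbook form, which may differ in detail from the authors' evaluation code.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (undefined where y_true = 0)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def smape(y_true, y_pred):
    """Symmetric MAPE, in percent; bounded above by 200 for nonzero inputs."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(2.0 * np.abs(y_pred - y_true)
                           / (np.abs(y_true) + np.abs(y_pred)))
```

SMAPE is often preferred for load data because MAPE blows up near zero-consumption intervals, while SMAPE stays bounded.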

20 pages, 2329 KB  
Article
Downhole Camera Runs Validate the Capability of Machine Learning Models to Accurately Predict Perforation Entry Hole Diameter
by Samuel Nashed, FNU Srijan, Abdelali Guezei, Oluchi Ejehu and Rouzbeh Moghanloo
Energies 2024, 17(22), 5558; https://doi.org/10.3390/en17225558 - 7 Nov 2024
Cited by 8 | Viewed by 2084
Abstract
In the field of oil and gas well perforation, it is imperative to accurately forecast the casing entry hole diameter under full downhole conditions. Precise prediction of the entry hole diameter enhances the design of both conventional and limited-entry hydraulic fracturing, mitigates the risk of proppant screenout, reduces skin factors attributable to perforation, guarantees sufficient flow area for effective cement pumping during a squeeze operation, and reduces issues related to sand production. Implementing machine learning and deep learning models yields immediate and precise estimates of entry hole diameter, facilitating these objectives. The principal aim of this research is to develop machine learning-based models proficient in predicting entry hole diameter under full downhole conditions. Ten machine learning and deep learning models were developed using readily available parameters routinely gathered during perforation operations as input variables: perforation depth, rock density, shot phasing, shot density, fracture gradient, reservoir unconfined compressive strength, casing elastic limit, casing nominal weight, casing outer diameter, and gun diameter. The models were trained on actual casing entry hole diameter data acquired from deployed downhole cameras, which served as the models' output. A comprehensive dataset from 53 wells was used to develop and fine-tune various machine learning algorithms, including Gradient Boosting, Linear Regression, Stochastic Gradient Descent, AdaBoost, Decision Trees, Random Forest, K-Nearest Neighbors, neural networks, and Support Vector Machines. The most effective models, specifically Gradient Boosting, Random Forest, AdaBoost, neural network (L-BFGS), and neural network (Adam), achieve exceptionally low mean absolute percent error (MAPE), root mean square error (RMSE), and mean squared error (MSE) against actual entry hole diameter measurements. The recorded MAPE values are 4.6%, 4.4%, 4.7%, 4.9%, and 6.3%, with corresponding RMSE values of 0.057, 0.057, 0.058, 0.065, and 0.089, and MSE values of 0.003, 0.003, 0.003, 0.004, and 0.008, respectively. These low values verify the remarkably high accuracy of the generated models. This paper offers novel insights by demonstrating the improvements achievable in ongoing perforation operations through the application of a machine learning model for predicting entry hole diameter. Machine learning models present a more accurate, expedient, real-time, and economically viable alternative to empirical models and deployed downhole cameras. Additionally, they excel at accommodating a broad spectrum of guns, well completions, and reservoir parameters, a challenge that a single empirical model struggles to address.
(This article belongs to the Section H: Geo-Energy)

22 pages, 8985 KB  
Article
Predicting a Fall Based on Gait Anomaly Detection: A Comparative Study of Wrist-Worn Three-Axis and Mobile Phone-Based Accelerometer Sensors
by Primož Kocuvan, Aleksander Hrastič, Andrea Kareska and Matjaž Gams
Sensors 2023, 23(19), 8294; https://doi.org/10.3390/s23198294 - 7 Oct 2023
Cited by 10 | Viewed by 3618
Abstract
Falls by the elderly pose considerable health hazards, leading not only to physical harm but also to a number of related problems. A timely alert about a deteriorating gait, as an indication of an impending fall, can assist in fall prevention. In this investigation, a comprehensive comparative analysis was conducted between a commercially available mobile phone system and two wristband systems: one commercially available and another representing a novel approach. Each system was equipped with a single three-axis accelerometer. A walk suggestive of a potential fall was induced by special glasses worn by the participants. The same standard machine-learning techniques were employed for classification with all three systems, yielding a best average accuracy of 86%, a specificity of 88%, and a sensitivity of 86% via the support vector machine (SVM) method using a wristband. A smartphone, in contrast, achieved a best average accuracy of 73%, also with an SVM using only a three-axis accelerometer. The significance analysis of mean accuracy, sensitivity, and specificity between the innovative wristband and the smartphone yielded p < 0.001. Furthermore, the study applied unsupervised and semi-supervised learning methods, incorporating principal component analysis and t-distributed stochastic neighbor embedding. In sum, both wristbands demonstrated the usability of wearable sensors for the early detection and mitigation of falls in the elderly, outperforming the smartphone.
(This article belongs to the Special Issue Smart Sensing for Pervasive Health)

19 pages, 464 KB  
Article
Some Extensions of the Asymmetric Exponentiated Bimodal Normal Model for Modeling Data with Positive Support
by Roger Tovar-Falón, Guillermo Martínez-Flórez and Isaías Ceña-Tapia
Mathematics 2023, 11(7), 1563; https://doi.org/10.3390/math11071563 - 23 Mar 2023
Cited by 1 | Viewed by 1898
Abstract
It is common in many fields of knowledge to assume that the data under study follow a normal distribution, an assumption that often leads to erroneous results because it does not always match the characteristics of the observations under analysis. In some cases, the data may exhibit degrees of skewness and/or kurtosis greater than the normal model can capture, and in others, they may present two or more modes. In this work, two new families of skewed distributions are presented that fit bimodal data with positive support. The new families were obtained by extending the bimodal normal distribution to the alpha-power family class. The proposed distributions were studied in terms of their main properties, such as the probability density function, cumulative distribution function, survival function, and hazard function. Parameter estimation was performed from a classical perspective using the maximum likelihood method. The non-singularity of the Fisher information matrix was demonstrated, which made it possible to establish the stochastic convergence of the vector of maximum likelihood estimators and, on that basis, to perform statistical inference via the likelihood ratio. The applicability of the proposed distributions was illustrated using real data sets.
(This article belongs to the Special Issue Probability, Statistics & Symmetry)

21 pages, 4322 KB  
Article
Debris Flow Prediction Based on the Fast Multiple Principal Component Extraction and Optimized Broad Learning
by Genqi Xu, Xin-E Yan, Ning Cao, Jing Ma, Guokun Xie and Lu Li
Water 2022, 14(21), 3374; https://doi.org/10.3390/w14213374 - 24 Oct 2022
Cited by 3 | Viewed by 3517
Abstract
In current research on debris flow geological disaster prediction, determining reasonable disaster-inducing factors and ensuring the accuracy and speed of the prediction model are vital issues, and essential foundations for disaster early warning, prevention, and mitigation. To address the low prediction accuracy and long prediction times in current debris flow research, six debris flow impact factors were first selected using the fast multiple principal component extraction (FMPCE) algorithm: rainfall, slope gradient, gully bed gradient, relative height difference, soil moisture content, and pore water pressure. Next, based on the broad learning (BL) algorithm, a debris flow prediction model combining FMPCE and optimized BL was established, with the inducing factors as input and debris flow probability as output. The model was then optimized using stochastic approximate (randomized) matrix singular value decomposition (SVD), yielding a debris flow disaster prediction model based on SVDBL. Its predictions were compared with those of a gradient-descent-optimized BP neural network (GD-BP), a Support Vector Machine (SVM) tuned by grid search, and the plain BL model. The results show that the accuracy of SVDBL is 7.5% higher than that of GD-BP, 3% higher than that of SVM, and 0.5% higher than that of BL. The RMSE of SVDBL was 0.0587, 0.0478, and 0.0227 lower than that of GD-BP, SVM, and BL, respectively; its MAPE was 1.95%, 1.66%, and 0.49% lower; and its AUC was 12.75%, 7.64%, and 2.79% higher than those of the three baseline models, respectively. In addition, the input dataset was expanded to compare the training time of each model; the simulation results show that, after expansion, the proposed model achieves the highest prediction accuracy and the shortest training time. This study shows that BL can be used for debris flow prediction and can also inform disaster early warning and prevention.
(This article belongs to the Special Issue Safety Monitoring and Management of Reservoir and Dams)
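The "matrix stochastic approximate singular value decomposition" used to optimize BL is, in generic form, a randomized SVD. A minimal sketch under that reading follows; this is not the paper's implementation, and the oversampling and power-iteration counts are illustrative defaults.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=4, seed=0):
    """Approximate rank-k SVD via random range sketching:
    sample the range of A with a Gaussian test matrix, sharpen it
    with power iterations, then SVD the small projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Y = A @ Omega                                     # sketch of range(A)
    for _ in range(n_iter):                           # power iterations
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                            # orthonormal range basis
    B = Q.T @ A                                       # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k]
```

Replacing the exact SVD inside BL's pseudo-inverse computation with a sketch like this is the kind of speed-up the abstract describes; accuracy depends on the oversampling and iteration counts relative to the matrix's effective rank.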

17 pages, 1613 KB  
Article
Static Attitude Determination Using Convolutional Neural Networks
by Guilherme Henrique dos Santos, Laio Oriel Seman, Eduardo Augusto Bezerra, Valderi Reis Quietinho Leithardt, André Sales Mendes and Stéfano Frizzo Stefenon
Sensors 2021, 21(19), 6419; https://doi.org/10.3390/s21196419 - 26 Sep 2021
Cited by 19 | Viewed by 4051
Abstract
The need to estimate the orientation between frames of reference is crucial in spacecraft navigation. Robust algorithms for this type of problem have been built following algebraic approaches, but data-driven solutions are becoming more appealing due to their stochastic nature. Hence, this paper proposes an approach based on convolutional neural networks to deal with measurement uncertainty in static attitude determination problems. PointNet models were trained with datasets containing different numbers of observation vectors, which were used to build the attitude profile matrices serving as the system's inputs. The uncertainty of measurements in the test scenarios was taken into consideration when choosing the best model. The proposed convolutional model proved to be less sensitive to high noise than traditional algorithms such as singular value decomposition (SVD), the q-method, the quaternion estimator (QUEST), and the second estimator of the optimal quaternion (ESOQ2).
(This article belongs to the Special Issue Sensors for Space Applications)
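The SVD baseline the CNN is compared against solves Wahba's problem: the attitude profile matrix mentioned in the abstract is decomposed to obtain the optimal rotation (Markley's SVD method). A minimal noise-free sketch, with illustrative function and variable names:

```python
import numpy as np

def svd_attitude(body_vecs, ref_vecs, weights=None):
    """Markley's SVD solution of Wahba's problem: find the rotation R
    minimizing sum_i w_i * ||b_i - R r_i||^2."""
    b = np.asarray(body_vecs, dtype=float)
    r = np.asarray(ref_vecs, dtype=float)
    w = np.ones(len(b)) if weights is None else np.asarray(weights, dtype=float)
    # attitude profile matrix B = sum_i w_i b_i r_i^T
    B = (b * w[:, None]).T @ r
    U, _, Vt = np.linalg.svd(B)
    # force det(R) = +1 so the result is a proper rotation
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

With at least two non-parallel observation vectors and no noise this recovers the true rotation exactly, which is why the paper's comparison focuses on behavior under increasing measurement noise.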

30 pages, 4820 KB  
Article
A Generator of Bivariate Distributions: Properties, Estimation, and Applications
by Manuel Franco, Juana-María Vivo and Debasis Kundu
Mathematics 2020, 8(10), 1776; https://doi.org/10.3390/math8101776 - 14 Oct 2020
Cited by 5 | Viewed by 3781
Abstract
In 2020, El-Morshedy et al. introduced a bivariate extension of the Burr type X generator (BBX-G) of distributions, and Muhammed presented a bivariate generalized inverted Kumaraswamy (BGIK) distribution. In this paper, we propose a more flexible generator of bivariate distributions based on the maximization process applied to an arbitrary three-dimensional baseline distribution vector, which is of interest for maintenance and stress models and extends the BBX-G and BGIK distributions, among others. The proposed generator allows one to construct new bivariate distributions by combining non-identically distributed baseline components. The bivariate distributions in the proposed family have a singular part, due to the latent component, which makes them suitable for modeling two-dimensional data sets with ties. Several distributional and stochastic properties of these bivariate models are studied, as well as their marginals, conditional distributions, and order statistics. Furthermore, we analyze the copula representation and some related association measures. An EM algorithm is proposed to compute the maximum likelihood estimates of the unknown parameters, illustrated by fitting two particular distributions of this bivariate family to two real data sets.
(This article belongs to the Section D1: Probability and Statistics)

23 pages, 361 KB  
Article
Cointegration and Error Correction Mechanisms for Singular Stochastic Vectors
by Matteo Barigozzi, Marco Lippi and Matteo Luciani
Econometrics 2020, 8(1), 3; https://doi.org/10.3390/econometrics8010003 - 4 Feb 2020
Cited by 18 | Viewed by 7228
Abstract
Large-dimensional dynamic factor models and dynamic stochastic general equilibrium models, both widely used in empirical macroeconomics, deal with singular stochastic vectors, i.e., vectors of dimension r driven by a q-dimensional white noise with q < r. The present paper studies cointegration and error correction representations for an I(1) singular stochastic vector y_t. It is easily seen that y_t is necessarily cointegrated with cointegrating rank c ≥ r − q. Our contributions are: (i) we generalize Johansen's proof of the Granger representation theorem to I(1) singular vectors under the assumption that y_t has rational spectral density; (ii) using recent results on singular vectors by Anderson and Deistler, we prove that for generic values of the parameters the autoregressive representation of y_t has a finite-degree polynomial. The relationship between the cointegration of the factors and the cointegration of the observable variables in a large-dimensional factor model is also discussed.
(This article belongs to the Special Issue Celebrated Econometricians: Katarina Juselius and Søren Johansen)
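The cointegrating-rank bound stated in this abstract follows from a dimension count on the long-run matrix; a sketch in the abstract's notation:

```latex
% y_t is I(1), r-dimensional, driven by a q-dimensional white noise (q < r).
% The Granger (Wold) representation of the differenced process is
\Delta y_t = C(L)\,\varepsilon_t , \qquad \varepsilon_t \in \mathbb{R}^{q},
% so the long-run matrix C(1) is r \times q, and hence
% \operatorname{rank} C(1) \le q.
% Cointegrating vectors span the left null space of C(1), so the
% cointegrating rank satisfies
c \;=\; r - \operatorname{rank} C(1) \;\ge\; r - q .
```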
20 pages, 890 KB  
Article
Analog Circuit Fault Diagnosis via Joint Cross-Wavelet Singular Entropy and Parametric t-SNE
by Wei He, Yigang He, Bing Li and Chaolong Zhang
Entropy 2018, 20(8), 604; https://doi.org/10.3390/e20080604 - 14 Aug 2018
Cited by 29 | Viewed by 5057
Abstract
In this paper, a novel method combining a cross-wavelet singular entropy (XWSE)-based feature extractor with a support vector machine (SVM) is proposed for analog circuit fault diagnosis. First, the cross-wavelet transform (XWT), which is robust to environmental noise, is applied to transform the fault signal into time-frequency spectra (TFS). A simple segmentation method then decomposes the TFS into several blocks. Singular value decomposition (SVD) is employed to analyze the blocks, and the Tsallis entropy of each block is computed to construct the original features. The features are subsequently passed to parametric t-distributed stochastic neighbor embedding (t-SNE) for dimension reduction, yielding discriminative and concise fault characteristics. Finally, these characteristics are fed into an SVM classifier to locate circuit defects, with the free parameters of the SVM determined by quantum-behaved particle swarm optimization (QPSO). Simulation results show that the proposed approach achieves superior diagnostic performance compared with existing methods.
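The per-block feature this abstract describes (SVD of a time-frequency block followed by an entropy of the singular spectrum) can be sketched as follows; the entropic index q = 2 is an illustrative choice, not necessarily the paper's setting:

```python
import numpy as np

def block_tsallis_entropy(block, q=2.0):
    """Singular-spectrum Tsallis entropy of one time-frequency block:
    SVD the block, normalize the singular values into a probability
    vector, then apply the Tsallis q-entropy (1 - sum p_i^q) / (q - 1)."""
    s = np.linalg.svd(np.asarray(block, dtype=float), compute_uv=False)
    p = s / s.sum()
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))
```

A rank-1 block (all energy in one singular value) has zero entropy, and the more evenly the singular values spread, the larger the entropy, so the feature measures the structural complexity of each TFS block.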
