Search Results (42)

Search Parameters:
Keywords = tensor product transformation

17 pages, 496 KB  
Article
Two-Dimensional Discrete Coupled Fractional Fourier Transform (DCFrFT)
by Asma Elshamy, Zeinab S. I. Mansour and Ahmed Zayed
Fractal Fract. 2026, 10(1), 7; https://doi.org/10.3390/fractalfract10010007 - 23 Dec 2025
Viewed by 308
Abstract
The fractional Fourier transform is critical in signal processing and supports many applications. Implementing it in practice requires discrete versions, so defining a discrete coupled fractional Fourier transform (DCFrFT) is essential. This paper presents a discrete version of the continuous two-dimensional coupled fractional Fourier transform, which is not a tensor product of one-dimensional transforms. We examine the main characteristics of the operator, illustrate its relationship with the existing two-dimensional discrete fractional Fourier transforms, and clarify the approach with examples.
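For context, the conventional tensor-product construction that the paper's coupled transform is contrasted with can be sketched via fractional powers of the unitary DFT matrix. This is a minimal NumPy illustration, not the authors' DCFrFT; for non-integer orders the result depends on the eigenvector basis chosen within the DFT matrix's degenerate eigenspaces:

```python
import numpy as np

def frft_matrix(N, a):
    # Fractional power of the unitary DFT matrix via eigendecomposition.
    # Eigenvalues of F lie in {1, -1, i, -i}; w**a uses the principal branch.
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)
    w, V = np.linalg.eig(F)
    return V @ np.diag(w ** a) @ np.linalg.inv(V)

def frft2_tensor(X, a, b):
    # Tensor-product 2D transform: a 1D fractional transform along each axis.
    N, M = X.shape
    return frft_matrix(N, a) @ X @ frft_matrix(M, b).T
```

The index-additivity property (order a followed by order b equals order a + b) holds by construction, since all fractional powers share one eigenbasis.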

26 pages, 847 KB  
Article
An Efficient Quasi-Monte Carlo Algorithm for High Dimensional Numerical Integration
by Huicong Zhong and Xiaobing Feng
Mathematics 2025, 13(21), 3437; https://doi.org/10.3390/math13213437 - 28 Oct 2025
Viewed by 994
Abstract
In this paper, we develop a fast numerical algorithm, termed MDI-LR, for the efficient implementation of quasi-Monte Carlo lattice rules in computing d-dimensional integrals of a given function. The algorithm is based on converting the underlying lattice rule into a tensor-product form through an affine transformation, and further improving computational efficiency by incorporating a multilevel dimension iteration (MDI) strategy. This approach computes the function evaluations at the integration points collectively and iterates along each transformed coordinate direction, allowing substantial reuse of computations. As a result, the algorithm avoids the need to explicitly store integration points or compute function values at those points independently. Extensive numerical experiments are conducted to evaluate the performance of MDI-LR and compare it with the straightforward implementation of quasi-Monte Carlo lattice rules. The results demonstrate that MDI-LR achieves a computational complexity of order O(N²d³) or better, where N denotes the number of points in each transformed coordinate direction. Thus, MDI-LR effectively mitigates the curse of dimensionality and revitalizes the use of QMC lattice rules for high dimensional integration.
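The "straightforward implementation" that MDI-LR is compared against is the plain rank-1 lattice rule, which can be sketched in a few lines (the generating vector z below is an arbitrary placeholder, not one from the paper):

```python
import numpy as np

def lattice_rule(f, z, N):
    # Plain rank-1 lattice rule: average f over the points
    # x_i = frac(i * z / N), i = 0, ..., N-1, in [0,1)^d.
    i = np.arange(N)[:, None]
    x = (i * np.asarray(z)[None, :] % N) / N   # N x d lattice points (exact, integer mod)
    return float(np.mean(f(x)))
```

Each of the N function evaluations is independent here, which is exactly the O(N) cost structure that the paper's tensor-product reformulation and dimension iteration aim to amortize.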
(This article belongs to the Section E: Applied Mathematics)

22 pages, 632 KB  
Article
Enhancing Multi-Key Fully Homomorphic Encryption with Efficient Key Switching and Batched Multi-Hop Computations
by Liang Zhou, Ruwei Huang and Bingbing Wang
Appl. Sci. 2025, 15(10), 5771; https://doi.org/10.3390/app15105771 - 21 May 2025
Cited by 1 | Viewed by 1823
Abstract
Multi-Key Fully Homomorphic Encryption (MKFHE) offers a powerful solution for secure multi-party computations, where data encrypted under different keys can be jointly computed without decryption. However, existing MKFHE schemes still face challenges such as large parameter sizes, inefficient evaluation key generation, complex homomorphic multiplication processes, and limited scalability in multi-hop scenarios. In this paper, we propose an enhanced multi-hop MKFHE scheme based on the Brakerski-Gentry-Vaikuntanathan (BGV) framework. Our approach eliminates the need for an auxiliary Gentry-Sahai-Waters (GSW)-type scheme, simplifying the design and significantly reducing the public key size. We propose novel algorithms for evaluation key generation and key switching that simplify the computation while allowing each party to independently precompute and share its evaluation keys, thereby reducing both computational overhead and storage costs. Additionally, we combine the tensor product and key switching processes through homomorphic gadget decomposition, developing a new homomorphic multiplication algorithm and achieving linear complexity with respect to the number of parties. Furthermore, by leveraging the Polynomial Chinese Remainder Theorem (Polynomial CRT), we design a ciphertext packing technique that transforms our BGV-type MKFHE scheme into a batched scheme with improved amortized performance. Our schemes feature stronger multi-hop properties and operate without requiring a predefined maximum number of parties, offering enhanced flexibility and scalability compared to existing similar schemes.
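The gadget decomposition mentioned above is a standard building block of key switching in BGV-type schemes: a coefficient vector is split into small base-B digits so that noise growth under multiplication stays controlled. A generic sketch (not the paper's combined tensor-product/key-switching algorithm):

```python
import numpy as np

def gadget_decompose(x, base, ell):
    # Split each entry of the integer array x into `ell` base-`base` digits
    # (little-endian), so every digit is small: 0 <= digit < base.
    digits = []
    for _ in range(ell):
        digits.append(x % base)
        x = x // base
    return np.stack(digits)            # shape (ell, *x.shape)

def gadget_recompose(d, base):
    # Inverse map: dot the digit stack with the gadget vector (1, B, B^2, ...).
    powers = base ** np.arange(d.shape[0])
    return np.tensordot(powers, d, axes=1)
```

In an actual scheme the decomposed digits multiply the key-switching keys, which encrypt the gadget-scaled secret; here only the arithmetic identity is shown.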

15 pages, 671 KB  
Article
A Simultaneous Decomposition for a Quaternion Tensor Quaternity with Applications
by Jia-Wei Huo, Yun-Ze Xu and Zhuo-Heng He
Mathematics 2025, 13(10), 1679; https://doi.org/10.3390/math13101679 - 20 May 2025
Cited by 1 | Viewed by 716
Abstract
Quaternion tensor decompositions have recently been the center of focus due to their wide potential applications in color data processing. In this paper, we establish a simultaneous decomposition for a quaternion tensor quaternity under Einstein product. The decomposition brings the quaternity of four quaternion tensors into a canonical form, which only has 0 and 1 entries. The structure of the canonical form is discussed in detail. Moreover, the proposed decomposition is applied to a new framework of color video encryption and decryption based on discrete wavelet transform. This new approach can realize simultaneous encryption and compression with high security.
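The Einstein product used here contracts the trailing modes of one tensor against the leading modes of the other. A minimal real-valued sketch (quaternion arithmetic is not modeled; real tensors stand in for illustration):

```python
import numpy as np

def einstein_product(A, B, n):
    # Einstein product A *_n B: contract the last n modes of A
    # with the first n modes of B (generalizes matrix multiplication,
    # which is the n = 1 case for 2-D arrays).
    return np.tensordot(A, B, axes=n)
```

For A of shape (I, J, K, L) and B of shape (K, L, M), the n = 2 product has shape (I, J, M), matching the index contraction sum over K and L.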
(This article belongs to the Special Issue Advanced Numerical Linear Algebra)

17 pages, 6081 KB  
Article
Research on Shale Oil Well Productivity Prediction Model Based on CNN-BiGRU Algorithm
by Yuan Pan, Xuewei Liu, Fuchun Tian, Liyong Yang, Xiaoting Gou, Yunpeng Jia, Quan Wang and Yingxi Zhang
Energies 2025, 18(10), 2523; https://doi.org/10.3390/en18102523 - 13 May 2025
Viewed by 825
Abstract
Unconventional reservoirs are characterized by intricate fluid-phase behaviors, and physics-based shale oil well productivity prediction models often exhibit substantial deviations due to oversimplified theoretical frameworks and challenges in parameter acquisition. Under these circumstances, data-driven approaches leveraging actual production datasets have emerged as viable alternatives for productivity forecasting. Nevertheless, conventional data-driven architectures suffer from structural simplicity, limited capacity for processing low-dimensional feature spaces, and exclusive applicability to intra-sequence learning paradigms (e.g., production-to-production sequence mapping). This fundamentally conflicts with the underlying principles of mechanistic modeling, which emphasize pressure-to-production sequence transformations. To address these limitations, we propose a hybrid deep learning architecture integrating convolutional neural networks with bidirectional gated recurrent units (CNN-BiGRU). The model incorporates dedicated input pathways: fully connected layers for feature embedding and convolutional operations for high-dimensional feature extraction. By implementing a sequence-to-sequence (seq2seq) architecture with encoder–decoder mechanisms, our framework enables cross-domain sequence learning, effectively bridging pressure dynamics with production profiles. The CNN-BiGRU model was implemented on the TensorFlow framework, with rigorous validation of model robustness and systematic evaluation of feature importance. Hyperparameter optimization via grid searching yielded optimal configurations, while field applications demonstrated operational feasibility. Comparative analysis revealed a mean relative error (MRE) of 16.11% between predicted and observed production values, substantiating the model's predictive competence. This methodology establishes a novel paradigm for machine learning-driven productivity prediction in unconventional reservoir engineering.
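The bidirectional GRU recurrence at the core of such a model can be sketched in plain NumPy (an illustrative forward pass only — not the authors' TensorFlow implementation; the CNN feature extraction and seq2seq attention machinery are omitted, and all weight shapes are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    # One GRU step: update gate z, reset gate r, candidate state h_tilde.
    z = sigmoid(x @ Wz + h @ Uz)
    r = sigmoid(x @ Wr + h @ Ur)
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

def bigru(X, params_f, params_b):
    # Bidirectional pass: run the sequence forward and backward with
    # separate parameter sets, then concatenate the hidden states.
    T = X.shape[0]
    H = params_f[1].shape[0]          # hidden size, from Uz
    hf, hb = np.zeros(H), np.zeros(H)
    outf, outb = [], []
    for t in range(T):
        hf = gru_step(X[t], hf, *params_f)
        outf.append(hf)
    for t in reversed(range(T)):
        hb = gru_step(X[t], hb, *params_b)
        outb.append(hb)
    outb.reverse()
    return np.concatenate([np.stack(outf), np.stack(outb)], axis=1)  # T x 2H
```

Each output row sees both past (forward pass) and future (backward pass) context, which is what lets the model bridge pressure histories to production profiles in both directions of the sequence.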
(This article belongs to the Section H: Geo-Energy)

18 pages, 4983 KB  
Article
Small Defects Detection of Galvanized Strip Steel via Schatten-p Norm-Based Low-Rank Tensor Decomposition
by Shiyang Zhou, Xuguo Yan, Huaiguang Liu and Caiyun Gong
Sensors 2025, 25(8), 2606; https://doi.org/10.3390/s25082606 - 20 Apr 2025
Viewed by 806
Abstract
Accurate and efficient white-spot defects detection for the surface of galvanized strip steel is one of the most important guarantees for the quality of steel production. It is a fundamental but “hard” small target detection problem due to its small pixel occupation in low-contrast images. By fully exploiting the low-rank and sparse prior information of a surface defect image, a Schatten-p norm-based low-rank tensor decomposition (SLRTD) method is proposed to decompose the defect image into low-rank background, sparse defect, and random noise. Firstly, the original defect images are transformed into a new patch-based tensor mode through data reconstruction for mining valuable information of the defect image. Then, considering the over-shrinkage problem in the low-rank component estimation caused by a vanilla nuclear norm and a weighted nuclear norm, a nonlinear reweighting strategy based on a Schatten p-norm is incorporated to improve the decomposition performance. Finally, a solution framework is proposed via a well-designed alternating direction method of multipliers to obtain the white-spot defect target image by a simple segmenting algorithm. The white-spot defect dataset from a real-world galvanized strip steel production line is constructed, and the experimental results demonstrate that the proposed SLRTD method outperforms existing state-of-the-art methods qualitatively and quantitatively.
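The key difference between nuclear-norm shrinkage and a Schatten-p reweighting can be seen in the singular-value proximal step: the reweighting shrinks small singular values harder than large ones. A minimal matrix-case sketch (the full SLRTD operates on a patch tensor inside an ADMM loop; parameters here are illustrative):

```python
import numpy as np

def weighted_svt(Y, tau, p=0.5, eps=1e-6, iters=3):
    # Iteratively reweighted singular value shrinkage: the weights
    # w_i = p * (s_i + eps)^(p-1) approximate the Schatten-p norm's
    # subgradient, so large singular values (signal) are barely shrunk
    # while small ones (noise) are driven to zero — unlike the uniform
    # shrinkage of the vanilla nuclear norm.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_hat = s.copy()
    for _ in range(iters):
        w = p * (s_hat + eps) ** (p - 1)
        s_hat = np.maximum(s - tau * w, 0.0)
    return U @ np.diag(s_hat) @ Vt
```

Applied to a matrix with one dominant and one tiny singular value, the step keeps the dominant component almost intact and annihilates the small one, which is the behavior exploited to separate low-rank background from sparse defects.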
(This article belongs to the Special Issue Sensing and Imaging for Defect Detection: 2nd Edition)

17 pages, 4313 KB  
Article
D3AT-LSTM: An Efficient Model for Spatiotemporal Temperature Prediction Based on Attention Mechanisms
by Ting Tian, Huijing Wu, Xianhua Liu and Qiao Hu
Electronics 2024, 13(20), 4089; https://doi.org/10.3390/electronics13204089 - 17 Oct 2024
Cited by 1 | Viewed by 1643
Abstract
Accurate temperature prediction is essential for economic production and human society’s daily life. However, most current methods only focus on time-series temperature modeling and prediction, ignoring the complex interplay of meteorological variables in the spatial domain. In this paper, a novel temperature prediction model (D3AT-LSTM) is proposed by combining the three-dimensional convolutional neural network (3DCNN) and the attention-based gated recurrent network. Firstly, the historical meteorological series of eight surrounding pixels are combined to construct a multi-dimensional feature tensor that integrates variables from the temporal domain as the input data. Convolutional units are used to model and analyze the spatiotemporal patterns of the local sequence in CNN modules by combining them with parallel attention mechanisms. The fully connected layer finally makes the final temperature prediction. This method is subsequently compared with both classical and state-of-the-art prediction models such as ARIMA, the long short-term memory network (LSTM), and the Transformer using three indices: the root mean square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R²). The results indicate that the D3AT-LSTM model can achieve good prediction accuracy compared to ARIMA, LSTM, and Transformer.
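The three comparison indices are standard and easy to state precisely (a minimal sketch):

```python
import numpy as np

def rmse(y, yhat):
    # Root mean square error: penalizes large deviations quadratically.
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    # Mean absolute error: robust, same units as the target.
    return float(np.mean(np.abs(y - yhat)))

def r2(y, yhat):
    # Coefficient of determination: 1 - residual SS / total SS;
    # equals 1 for a perfect fit, 0 for predicting the mean.
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```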

24 pages, 738 KB  
Article
Tensor Core-Adapted Sparse Matrix Multiplication for Accelerating Sparse Deep Neural Networks
by Yoonsang Han, Inseo Kim, Jinsung Kim and Gordon Euhyun Moon
Electronics 2024, 13(20), 3981; https://doi.org/10.3390/electronics13203981 - 10 Oct 2024
Cited by 1 | Viewed by 6076
Abstract
Sparse matrix–matrix multiplication (SpMM) is essential for deep learning models and scientific computing. Recently, Tensor Cores (TCs) on GPUs, originally designed for dense matrix multiplication with mixed precision, have gained prominence. However, utilizing TCs for SpMM is challenging due to irregular memory access patterns and a varying number of non-zero elements in a sparse matrix. To improve data locality, previous studies have proposed reordering sparse matrices before multiplication, but this adds computational overhead. In this paper, we propose Tensor Core-Adapted SpMM (TCA-SpMM), which leverages TCs without requiring matrix reordering and uses the compressed sparse row (CSR) format. To optimize TC usage, the SpMM algorithm’s dot product operation is transformed into a blocked matrix–matrix multiplication. Addressing load imbalance and minimizing data movement are critical to optimizing the SpMM kernel. Our TCA-SpMM dynamically allocates thread blocks to process multiple rows simultaneously and efficiently uses shared memory to reduce data movement. Performance results on sparse matrices from the Deep Learning Matrix Collection public dataset demonstrate that TCA-SpMM achieves up to 29.58× speedup over state-of-the-art SpMM implementations optimized with TCs.
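For readers unfamiliar with the CSR format, a reference (scalar) SpMM over the three CSR arrays looks like this — it is the irregular inner loop below that TC-adapted kernels restructure into blocked dense multiplications:

```python
import numpy as np

def csr_spmm(indptr, indices, data, B):
    # Multiply a sparse matrix A, stored in CSR form (indptr, indices, data),
    # by a dense matrix B. Row i's non-zeros occupy data[indptr[i]:indptr[i+1]],
    # with column positions in indices over the same slice.
    n_rows = len(indptr) - 1
    C = np.zeros((n_rows, B.shape[1]))
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            C[i] += data[k] * B[indices[k]]   # scale row indices[k] of B
    return C
```

The per-row non-zero counts (indptr differences) vary arbitrarily, which is the load-imbalance problem the paper's dynamic thread-block allocation addresses.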
(This article belongs to the Special Issue Compiler and Hardware Design Systems for High-Performance Computing)

20 pages, 1425 KB  
Article
Knowledge Graph Embedding Using a Multi-Channel Interactive Convolutional Neural Network with Triple Attention
by Lin Shi, Weitao Liu, Yafeng Wu, Chenxu Dai, Zhanlin Ji and Ivan Ganchev
Mathematics 2024, 12(18), 2821; https://doi.org/10.3390/math12182821 - 11 Sep 2024
Cited by 3 | Viewed by 2699
Abstract
Knowledge graph embedding (KGE) has been identified as an effective method for link prediction, which involves predicting missing relations or entities based on existing entities or relations. KGE is an important method for implementing knowledge representation and, as such, has been widely used in driving intelligent applications such as question-answering systems, recommendation systems, and relationship extraction. Models based on convolutional neural networks (CNNs) have achieved good results in link prediction. However, as the coverage areas of knowledge graphs expand, the increasing volume of information significantly limits the performance of these models. This article introduces a triple-attention-based multi-channel CNN model, named ConvAMC, for the KGE task. In the embedding representation module, entities and relations are embedded into a complex space and the embeddings are performed in an alternating pattern. This approach helps in capturing richer semantic information and enhances the expressive power of the model. In the encoding module, a multi-channel approach is employed to extract more comprehensive interaction features. A triple attention mechanism and max pooling layers are used to ensure that interactions between spatial dimensions and output tensors are captured during the subsequent tensor concatenation and reshaping process, which allows preserving local and detailed information. Finally, feature vectors are transformed into prediction targets for embedding through the Hadamard product of feature mapping and reshaping matrices. Extensive experiments were conducted to evaluate the performance of ConvAMC on three benchmark datasets compared with state-of-the-art (SOTA) models, demonstrating that the proposed model outperforms all compared models across all evaluation metrics on two of the datasets, and achieves advanced link prediction results on most evaluation metrics on the third dataset.
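Complex-space embeddings scored through element-wise (Hadamard) products are the common backbone of this model family, as in ComplEx. A minimal scoring sketch — this is the standard complex-space score, not ConvAMC's CNN architecture:

```python
import numpy as np

def complex_score(h, r, t):
    # Score a triple (head, relation, tail) of complex embedding vectors:
    # Re(sum_k h_k * r_k * conj(t_k)). The Hadamard product h * r acts as a
    # relation-specific rotation/scaling; conjugating t makes the score
    # asymmetric in head and tail, so directed relations can be modeled.
    return float(np.real(np.sum(h * r * np.conj(t))))
```

Higher scores indicate more plausible triples; link prediction ranks all candidate tails (or heads) by this score.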

17 pages, 6523 KB  
Article
Lightweight Model Development for Forest Region Unstructured Road Recognition Based on Tightly Coupled Multisource Information
by Guannan Lei, Peng Guan, Yili Zheng, Jinjie Zhou and Xingquan Shen
Forests 2024, 15(9), 1559; https://doi.org/10.3390/f15091559 - 4 Sep 2024
Cited by 3 | Viewed by 1550
Abstract
Promoting the deployment and application of embedded systems in complex forest scenarios is an inevitable developmental trend in advanced intelligent forestry equipment. Unstructured roads, which lack effective artificial traffic signs and reference objects, pose significant challenges for driverless technology in forest scenarios, owing to their high nonlinearity and uncertainty. In this research, an unstructured road parameterization construction method, “DeepLab-Road”, based on tight coupling of multisource information is proposed, which aims to provide a new segmented architecture scheme for the embedded deployment of a forestry engineering vehicle driving assistance system. DeepLab-Road utilizes MobileNetV2 as the backbone network, improving the completeness of feature extraction through the inverse residual strategy. It then integrates pluggable modules, including DenseASPP and strip-pooling mechanisms, which connect the dilated convolutions in a denser manner to improve feature resolution without significantly increasing the model size. The boundary pixel tensor expansion is then completed through a cascade of two-dimensional Lidar point cloud information. Combined with the coordinate transformation, a quasi-structured road parameterization model in the vehicle coordinate system is established. The strategy is trained on a self-built Unstructured Road Scene Dataset and transplanted into our intelligent experimental platform to verify its effectiveness. Experimental results show that the system can meet real-time data processing requirements (≥12 frames/s) under low-speed conditions (≤1.5 m/s). For the trackable road centerline, the average matching error between the image and the Lidar was 0.11 m. This study offers valuable technical support for autonomous navigation in unstructured environments without satellite signals or high-precision maps, such as forest product transportation, agricultural and forestry management, autonomous inspection and spraying, nursery stock harvesting, skidding, and transportation.
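The "coordinate transformation" that maps Lidar measurements into the vehicle coordinate system is a rigid transform; a minimal sketch (the calibration rotation R and translation t below are hypothetical values, not the paper's calibration):

```python
import numpy as np

def to_vehicle_frame(points, R, t):
    # Map an N x 3 array of Lidar-frame points into the vehicle frame:
    # p_vehicle = R @ p_lidar + t, applied row-wise.
    return points @ R.T + t
```

With the sensor's extrinsic calibration (R, t), segmented road-boundary pixels back-projected through the Lidar can then be expressed as a road model in vehicle coordinates.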
(This article belongs to the Special Issue Modeling of Vehicle Mobility in Forests and Rugged Terrain)

12 pages, 3079 KB  
Article
Michelson Interferometric Methods for Full Optical Complex Convolution
by Haoyan Kang, Hao Wang, Jiachi Ye, Zibo Hu, Jonathan K. George, Volker J. Sorger, Maria Solyanik-Gorgone and Behrouz Movahhed Nouri
Nanomaterials 2024, 14(15), 1262; https://doi.org/10.3390/nano14151262 - 28 Jul 2024
Cited by 2 | Viewed by 2072
Abstract
Optical real-time data processing is advancing fields like tensor algebra acceleration, cryptography, and digital holography. This technology offers advantages such as reduced complexity through optical fast Fourier transform and passive dot-product multiplication. In this study, the proposed Reconfigurable Complex Convolution Module (RCCM) is capable of independently modulating both phase and amplitude over two million pixels. This research is relevant for applications in optical computing, hardware acceleration, encryption, and machine learning, where precise signal modulation is crucial. We demonstrate simultaneous amplitude and phase modulation of an optical two-dimensional signal in a thin lens’s Fourier plane. Utilizing two spatial light modulators (SLMs) in a Michelson interferometer placed in the focal plane of two Fourier lenses, our system enables full modulation in a 4F system’s Fourier domain. This setup addresses challenges like SLMs’ non-linear inter-pixel crosstalk and variable modulation efficiency. The integration of these technologies in the RCCM contributes to the advancement of optical computing and related fields.
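A 4F system performs convolution by multiplying the signal's spectrum with a (complex) filter in the Fourier plane — exactly the convolution theorem. The digital analogue can be sketched in a few lines (circular convolution; a minimal illustration of the principle, not the optical hardware):

```python
import numpy as np

def fourier_conv2d(img, kernel):
    # Circular 2D convolution via the convolution theorem: transform,
    # multiply point-wise in the Fourier plane (what the SLM pair does
    # optically, with full amplitude + phase control), transform back.
    K = np.fft.fft2(kernel, s=img.shape)   # zero-pad kernel to image size
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))
```

Convolving an impulse reproduces the kernel, the standard sanity check for any convolution implementation.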

15 pages, 3158 KB  
Article
Inferencing Space Travel Pricing from Mathematics of General Relativity Theory, Accounting Equation, and Economic Functions
by Kang-Lin Peng, Xunyue Xue, Liqiong Yu and Yixin Ren
Mathematics 2024, 12(5), 757; https://doi.org/10.3390/math12050757 - 3 Mar 2024
Cited by 2 | Viewed by 2946
Abstract
This study derives space travel pricing by Walrasian Equilibrium, which is logical reasoning from the general relativity theory (GRT), the accounting equation, and economic supply and demand functions. The Cobb–Douglas functions embed the endogenous space factor as new capital to form the space travel firm’s production function, which is also transformed into the consumer’s utility function. Thus, the market equilibrium occurs at the equivalence of supply and demand functions, like the GRT, which presents the equivalence between the spatial geometric tensor and the energy–momentum tensor, explaining the principles of gravity and the motion of space matter in the spacetime framework. The mathematical axiomatic set theory of the accounting equation explains the equity premium effect that causes a short-term accounting equation inequality, then reaches the equivalence by suppliers’ incremental equity through the closing accounts process of the accounting cycle. On the demand side, the consumption of space travel can be assumed as a value at risk (VaR) investment to attain the specific spacetime curvature in an expected orbit. Spacetime market equilibrium is then achieved to construct the space travel pricing model. The methodology of econophysics and the analogy method was applied to infer space travel pricing with the model of profit maximization, single-mindedness, and envy-free pricing in unit-demand markets. A case study with simulation was conducted for empirical verification of the mathematical models and algorithm. The results showed that space travel pricing remains associated with the principle of market equilibrium, but needs to be extended to the spacetime tensor of GRT.
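The Walrasian market-clearing condition (supply equals demand) can be located numerically once both curves are specified. A toy sketch with hypothetical isoelastic curves — not the paper's pricing model or its GRT extension:

```python
def equilibrium_price(demand, supply, lo, hi, tol=1e-10):
    # Bisection on excess demand D(p) - S(p): assumes excess demand is
    # positive at `lo`, negative at `hi`, and crosses zero once between.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if demand(mid) > supply(mid):
            lo = mid        # price too low: demand exceeds supply
        else:
            hi = mid        # price too high: supply exceeds demand
    return 0.5 * (lo + hi)
```

For example, with demand D(p) = 10/p and supply S(p) = p, the curves clear at p* = √10.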

27 pages, 727 KB  
Article
Induced Isotensor Interactions in Heavy-Ion Double-Charge-Exchange Reactions and the Role of Initial and Final State Interactions
by Horst Lenske, Jessica Bellone, Maria Colonna, Danilo Gambacurta and José-Antonio Lay
Universe 2024, 10(2), 93; https://doi.org/10.3390/universe10020093 - 16 Feb 2024
Cited by 3 | Viewed by 2048
Abstract
The role of initial state (ISI) and final state (FSI) ion–ion interactions in heavy-ion double-charge-exchange (DCE) reactions A(Z,N) → A(Z±2,N∓2) is studied for double single-charge-exchange (DSCE) reactions given by sequential actions of the isovector nucleon–nucleon (NN) T-matrix. In momentum representation, the second-order DSCE reaction amplitude is shown to be given in factorized form by projectile and target nuclear matrix elements and a reaction kernel containing ISI and FSI. Expanding the intermediate propagator in a Taylor series with respect to auxiliary energy allows us to perform the summation in the leading-order term over intermediate nuclear states in closure approximation. The nuclear matrix element attains a form given by the products of two-body interactions directly exciting the n² → p² and p² → n² DCE transitions in the projectile and the target nucleus, respectively. A surprising result is that the intermediate propagation induces correlations between the transition vertices, showing that DSCE reactions are a two-nucleon process that resembles a system of interacting spin–isospin dipoles. Transformation of the DSCE NN T-matrix interactions from the reaction theoretical t-channel form to the s-channel operator structure required for spectroscopic purposes is elaborated in detail, showing that, in general, a rich spectrum of spin scalar, spin vector and higher-rank spin tensor multipole transitions will contribute to a DSCE reaction. Similarities (and differences) to two-neutrino double-beta decay (DBD) are discussed. ISI/FSI distortion and absorption effects are illustrated in black sphere approximation and in an illustrative application to data.
(This article belongs to the Section High Energy Nuclear and Particle Physics)

20 pages, 1500 KB  
Article
A New Method for 2D-Adapted Wavelet Construction: An Application in Mass-Type Anomalies Localization in Mammographic Images
by Damian Valdés-Santiago, Angela M. León-Mecías, Marta Lourdes Baguer Díaz-Romañach, Antoni Jaume-i-Capó, Manuel González-Hidalgo and Jose Maria Buades Rubio
Appl. Sci. 2024, 14(1), 468; https://doi.org/10.3390/app14010468 - 4 Jan 2024
Viewed by 2611
Abstract
This contribution presents a wavelet-based algorithm to detect patterns in images. A two-dimensional extension of the DST-II is introduced to construct adapted wavelets using the equation of the tensor product corresponding to the diagonal coefficients in the 2D discrete wavelet transform. A 1D filter was then estimated that meets finite-energy conditions, vanishing moments, orthogonality, and four new detection conditions. These allow the filter, when performing the 2D transform, to detect the pattern by selecting the diagonal coefficients whose normalized similarity measure, as defined by Guido, exceeds 0.7, with α = 0.1. The positions of these coefficients are used to estimate the position of the pattern in the original image. This strategy has been used successfully to detect artificial patterns and localize mass-like abnormalities in digital mammography images. In the case of the latter, high sensitivity and positive predictive value in detection were achieved but not high specificity or negative predictive value, contrary to what occurred in the 1D strategy. This means that the proposed detection algorithm presents a high number of false negatives, which can be explained by the complexity of detection in these types of images.
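The tensor-product structure behind the diagonal subband can be seen with the simplest wavelet. Here Haar stands in for the paper's adapted DST-II-based filters (a minimal sketch; only the separable transform itself is shown, not the similarity-measure detection step):

```python
import numpy as np

def haar_dwt2(X):
    # One level of the separable (tensor-product) 2D Haar transform on an
    # even-sized array: group pixels into 2x2 blocks [a b; c d] and form
    # the four subbands. HH — high-pass in BOTH directions — holds the
    # diagonal coefficients used for pattern detection.
    a = X[0::2, 0::2]; b = X[0::2, 1::2]
    c = X[1::2, 0::2]; d = X[1::2, 1::2]
    LL = (a + b + c + d) / 2   # approximation
    LH = (a - b + c - d) / 2   # detail along one direction
    HL = (a + b - c - d) / 2   # detail along the other direction
    HH = (a - b - c + d) / 2   # diagonal detail
    return LL, LH, HL, HH
```

A checkerboard pattern, for instance, lands entirely in HH with nothing in the other subbands, which is the kind of diagonal-coefficient response the detection conditions exploit; the transform is also orthogonal, so energy is preserved across the four subbands.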
(This article belongs to the Special Issue Artificial Intelligence for Health and Well-Being)

17 pages, 418 KB  
Article
Mean Square Exponential Stability of Stochastic Delay Differential Systems with Logic Impulses
by Chunxiang Li, Lijuan Shen, Fangshu Hui, Wen Luo and Zhongliang Wang
Mathematics 2023, 11(7), 1613; https://doi.org/10.3390/math11071613 - 27 Mar 2023
Cited by 2 | Viewed by 1909
Abstract
This paper focuses on the mean square exponential stability of stochastic delay differential systems with logic impulses. Firstly, a class of nonlinear stochastic delay differential systems with logic impulses is constructed. Then, the logic impulses are transformed into an equivalent algebraic expression by using the semi-tensor product method. Thirdly, the mean square exponential stability criteria of nonlinear stochastic delay differential systems with logic impulses are given. Finally, two kinds of stochastic delay differential systems with logic impulses and uncertain parameters are discussed, and the coefficient conditions guaranteeing the mean square exponential stability of these systems are obtained.
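The semi-tensor product (STP) used to algebraize the logic impulses generalizes ordinary matrix multiplication to factors with mismatched dimensions. A minimal sketch of the standard definition (illustrative only; the paper's logic-dynamics encoding is not reproduced):

```python
import numpy as np

def stp(A, B):
    # Semi-tensor product A ⋉ B: with n = cols(A), p = rows(B) and
    # t = lcm(n, p), compute (A ⊗ I_{t/n}) @ (B ⊗ I_{t/p}).
    # When n == p this reduces to the ordinary product A @ B.
    n, p = A.shape[1], B.shape[0]
    t = int(np.lcm(n, p))
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))
```

Because logical values can be encoded as canonical vectors, Boolean functions become structure matrices acting by STP, which is what turns the logic impulses into an equivalent algebraic expression.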
(This article belongs to the Special Issue Advances of Intelligent Systems)
