Search Results (130)

Search Parameters:
Keywords = low tensor rank

17 pages, 7542 KiB  
Article
Accelerated Tensor Robust Principal Component Analysis via Factorized Tensor Norm Minimization
by Geunseop Lee
Appl. Sci. 2025, 15(14), 8114; https://doi.org/10.3390/app15148114 - 21 Jul 2025
Viewed by 138
Abstract
In this paper, we aim to develop an efficient algorithm for solving the Tensor Robust Principal Component Analysis (TRPCA) problem, which focuses on obtaining a low-rank approximation of a tensor by separating sparse and impulse noise. A common approach is to minimize a convex surrogate of the tensor rank by shrinking its singular values. Due to the existence of various definitions of tensor rank and their corresponding convex surrogates, numerous studies have explored optimal solutions under different formulations. However, many of these approaches suffer from computational inefficiency, primarily due to the repeated use of the tensor singular value decomposition in each iteration. To address this issue, we propose a novel TRPCA algorithm that introduces a new convex relaxation for the tensor norm and computes the low-rank approximation more efficiently. Specifically, we adopt the tensor average rank and tensor nuclear norm, and further relax the tensor nuclear norm into a sum of the tensor Frobenius norms of the factor tensors. By alternating updates of the truncated factor tensors, our algorithm makes efficient use of computational resources. Experimental results demonstrate that our algorithm is significantly faster than existing reference methods known for efficient computation while maintaining high accuracy in recovering low-rank tensors for applications such as color image recovery and background subtraction. Full article
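
As a rough, matrix-level illustration of the factorized relaxation described in this abstract (replacing the nuclear norm by a sum of squared Frobenius norms of two factors and alternating their updates), the sketch below is a simplified analogue only: the paper works with tensors under the t-product, and the penalty weight, iteration count, and function name here are illustrative assumptions.

    import numpy as np

    def factorized_rpca(X, rank, lam=None, n_iter=100):
        # Toy surrogate: min 0.5*(||A||_F^2 + ||B||_F^2) + lam*||S||_1
        #                    + 0.5*||X - A @ B - S||_F^2
        # A, B play the role of the truncated factor tensors; S is the sparse part.
        m, n = X.shape
        if lam is None:
            lam = 1.0 / np.sqrt(max(m, n))
        rng = np.random.default_rng(0)
        A = rng.standard_normal((m, rank))
        B = rng.standard_normal((rank, n))
        S = np.zeros_like(X)
        for _ in range(n_iter):
            # ridge-regularized alternating updates of the factors
            B = np.linalg.solve(A.T @ A + np.eye(rank), A.T @ (X - S))
            A = np.linalg.solve(B @ B.T + np.eye(rank), B @ (X - S).T).T
            # soft-threshold the residual to obtain the sparse component
            R = X - A @ B
            S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
        return A @ B, S

Avoiding a full SVD per iteration, and solving only rank-sized linear systems instead, is exactly the source of the speed-up the abstract claims.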

41 pages, 1327 KiB  
Article
Space-Time Finite Element Tensor Network Approach for the Time-Dependent Convection–Diffusion–Reaction Equation with Variable Coefficients
by Dibyendu Adak, Duc P. Truong, Radoslav Vuchkov, Saibal De, Derek DeSantis, Nathan V. Roberts, Kim Ø. Rasmussen and Boian S. Alexandrov
Mathematics 2025, 13(14), 2277; https://doi.org/10.3390/math13142277 - 15 Jul 2025
Viewed by 176
Abstract
In this paper, we present a new space-time Galerkin-like method, where we treat the discretization of spatial and temporal domains simultaneously. This method utilizes a mixed formulation of the tensor-train (TT) and quantized tensor-train (QTT) (please see Section Tensor-Train Decomposition), designed for the finite element discretization (Q1-FEM) of the time-dependent convection–diffusion–reaction (CDR) equation. We reformulate the assembly process of the finite element discretized CDR to enhance its compatibility with tensor operations and introduce a low-rank tensor structure for the finite element operators. Recognizing the banded structure inherent in the finite element framework’s discrete operators, we further exploit the QTT format of the CDR to achieve greater speed and compression. Additionally, we present a comprehensive approach for integrating variable coefficients of CDR into the global discrete operators within the TT/QTT framework. The effectiveness of the proposed method, in terms of memory efficiency and computational complexity, is demonstrated through a series of numerical experiments, including a semi-linear example. Full article
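
The TT format underlying this abstract can be built, in its generic form, by sequential truncated SVDs (TT-SVD); the sketch below shows only that generic decomposition, not the paper's space-time finite element assembly, and the tolerance and rank-cap parameters are illustrative assumptions. A QTT representation would additionally reshape each mode into factors of 2 before calling the same routine.

    import numpy as np

    def tt_svd(tensor, tol=1e-10, max_rank=None):
        # Decompose a d-way array into tensor-train cores G_1, ..., G_d,
        # each of shape (r_{k-1}, n_k, r_k), via sequential truncated SVDs.
        shape = tensor.shape
        d = len(shape)
        cores, r_prev = [], 1
        C = np.asarray(tensor, dtype=float)
        for k in range(d - 1):
            C = C.reshape(r_prev * shape[k], -1)
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            r = max(1, int(np.sum(s > tol * s[0])))
            if max_rank is not None:
                r = min(r, max_rank)
            cores.append(U[:, :r].reshape(r_prev, shape[k], r))
            C = s[:r, None] * Vt[:r]    # carry the remainder on to the next mode
            r_prev = r
        cores.append(C.reshape(r_prev, shape[-1], 1))
        return cores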

15 pages, 984 KiB  
Article
Tensioned Multi-View Ordered Kernel Subspace Clustering
by Liping Chen and Gongde Guo
Appl. Sci. 2025, 15(13), 7251; https://doi.org/10.3390/app15137251 - 27 Jun 2025
Viewed by 205
Abstract
Multi-view data improve the effectiveness of clustering tasks, but they often suffer from complex noise and corruption. Missing views in multi-view samples lead to serious degradation of the clustering model's performance. Current multi-view clustering methods usually try to compensate for the missing information in the original domain, which is limited by the linear representation function. Moreover, the clustering structures across views are not sufficiently considered, which leads to suboptimal results. To solve these problems, a tensioned multi-view subspace clustering algorithm based on sequential kernels is proposed to integrate complementary information in multi-source heterogeneous data. By stacking the kernel matrices built from the sequential characteristics into a third-order tensor, a robust low-rank representation for the missing views is reconstructed through the matrix computations of sequential kernel learning. Moreover, the tensor structure helps subspace learning mine the high-order associations between different views. The proposed Tensioned Multi-View Ordered Kernel Subspace Clustering (TMOKSC) algorithm is implemented within the ADMM framework. Compared with current representative multi-view clustering algorithms, TMOKSC performs best on many objective measures. In general, the robust sequential kernels capture the latent subspace structure of the tensor fusion. Full article

22 pages, 4021 KiB  
Article
Image Characteristic-Guided Learning Method for Remote-Sensing Image Inpainting
by Ying Zhou, Xiang Gao, Xinrong Wu, Fan Wang, Weipeng Jing and Xiaopeng Hu
Remote Sens. 2025, 17(13), 2132; https://doi.org/10.3390/rs17132132 - 21 Jun 2025
Viewed by 399
Abstract
Inpainting noisy remote-sensing images can reduce the cost of acquiring remote-sensing images (RSIs). Since RSIs contain complex land structure features and concentrated obscured areas, existing inpainting methods often produce color inconsistency and structural smoothing when applied to RSIs with a high missing ratio. To address these problems, inspired by tensor recovery, a lightweight image Inpainting Generative Adversarial Network (GAN) method combining low-rankness and local-smoothness (IGLL) is proposed. IGLL utilizes the low-rankness and local-smoothness characteristics of RSIs to guide the deep-learning inpainting. Based on the strong low-rankness characteristic of RSIs, IGLL fully utilizes the background information for foreground inpainting and constrains the consistency of the key ranks. Based on the local-smoothness characteristic of RSIs, learnable edges and structure priors are designed to enhance the non-smoothness of the results. Specifically, the generator of IGLL consists of a pixel-level reconstruction net (PIRN) and a perception-level reconstruction net (PERN). In PIRN, the proposed global attention module (GAM) establishes long-range pixel dependencies. GAM performs precise normalization and avoids overfitting. In PERN, the proposed flexible feature similarity module (FFSM) computes the similarity between background and foreground features and selects a reasonable feature for recovery. Compared with existing works, FFSM improves the fineness of feature matching. To avoid local over-smoothing in the results, both the generator and discriminator utilize the structure priors and learnable edges to regularize large concentrated missing regions. Additionally, IGLL incorporates mathematical constraints into deep-learning models. A singular value decomposition (SVD) loss term is proposed to model the low-rankness characteristic, and it constrains feature consistency. Extensive experiments demonstrate that the proposed IGLL performs favorably against state-of-the-art methods in terms of reconstruction quality and computation cost, especially on RSIs with high mask ratios. Moreover, our ablation studies reveal the effectiveness of GAM, FFSM, and the SVD loss. Source code is publicly available on GitHub. Full article
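
One plausible reading of the SVD loss term mentioned above is a penalty on the gap between the leading singular values of the restored and reference images; the snippet below is a hedged sketch of that idea in NumPy (the actual IGLL loss, its weighting, and where it enters the network may differ), with `k` an assumed truncation level.

    import numpy as np

    def svd_consistency_loss(pred, target, k=10):
        # Penalize disagreement between the k leading singular values of the
        # predicted and reference images (treated here as 2-D arrays), so the
        # restoration preserves the "key ranks" of the clean image.
        sp = np.linalg.svd(pred, compute_uv=False)[:k]
        st = np.linalg.svd(target, compute_uv=False)[:k]
        return float(np.sum((sp - st) ** 2))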

24 pages, 6467 KiB  
Article
Combining Kronecker-Basis-Representation Tensor Decomposition and Total Variational Constraint for Spectral Computed Tomography Reconstruction
by Xuru Li, Kun Wang, Yan Chang, Yaqin Wu and Jing Liu
Photonics 2025, 12(5), 492; https://doi.org/10.3390/photonics12050492 - 15 May 2025
Viewed by 289
Abstract
Energy spectrum computed tomography (CT) technology based on photon-counting detectors has been widely used in many applications, such as lesion detection and material decomposition. However, severe noise in the reconstructed images affects the accuracy of these applications. Methods based on tensor decomposition can effectively remove noise by exploring the correlation of energy channels, but it is difficult for traditional tensor decomposition methods to describe tensor sparsity and the low-rank properties of all unfolding modes simultaneously. To address this issue, an algorithm for spectral CT reconstruction based on photon-counting detectors is proposed, which combines Kronecker-Basis-Representation (KBR) tensor decomposition and total variation (TV) regularization (namely KBR-TV). The proposed algorithm uses KBR tensor decomposition to unify the sparsity measures of traditional tensor spaces, and constructs a third-order tensor cube through non-local image similarity matching. At the same time, the TV regularization term is introduced into the independent energy-spectrum image domain to enhance the sparsity constraint of single-channel images, effectively reduce artifacts, and improve the accuracy of image reconstruction. The proposed objective minimization model is solved using the split-Bregman algorithm. To evaluate the algorithm's performance, both numerical simulations and realistic preclinical mouse studies were conducted. The findings indicate that the KBR-TV method offers superior enhancement of spectral CT image quality in comparison to several existing methods. Full article
(This article belongs to the Special Issue Biomedical Optics: Imaging, Sensing and Therapy)
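
For readers unfamiliar with the split-Bregman machinery mentioned in the abstract above: the TV subproblem reduces to elementwise soft-thresholding of the image gradients plus Bregman variables. The fragment below sketches only that generic step; the variable names (`u`, `b_x`, `b_y`, `lam`) are hypothetical and the KBR subproblem is omitted.

    import numpy as np

    def shrink(x, t):
        # soft-thresholding, the elementary operation in split-Bregman TV updates
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    # Inside a split-Bregman loop, the TV auxiliary variables would be updated as
    #   d_x = shrink(np.diff(u, axis=0) + b_x, 1.0 / lam)
    #   d_y = shrink(np.diff(u, axis=1) + b_y, 1.0 / lam)
    # with the Bregman variables b_x, b_y then updated from the residuals.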

26 pages, 15657 KiB  
Article
Infrared Small Target Detection Based on Compound Eye Structural Feature Weighting and Regularized Tensor
by Linhan Li, Xiaoyu Wang, Shijing Hao, Yang Yu, Sili Gao and Juan Yue
Appl. Sci. 2025, 15(9), 4797; https://doi.org/10.3390/app15094797 - 25 Apr 2025
Viewed by 406
Abstract
Compared to conventional single-aperture infrared cameras, the bio-inspired infrared compound eye camera integrates the advantages of infrared imaging technology with the benefits of multi-aperture systems, enabling simultaneous information acquisition from multiple perspectives. This enhanced detection capability demonstrates unique performance in applications such as autonomous driving, surveillance, and unmanned aerial vehicle reconnaissance. Current single-aperture small target detection algorithms fail to exploit the spatial relationships among compound eye apertures, thereby underutilizing the inherent advantages of compound eye imaging systems. This paper proposes a low-rank and sparse decomposition method based on bio-inspired infrared compound eye image features for small target detection. Initially, a compound eye structural weighting operator is designed according to image characteristics, which enhances the sparsity of target points when combined with the reweighted l1-norm. Furthermore, to improve detection speed, the structural tensor of the effective imaging region in infrared compound eye images is reconstructed, and the Representative Coefficient Total Variation method is employed to avoid complex singular value decomposition and regularization optimization computations. Our model is efficiently solved using the Alternating Direction Method of Multipliers (ADMM). Experimental results demonstrate that the proposed model can rapidly and accurately detect small infrared targets in bio-inspired compound eye image sequences, outperforming other comparative algorithms. Full article
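
The reweighted l1-norm mentioned above is, in its generic form, soft-thresholding with entrywise weights refreshed from the current estimate (small entries get large weights, sharpening sparsity). The sketch below shows only that standard mechanism; the paper's compound-eye structural weighting operator is an additional multiplicative factor omitted here, and `eps` and `lam` are assumed parameters.

    import numpy as np

    def reweighted_l1_step(T, weights, lam):
        # one weighted soft-thresholding step on the sparse (target) component
        return np.sign(T) * np.maximum(np.abs(T) - lam * weights, 0.0)

    def update_weights(T, eps=1e-3):
        # standard reweighting rule: entries that are currently small are
        # penalized more strongly in the next iteration
        return 1.0 / (np.abs(T) + eps)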

18 pages, 4983 KiB  
Article
Small Defects Detection of Galvanized Strip Steel via Schatten-p Norm-Based Low-Rank Tensor Decomposition
by Shiyang Zhou, Xuguo Yan, Huaiguang Liu and Caiyun Gong
Sensors 2025, 25(8), 2606; https://doi.org/10.3390/s25082606 - 20 Apr 2025
Viewed by 366
Abstract
Accurate and efficient detection of white-spot defects on the surface of galvanized strip steel is one of the most important guarantees of steel product quality. It is a fundamental but "hard" small target detection problem because the defects occupy few pixels in low-contrast images. By fully exploiting the low-rank and sparse prior information of a surface defect image, a Schatten p-norm-based low-rank tensor decomposition (SLRTD) method is proposed to decompose the defect image into a low-rank background, sparse defects, and random noise. Firstly, the original defect images are transformed into a new patch-based tensor mode through data reconstruction to mine valuable information from the defect image. Then, considering the over-shrinkage problem in low-rank component estimation caused by the vanilla nuclear norm and the weighted nuclear norm, a nonlinear reweighting strategy based on the Schatten p-norm is incorporated to improve the decomposition performance. Finally, a solution framework is proposed via a well-designed alternating direction method of multipliers to obtain the white-spot defect target image with a simple segmentation algorithm. A white-spot defect dataset from a real-world galvanized strip steel production line is constructed, and the experimental results demonstrate that the proposed SLRTD method outperforms existing state-of-the-art methods qualitatively and quantitatively. Full article
(This article belongs to the Special Issue Sensing and Imaging for Defect Detection: 2nd Edition)
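
The Schatten p-norm reweighting idea above can be sketched as reweighted singular-value shrinkage: each singular value is shrunk with a weight proportional to p*sigma^(p-1), which penalizes small singular values harder than the plain nuclear norm and so counteracts over-shrinkage of the dominant ones. The code below is a generic sketch under that assumption, not the exact SLRTD update; `p`, `eps`, and the inner iteration count are illustrative.

    import numpy as np

    def schatten_p_shrink(M, lam, p=0.5, eps=1e-6, n_inner=3):
        # reweighted singular-value thresholding approximating a Schatten-p penalty
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s_new = s.copy()
        for _ in range(n_inner):
            w = p * (s_new + eps) ** (p - 1.0)   # small sigmas -> large weights
            s_new = np.maximum(s - lam * w, 0.0)
        return U @ np.diag(s_new) @ Vt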

31 pages, 7540 KiB  
Article
Temporal Denoising of Infrared Images via Total Variation and Low-Rank Bidirectional Twisted Tensor Decomposition
by Zhihao Liu, Weiqi Jin and Li Li
Remote Sens. 2025, 17(8), 1343; https://doi.org/10.3390/rs17081343 - 9 Apr 2025
Viewed by 741
Abstract
Temporal random noise (TRN) in uncooled infrared detectors significantly degrades image quality. Existing denoising techniques primarily address fixed-pattern noise (FPN) and do not effectively mitigate TRN. Therefore, a novel TRN denoising approach based on total variation regularization and low-rank tensor decomposition is proposed. This method effectively suppresses temporal noise by introducing twisted tensors in both the horizontal and vertical directions while preserving spatial information in diverse orientations to protect image details and textures. Additionally, a Laplacian operator-based bidirectional twisted tensor truncated nuclear norm (bt-LPTNN) is proposed, which automatically assigns weights to different singular values according to their importance. Furthermore, a weighted spatiotemporal total variation regularization method for nonconvex tensor approximation is employed to preserve scene details. To recover spatial-domain information lost during tensor estimation, robust principal component analysis is employed to extract spatial information from the noise tensor. The proposed model, bt-LPTVTD, is solved using an augmented Lagrange multiplier algorithm and outperforms several state-of-the-art algorithms. Compared to some of the latest algorithms, bt-LPTVTD demonstrates improvements across all evaluation metrics. Extensive experiments conducted on complex scenes underscore the strong adaptability and robustness of our algorithm. Full article
(This article belongs to the Special Issue Recent Advances in Infrared Target Detection)

23 pages, 62859 KiB  
Article
Seismic Random Noise Attenuation via Low-Rank Tensor Network
by Taiyin Zhao, Luoxiao Ouyang and Tian Chen
Appl. Sci. 2025, 15(7), 3453; https://doi.org/10.3390/app15073453 - 21 Mar 2025
Viewed by 419
Abstract
Seismic data are easily contaminated by random noise, impairing subsequent geological interpretation tasks. Existing denoising methods such as low-rank approximation (LRA) and deep learning (DL) show promising denoising capabilities but still have limitations; for instance, LRA performance is parameter-sensitive, and DL networks lack interpretability. As an alternative, this paper introduces the low-rank tensor network (LRTNet), an innovative approach that integrates low-rank tensor approximation (LRTA) with DL. Our method involves constructing a noise attenuation model that leverages LRTA, total variation (TV) regularization, and weighted tensor nuclear norm minimization (WTNNM). By applying the alternating direction method of multipliers (ADMM), we solve the model and transform the iterative schemes into a DL framework, where each iteration corresponds to a network layer. The key learnable parameters, including weights and thresholds, are optimized using labeled data to enhance performance. Quantitative evaluations on synthetic data reveal that LRTNet achieves an average signal-to-noise ratio (SNR) of 9.37 dB on the validation set, outperforming Pyseistr (6.46 dB) and TNN-SSTV (6.10 dB) by 45.0% and 53.6%, respectively. Furthermore, tests on real field datasets demonstrate consistent enhancements in noise suppression while preserving critical stratigraphic structures and fault discontinuities. The embedded LRTA mechanism not only improves network interpretability, but also reduces parameter sensitivity compared to conventional LRA methods. These findings position LRTNet as a robust, physics-aware solution for seismic data restoration. Full article

18 pages, 901 KiB  
Article
A Hierarchical Latent Modulation Approach for Controlled Text Generation
by Jincheng Zou, Guorong Chen, Jian Wang, Bao Zhang, Hong Hu and Cong Liu
Mathematics 2025, 13(5), 713; https://doi.org/10.3390/math13050713 - 22 Feb 2025
Viewed by 953
Abstract
Generative models based on Variational Autoencoders (VAEs) represent an important area of research in Controllable Text Generation (CTG). However, existing approaches often do not fully exploit the potential of latent variables, leading to limitations in both the diversity and thematic consistency of the generated text. To overcome these challenges, this paper introduces a new framework based on Hierarchical Latent Modulation (HLM). The framework incorporates a hierarchical latent space modulation module for the generation and embedding of conditional modulation parameters. By using low-rank tensor factorization (LMF), the approach combines multi-layer latent variables and generates modulation parameters based on conditional labels, enabling precise control over the features during text generation. Additionally, layer-by-layer normalization and random dropout mechanisms are employed to address issues such as the under-utilization of conditional information and the collapse of generative patterns. We performed experiments on five baseline models based on VAEs for conditional generation, and the results demonstrate the effectiveness of the proposed framework. Full article
(This article belongs to the Special Issue Mathematical Foundations in NLP: Applications and Challenges)

25 pages, 12377 KiB  
Article
Exploiting Weighted Multidirectional Sparsity for Prior Enhanced Anomaly Detection in Hyperspectral Images
by Jingjing Liu, Jiashun Jin, Xianchao Xiu, Wanquan Liu and Jianhua Zhang
Remote Sens. 2025, 17(4), 602; https://doi.org/10.3390/rs17040602 - 10 Feb 2025
Cited by 1 | Viewed by 684
Abstract
Anomaly detection (AD) is an important topic in remote sensing, aiming to identify unusual or abnormal features within the data. However, most existing low-rank representation methods use the nuclear norm for background estimation and do not consider the different contributions of different singular values. In addition, they overlook the spatial relationships of abnormal regions, particularly failing to fully leverage the 3D structural information of the data. Moreover, noise in practical scenarios can disrupt the low-rank structure of the background, making it challenging to separate anomalies from the background and ultimately reducing detection accuracy. To address these challenges, this paper proposes a weighted multidirectional sparsity regularized low-rank tensor representation method (WMS-LRTR) for AD. WMS-LRTR uses the weighted tensor nuclear norm for background estimation to characterize the low-rank property of the background. Considering the correlation between abnormal pixels across different dimensions, the proposed method introduces a novel weighted multidirectional sparsity (WMS) term that unfolds the anomaly tensor along multiple modes to better exploit the sparsity of the anomaly. To improve the robustness of AD, we further embed a user-friendly plug-and-play (PnP) denoising prior to optimize the background modeling under the low-rank structure and facilitate the separation of sparse anomalous regions. Furthermore, an effective iterative algorithm based on the alternating direction method of multipliers (ADMM) is introduced, whose subproblems can be solved quickly by fast solvers or have closed-form solutions. Numerical experiments on various datasets show that WMS-LRTR outperforms state-of-the-art AD methods, demonstrating its better detection ability. Full article

23 pages, 979 KiB  
Article
Hyperspectral Band Selection via Tensor Low Rankness and Generalized 3DTV
by Katherine Henneberger and Jing Qin
Remote Sens. 2025, 17(4), 567; https://doi.org/10.3390/rs17040567 - 7 Feb 2025
Cited by 1 | Viewed by 1006
Abstract
Hyperspectral band selection plays a key role in reducing the high dimensionality of data while maintaining essential details. However, existing band selection methods often encounter challenges such as high memory consumption, the need for data matricization that disrupts inherent data structures, and difficulties in preserving crucial spatial–spectral relationships. To address these challenges, we propose a tensor-based band selection model using Generalized 3D Total Variation (G3DTV), which applies the ℓ1^p norm to promote smoothness across the spatial and spectral dimensions. Based on the Alternating Direction Method of Multipliers (ADMM), we develop an efficient hyperspectral band selection algorithm in which the tensor low-rank structure is captured through a tensor CUR decomposition, significantly improving computational efficiency. Numerical experiments on benchmark datasets demonstrate that our method outperforms other state-of-the-art approaches. In addition, we provide practical guidelines for parameter tuning in both noise-free and noisy data scenarios. We also discuss computational complexity trade-offs, explore parameter selection using grid search and Bayesian Optimization, and extend our analysis to evaluate performance with additional classifiers. These results further validate the robustness and accuracy of the proposed model. Full article
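
The computational advantage of the CUR decomposition mentioned above comes from touching only sampled fibers instead of computing a full SVD. The matrix-level sketch below conveys just that sampling idea (the paper uses a tensor CUR variant); uniform sampling and the function name are assumptions.

    import numpy as np

    def matrix_cur(A, n_cols, n_rows, seed=0):
        # Sample columns and rows, then form the core from their intersection,
        # so that C @ U @ R approximates A without a full SVD.
        rng = np.random.default_rng(seed)
        cols = rng.choice(A.shape[1], size=n_cols, replace=False)
        rows = rng.choice(A.shape[0], size=n_rows, replace=False)
        C, R = A[:, cols], A[rows, :]
        U = np.linalg.pinv(C[rows, :])
        return C, U, R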

16 pages, 9114 KiB  
Article
Low-Rank Tensor Recovery Based on Nonconvex Geman Norm and Total Variation
by Xinhua Su, Huixiang Lin, Huanmin Ge and Yifan Mei
Electronics 2025, 14(2), 238; https://doi.org/10.3390/electronics14020238 - 8 Jan 2025
Viewed by 1008
Abstract
Tensor restoration finds applications in various fields, including data science, image processing, and machine learning, where the global low-rank property is a crucial prior. As the convex relaxation of the tensor rank function, the traditional tensor nuclear norm directly sums all the singular values of a tensor. To account for the variations among singular values, nonconvex regularizations have been proposed to approximate the tensor rank function more effectively, leading to improved recovery performance. In addition, the local characteristics of a tensor can further improve detail recovery. Recently, the gradient tensor has been explored to effectively capture the smoothness property across tensor dimensions. However, previous studies considered the gradient tensor only within the context of the nuclear norm. To better represent the global low-rank property and local smoothness of tensors simultaneously, we propose a novel regularization, the Tensor-Correlated Total Variation (TCTV), based on the nonconvex Geman norm and total variation. Specifically, the proposed method minimizes the nonconvex Geman norm on the singular values of the gradient tensor. It enhances the recovery performance of a low-rank tensor by simultaneously reducing estimation bias, improving approximation accuracy, preserving fine-grained structural details, and maintaining good computational efficiency compared to traditional convex regularizations. Based on the proposed TCTV regularization, we develop TC-TCTV and TRPCA-TCTV models to solve completion and denoising problems, respectively. The proposed models are solved by the Alternating Direction Method of Multipliers (ADMM), and the complexity and convergence of the algorithm are analyzed. Extensive numerical results on multiple datasets validate the superior recovery performance of our method, even in extreme conditions with high missing rates. Full article
(This article belongs to the Special Issue Image Fusion and Image Processing)
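
For concreteness, the Geman function applied to a singular value sigma is typically of the form sigma / (sigma + gamma): it grows quickly for small sigma but saturates for large sigma, so dominant singular values are barely penalized, which is the bias reduction claimed in the abstract above. The snippet below sketches one reweighted shrinkage step built on that surrogate; the exact weighting and its application to the gradient tensor inside TCTV are not reproduced here, and `gamma`, `lam` are assumed parameters.

    import numpy as np

    def geman_weighted_shrink(M, lam, gamma=1.0):
        # weights follow the Geman surrogate's slope: large singular values get
        # tiny weights (little shrinkage), small ones get large weights
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        w = gamma / (s + gamma) ** 2
        s_new = np.maximum(s - lam * w, 0.0)
        return U @ np.diag(s_new) @ Vt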

22 pages, 15185 KiB  
Article
Low Tensor Rank Constrained Image Inpainting Using a Novel Arrangement Scheme
by Shuli Ma, Youchen Fan, Shengliang Fang, Weichao Yang and Li Li
Appl. Sci. 2025, 15(1), 322; https://doi.org/10.3390/app15010322 - 31 Dec 2024
Viewed by 837
Abstract
Employing low tensor rank decomposition in image inpainting has attracted increasing attention. This study exploits novel tensor arrangement schemes to transform an image (a low-order tensor) into a higher-order tensor without changing the total number of pixels. The developed arrangement schemes enhance the low-rankness of images under three tensor decomposition methods: matrix SVD, tensor train (TT) decomposition, and tensor singular value decomposition (t-SVD). By exploiting the schemes, we solve the image inpainting problem with three low-rank constrained models that use the matrix rank, TT rank, and tubal rank as constrained priors. The tensor tubal rank and the tensor train multi-rank are derived from t-SVD and TT decomposition, respectively. ADMM algorithms are then employed to solve the three models efficiently. Experimental results demonstrate that our methods are effective for image inpainting and superior to numerous closely related methods. Full article
(This article belongs to the Special Issue AI-Based Image Processing: 2nd Edition)
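
The arrangement idea above, folding an image into a higher-order tensor without changing the pixel count, can be illustrated with a plain reshape that splits each spatial axis into several shorter axes. The paper's actual schemes may permute or interleave the resulting axes differently; the factorization below and the example sizes are assumptions.

    import numpy as np

    def to_higher_order(img, h_factors, w_factors):
        # e.g. a 256x256 image with factors (4, 4, 4, 4) on each axis becomes an
        # 8-way tensor of shape (4,)*8, with exactly the same number of pixels
        assert np.prod(h_factors) == img.shape[0]
        assert np.prod(w_factors) == img.shape[1]
        return img.reshape(*h_factors, *w_factors)

    # usage sketch:
    # img8 = to_higher_order(np.zeros((256, 256)), (4, 4, 4, 4), (4, 4, 4, 4))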

19 pages, 7418 KiB  
Article
Nonconvex Nonlinear Transformation of Low-Rank Approximation for Tensor Completion
by Yifan Mei, Xinhua Su, Huixiang Lin and Huanmin Ge
Appl. Sci. 2024, 14(24), 11895; https://doi.org/10.3390/app142411895 - 19 Dec 2024
Viewed by 1027
Abstract
Recovering incomplete high-dimensional data to create complete and valuable datasets is the main focus of tensor completion research, which lies at the intersection of mathematics and information science. Researchers typically apply various linear and nonlinear transformations to the original tensor, using regularization terms such as the nuclear norm for low-rank approximation. However, relying solely on the tensor nuclear norm can lead to suboptimal solutions because the convex relaxation of the tensor rank deviates from the original rank-minimization objective. To tackle these issues, we introduce the low-rank approximation nonconvex nonlinear transformation (LRANNT) method. By employing nonconvex norms and nonlinear transformations, we can more accurately capture the intrinsic structure of tensors, providing a more effective solution to the tensor completion problem. Additionally, we propose the proximal alternating minimization (PAM) algorithm to solve the model and demonstrate its convergence. Tests on publicly available datasets show that our method outperforms current state-of-the-art approaches, even under extreme conditions with a missing rate of up to 97.5%. Full article
