Search Results (8)

Search Parameters:
Keywords = vector-circulant matrices

20 pages, 346 KB  
Article
Symmetry and Attention Dynamics in Ducci-Generated Jacobsthal Circulant Matrices
by Bahar Kuloğlu, Taras Goy and Engin Özkan
Symmetry 2026, 18(3), 520; https://doi.org/10.3390/sym18030520 - 18 Mar 2026
Viewed by 132
Abstract
A Ducci sequence generated by the vector A = (a_1, a_2, ..., a_n) ∈ Z^n is defined by (A, DA, D^2A, D^3A, ...), where the Ducci map D : Z^n → Z^n is given by DA = (|a_2 - a_1|, |a_3 - a_2|, ..., |a_n - a_{n-1}|, |a_1 - a_n|). In this paper, we examine the impact of iterative Ducci transformations on Jacobsthal numbers and construct circulant and skew-circulant matrices generated by the resulting sequences. Their properties are investigated through matrix norms (Euclidean (Frobenius), spectral, and l_p), determinants, and eigenvalues. To extend the classical analysis, we incorporate the Convolutional Block Attention Module (CBAM) from deep learning and interpret the structured matrices as simulated image inputs. By analyzing channel-attention vectors and their variances, we assess how successive Ducci transformations influence attention distribution. The first-order transformation produces greater variance in attention weights, indicating enhanced feature discrimination, whereas higher-order transformations promote a more balanced distribution. The results highlight how Ducci transformations influence attention variance in structured matrices.
(This article belongs to the Special Issue Symmetry in Combinatorics and Discrete Mathematics, 2nd Edition)
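The Ducci map defined in this abstract is simple to state concretely. A minimal NumPy sketch (the starting vector of the first four Jacobsthal numbers is an illustrative choice, not taken from the paper's experiments):

```python
import numpy as np

def ducci(a):
    """One Ducci step: DA = (|a2 - a1|, |a3 - a2|, ..., |a1 - an|)."""
    a = np.asarray(a, dtype=np.int64)
    return np.abs(np.roll(a, -1) - a)

# Iterate from the first four Jacobsthal numbers (0, 1, 1, 3).
# For n a power of two, Ducci sequences over Z^n reach the zero vector.
v = np.array([0, 1, 1, 3])
for _ in range(20):
    v = ducci(v)
print(v)  # [0 0 0 0]
```

The paper's circulant and skew-circulant matrices are then generated from such iterates, each D^k(A) supplying a generating row.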

18 pages, 399 KB  
Article
Quantum Algorithms for the Multiplication of Circulant Matrices and Vectors
by Lu Hou, Zhenyu Huang and Chang Lv
Information 2024, 15(8), 453; https://doi.org/10.3390/info15080453 - 1 Aug 2024
Cited by 1 | Viewed by 2805
Abstract
This article presents two quantum algorithms for computing the product of a circulant matrix and a vector. The arithmetic complexity of the first algorithm is O(N log^2 N) in most cases. For the second algorithm, when the entries in the circulant matrix and the vector take values in C or R, the complexity is O(N log^2 N) in most cases. However, when these entries take values from the positive real numbers, the complexity is reduced to O(log^3 N) in most cases, which presents an exponential speedup compared to the classical complexity of O(N log N) for computing the product of a circulant matrix and a vector. We apply this algorithm to the convolution calculation in quantum convolutional neural networks, which effectively accelerates the computation of convolutions. Additionally, we present a concrete quantum circuit structure for quantum convolutional neural networks.
(This article belongs to the Special Issue Quantum Information Processing and Machine Learning)
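For context, the classical O(N log N) baseline that the abstract compares against exploits the fact that circulant matrices are diagonalized by the DFT. A minimal NumPy sketch (the 4×4 example values are illustrative, not from the paper):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply circ(c) (first column c) by x via the FFT.

    A circulant matrix is diagonalized by the DFT, so C @ x equals the
    circular convolution of c and x: O(N log N) instead of O(N^2).
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

c = np.array([1.0, 2.0, 3.0, 4.0])
x = np.array([1.0, 0.0, 0.0, 1.0])
# Dense reference: C[i, j] = c[(i - j) mod N].
C = np.array([[c[(i - j) % 4] for j in range(4)] for i in range(4)])
assert np.allclose(circulant_matvec(c, x), C @ x)
```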

19 pages, 615 KB  
Article
On Block g-Circulant Matrices with Discrete Cosine and Sine Transforms for Transformer-Based Translation Machine
by Euis Asriani, Intan Muchtadi-Alamsyah and Ayu Purwarianti
Mathematics 2024, 12(11), 1697; https://doi.org/10.3390/math12111697 - 29 May 2024
Viewed by 2038
Abstract
The transformer has emerged as one of the modern neural networks applied in numerous applications. However, its large and deep architecture makes it computationally and memory-intensive. In this paper, we propose block g-circulant matrices to replace the dense weight matrices in the feedforward layers of the transformer, and leverage the DCT-DST algorithm to multiply these matrices with the input vector. Our tests on Portuguese-English datasets show that the suggested method improves model memory efficiency compared to the dense transformer, at the cost of a slight drop in accuracy. We found that the Dense-block 1-circulant DCT-DST model of 128 dimensions achieved the highest model memory efficiency, at 22.14%. We further show that the same model achieved a BLEU score of 26.47%.
(This article belongs to the Special Issue Applications of Mathematics in Neural Networks and Machine Learning)

20 pages, 1376 KB  
Article
A Scalable Matrix Computing Unit Architecture for FPGA, and SCUMO User Design Interface
by Asgar Abbaszadeh, Taras Iakymchuk, Manuel Bataller-Mompeán, Jose V. Francés-Villora and Alfredo Rosado-Muñoz
Electronics 2019, 8(1), 94; https://doi.org/10.3390/electronics8010094 - 15 Jan 2019
Cited by 9 | Viewed by 7211
Abstract
High-dimensional matrix algebra is essential in numerous signal processing and machine learning algorithms. This work describes a scalable square matrix-computing unit designed on the basis of circulant matrices. It optimizes the data flow for the computation of any sequence of matrix operations, removing the need to move intermediate results, and performs individual matrix operations in direct or transposed form (the transpose operation only requires a data-addressing modification). The allowed matrix operations are: matrix-by-matrix addition, subtraction, dot product, and multiplication; matrix-by-vector multiplication; and matrix-by-scalar multiplication. The proposed architecture is fully scalable, with the maximum matrix dimension limited only by the available resources. In addition, a design environment is developed that assists, through a friendly interface, from the customization of the hardware computing unit to the generation of the final synthesizable IP core. For N × N matrices, the architecture requires N ALU-RAM blocks and performs in O(N^2) time, requiring N^2 + 7 and N + 7 clock cycles for matrix-matrix and matrix-vector operations, respectively. For the tested Virtex7 FPGA device, the computation on 500 × 500 matrices allows a maximum clock frequency of 346 MHz, achieving an overall performance of 173 GOPS. This architecture shows higher performance than other state-of-the-art matrix computing units.
(This article belongs to the Special Issue Hardware and Architecture)

18 pages, 918 KB  
Article
Accelerating Deep Neural Networks by Combining Block-Circulant Matrices and Low-Precision Weights
by Zidi Qin, Di Zhu, Xingwei Zhu, Xuan Chen, Yinghuan Shi, Yang Gao, Zhonghai Lu, Qinghong Shen, Li Li and Hongbing Pan
Electronics 2019, 8(1), 78; https://doi.org/10.3390/electronics8010078 - 10 Jan 2019
Cited by 6 | Viewed by 5612
Abstract
As a key ingredient of deep neural networks (DNNs), fully-connected (FC) layers are widely used in various artificial intelligence applications. However, FC layers contain many parameters, so their efficient processing is restricted by memory bandwidth. In this paper, we propose a compression approach combining block-circulant matrix-based weight representation and power-of-two quantization. Applying block-circulant matrices in FC layers reduces the storage complexity from O(k^2) to O(k). By quantizing the weights into integer powers of two, the multiplications in inference can be replaced by shift and add operations. The memory usage of models for MNIST, CIFAR-10, and ImageNet can be compressed by 171×, 2731×, and 128×, respectively, with minimal accuracy loss. A configurable parallel hardware architecture is then proposed for processing the compressed FC layers efficiently. Without multipliers, a block matrix-vector multiplication module (B-MV) is used as the computing kernel. The architecture is flexible enough to support FC layers of various compression ratios with a small footprint. Simultaneously, memory access can be significantly reduced by using the configurable architecture. Measurement results show that the accelerator has a processing power of 409.6 GOPS and achieves 5.3 TOPS/W energy efficiency at 800 MHz.
(This article belongs to the Section Computer Science & Engineering)
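The power-of-two quantization mentioned above can be sketched as follows. This is a minimal illustration (rounding to the nearest power of two in the log domain), not the paper's exact quantizer or bit-width handling:

```python
import numpy as np

def quantize_pow2(w):
    """Quantize nonzero weights to the nearest power of two in the
    log domain, so each multiply becomes a sign flip plus a bit shift."""
    w = np.asarray(w, dtype=np.float64)
    mag = np.abs(w)
    exp = np.round(np.log2(np.where(mag > 0, mag, 1.0)))
    return np.where(mag > 0, np.sign(w) * np.exp2(exp), 0.0)

w = np.array([0.30, -0.12, 0.55, 0.0])
q = quantize_pow2(w)  # quantized to 0.25, -0.125, 0.5, 0.0
```

In a shift-add datapath only the integer exponents and signs would be stored, so a weight such as 0.25 = 2^-2 costs a 2-position shift rather than a multiplier.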

18 pages, 1073 KB  
Article
Minimal-Entanglement Entanglement-Assisted Quantum Error Correction Codes from Modified Circulant Matrices
by Duc Manh Nguyen and Sunghwan Kim
Symmetry 2017, 9(7), 122; https://doi.org/10.3390/sym9070122 - 18 Jul 2017
Cited by 15 | Viewed by 6709
Abstract
In this paper, new construction methods for entanglement-assisted quantum error correction codes (EAQECCs) from circulant matrices are proposed. We first construct the matrices from two vectors of constrained size and determine the isotropic subgroup. We then propose a method for calculating the entanglement subgroup, based on standard forms of binary matrices, to satisfy the constraint conditions of the EAQECC. With the isotropic and entanglement subgroups, we determine all the parameters and the minimum distance of the EAQECC. Proposed EAQECCs of small length are presented to illustrate the practicality of this construction. Comparison with some earlier constructions shows that the proposed EAQECCs perform better.

7 pages, 222 KB  
Article
Vector-Circulant Matrices and Vector-Circulant Based Additive Codes over Finite Fields
by Somphong Jitman
Information 2017, 8(3), 82; https://doi.org/10.3390/info8030082 - 10 Jul 2017
Cited by 3 | Viewed by 4076
Abstract
Circulant matrices have attracted interest due to their rich algebraic structures and various applications. In this paper, the concept of vector-circulant matrices over finite fields is studied as a generalization of circulant matrices. The algebraic characterization of such matrices is discussed. As applications, constructions of vector-circulant-based additive codes over finite fields are given, together with some examples of optimal additive codes over F_4.
(This article belongs to the Section Information Theory and Methodology)
12 pages, 276 KB  
Article
On the Complexity Reduction of Coding WSS Vector Processes by Using a Sequence of Block Circulant Matrices
by Jesús Gutiérrez-Gutiérrez, Marta Zárraga-Rodríguez, Xabier Insausti and Bjørn O. Hogstad
Entropy 2017, 19(3), 95; https://doi.org/10.3390/e19030095 - 2 Mar 2017
Cited by 6 | Viewed by 3777
Abstract
In the present paper, we obtain a result on the rate-distortion function (RDF) of wide-sense stationary (WSS) vector processes that allows us to reduce the complexity of coding those processes. To achieve this result, we propose a sequence of block circulant matrices. In addition, we use the proposed sequence to reduce the complexity of filtering WSS vector processes.
(This article belongs to the Section Information Theory, Probability and Statistics)