Article

An Algorithm for Fast Multiplication of Kaluza Numbers

by Aleksandr Cariow, Galina Cariowa and Janusz P. Paplinski *,†
Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, Szczecin, Żołnierska 49, 71-210 Szczecin, Poland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2021, 11(17), 8203; https://doi.org/10.3390/app11178203
Submission received: 6 August 2021 / Revised: 30 August 2021 / Accepted: 1 September 2021 / Published: 3 September 2021
(This article belongs to the Special Issue Advanced Information Processing Methods and Their Applications)

Abstract
This paper presents a new algorithm for multiplying two Kaluza numbers. Performing this operation directly requires 1024 real multiplications and 992 real additions. In a previous paper, we presented an effective algorithm that computes the same result with only 512 real multiplications and 576 real additions; no more efficient solutions have been proposed since. Nevertheless, an even more interesting solution can be found that further reduces the computational complexity of this operation. In this article, we propose a new algorithm that allows one to calculate the product of two Kaluza numbers using only 192 multiplications and 384 additions of real numbers.

1. Introduction

The permanent development of the theory and practice of data processing, as well as the need to solve increasingly complex problems of computational intelligence, inspires the use of advanced mathematical methods and formalisms to represent and process big multidimensional data arrays. A convenient formalism for representing such arrays is the high-dimensional number system. High-dimensional number systems have long been used in physics and mathematics for modeling complex systems and physical phenomena. Today, hypercomplex numbers [1] are also used in various fields of data processing, including digital signal and image processing, machine graphics, telecommunications, and cryptography [2,3,4,5,6,7,8,9,10]. However, their use in brain-inspired computation and neural networks has been largely limited due to the lack of comprehensive and all-inclusive information processing and deep learning techniques. Although there have been a number of research articles addressing the use of quaternions and octonions, higher-dimensional numbers remain a largely open problem [11,12,13,14,15,16,17,18,19,20,21,22]. Recently, open-access articles presenting sedenion-based neural networks have appeared [23,24], and the expediency of using numerical systems of higher dimensions was also noted there. Thus, the object of our research is hypercomplex-valued convolutional neural networks using 32-dimensional Kaluza numbers.
In advanced hypercomplex-valued convolutional neural networks, multiplying hypercomplex numbers is the most time-consuming arithmetic operation. The reason is that the addition of two $N$-dimensional hypercomplex numbers requires $N$ real additions, while their multiplication requires $N(N-1)$ real additions and $N^2$ real multiplications. It is easy to see that increasing the dimension of the hypercomplex numbers increases the computational complexity of the multiplication. Therefore, reducing the computational complexity of the multiplication of hypercomplex numbers is an important scientific and engineering problem. The original algorithm for computing the product of Kaluza numbers was described in [25], but we have found a more efficient solution. The purpose of this article is to present our new solution.
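To make the cost of the direct approach concrete, the following sketch (a minimal illustration, not part of the original paper) multiplies two $N$-dimensional hypercomplex numbers given a signed multiplication table of the imaginary units, such as the one defined by Tables 1–4 below; the encoding of that table as the index array `idx` and sign array `sgn` is an assumption of this sketch.

```python
import numpy as np

def direct_hypercomplex_product(a, b, idx, sgn):
    """Direct (schoolbook) product of two N-dimensional hypercomplex numbers.

    a, b     : length-N coefficient vectors.
    idx, sgn : N x N integer arrays encoding the unit products,
               e_p * e_q = sgn[p, q] * e_{idx[p, q]}  (sgn is +1 or -1),
               i.e., a hypothetical encoding of a table such as Tables 1-4.
    The double loop performs N*N real multiplications and, apart from the
    first accumulation into each zero-initialized slot, N*(N-1) real additions.
    """
    n = len(a)
    c = np.zeros(n)
    for p in range(n):
        for q in range(n):
            term = a[p] * b[q]                        # one real multiplication
            c[idx[p, q]] += term if sgn[p, q] > 0 else -term
    return c
```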

2. Preliminary Remarks

In all likelihood, the rules for constructing Kaluza numbers were first described in [26]. In article [25], based on these rules, a multiplication table for the imaginary units of the Kaluza number was constructed. A Kaluza number is defined as follows:
$d = d_0 + \sum_{n=1}^{31} d_n e_n$,
where $N = 2^m - 1$, $\{d_n\}$, $n = 1, 2, \ldots, 31$, are real numbers, and $\{e_n\}$, $n = 1, 2, \ldots, 31$, are the imaginary units.
Imaginary units $e_1, e_2, \ldots, e_m$ are called principal, and the remaining imaginary units are expressed through them using the formula:
$e_s = e_p e_q \cdots e_r$,
where $1 \le p < q < \cdots < r \le m$.
All possible products of imaginary units are determined by the following rules:
$e_p^2 = \epsilon_p$; $\quad e_q e_p = \alpha_{pq} e_p e_q$, $p < q$; $\quad p, q = 1, 2, \ldots, m$.
For Kaluza numbers [26]:
$m = 5$, $\quad \epsilon_1 = \epsilon_2 = 1$, $\quad \epsilon_3 = \epsilon_4 = \epsilon_5 = -1$, $\quad \alpha_{pq} = -1$.
Using the above rules, the results of all possible products of imaginary units of Kaluza numbers can be summarized in the following tables [25]: Table 1, Table 2, Table 3 and Table 4. For convenience of notation, we represent each element $e_i$ in the tables by its subscript $i$, setting $i = e_i$.
Suppose we want to compute the product of two Kaluza numbers:
$d = d^{(1)} d^{(2)} = d_0 + \sum_{n=1}^{31} d_n e_n$,
where
$d^{(1)} = a_0 + \sum_{n=1}^{31} a_n e_n$ and $d^{(2)} = b_0 + \sum_{n=1}^{31} b_n e_n$.
The operation of the multiplication of Kaluza numbers can be represented more compactly in the form of a matrix-vector product:
$\mathbf{Y}_{32 \times 1} = \mathbf{B}_{32} \mathbf{X}_{32 \times 1}$,
where $\mathbf{Y}_{32 \times 1} = [d_0, d_1, \ldots, d_{31}]^T$, $\mathbf{X}_{32 \times 1} = [a_0, a_1, \ldots, a_{31}]^T$,
$\mathbf{B}_{32} = \begin{bmatrix} \mathbf{B}_{16}^{(0,0)} & \mathbf{B}_{16}^{(1,0)} \\ \mathbf{B}_{16}^{(0,1)} & \mathbf{B}_{16}^{(1,1)} \end{bmatrix}$,
B 16 ( 0 , 0 ) = b 0 b 1 b 2 b 3 b 4 b 5 b 6 b 7 b 8 b 9 b 10 b 11 b 12 b 13 b 14 b 15 b 1 b 0 b 6 b 7 b 8 b 9 b 2 b 3 b 4 b 5 b 16 b 17 b 18 b 19 b 20 b 21 b 2 b 6 b 0 b 10 b 11 b 12 b 1 b 16 b 17 b 18 b 3 b 4 b 5 b 22 b 23 b 24 b 3 b 7 b 10 b 0 b 13 b 14 b 16 b 1 b 19 b 20 b 2 b 22 b 23 b 4 b 5 b 25 b 4 b 8 b 11 b 13 b 0 b 15 b 17 b 19 b 1 b 21 b 22 b 2 b 24 b 3 b 25 b 5 b 5 b 9 b 12 b 14 b 15 b 0 b 18 b 20 b 21 b 1 b 23 b 24 b 2 b 25 b 3 b 4 b 6 b 2 b 1 b 16 b 17 b 18 b 0 b 10 b 11 b 12 b 7 b 8 b 9 b 26 b 27 b 28 b 7 b 3 b 16 b 1 b 19 b 20 b 10 b 0 b 13 b 14 b 6 b 26 b 27 b 8 b 9 b 29 b 8 b 4 b 17 b 19 b 1 b 21 b 11 b 13 b 0 b 15 b 26 b 6 b 28 b 7 b 29 b 9 b 9 b 5 b 18 b 20 b 21 b 1 b 12 b 14 b 15 b 0 b 27 b 28 b 6 b 29 b 7 b 8 b 10 b 16 b 3 b 2 b 22 b 23 b 7 b 6 b 26 b 27 b 0 b 13 b 14 b 11 b 12 b 30 b 11 b 17 b 4 b 22 b 2 b 24 b 8 b 26 b 6 b 28 b 13 b 0 b 15 b 10 b 30 b 12 b 12 b 18 b 5 b 23 b 24 b 2 b 9 b 27 b 28 b 6 b 14 b 15 b 0 b 30 b 10 b 11 b 13 b 19 b 22 b 4 b 3 b 25 b 26 b 8 b 7 b 29 b 11 b 10 b 30 b 0 b 15 b 14 b 14 b 20 b 23 b 5 b 25 b 3 b 27 b 9 b 29 b 7 b 12 b 30 b 10 b 15 b 0 b 13 b 15 b 21 b 24 b 25 b 5 b 4 b 28 b 29 b 9 b 8 b 30 b 12 b 11 b 14 b 13 b 0 ,
B 16 ( 1 , 0 ) = b 16 b 10 b 7 b 6 b 26 b 27 b 3 b 2 b 17 b 11 b 8 b 26 b 6 b 28 b 4 b 22 b 18 b 12 b 9 b 27 b 28 b 6 b 5 b 23 b 19 b 13 b 26 b 8 b 7 b 29 b 22 b 4 b 20 b 14 b 27 b 9 b 29 b 7 b 23 b 5 b 21 b 15 b 28 b 29 b 9 b 8 b 24 b 25 b 22 b 26 b 13 b 11 b 10 b 30 b 19 b 17 b 23 b 27 b 14 b 12 b 30 b 10 b 20 b 18 b 24 b 28 b 15 b 30 b 12 b 11 b 21 b 31 b 25 b 29 b 30 b 15 b 14 b 13 b 31 b 21 b 26 b 22 b 19 b 17 b 16 b 31 b 13 b 11 b 27 b 23 b 20 b 18 b 31 b 16 b 14 b 12 b 28 b 24 b 21 b 31 b 18 b 17 b 15 b 30 b 29 b 25 b 31 b 21 b 20 b 19 b 30 b 15 b 30 b 31 b 25 b 24 b 23 b 22 b 29 b 28 b 31 b 30 b 29 b 28 b 27 b 26 b 25 b 24 b 22 b 23 b 1 b 19 b 20 b 17 b 18 b 31 b 2 b 24 b 19 b 1 b 21 b 16 b 31 b 18 b 24 b 2 b 20 b 21 b 1 b 31 b 16 b 17 b 3 b 25 b 17 b 16 b 31 b 1 b 21 b 20 b 25 b 3 b 18 b 31 b 16 b 21 b 1 b 19 b 5 b 4 b 31 b 18 b 17 b 20 b 19 b 1 b 16 b 31 b 4 b 3 b 25 b 2 b 24 b 23 b 31 b 16 b 5 b 25 b 3 b 24 b 2 b 22 b 18 b 17 b 25 b 5 b 4 b 23 b 22 b 2 b 20 b 19 b 24 b 23 b 22 b 5 b 4 b 3 b 10 b 30 b 8 b 7 b 29 b 6 b 28 b 27 b 30 b 10 b 9 b 29 b 7 b 28 b 6 b 26 b 12 b 11 b 29 b 9 b 8 b 27 b 26 b 6 b 14 b 13 b 28 b 27 b 26 b 9 b 8 b 7 b 27 b 26 b 15 b 14 b 13 b 12 b 11 b 10 b 23 b 22 b 21 b 20 b 19 b 18 b 17 b 16 ,
B 16 ( 0 , 1 ) = b 16 b 17 b 18 b 19 b 20 b 21 b 22 b 23 b 10 b 11 b 12 b 13 b 14 b 15 b 26 b 27 b 7 b 8 b 9 b 26 b 27 b 28 b 13 b 14 b 6 b 26 b 27 b 8 b 9 b 29 b 11 b 12 b 26 b 6 b 28 b 7 b 29 b 9 b 10 b 30 b 27 b 28 b 6 b 29 b 7 b 8 b 30 b 10 b 3 b 4 b 5 b 22 b 23 b 24 b 19 b 20 b 2 b 22 b 23 b 4 b 5 b 25 b 17 b 18 b 22 b 2 b 24 b 3 b 25 b 5 b 16 b 31 b 23 b 24 b 2 b 25 b 3 b 4 b 31 b 16 b 1 b 19 b 20 b 17 b 18 b 31 b 4 b 5 b 19 b 1 b 21 b 16 b 31 b 18 b 3 b 25 b 20 b 21 b 1 b 31 b 16 b 17 b 25 b 3 b 17 b 16 b 31 b 1 b 21 b 20 b 2 b 24 b 18 b 31 b 16 b 21 b 1 b 19 b 24 b 2 b 31 b 18 b 17 b 20 b 19 b 1 b 23 b 22 b 24 b 25 b 26 b 27 b 28 b 29 b 30 b 31 b 28 b 29 b 22 b 23 b 24 b 25 b 31 b 30 b 15 b 30 b 19 b 20 b 21 b 31 b 25 b 29 b 30 b 15 b 17 b 18 b 31 b 21 b 24 b 28 b 12 b 14 b 16 b 31 b 18 b 20 b 23 b 27 b 11 b 13 b 31 b 16 b 17 b 19 b 22 b 26 b 21 b 31 b 13 b 14 b 15 b 30 b 29 b 25 b 31 b 21 b 11 b 12 b 30 b 15 b 28 b 24 b 18 b 20 b 10 b 30 b 12 b 14 b 27 b 23 b 17 b 19 b 30 b 10 b 11 b 13 b 26 b 22 b 25 b 24 b 8 b 9 b 29 b 28 b 15 b 21 b 5 b 23 b 7 b 29 b 9 b 27 b 14 b 20 b 4 b 22 b 29 b 7 b 8 b 26 b 13 b 19 b 23 b 5 b 6 b 28 b 27 b 9 b 12 b 18 b 22 b 4 b 28 b 6 b 26 b 8 b 11 b 17 b 2 b 3 b 27 b 26 b 6 b 7 b 10 b 16 ,
B 16 ( 1 , 1 ) = b 0 b 13 b 14 b 11 b 12 b 30 b 8 b 9 b 13 b 0 b 15 b 10 b 30 b 12 b 7 b 29 b 14 b 15 b 0 b 30 b 10 b 11 b 29 b 7 b 11 b 10 b 30 b 0 b 15 b 14 b 6 b 28 b 12 b 30 b 10 b 15 b 0 b 13 b 28 b 6 b 30 b 12 b 11 b 14 b 13 b 0 b 27 b 26 b 8 b 7 b 29 b 6 b 28 b 27 b 0 b 15 b 9 b 29 b 7 b 28 b 6 b 26 b 15 b 0 b 29 b 9 b 8 b 27 b 26 b 6 b 14 b 13 b 28 b 27 b 26 b 9 b 8 b 7 b 12 b 11 b 4 b 3 b 25 b 2 b 24 b 23 b 1 b 21 b 5 b 25 b 3 b 24 b 2 b 22 b 21 b 1 b 25 b 5 b 4 b 23 b 22 b 2 b 20 b 19 b 24 b 23 b 22 b 5 b 4 b 3 b 18 b 17 b 21 b 20 b 19 b 18 b 17 b 16 b 5 b 4 b 15 b 14 b 13 b 12 b 11 b 10 b 9 b 8 b 29 b 28 b 4 b 5 b 25 b 24 b 21 b 15 b 9 b 27 b 3 b 25 b 5 b 23 b 20 b 14 b 8 b 26 b 25 b 3 b 4 b 22 b 19 b 13 b 27 b 9 b 2 b 24 b 23 b 5 b 18 b 12 b 26 b 8 b 24 b 2 b 22 b 4 b 17 b 11 b 6 b 7 b 23 b 22 b 2 b 3 b 16 b 10 b 14 b 12 b 1 b 21 b 20 b 18 b 5 b 9 b 13 b 11 b 21 b 1 b 19 b 17 b 4 b 8 b 0 b 10 b 20 b 19 b 1 b 16 b 3 b 7 b 10 b 0 b 18 b 17 b 16 b 1 b 2 b 6 b 20 b 18 b 0 b 15 b 14 b 12 b 9 b 5 b 19 b 17 b 15 b 0 b 13 b 11 b 8 b 4 b 1 b 16 b 14 b 13 b 0 b 10 b 7 b 3 b 16 b 1 b 12 b 11 b 10 b 0 b 6 b 2 b 3 b 2 b 9 b 8 b 7 b 6 b 0 b 1 b 7 b 6 b 5 b 4 b 3 b 2 b 1 b 0 .
The direct computation of the matrix-vector product in Equation (1) requires 1024 real multiplications and 992 real additions. Below, we present an algorithm that reduces the computational complexity of this operation to 192 multiplications and 384 additions of real numbers.
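For reference, the dense matrix-vector form of Equation (1) can be evaluated directly once the matrix $\mathbf{B}_{32}$ has been filled; the helper `build_B32` below is a hypothetical routine (not defined in the paper) that arranges the signed components of $b$ according to Tables 1–4.

```python
import numpy as np

def kaluza_product_direct(a, b, build_B32):
    """Direct evaluation of Y = B32(b) @ X from Equation (1).

    build_B32 : hypothetical callable returning the 32x32 matrix of signed
    b-components laid out according to the multiplication tables.
    The dense product itself costs 32*32 = 1024 real multiplications and
    32*31 = 992 real additions.
    """
    B32 = build_B32(np.asarray(b, dtype=float))
    return B32 @ np.asarray(a, dtype=float)
```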

3. Synthesis of a Rationalized Algorithm for Computing Kaluza Numbers Product

We first rearrange the rows and columns of the matrix using the permutations π_r = (11, 17, 2, 6, 13, 19, 3, 7, 0, 1, 4, 8, 10, 16, 22, 26, 30, 31, 23, 27, 15, 21, 5, 9, 14, 20, 25, 29, 12, 18, 24, 28) and π_c = (10, 16, 3, 7, 0, 1, 2, 6, 13, 19, 22, 26, 11, 17, 4, 8, 12, 18, 5, 9, 14, 20, 23, 27, 15, 21, 24, 28, 30, 31, 25, 29), respectively. Next, we change the sign of the selected rows {8, 9, 12, 13, 14, 15, 26, 27} and columns {2, 3, 6, 7, 8, 9, 12, 13} by multiplying them by −1. It is easy to see that this transformation will later allow us to minimize the computational complexity of the final algorithm. Then we can write:
$\mathbf{Y}_{32 \times 1} = \mathbf{M}_{32}^{(r)} \breve{\mathbf{B}}_{32} \mathbf{M}_{32}^{(c)} \mathbf{X}_{32 \times 1}$,
where the monomial matrices $\mathbf{M}_{32}^{(r)}$, $\mathbf{M}_{32}^{(c)}$ are products of the appropriate sign-changing matrices $\mathbf{S}_{32}^{(r)}$, $\mathbf{S}_{32}^{(c)}$ and permutation matrices $\mathbf{P}_{32}^{(r)}$, $\mathbf{P}_{32}^{(c)}$:
$\mathbf{M}_{32}^{(r)} = \mathbf{S}_{32}^{(r)} \mathbf{P}_{32}^{(r)}$,
$\mathbf{M}_{32}^{(c)} = \mathbf{P}_{32}^{(c)} \mathbf{S}_{32}^{(c)}$,
where:
$\mathbf{P}_{32}^{(c)} = \begin{bmatrix} \mathbf{P}_{16}^{(c,(0,0))} & \mathbf{P}_{16}^{(c,(0,1))} \\ \mathbf{P}_{16}^{(c,(1,0))} & \mathbf{P}_{16}^{(c,(1,1))} \end{bmatrix}$,
P 16 ( c ( 0 , 0 ) ) = 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ,
P 16 ( c ( 0 , 1 ) ) = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 ,
P 16 ( c ( 1 , 0 ) ) = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ,
P 16 ( c ( 1 , 1 ) ) = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 ,
S 32 ( c ) = diag 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ,
$\mathbf{P}_{32}^{(r)} = \begin{bmatrix} \mathbf{P}_{16}^{(r,(0,0))} & \mathbf{P}_{16}^{(r,(0,1))} \\ \mathbf{P}_{16}^{(r,(1,0))} & \mathbf{P}_{16}^{(r,(1,1))} \end{bmatrix}$,
P 16 ( r , ( 0 , 0 ) ) = 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ,
P 16 ( r , ( 0 , 1 ) ) = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 ,
P 16 ( r , ( 1 , 0 ) ) = 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ,
P 16 ( r , ( 1 , 1 ) ) = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 ,
S 32 ( r ) = diag 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 .
The matrix $\breve{\mathbf{B}}_{32}$ is calculated from:
$\breve{\mathbf{B}}_{32} = \bigl(\mathbf{M}_{32}^{(r)}\bigr)^{-1} \mathbf{B}_{32} \bigl(\mathbf{M}_{32}^{(c)}\bigr)^{-1}$.
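A minimal numpy sketch of how the permutation and sign-change factors can be realized is given below; the permutation π_r and the row-sign set are quoted from the text, while the 0-based indexing and the placement convention P[i, π_r(i)] = 1 are assumptions of this sketch rather than the paper's definition of $\mathbf{P}_{32}^{(r)}$.

```python
import numpy as np

# Row permutation and row-sign set quoted in the text (0-based indices).
pi_r = [11, 17, 2, 6, 13, 19, 3, 7, 0, 1, 4, 8, 10, 16, 22, 26,
        30, 31, 23, 27, 15, 21, 5, 9, 14, 20, 25, 29, 12, 18, 24, 28]
neg_rows = {8, 9, 12, 13, 14, 15, 26, 27}

# Permutation matrix under the convention P[i, pi_r[i]] = 1 (an assumption
# of this sketch; the paper fixes its own convention via P_32^(r)).
P = np.zeros((32, 32))
P[np.arange(32), pi_r] = 1.0

# Sign-change (diagonal) matrix for the selected rows.
S = np.diag([-1.0 if i in neg_rows else 1.0 for i in range(32)])

# Monomial matrix M^(r) = S^(r) P^(r); its inverse is again monomial, so
# forming inv(M_r) @ B32 @ inv(M_c) costs no real arithmetic beyond
# reordering entries and flipping signs.
M_r = S @ P
```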
If we interpret the B ˘ 32 matrix as a block matrix, it is easy to see that it has a bisymmetric structure:
B ˘ 32 = B ˘ 16 ( 0 ) B ˘ 16 ( 1 ) B ˘ 16 ( 1 ) B ˘ 16 ( 0 ) ,
where
B ˘ 16 ( 0 ) = b 13 b 19 b 3 b 7 b 11 b 17 b 2 b 6 b 10 b 16 b 22 b 26 b 0 b 1 b 4 b 8 b 19 b 13 b 7 b 3 b 17 b 11 b 6 b 2 b 16 b 10 b 26 b 22 b 1 b 0 b 8 b 4 b 22 b 26 b 10 b 16 b 4 b 8 b 0 b 1 b 3 b 7 b 13 b 19 b 2 b 6 b 11 b 17 b 26 b 22 b 16 b 10 b 8 b 4 b 1 b 0 b 7 b 3 b 19 b 13 b 6 b 2 b 17 b 11 b 11 b 17 b 2 b 6 b 13 b 19 b 3 b 7 b 0 b 1 b 4 b 8 b 10 b 16 b 22 b 26 b 17 b 11 b 6 b 2 b 19 b 13 b 7 b 3 b 1 b 0 b 8 b 4 b 16 b 10 b 26 b 22 b 4 b 8 b 0 b 1 b 22 b 26 b 10 b 16 b 2 b 6 b 11 b 17 b 3 b 7 b 13 b 19 b 8 b 4 b 1 b 0 b 26 b 22 b 16 b 10 b 6 b 2 b 17 b 11 b 7 b 3 b 19 b 13 b 10 b 16 b 22 b 26 b 0 b 1 b 4 b 8 b 13 b 19 b 3 b 7 b 11 b 17 b 2 b 6 b 16 b 10 b 26 b 22 b 1 b 0 b 8 b 4 b 19 b 13 b 7 b 3 b 17 b 11 b 6 b 2 b 3 b 7 b 13 b 19 b 2 b 6 b 11 b 17 b 22 b 26 b 10 b 16 b 4 b 8 b 0 b 1 b 7 b 3 b 19 b 13 b 6 b 2 b 17 b 11 b 26 b 22 b 16 b 10 b 8 b 4 b 1 b 0 b 0 b 1 b 4 b 8 b 10 b 16 b 22 b 26 b 11 b 17 b 2 b 6 b 13 b 19 b 3 b 7 b 1 b 0 b 8 b 4 b 16 b 10 b 26 b 22 b 17 b 11 b 6 b 2 b 19 b 13 b 7 b 3 b 2 b 6 b 11 b 17 b 3 b 7 b 13 b 19 b 4 b 8 b 0 b 1 b 22 b 26 b 10 b 16 b 6 b 2 b 17 b 11 b 7 b 3 b 19 b 13 b 8 b 4 b 1 b 0 b 26 b 22 b 16 b 10 ,
B ˘ 16 ( 1 ) = b 15 b 21 b 5 b 9 b 30 b 31 b 23 b 27 b 12 b 18 b 24 b 28 b 14 b 20 b 25 b 29 b 21 b 15 b 9 b 5 b 31 b 30 b 27 b 23 b 18 b 12 b 28 b 24 b 20 b 14 b 29 b 25 b 24 b 28 b 12 b 18 b 25 b 29 b 14 b 20 b 5 b 9 b 15 b 21 b 23 b 27 b 30 b 31 b 28 b 24 b 18 b 12 b 29 b 25 b 20 b 14 b 9 b 5 b 21 b 15 b 27 b 23 b 31 b 30 b 30 b 31 b 23 b 27 b 15 b 21 b 5 b 9 b 14 b 20 b 25 b 29 b 12 b 18 b 24 b 28 b 31 b 30 b 27 b 23 b 21 b 15 b 9 b 5 b 20 b 14 b 29 b 25 b 18 b 12 b 28 b 24 b 25 b 29 b 14 b 20 b 24 b 28 b 12 b 18 b 23 b 27 b 30 b 31 b 5 b 9 b 15 b 21 b 29 b 25 b 20 b 14 b 28 b 24 b 18 b 12 b 27 b 23 b 31 b 30 b 9 b 5 b 21 b 15 b 12 b 18 b 24 b 28 b 14 b 20 b 25 b 29 b 15 b 21 b 5 b 9 b 30 b 31 b 23 b 27 b 18 b 12 b 28 b 24 b 20 b 14 b 29 b 25 b 21 b 15 b 9 b 5 b 31 b 30 b 27 b 23 b 5 b 9 b 15 b 21 b 23 b 27 b 30 b 31 b 24 b 28 b 12 b 18 b 25 b 29 b 14 b 20 b 9 b 5 b 21 b 15 b 27 b 23 b 31 b 30 b 28 b 24 b 18 b 12 b 29 b 25 b 20 b 14 b 14 b 20 b 25 b 29 b 12 b 18 b 24 b 28 b 30 b 31 b 23 b 27 b 15 b 21 b 5 b 9 b 20 b 14 b 29 b 25 b 18 b 12 b 28 b 24 b 31 b 30 b 27 b 23 b 21 b 15 b 9 b 5 b 23 b 27 b 30 b 31 b 5 b 9 b 15 b 21 b 25 b 29 b 14 b 20 b 24 b 28 b 12 b 18 b 27 b 23 b 31 b 30 b 9 b 5 b 21 b 15 b 29 b 25 b 20 b 14 b 28 b 24 b 18 b 12 .
There is an effective method for factorizing matrices of this type which, in the calculation of matrix-vector products, allows one to reduce the number of multiplications from $32^2$ to $\tfrac{3}{4} \cdot 32^2$ at the expense of increasing the number of additions from $32 \cdot 31$ to $\tfrac{5}{4} \cdot 32 \cdot 31$ [27]. The matrix $\breve{\mathbf{B}}_{32}$ used in the multiplication procedure (2) can be described as:
B ˘ 32 = I 16 0 16 I 16 0 16 I 16 I 16 ( B ˘ 16 ( 0 ) B ˘ 16 ( 1 ) ) 0 16 0 16 0 16 ( B ˘ 16 ( 0 ) + B ˘ 16 ( 1 ) ) 0 16 0 16 0 16 B ˘ 16 ( 1 ) I 16 0 16 0 16 I 16 I 16 I 16 ,
where $\mathbf{I}_{16}$ is the identity matrix of order 16 and $\mathbf{0}_{16}$ is the $16 \times 16$ null matrix. Thus, we can write a new procedure for calculating the product of Kaluza numbers in the following form:
$\mathbf{Y}_{32 \times 1} = \mathbf{M}_{32}^{(r)} \mathbf{T}_{32 \times 48} \mathbf{B}_{48} \mathbf{T}_{48 \times 32} \mathbf{M}_{32}^{(c)} \mathbf{X}_{32 \times 1}$,
where
T 32 × 48 = 1 0 1 0 1 1 I 16 ,
B 48 = quasidiag B 16 ( ) B 16 ( + ) B 16 ( 1 ) ,
$\mathbf{B}_{16}^{(-)} = \breve{\mathbf{B}}_{16}^{(0)} - \breve{\mathbf{B}}_{16}^{(1)}$,
$\mathbf{B}_{16}^{(+)} = \breve{\mathbf{B}}_{16}^{(0)} + \breve{\mathbf{B}}_{16}^{(1)}$,
T 48 × 32 = 1 0 0 1 1 1 I 16 ,
where the symbol "⊗" denotes the tensor product of two matrices and quasidiag(·) denotes a block-diagonal matrix.
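For implementation purposes, the tensor product and quasidiag(·) constructions map directly onto standard numerical routines. The sketch below is a generic illustration under this notation; the small 0/1 pattern is only an example of the construction, and the random blocks stand in for the matrices defined above.

```python
import numpy as np
from scipy.linalg import block_diag

# Tensor (Kronecker) product: a small 0/1 pattern "stamped" with identity
# blocks, as in the T-type matrices.  The pattern here is illustrative only.
pattern = np.array([[1, 0, 1],
                    [0, 1, 1]])
T = np.kron(pattern, np.eye(16))          # shape (32, 48)

# quasidiag(...) is an ordinary block-diagonal matrix.
B0, B1, B2 = (np.random.randn(16, 16) for _ in range(3))
B48 = block_diag(B0, B1, B2)              # shape (48, 48)

# Applying T only adds pairs of length-16 sub-vectors, so it contributes
# additions (and data moves) but no multiplications.
x = np.random.randn(48)
y = T @ x
```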
We introduce the following notation to (4) and (5):
$c_0 = b_{13} + b_{15}$, $c_1 = b_{19} + b_{21}$, $c_2 = b_5 - b_3$, $c_3 = b_7 - b_9$,
$c_4 = b_{30} - b_{11}$, $c_5 = b_{31} - b_{17}$, $c_6 = b_2 + b_{23}$, $c_7 = b_6 + b_{27}$,
$c_8 = b_{12} - b_{10}$, $c_9 = b_{18} - b_{16}$, $c_{10} = b_{22} + b_{24}$, $c_{11} = b_{26} + b_{28}$,
$c_{12} = b_0 + b_{14}$, $c_{13} = b_1 + b_{20}$, $c_{14} = b_{25} - b_4$, $c_{15} = b_8 - b_{29}$,
$c_{16} = b_{11} + b_{30}$, $c_{17} = b_{17} + b_{31}$, $c_{18} = b_{23} - b_2$, $c_{19} = b_6 - b_{27}$,
$c_{20} = b_{15} - b_{13}$, $c_{21} = b_{21} - b_{19}$, $c_{22} = b_3 + b_5$, $c_{23} = b_7 + b_9$,
$c_{24} = b_{14} - b_0$, $c_{25} = b_{20} - b_1$, $c_{26} = b_4 + b_{25}$, $c_{27} = b_8 + b_{29}$,
$c_{28} = b_{10} + b_{12}$, $c_{29} = b_{16} + b_{18}$, $c_{30} = b_{24} - b_{22}$, $c_{31} = b_{26} - b_{28}$,
we obtain:
B 16 ( ) = c 0 c 1 c 2 c 3 c 4 c 5 c 6 c 7 c 8 c 9 c 10 c 11 c 12 c 13 c 14 c 15 c 1 c 0 c 3 c 2 c 5 c 4 c 7 c 6 c 9 c 8 c 11 c 10 c 13 c 12 c 15 c 14 c 10 c 11 c 8 c 9 c 14 c 15 c 12 c 13 c 2 c 3 c 0 c 1 c 6 c 7 c 4 c 5 c 11 c 10 c 9 c 8 c 15 c 14 c 13 c 12 c 3 c 2 c 1 c 0 c 7 c 6 c 5 c 4 c 16 c 17 c 18 c 19 c 20 c 21 c 22 c 23 c 24 c 25 c 26 c 27 c 28 c 29 c 30 c 31 c 17 c 16 c 19 c 18 c 21 c 20 c 23 c 22 c 25 c 24 c 27 c 26 c 29 c 28 c 31 c 30 c 26 c 27 c 24 c 25 c 30 c 31 c 28 c 29 c 18 c 19 c 16 c 17 c 22 c 23 c 20 c 21 c 27 c 26 c 25 c 24 c 31 c 30 c 29 c 28 c 19 c 18 c 17 c 16 c 23 c 22 c 21 c 20 c 8 c 9 c 10 c 11 c 12 c 13 c 14 c 15 c 0 c 1 c 2 c 3 c 4 c 5 c 6 c 7 c 9 c 8 c 11 c 10 c 13 c 12 c 15 c 14 c 1 c 0 c 3 c 2 c 5 c 4 c 7 c 6 c 2 c 3 c 0 c 1 c 6 c 7 c 4 c 5 c 10 c 11 c 8 c 9 c 14 c 15 c 12 c 13 c 3 c 2 c 1 c 0 c 7 c 6 c 5 c 4 c 11 c 10 c 9 c 8 c 15 c 14 c 13 c 12 c 24 c 25 c 26 c 27 c 28 c 29 c 30 c 31 c 16 c 17 c 18 c 19 c 20 c 21 c 22 c 23 c 25 c 24 c 27 c 26 c 29 c 28 c 31 c 30 c 17 c 16 c 19 c 18 c 21 c 20 c 23 c 22 c 18 c 19 c 16 c 17 c 22 c 23 c 20 c 21 c 26 c 27 c 24 c 25 c 30 c 31 c 28 c 29 c 19 c 18 c 17 c 16 c 23 c 22 c 21 c 20 c 27 c 26 c 25 c 24 c 31 c 30 c 29 c 28 ,
B 16 ( + ) = c 20 c 21 c 22 c 23 c 16 c 17 c 18 c 19 c 28 c 29 c 30 c 31 c 24 c 25 c 26 c 27 c 21 c 20 c 23 c 22 c 17 c 16 c 19 c 18 c 29 c 28 c 31 c 30 c 25 c 24 c 27 c 26 c 30 c 31 c 28 c 29 c 26 c 27 c 24 c 25 c 22 c 23 c 20 c 21 c 18 c 19 c 16 c 17 c 31 c 30 c 29 c 28 c 27 c 26 c 25 c 24 c 23 c 22 c 21 c 20 c 19 c 18 c 17 c 16 c 4 c 5 c 6 c 7 c 0 c 1 c 2 c 3 c 12 c 13 c 14 c 15 c 8 c 9 c 10 c 11 c 5 c 4 c 7 c 6 c 1 c 0 c 3 c 2 c 13 c 12 c 15 c 14 c 9 c 8 c 11 c 10 c 14 c 15 c 12 c 13 c 10 c 11 c 8 c 9 c 6 c 7 c 4 c 5 c 2 c 3 c 0 c 1 c 15 c 14 c 13 c 12 c 11 c 10 c 9 c 8 c 7 c 6 c 5 c 4 c 3 c 2 c 1 c 0 c 28 c 29 c 30 c 31 c 24 c 25 c 26 c 27 c 20 c 21 c 22 c 23 c 16 c 17 c 18 c 19 c 29 c 28 c 31 c 30 c 25 c 24 c 27 c 26 c 21 c 20 c 23 c 22 c 17 c 16 c 19 c 18 c 22 c 23 c 20 c 21 c 18 c 19 c 16 c 17 c 30 c 31 c 28 c 29 c 26 c 27 c 24 c 25 c 23 c 22 c 21 c 20 c 19 c 18 c 17 c 16 c 31 c 30 c 29 c 28 c 27 c 26 c 25 c 24 c 12 c 13 c 14 c 15 c 8 c 9 c 10 c 11 c 4 c 5 c 6 c 7 c 0 c 1 c 2 c 3 c 13 c 12 c 15 c 14 c 9 c 8 c 11 c 10 c 5 c 4 c 7 c 6 c 1 c 0 c 3 c 2 c 6 c 7 c 4 c 5 c 2 c 3 c 0 c 1 c 14 c 15 c 12 c 13 c 10 c 11 c 8 c 9 c 7 c 6 c 5 c 4 c 3 c 2 c 1 c 0 c 15 c 14 c 13 c 12 c 11 c 10 c 9 c 8 .
The matrices $\mathbf{B}_{16}^{(-)}$, $\mathbf{B}_{16}^{(+)}$ and $\mathbf{B}_{16}^{(1)}$ have similar structures. If we now change the signs of all of the elements of the sixth and seventh rows, as well as all of the elements of the second, third, sixth and seventh columns, to the opposite, then the matrices $\mathbf{B}_{16}^{(-)}$, $\mathbf{B}_{16}^{(+)}$ and $\mathbf{B}_{16}^{(1)}$ will have structures of the type $\begin{bmatrix} \mathbf{A}_{N/2} & \mathbf{B}_{N/2} \\ \mathbf{B}_{N/2} & \mathbf{A}_{N/2} \end{bmatrix}$, which leads to a reduction in the number of real multiplications during the matrix-vector product calculation. We can write the sign transformation matrices for rows $\mathbf{S}_{16}^{(r)}$ and columns $\mathbf{S}_{16}^{(c)}$ as:
S 16 ( r ) = diag 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 ,
S 16 ( c ) = diag 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 , 1 .
Then, we obtain new standardized matrices:
$\breve{\mathbf{B}}_{16}^{(-)} = \mathbf{S}_{16}^{(r)} \mathbf{B}_{16}^{(-)} \mathbf{S}_{16}^{(c)} = \begin{bmatrix} \mathbf{B}_8^{(0)} & \mathbf{B}_8^{(1)} \\ \mathbf{B}_8^{(1)} & \mathbf{B}_8^{(0)} \end{bmatrix}$,
$\breve{\mathbf{B}}_{16}^{(+)} = \mathbf{S}_{16}^{(r)} \mathbf{B}_{16}^{(+)} \mathbf{S}_{16}^{(c)} = \begin{bmatrix} \mathbf{B}_8^{(2)} & \mathbf{B}_8^{(3)} \\ \mathbf{B}_8^{(3)} & \mathbf{B}_8^{(2)} \end{bmatrix}$,
$\breve{\mathbf{B}}_{16}^{(1)} = \mathbf{S}_{16}^{(r)} \mathbf{B}_{16}^{(1)} \mathbf{S}_{16}^{(c)} = \begin{bmatrix} \mathbf{B}_8^{(4)} & \mathbf{B}_8^{(5)} \\ \mathbf{B}_8^{(5)} & \mathbf{B}_8^{(4)} \end{bmatrix}$,
where:
B 8 ( 0 ) = c 0 c 1 c 2 c 3 c 4 c 5 c 6 c 7 c 1 c 0 c 3 c 2 c 5 c 4 c 7 c 6 c 10 c 11 c 8 c 9 c 14 c 15 c 12 c 13 c 11 c 10 c 9 c 8 c 15 c 14 c 13 c 12 c 16 c 17 c 18 c 19 c 20 c 21 c 22 c 23 c 17 c 16 c 19 c 18 c 21 c 20 c 23 c 22 c 26 c 27 c 24 c 25 c 30 c 31 c 28 c 29 c 27 c 26 c 25 c 24 c 31 c 30 c 29 c 28 ,
B 8 ( 1 ) = c 8 c 9 c 10 c 11 c 12 c 13 c 14 c 15 c 9 c 8 c 11 c 10 c 13 c 12 c 15 c 14 c 2 c 3 c 0 c 1 c 6 c 7 c 4 c 5 c 3 c 2 c 1 c 0 c 7 c 6 c 5 c 4 c 24 c 25 c 26 c 27 c 28 c 29 c 30 c 31 c 25 c 24 c 27 c 26 c 29 c 28 c 31 c 30 c 18 c 19 c 16 c 17 c 22 c 23 c 20 c 21 c 19 c 18 c 17 c 16 c 23 c 22 c 21 c 20 ,
B 8 ( 2 ) = c 20 c 21 c 22 c 23 c 16 c 17 c 18 c 19 c 21 c 20 c 23 c 22 c 17 c 16 c 19 c 18 c 30 c 31 c 28 c 29 c 26 c 27 c 24 c 25 c 31 c 30 c 29 c 28 c 27 c 26 c 25 c 24 c 4 c 5 c 6 c 7 c 0 c 1 c 2 c 3 c 5 c 4 c 7 c 6 c 1 c 0 c 3 c 2 c 14 c 15 c 12 c 13 c 10 c 11 c 8 c 9 c 15 c 14 c 13 c 12 c 11 c 10 c 9 c 8 ,
B 8 ( 3 ) = c 28 c 29 c 30 c 31 c 24 c 25 c 26 c 27 c 29 c 28 c 31 c 30 c 25 c 24 c 27 c 26 c 22 c 23 c 20 c 21 c 18 c 19 c 16 c 17 c 23 c 22 c 21 c 20 c 19 c 18 c 17 c 16 c 12 c 13 c 14 c 15 c 8 c 9 c 10 c 11 c 13 c 12 c 15 c 14 c 9 c 8 c 11 c 10 c 6 c 7 c 4 c 5 c 2 c 3 c 0 c 1 c 7 c 6 c 5 c 4 c 3 c 2 c 1 c 0 ,
B 8 ( 4 ) = b 15 b 21 b 5 b 9 b 30 b 31 b 23 b 27 b 21 b 15 b 9 b 5 b 31 b 30 b 27 b 23 b 24 b 28 b 12 b 18 b 25 b 29 b 14 b 20 b 28 b 24 b 18 b 12 b 29 b 25 b 20 b 14 b 30 b 31 b 23 b 27 b 15 b 21 b 5 b 9 b 31 b 30 b 27 b 23 b 21 b 15 b 9 b 5 b 25 b 29 b 14 b 20 b 24 b 28 b 12 b 18 b 29 b 25 b 20 b 14 b 28 b 24 b 18 b 12 ,
B 8 ( 5 ) = b 12 b 18 b 24 b 28 b 14 b 20 b 25 b 29 b 18 b 12 b 28 b 24 b 20 b 14 b 29 b 25 b 5 b 9 b 15 b 21 b 23 b 27 b 30 b 31 b 9 b 5 b 21 b 15 b 27 b 23 b 31 b 30 b 14 b 20 b 25 b 29 b 12 b 18 b 24 b 28 b 20 b 14 b 29 b 25 b 18 b 12 b 28 b 24 b 23 b 27 b 30 b 31 b 5 b 9 b 15 b 21 b 27 b 23 b 31 b 30 b 9 b 5 b 21 b 15 .
It is possible to use a method of factorization for the standardized matrices (6)–(8). This allows us to reduce the number of multiplications to $8^2/2$ using $8(8+1)$ additions for each of the above matrices. Therefore, similarly to the previous case, we can write [27,28]:
$\begin{bmatrix} \mathbf{A}_{N/2} & \mathbf{B}_{N/2} \\ \mathbf{B}_{N/2} & \mathbf{A}_{N/2} \end{bmatrix} = \begin{bmatrix} \mathbf{I}_{N/2} & \mathbf{I}_{N/2} \\ \mathbf{I}_{N/2} & -\mathbf{I}_{N/2} \end{bmatrix} \begin{bmatrix} \frac{1}{2}(\mathbf{A}_{N/2} + \mathbf{B}_{N/2}) & \mathbf{0}_{N/2} \\ \mathbf{0}_{N/2} & \frac{1}{2}(\mathbf{A}_{N/2} - \mathbf{B}_{N/2}) \end{bmatrix} \begin{bmatrix} \mathbf{I}_{N/2} & \mathbf{I}_{N/2} \\ \mathbf{I}_{N/2} & -\mathbf{I}_{N/2} \end{bmatrix}$,
where $\mathbf{A}_{N/2}$, $\mathbf{B}_{N/2}$ are arbitrary matrices of order $N/2$. Therefore, we can rewrite (6)–(8) as:
$\breve{\mathbf{B}}_{16}^{(-)} = \begin{bmatrix} \mathbf{I}_8 & \mathbf{I}_8 \\ \mathbf{I}_8 & -\mathbf{I}_8 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\mathbf{B}_8^{(0+)} & \mathbf{0}_8 \\ \mathbf{0}_8 & \frac{1}{2}\mathbf{B}_8^{(0-)} \end{bmatrix} \begin{bmatrix} \mathbf{I}_8 & \mathbf{I}_8 \\ \mathbf{I}_8 & -\mathbf{I}_8 \end{bmatrix}$,
$\breve{\mathbf{B}}_{16}^{(+)} = \begin{bmatrix} \mathbf{I}_8 & \mathbf{I}_8 \\ \mathbf{I}_8 & -\mathbf{I}_8 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\mathbf{B}_8^{(1+)} & \mathbf{0}_8 \\ \mathbf{0}_8 & \frac{1}{2}\mathbf{B}_8^{(1-)} \end{bmatrix} \begin{bmatrix} \mathbf{I}_8 & \mathbf{I}_8 \\ \mathbf{I}_8 & -\mathbf{I}_8 \end{bmatrix}$,
$\breve{\mathbf{B}}_{16}^{(1)} = \begin{bmatrix} \mathbf{I}_8 & \mathbf{I}_8 \\ \mathbf{I}_8 & -\mathbf{I}_8 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\mathbf{B}_8^{(2+)} & \mathbf{0}_8 \\ \mathbf{0}_8 & \frac{1}{2}\mathbf{B}_8^{(2-)} \end{bmatrix} \begin{bmatrix} \mathbf{I}_8 & \mathbf{I}_8 \\ \mathbf{I}_8 & -\mathbf{I}_8 \end{bmatrix}$,
where:
$\mathbf{B}_8^{(0+)} = \mathbf{B}_8^{(0)} + \mathbf{B}_8^{(1)}$,
$\mathbf{B}_8^{(0-)} = \mathbf{B}_8^{(0)} - \mathbf{B}_8^{(1)}$,
$\mathbf{B}_8^{(1+)} = \mathbf{B}_8^{(2)} + \mathbf{B}_8^{(3)}$,
$\mathbf{B}_8^{(1-)} = \mathbf{B}_8^{(2)} - \mathbf{B}_8^{(3)}$,
$\mathbf{B}_8^{(2+)} = \mathbf{B}_8^{(4)} + \mathbf{B}_8^{(5)}$,
$\mathbf{B}_8^{(2-)} = \mathbf{B}_8^{(4)} - \mathbf{B}_8^{(5)}$.
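The factorization (9) is the standard decomposition of a block matrix of the form [[A, B], [B, A]]; the following sketch checks it numerically for random blocks (the sizes are arbitrary and only illustrative).

```python
import numpy as np

n = 8                                     # even order N; blocks are N/2 x N/2
A = np.random.randn(n // 2, n // 2)
B = np.random.randn(n // 2, n // 2)
I = np.eye(n // 2)

M = np.block([[A, B], [B, A]])
butterfly = np.block([[I, I], [I, -I]])
D = np.block([[0.5 * (A + B), np.zeros_like(A)],
              [np.zeros_like(A), 0.5 * (A - B)]])

# Identity (9): the butterflies contribute only additions/subtractions,
# while the two half-size diagonal blocks carry all the multiplications,
# i.e., 2 * (N/2)^2 instead of N^2.
assert np.allclose(M, butterfly @ D @ butterfly)
```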
Combining the partial decompositions into a single procedure, we can rewrite procedure (3) as follows:
$\mathbf{Y}_{32 \times 1} = \mathbf{M}_{32}^{(r)} \mathbf{T}_{32 \times 48} \mathbf{S}_{48}^{(r)} \mathbf{W}_{48}^{(1)} \tilde{\mathbf{B}}_{48} \mathbf{W}_{48}^{(1)} \mathbf{S}_{48}^{(c)} \mathbf{T}_{48 \times 32} \mathbf{M}_{32}^{(c)} \mathbf{X}_{32 \times 1}$,
where
$\tilde{\mathbf{B}}_{48} = \mathrm{quasidiag}\bigl(\tfrac{1}{2}\mathbf{B}_8^{(0+)}, \tfrac{1}{2}\mathbf{B}_8^{(0-)}, \tfrac{1}{2}\mathbf{B}_8^{(1+)}, \tfrac{1}{2}\mathbf{B}_8^{(1-)}, \tfrac{1}{2}\mathbf{B}_8^{(2+)}, \tfrac{1}{2}\mathbf{B}_8^{(2-)}\bigr)$,
$\mathbf{S}_{48}^{(r)} = \mathbf{I}_3 \otimes \mathbf{S}_{16}^{(r)}$,
$\mathbf{S}_{48}^{(c)} = \mathbf{I}_3 \otimes \mathbf{S}_{16}^{(c)}$,
$\mathbf{W}_{48}^{(1)} = \mathbf{I}_3 \otimes \mathbf{H}_2 \otimes \mathbf{I}_8$,
$\mathbf{H}_2$ is the order-2 Hadamard matrix, i.e.:
$\mathbf{H}_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$.
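The factor $\mathbf{W}_{48}^{(1)} = \mathbf{I}_3 \otimes \mathbf{H}_2 \otimes \mathbf{I}_8$ is a butterfly-type stage; the short sketch below shows its construction and the fact that applying it requires only additions and subtractions, no multiplications.

```python
import numpy as np

H2 = np.array([[1, 1],
               [1, -1]])

# W_48^(1) = I_3 (x) H_2 (x) I_8: three independent 16-point butterflies.
W48_1 = np.kron(np.eye(3), np.kron(H2, np.eye(8)))

x = np.random.randn(48)
y = W48_1 @ x

# Equivalent cheap evaluation: for each of the 3 segments of length 16 with
# halves u, v, the output segment is [u + v, u - v].
segs = x.reshape(3, 2, 8)
y_cheap = np.concatenate([np.concatenate([u + v, u - v]) for u, v in segs])
assert np.allclose(y, y_cheap)
```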
Introducing the following notation:
$d_0 = c_0 + c_8$, $d_1 = c_1 + c_9$, $d_2 = c_2 + c_{10}$, $d_3 = c_{11} - c_3$,
$d_4 = c_4 - c_{12}$, $d_5 = c_5 - c_{13}$, $d_6 = c_{14} - c_6$, $d_7 = c_7 + c_{15}$,
$d_8 = c_2 - c_{10}$, $d_9 = c_3 + c_{11}$, $d_{10} = c_0 - c_8$, $d_{11} = c_9 - c_1$,
$d_{12} = c_6 + c_{14}$, $d_{13} = c_7 - c_{15}$, $d_{14} = c_4 + c_{12}$, $d_{15} = c_5 + c_{13}$,
$d_{16} = c_{16} + c_{24}$, $d_{17} = c_{17} + c_{25}$, $d_{18} = c_{18} + c_{26}$, $d_{19} = c_{27} - c_{19}$,
$d_{20} = c_{20} - c_{28}$, $d_{21} = c_{21} - c_{29}$, $d_{22} = c_{30} - c_{22}$, $d_{23} = c_{23} + c_{31}$,
$d_{24} = c_{26} - c_{18}$, $d_{25} = c_{19} + c_{27}$, $d_{26} = c_{24} - c_{16}$, $d_{27} = c_{17} - c_{25}$,
$d_{28} = c_{22} + c_{30}$, $d_{29} = c_{31} - c_{23}$, $d_{30} = c_{20} + c_{28}$, $d_{31} = c_{21} + c_{29}$.
to (10)–(13), we obtain:
B 8 ( 0 + ) = d 0 d 1 d 2 d 3 d 4 d 5 d 6 d 7 d 1 d 0 d 3 d 2 d 5 d 4 d 7 d 6 d 8 d 9 d 10 d 11 d 12 d 13 d 14 d 15 d 9 d 8 d 11 d 10 d 13 d 12 d 15 d 14 d 16 d 17 d 18 d 19 d 20 d 21 d 22 d 23 d 17 d 16 d 19 d 18 d 21 d 20 d 23 d 22 d 24 d 25 d 26 d 27 d 28 d 29 d 30 d 31 d 25 d 24 d 27 d 26 d 29 d 28 d 31 d 30 ,
B 8 ( 0 ) = d 10 d 11 d 8 d 9 d 14 d 15 d 12 d 13 d 11 d 10 d 9 d 8 d 15 d 14 d 13 d 12 d 2 d 3 d 0 d 1 d 6 d 7 d 4 d 5 d 3 d 2 d 1 d 0 d 7 d 6 d 5 d 4 d 26 d 27 d 24 d 25 d 30 d 31 d 28 d 29 d 27 d 26 d 25 d 24 d 31 d 30 d 29 d 28 d 18 d 19 d 16 d 17 d 22 d 23 d 20 d 21 d 19 d 18 d 17 d 16 d 23 d 22 d 21 d 20 ,
B 8 ( 1 + ) = d 30 d 31 d 28 d 29 d 26 d 27 d 24 d 25 d 31 d 30 d 29 d 28 d 27 d 26 d 25 d 24 d 22 d 23 d 20 d 21 d 18 d 19 d 16 d 17 d 23 d 22 d 21 d 20 d 19 d 18 d 17 d 16 d 14 d 15 d 12 d 13 d 10 d 11 d 8 d 9 d 15 d 14 d 13 d 12 d 11 d 10 d 9 d 8 d 6 d 7 d 4 d 5 d 2 d 3 d 0 d 1 d 7 d 6 d 5 d 4 d 3 d 2 d 1 d 0 ,
B 8 ( 1 ) = d 20 d 21 d 22 d 23 d 16 d 17 d 18 d 19 d 21 d 20 d 23 d 22 d 17 d 16 d 19 d 18 d 28 d 29 d 30 d 31 d 24 d 25 d 26 d 27 d 29 d 28 d 31 d 30 d 25 d 24 d 27 d 26 d 4 d 5 d 6 d 7 d 0 d 1 d 2 d 3 d 5 d 4 d 7 d 6 d 1 d 0 d 3 d 2 d 12 d 13 d 14 d 15 d 8 d 9 d 10 d 11 d 13 d 12 d 15 d 14 d 9 d 8 d 11 d 10 .
In order to simplify, we introduce the following notation for the elements of matrix B 8 ( 2 + ) (14):
$c_{32} = b_{12} + b_{15}$, $c_{33} = b_{18} + b_{21}$, $c_{34} = b_5 + b_{24}$, $c_{35} = b_9 + b_{28}$,
$c_{36} = b_{14} - b_{30}$, $c_{37} = b_{20} - b_{31}$, $c_{38} = b_{23} - b_{25}$, $c_{39} = b_{29} - b_{27}$,
$c_{40} = b_{24} - b_5$, $c_{41} = b_{28} - b_9$, $c_{42} = b_{12} - b_{15}$, $c_{43} = b_{21} - b_{18}$,
$c_{44} = b_{23} + b_{25}$, $c_{45} = b_{27} + b_{29}$, $c_{46} = b_{14} + b_{30}$, $c_{47} = b_{20} + b_{31}$,
we obtain:
B 8 ( 2 + ) = c 32 c 33 c 34 c 35 c 36 c 37 c 38 c 39 c 33 c 32 c 35 c 34 c 37 c 36 c 39 c 38 c 40 c 41 c 42 c 43 c 44 c 45 c 46 c 47 c 41 c 40 c 43 c 42 c 45 c 44 c 47 c 46 c 46 c 47 c 44 c 45 c 42 c 43 c 40 c 41 c 47 c 46 c 45 c 44 c 43 c 42 c 41 c 40 c 38 c 39 c 36 c 37 c 34 c 35 c 32 c 33 c 39 c 38 c 37 c 36 c 35 c 34 c 33 c 32 .
Now, we introduce the following notation for the elements of matrix $\mathbf{B}_8^{(2-)}$ (15):
$c_{48} = b_{12} - b_{15}$, $c_{49} = b_{18} - b_{21}$, $c_{50} = b_5 - b_{24}$, $c_{51} = b_{28} - b_9$,
$c_{52} = b_{14} + b_{30}$, $c_{53} = b_{20} + b_{31}$, $c_{54} = b_{23} + b_{25}$, $c_{55} = b_{27} + b_{29}$,
$c_{56} = b_5 + b_{24}$, $c_{57} = b_9 + b_{28}$, $c_{58} = b_{12} + b_{15}$, $c_{59} = b_{18} + b_{21}$,
$c_{60} = b_{23} - b_{25}$, $c_{61} = b_{27} - b_{29}$, $c_{62} = b_{30} - b_{14}$, $c_{63} = b_{20} - b_{31}$,
we obtain:
B 8 ( 2 ) = c 48 c 49 c 50 c 51 c 52 c 53 c 54 c 55 c 49 c 48 c 51 c 50 c 53 c 52 c 55 c 54 c 56 c 57 c 58 c 59 c 60 c 61 c 62 c 63 c 57 c 56 c 59 c 58 c 61 c 60 c 63 c 62 c 62 c 63 c 60 c 61 c 58 c 59 c 56 c 57 c 63 c 62 c 61 c 60 c 59 c 58 c 57 c 56 c 54 c 55 c 52 c 53 c 50 c 51 c 48 c 49 c 55 c 54 c 53 c 52 c 51 c 50 c 49 c 48 .
All of the above matrices have the same internal structure. We can permute rows and columns using the π r = (5 1 2 7 4 0 3 6) and π c = (5 1 2 6 4 0 3 7) permutation rules, respectively. We obtain the following form:
$\mathbf{B}_8^{(\gamma)} = \mathbf{P}_8^{(r)} \hat{\mathbf{B}}_8^{(\gamma)} \mathbf{P}_8^{(c)}$,
where $\mathbf{B}_8^{(\gamma)}$, $\hat{\mathbf{B}}_8^{(\gamma)}$ are the corresponding items in the sets:
$\mathbf{B}_8^{(\gamma)} \in \bigl\{\mathbf{B}_8^{(0+)}, \mathbf{B}_8^{(0-)}, \mathbf{B}_8^{(1+)}, \mathbf{B}_8^{(1-)}, \mathbf{B}_8^{(2+)}, \mathbf{B}_8^{(2-)}\bigr\}$, $\quad \hat{\mathbf{B}}_8^{(\gamma)} \in \bigl\{\hat{\mathbf{B}}_8^{(0+)}, \hat{\mathbf{B}}_8^{(0-)}, \hat{\mathbf{B}}_8^{(1+)}, \hat{\mathbf{B}}_8^{(1-)}, \hat{\mathbf{B}}_8^{(2+)}, \hat{\mathbf{B}}_8^{(2-)}\bigr\}$
and
$\mathbf{P}_8^{(r)} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}$, $\quad \mathbf{P}_8^{(c)} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$.
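In software, the reordering by π_r = (5, 1, 2, 7, 4, 0, 3, 6) and π_c = (5, 1, 2, 6, 4, 0, 3, 7) is more naturally performed with index arrays than with explicit permutation matrices; the sketch below illustrates this, where the gather convention (which direction the permutation acts) is an assumption of the sketch and should be aligned with the definition of $\mathbf{P}_8^{(r)}$, $\mathbf{P}_8^{(c)}$ above.

```python
import numpy as np

pi_r = [5, 1, 2, 7, 4, 0, 3, 6]
pi_c = [5, 1, 2, 6, 4, 0, 3, 7]

B = np.random.randn(8, 8)       # stands in for any of the 8x8 matrices

# Gather convention: row i of the reordered matrix is row pi_r[i] of B,
# and likewise for columns.
B_hat = B[np.ix_(pi_r, pi_c)]

# The equivalent permutation matrices, if preferred:
P_r = np.eye(8)[pi_r]           # P_r @ B reorders rows
P_c = np.eye(8)[:, pi_c]        # B @ P_c reorders columns
assert np.allclose(B_hat, P_r @ B @ P_c)
```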
The matrices $\hat{\mathbf{B}}_8^{(\gamma)}$ (16) are calculated via the following equation:
$\hat{\mathbf{B}}_8^{(\gamma)} = \bigl(\mathbf{P}_8^{(r)}\bigr)^{-1} \mathbf{B}_8^{(\gamma)} \bigl(\mathbf{P}_8^{(c)}\bigr)^{-1}$
and have a standardized form (9) that reduces the number of multiplications. Thus, we can write:
$\hat{\mathbf{B}}_8^{(0+)} = \bigl(\mathbf{P}_8^{(r)}\bigr)^{-1} \mathbf{B}_8^{(0+)} \bigl(\mathbf{P}_8^{(c)}\bigr)^{-1} = \begin{bmatrix} \mathbf{B}_4^{(0)} & \mathbf{B}_4^{(1)} \\ \mathbf{B}_4^{(1)} & \mathbf{B}_4^{(0)} \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(0-)} = \bigl(\mathbf{P}_8^{(r)}\bigr)^{-1} \mathbf{B}_8^{(0-)} \bigl(\mathbf{P}_8^{(c)}\bigr)^{-1} = \begin{bmatrix} \mathbf{B}_4^{(2)} & \mathbf{B}_4^{(3)} \\ \mathbf{B}_4^{(3)} & \mathbf{B}_4^{(2)} \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(1+)} = \bigl(\mathbf{P}_8^{(r)}\bigr)^{-1} \mathbf{B}_8^{(1+)} \bigl(\mathbf{P}_8^{(c)}\bigr)^{-1} = \begin{bmatrix} \mathbf{B}_4^{(4)} & \mathbf{B}_4^{(5)} \\ \mathbf{B}_4^{(5)} & \mathbf{B}_4^{(4)} \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(1-)} = \bigl(\mathbf{P}_8^{(r)}\bigr)^{-1} \mathbf{B}_8^{(1-)} \bigl(\mathbf{P}_8^{(c)}\bigr)^{-1} = \begin{bmatrix} \mathbf{B}_4^{(6)} & \mathbf{B}_4^{(7)} \\ \mathbf{B}_4^{(7)} & \mathbf{B}_4^{(6)} \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(2+)} = \bigl(\mathbf{P}_8^{(r)}\bigr)^{-1} \mathbf{B}_8^{(2+)} \bigl(\mathbf{P}_8^{(c)}\bigr)^{-1} = \begin{bmatrix} \mathbf{B}_4^{(8)} & \mathbf{B}_4^{(9)} \\ \mathbf{B}_4^{(9)} & \mathbf{B}_4^{(8)} \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(2-)} = \bigl(\mathbf{P}_8^{(r)}\bigr)^{-1} \mathbf{B}_8^{(2-)} \bigl(\mathbf{P}_8^{(c)}\bigr)^{-1} = \begin{bmatrix} \mathbf{B}_4^{(10)} & \mathbf{B}_4^{(11)} \\ \mathbf{B}_4^{(11)} & \mathbf{B}_4^{(10)} \end{bmatrix}$,
where:
B 4 ( 0 ) = d 20 d 16 d 19 d 23 d 4 d 0 d 3 d 7 d 13 d 9 d 10 d 14 d 28 d 24 d 27 d 31 , B 4 ( 1 ) = d 21 d 17 d 18 d 22 d 5 d 1 d 2 d 6 d 12 d 8 d 11 d 15 d 29 d 25 d 26 d 30 ,
B 4 ( 2 ) = d 30 d 26 d 25 d 29 d 14 d 10 d 9 d 13 d 7 d 3 d 0 d 4 d 22 d 18 d 17 d 21 , B 4 ( 3 ) = d 31 d 27 d 24 d 28 d 15 d 11 d 8 d 12 d 6 d 2 d 1 d 5 d 23 d 19 d 16 d 20 ,
B 4 ( 4 ) = d 10 d 14 d 13 d 9 d 26 d 30 d 29 d 25 d 19 d 23 d 20 d 16 d 2 d 6 d 5 d 1 , B 4 ( 5 ) = d 11 d 15 d 12 d 8 d 27 d 31 d 28 d 24 d 18 d 22 d 21 d 17 d 3 d 7 d 4 d 0 ,
B 4 ( 6 ) = d 0 d 4 d 7 d 3 d 16 d 20 d 23 d 19 d 25 d 29 d 30 d 26 d 8 d 12 d 15 d 11 , B 4 ( 7 ) = d 1 d 5 d 6 d 2 d 17 d 21 d 22 d 18 d 24 d 28 d 31 d 27 d 9 d 13 d 14 d 10 ,
B 4 ( 8 ) = c 42 c 46 c 45 c 41 c 36 c 32 c 35 c 39 c 45 c 41 c 42 c 46 c 34 c 38 c 37 c 33 , B 4 ( 9 ) = c 43 c 47 c 44 c 40 c 37 c 33 c 34 c 38 c 44 c 40 c 43 c 47 c 35 c 39 c 36 c 32 ,
B 4 ( 10 ) = c 58 c 62 c 61 c 57 c 52 c 48 c 51 c 55 c 61 c 57 c 58 c 62 c 50 c 54 c 53 c 49 , B 4 ( 11 ) = c 59 c 63 c 60 c 56 c 53 c 49 c 50 c 54 c 60 c 56 c 59 c 63 c 51 c 55 c 52 c 48 .
We can use the factorization (9) and represent the above matrices in the form:
$\hat{\mathbf{B}}_8^{(0+)} = \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\bigl(\mathbf{B}_4^{(0)} + \mathbf{B}_4^{(1)}\bigr) & \mathbf{0}_4 \\ \mathbf{0}_4 & \frac{1}{2}\bigl(\mathbf{B}_4^{(0)} - \mathbf{B}_4^{(1)}\bigr) \end{bmatrix} \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(0-)} = \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\bigl(\mathbf{B}_4^{(2)} + \mathbf{B}_4^{(3)}\bigr) & \mathbf{0}_4 \\ \mathbf{0}_4 & \frac{1}{2}\bigl(\mathbf{B}_4^{(2)} - \mathbf{B}_4^{(3)}\bigr) \end{bmatrix} \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(1+)} = \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\bigl(\mathbf{B}_4^{(4)} + \mathbf{B}_4^{(5)}\bigr) & \mathbf{0}_4 \\ \mathbf{0}_4 & \frac{1}{2}\bigl(\mathbf{B}_4^{(4)} - \mathbf{B}_4^{(5)}\bigr) \end{bmatrix} \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(1-)} = \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\bigl(\mathbf{B}_4^{(6)} + \mathbf{B}_4^{(7)}\bigr) & \mathbf{0}_4 \\ \mathbf{0}_4 & \frac{1}{2}\bigl(\mathbf{B}_4^{(6)} - \mathbf{B}_4^{(7)}\bigr) \end{bmatrix} \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(2+)} = \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\bigl(\mathbf{B}_4^{(8)} + \mathbf{B}_4^{(9)}\bigr) & \mathbf{0}_4 \\ \mathbf{0}_4 & \frac{1}{2}\bigl(\mathbf{B}_4^{(8)} - \mathbf{B}_4^{(9)}\bigr) \end{bmatrix} \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix}$,
$\hat{\mathbf{B}}_8^{(2-)} = \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\bigl(\mathbf{B}_4^{(10)} + \mathbf{B}_4^{(11)}\bigr) & \mathbf{0}_4 \\ \mathbf{0}_4 & \frac{1}{2}\bigl(\mathbf{B}_4^{(10)} - \mathbf{B}_4^{(11)}\bigr) \end{bmatrix} \begin{bmatrix} \mathbf{I}_4 & \mathbf{I}_4 \\ \mathbf{I}_4 & -\mathbf{I}_4 \end{bmatrix}$,
where
B 4 ( 0 ) + B 4 ( 1 ) = d 20 + d 21 d 16 + d 17 d 19 d 18 d 22 + d 23 d 4 + d 5 d 0 + d 1 d 3 d 2 d 6 + d 7 d 12 + d 13 d 8 d 9 d 10 + d 11 d 14 d 15 d 29 d 28 d 24 + d 25 d 26 + d 27 d 31 d 30 ,
B 4 ( 0 ) B 4 ( 1 ) = d 20 d 21 d 16 d 17 d 18 + d 19 d 23 d 22 d 4 d 5 d 0 d 1 d 2 + d 3 d 7 d 6 d 13 d 12 d 8 d 9 d 10 d 11 d 14 + d 15 d 28 d 29 d 24 d 25 d 27 d 26 d 30 + d 31 ,
B 4 ( 2 ) + B 4 ( 3 ) = d 30 + d 31 d 27 d 26 d 24 d 25 d 28 d 29 d 14 + d 15 d 10 d 11 d 8 d 9 d 13 d 12 d 6 d 7 d 2 d 3 d 1 d 0 d 5 d 4 d 23 d 22 d 18 + d 19 d 16 d 17 d 20 d 21 ,
B 4 ( 2 ) B 4 ( 3 ) = d 30 d 31 d 26 d 27 d 24 d 25 d 28 d 29 d 14 d 15 d 10 + d 11 d 8 d 9 d 12 + d 13 d 6 d 7 d 2 d 3 d 0 d 1 d 4 d 5 d 22 d 23 d 18 d 19 d 16 d 17 d 20 d 21 ,
B 4 ( 4 ) + B 4 ( 5 ) = d 10 d 11 d 14 + d 15 d 13 d 12 d 8 d 9 d 27 d 26 d 30 + d 31 d 28 d 29 d 24 d 25 d 18 + d 19 d 23 d 22 d 20 d 21 d 16 d 17 d 2 d 3 d 6 d 7 d 5 d 4 d 1 d 0 ,
B 4 ( 4 ) B 4 ( 5 ) = d 10 + d 11 d 14 d 15 d 12 + d 13 d 8 d 9 d 26 d 27 d 30 d 31 d 28 d 29 d 24 d 25 d 19 d 18 d 22 + d 23 d 20 + d 21 d 16 + d 17 d 3 d 2 d 6 + d 7 d 4 + d 5 d 0 + d 1 ,
B 4 ( 6 ) + B 4 ( 7 ) = d 0 + d 1 d 4 + d 5 d 6 + d 7 d 3 d 2 d 16 + d 17 d 20 + d 21 d 22 + d 23 d 19 d 18 d 24 + d 25 d 29 d 28 d 31 d 30 d 26 + d 27 d 8 d 9 d 12 + d 13 d 14 d 15 d 10 + d 11 ,
B 4 ( 6 ) B 4 ( 7 ) = d 0 d 1 d 4 d 5 d 7 d 6 d 2 + d 3 d 16 d 17 d 20 d 21 d 23 d 22 d 18 + d 19 d 25 d 24 d 28 + d 29 d 30 d 31 d 26 d 27 d 8 + d 9 d 12 d 13 d 14 d 15 d 11 d 10 ,
B 4 ( 8 ) + B 4 ( 9 ) = c 42 c 43 c 46 c 47 c 44 c 45 c 41 c 40 c 36 + c 37 c 32 c 33 c 34 c 35 c 38 + c 39 c 44 c 45 c 40 + c 41 c 42 + c 43 c 47 c 46 c 34 + c 35 c 38 c 39 c 37 c 36 c 32 c 33 ,
B 4 ( 8 ) B 4 ( 9 ) = c 42 + c 43 c 47 c 46 c 44 c 45 c 40 + c 41 c 36 c 37 c 33 c 32 c 34 c 35 c 39 c 38 c 44 c 45 c 41 c 40 c 42 c 43 c 46 c 47 c 34 c 35 c 38 + c 39 c 36 + c 37 c 32 c 33 ,
B 4 ( 10 ) + B 4 ( 11 ) = c 58 c 59 c 63 c 62 c 60 c 61 c 56 c 57 c 52 c 53 c 48 + c 49 c 50 + c 51 c 54 c 55 c 60 + c 61 c 56 + c 57 c 58 c 59 c 62 + c 63 c 51 c 50 c 54 c 55 c 53 c 52 c 48 c 49 ,
B 4 ( 10 ) B 4 ( 11 ) = c 59 c 58 c 62 c 63 c 60 c 61 c 56 c 57 c 53 c 52 c 48 c 49 c 51 c 50 c 54 c 55 c 61 c 60 c 57 c 56 c 58 + c 59 c 62 c 63 c 50 c 51 c 55 c 54 c 52 + c 53 c 48 c 49 .
Combining the calculations for all of the above matrices into a single procedure, we finally obtain:
$\mathbf{Y}_{32 \times 1} = \mathbf{M}_{32}^{(r)} \mathbf{T}_{32 \times 48} \mathbf{S}_{48}^{(r)} \mathbf{W}_{48}^{(1)} \mathbf{P}_{48}^{(r)} \mathbf{W}_{48}^{(2)} \hat{\mathbf{B}}_{48} \mathbf{W}_{48}^{(2)} \mathbf{P}_{48}^{(c)} \mathbf{W}_{48}^{(1)} \mathbf{S}_{48}^{(c)} \mathbf{T}_{48 \times 32} \mathbf{M}_{32}^{(c)} \mathbf{X}_{32 \times 1}$,
where:
$\hat{\mathbf{B}}_{48} = \mathrm{quasidiag}\Bigl(\tfrac{1}{4}\bigl(\mathbf{B}_4^{(0)} + \mathbf{B}_4^{(1)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(0)} - \mathbf{B}_4^{(1)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(2)} + \mathbf{B}_4^{(3)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(2)} - \mathbf{B}_4^{(3)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(4)} + \mathbf{B}_4^{(5)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(4)} - \mathbf{B}_4^{(5)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(6)} + \mathbf{B}_4^{(7)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(6)} - \mathbf{B}_4^{(7)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(8)} + \mathbf{B}_4^{(9)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(8)} - \mathbf{B}_4^{(9)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(10)} + \mathbf{B}_4^{(11)}\bigr), \tfrac{1}{4}\bigl(\mathbf{B}_4^{(10)} - \mathbf{B}_4^{(11)}\bigr)\Bigr)$,
$\mathbf{P}_{48}^{(r)} = \mathbf{I}_6 \otimes \mathbf{P}_8^{(r)}$,
$\mathbf{P}_{48}^{(c)} = \mathbf{I}_6 \otimes \mathbf{P}_8^{(c)}$,
$\mathbf{W}_{48}^{(2)} = \mathbf{I}_6 \otimes \mathbf{H}_2 \otimes \mathbf{I}_4$.
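Conceptually, procedure (17) is evaluated from right to left: all outer factors are permutations, sign changes, butterflies and T-type matrices that cost only additions and data moves, while the block-diagonal core $\hat{\mathbf{B}}_{48}$ consists of twelve 4 × 4 blocks, which is where all 12 · 4² = 192 real multiplications occur. The sketch below shows this evaluation order in generic form; the helper structure and names are placeholders, not the paper's notation.

```python
import numpy as np

def apply_factored_product(x, pre_factors, core_blocks, post_factors):
    """Evaluate y = post_factors * blockdiag(core_blocks) * pre_factors * x.

    pre_factors / post_factors : lists of structured matrices (permutations,
        sign changes, butterflies, T-type matrices) listed left to right as
        they appear in the formula; they contribute only additions,
        subtractions and data moves.
    core_blocks : the twelve 4x4 blocks of the block-diagonal core -- the
        only place where real multiplications occur (12 * 16 = 192 of them).
    """
    for F in reversed(pre_factors):      # rightmost factor acts first
        x = F @ x
    x = np.concatenate([Bk @ x[4 * k: 4 * (k + 1)]
                        for k, Bk in enumerate(core_blocks)])
    for F in reversed(post_factors):
        x = F @ x
    return x
```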
Figure 1 shows a data flow diagram describing the new algorithm for the computation of the product of Kaluza numbers (17). In this paper, the data flow diagram is oriented from left to right. Straight lines in the figure denote the operations of data transfer. Points where lines converge denote summation, and the dotted lines indicate the subtraction operation. We deliberately use plain lines without arrows so as not to clutter the picture. The rectangles indicate matrix-vector multiplications, with the matrices inscribed inside the rectangles.

4. Evaluation of Computational Complexity

We will now calculate how many multiplications and additions of real numbers are required for the implementation of the new algorithm and compare this with the number of operations required both for the direct computation of the matrix-vector product in Equation (1) and for our previous algorithm [25]. The number of real multiplications required by the new algorithm is 192. Thus, using the proposed algorithm, the number of real multiplications needed to calculate the Kaluza number product is significantly reduced. The number of real additions required by our algorithm is 384, so the direct computation of the Kaluza number product requires 608 more additions than the proposed algorithm. In total, our proposed algorithm saves 832 multiplications and 608 additions of real numbers compared with the direct method, and the total number of arithmetic operations for the proposed algorithm is approximately 71.4% smaller than that of the direct computation. The previously proposed algorithm [25] calculates the same result using 512 multiplications and 576 additions of real numbers. Thus, our proposed algorithm saves 62.5% of the multiplications and 33.3% of the additions of real numbers compared with our previous algorithm. Hence, the total number of arithmetic operations for the newly proposed algorithm is approximately 47% smaller than that of our previous algorithm.
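The operation counts quoted above can be verified with a few lines of arithmetic (a simple check, not part of the original text):

```python
# (multiplications, additions)
direct   = (1024, 992)   # dense 32x32 matrix-vector product
previous = (512, 576)    # algorithm of [25]
proposed = (192, 384)    # this paper

saved_mul = direct[0] - proposed[0]                      # 832
saved_add = direct[1] - proposed[1]                      # 608
total_reduction = 1 - sum(proposed) / sum(direct)        # ~0.714
vs_previous = 1 - sum(proposed) / sum(previous)          # ~0.47
print(saved_mul, saved_add, round(total_reduction, 3), round(vs_previous, 3))
```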

5. Conclusions

We presented a new effective algorithm for calculating the product of two Kaluza numbers. The use of this algorithm reduces the computational complexity of multiplications of Kaluza numbers, thus reducing implementation complexity and leading to a high-speed resource-effective architecture suitable for parallel implementation on VLSI platforms. Additionally, we note that the total number of arithmetic operations in the new algorithm is less than the total number of operations in the compared algorithms. Therefore, the proposed algorithm is better than the compared algorithms, even in terms of its software implementation on a general-purpose computer.
The proposed algorithm can be used in metacognitive neural networks that use Kaluza numbers for data representation and processing. The effect in this case is achieved by using non-commutative finite groups based on the properties of the hypercomplex algebra [24]. When using Kaluza numbers, the rule for generating the elements of the group is set in this case, as well as the rule for performing the group operation of multiplication. Such a system can contain two components: a neural network based on Kaluza numbers, which represents the cognitive component, and a metacognitive component, which serves to self-regulate the learning algorithm. At each stage, the metacognitive component decides how and when learning takes place. The algorithm removes unnecessary samples and keeps only those that are used. This decision is determined by the magnitude and the 31 phases of the Kaluza number. However, these matters are beyond the scope of this article and require more detailed research.

Author Contributions

Conceptualization, A.C.; methodology, A.C., G.C. and J.P.P.; validation, J.P.P.; formal analysis, A.C. and J.P.P.; writing—original draft preparation, A.C. and J.P.P.; writing—review and editing, A.C. and J.P.P.; visualization, A.C. and J.P.P.; supervision, A.C., G.C. and J.P.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kantor, I.L.; Solodovnikov, A.S. Hypercomplex Numbers: An Elementary Introduction to Algebras; Springer: Berlin/Heidelberg, Germany, 1989. [Google Scholar]
  2. Alfsmann, D. On families of 2N-dimensional hypercomplex algebras suitable for digital signal processing. In Proceedings of the 2006 14th European Signal Processing Conference, Florence, Italy, 4–8 September 2006; pp. 1–4. [Google Scholar]
  3. Alfsmann, D.; Göckler, H.G.; Sangwine, S.J.; Ell, T.A. Hypercomplex algebras in digital signal processing: Benefits and drawbacks. In Proceedings of the 2007 15th European Signal Processing Conference, Poznan, Poland, 3–7 September 2007; pp. 1322–1326. [Google Scholar]
  4. Bayro-Corrochano, E. Multi-resolution image analysis using the quaternion wavelet transform. Numer. Algorithms 2005, 39, 35–55. [Google Scholar] [CrossRef]
  5. Belfiore, J.C.; Rekaya, G. Quaternionic lattices for space-time coding. In Proceedings of the 2003 IEEE Information Theory Workshop (Cat. No. 03EX674), Paris, France, 31 March–4 April 2003; pp. 267–270. [Google Scholar]
  6. Bulow, T.; Sommer, G. Hypercomplex signals-a novel extension of the analytic signal to the multidimensional case. IEEE Trans. Signal Process. 2001, 49, 2844–2852. [Google Scholar] [CrossRef] [Green Version]
  7. Calderbank, R.; Das, S.; Al-Dhahir, N.; Diggavi, S. Construction and analysis of a new quaternionic space-time code for 4 transmit antennas. Commun. Inf. Syst. 2005, 5, 97–122. [Google Scholar]
  8. Ertuğ, Ö. Communication over Hypercomplex Kähler Manifolds: Capacity of Multidimensional-MIMO Channels. Wirel. Person. Commun. 2007, 41, 155–168. [Google Scholar] [CrossRef]
  9. Le Bihan, N.; Sangwine, S. Hypercomplex analytic signals: Extension of the analytic signal concept to complex signals. In Proceedings of the 15th European Signal Processing Conference (EUSIPCO-2007), Poznan, Poland, 3–7 September 2007; p. A5P–H. [Google Scholar]
  10. Moxey, E.C.; Sangwine, S.J.; Ell, T.A. Hypercomplex correlation techniques for vector images. IEEE Trans. Signal Process. 2003, 51, 1941–1953. [Google Scholar] [CrossRef]
  11. Comminiello, D.; Lella, M.; Scardapane, S.; Uncini, A. Quaternion convolutional neural networks for detection and localization of 3d sound events. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8533–8537. [Google Scholar]
  12. de Castro, F.Z.; Valle, M.E. A broad class of discrete-time hypercomplex-valued Hopfield neural networks. Neural Netw. 2020, 122, 54–67. [Google Scholar] [CrossRef] [Green Version]
  13. Gaudet, C.J.; Maida, A.S. Deep quaternion networks. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar]
  14. Isokawa, T.; Kusakabe, T.; Matsui, N.; Peper, F. Quaternion neural network and its application. In Proceedings of the International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Oxford, UK, 3–5 September 2003; Springer: Berlin/Heidelberg, Germany, 2003; pp. 318–324. [Google Scholar]
  15. Liu, Y.; Zheng, Y.; Lu, J.; Cao, J.; Rutkowski, L. Constrained quaternion-variable convex optimization: A quaternion-valued recurrent neural network approach. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1022–1035. [Google Scholar] [CrossRef]
  16. Parcollet, T.; Morchid, M.; Linarès, G. A survey of quaternion neural networks. Artif. Intell. Rev. 2020, 53, 2957–2982. [Google Scholar] [CrossRef]
  17. Saoud, L.S.; Ghorbani, R.; Rahmoune, F. Cognitive quaternion valued neural network and some applications. Neurocomputing 2017, 221, 85–93. [Google Scholar] [CrossRef]
  18. Saoud, L.S.; Ghorbani, R. Metacognitive octonion-valued neural networks as they relate to time series analysis. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 539–548. [Google Scholar] [CrossRef] [PubMed]
  19. Vecchi, R.; Scardapane, S.; Comminiello, D.; Uncini, A. Compressing deep-quaternion neural networks with targeted regularisation. CAAI Trans. Intell. Technol. 2020, 5, 172–176. [Google Scholar] [CrossRef]
  20. Vieira, G.; Valle, M.E. A General Framework for Hypercomplex-valued Extreme Learning Machines. arXiv 2021, arXiv:2101.06166. [Google Scholar]
  21. Wu, J.; Xu, L.; Wu, F.; Kong, Y.; Senhadji, L.; Shu, H. Deep octonion networks. Neurocomputing 2020, 397, 179–191. [Google Scholar] [CrossRef]
  22. Zhu, X.; Xu, Y.; Xu, H.; Chen, C. Quaternion convolutional neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 631–647. [Google Scholar]
  23. Bojesomo, A.; Liatsis, P.; Marzouqi, H.A. Traffic flow prediction using Deep Sedenion Networks. arXiv 2020, arXiv:2012.03874. [Google Scholar]
  24. Saoud, L.S.; Al-Marzouqi, H. Metacognitive sedenion-valued neural network and its learning algorithm. IEEE Access 2020, 8, 144823–144838. [Google Scholar] [CrossRef]
  25. Cariow, A.; Cariowa, G.; Łentek, R. An algorithm for multiplication of Kaluza numbers. arXiv 2015, arXiv:1505.06425. [Google Scholar]
  26. Silvestrov, V.V. Number Systems. Soros Educ. J. 1998, 8, 121–127. [Google Scholar]
  27. Ţariov, A. Algorithmic Aspects of Computing Rationalization in Digital Signal Processing. (Algorytmiczne Aspekty Racjonalizacji Obliczeń w Cyfrowym Przetwarzaniu Sygnałów); West Pomeranian University Press: Szczecin, Poland, 2012. (In Polish) [Google Scholar]
  28. Ţariov, A. Strategies for the synthesis of fast algorithms for the computation of the matrix-vector products. J. Signal Process. Theory Appl. 2014, 3, 1–19. [Google Scholar]
Figure 1. A data flow diagram for the proposed algorithm.
Table 1. Multiplication rules of Kaluza numbers for $e_0, e_1, \ldots, e_{15}$ and $e_0, e_1, \ldots, e_{15}$ (elements $e_i$ denoted by their subscripts, i.e., $i = e_i$).
ine×0123456789101112131415
ine00123456789101112131415
11067892345161718192021
22−60101112−1−16−17−18345222324
33−7−10−01314161−19−202−22−23−4−525
44−8−11−13−01517191−21222−243−25−5
55−9−12−14−15−01820211232422534
66−21161718−0−10−11−12789262728
77−3−16−11920100−13−146−26−27−8−929
88−4−17−19−12111130−15266−287−29−9
99−5−18−20−21−11214150272862978
101016−3−22223−7−626270−13−14−11−1230
111117−4−22−224−8−26−628130−1510−30−12
121218−5−23−24−2−9−27−28−614150301011
131319224−325268−72911−1030−015−14
141420235−25−3279−29−712−30−10−15−013
15152124255−428299−83012−1114−13−1
Table 2. Multiplication rules of Kaluza numbers for $e_0, e_1, \ldots, e_{15}$ and $e_{16}, e_{17}, \ldots, e_{31}$ (elements $e_i$ denoted by their subscripts, i.e., $i = e_i$).
ine×16171819202122232425262728293031
ine016171819202122232425262728293031
110111213141526272829222324253130
2−7−8−9−26−27−28−13141530−19−20−21−3125−29
3−6262789−291112−30−15−17−18312124−28
4−26−628−7299−1030121416−31−18−20−2327
5−27−28−6−29−7−8−30−10−11−133116171922−26
6−3−4−5−22−23−2419202131−13−14−15−3029−25
7−2222345−251718−31−21−11−12301528−24
8−22−224−3255−1631182010−30−12−14−2723
9−23−24−2−25−3−4−31−16−17−193010111326−22
101−19−20−17−183145−25−2489−29−281521
11191−2116−31−18−325523−729927−14−20
1220211311617−25−3−4−22−29−7−8−261319
1317−1631−121−20−224−23−5−628−27−9−12−18
1418−31−16−21−119−24−2224−28−62681117
153118−1720−19−123−22−2−327−26−6−7−10−16
Table 3. Multiplication rules of Kaluza numbers for $e_{16}, e_{17}, \ldots, e_{31}$ and $e_0, e_1, \ldots, e_{15}$ (elements $e_i$ denoted by their subscripts, i.e., $i = e_i$).
ine×0123456789101112131415
ine161610−7−62627−3−222231−19−20−17−1831
171711−8−26−628−4−22−224191−2116−31−18
181812−9−27−28−6−5−23−24−220211311617
191913268−729224−32517−1631−121−20
202014279−29−7235−25−318−31−16−21−119
21211528299−824255−43118−1720−19−1
2222−261311−1030−19−1716−314−325−224−23
2323−271412−30−10−20−1831165−25−3−24−222
2424−28153012−11−21−31−1817255−423−22−2
2525−29−30−1514−133121−201924−2322−54−3
2626−221917−1631−13−1110−308−729−628−27
2727−232018−31−16−14−1230109−29−7−28−626
2828−24213118−17−15−30−1211299−827−26−6
2929−25−31−2120−193015−141328−2726−98−7
303031−25−2423−22−29−2827−2615−1413−1211−10
313130−29−2827−26−25−2423−2221−2019−1817−16
Table 4. Multiplication rules of Kaluza numbers for $e_{16}, e_{17}, \ldots, e_{31}$ and $e_{16}, e_{17}, \ldots, e_{31}$ (elements $e_i$ denoted by their subscripts, i.e., $i = e_i$).
ine×16171819202122232425262728293031
ine160−13−14−11−123089−29−2845−25−242115
17130−1510−30−12−729927−325523−20−14
1814150301011−29−7−8−26−25−3−4−221913
1911−1030−015−14−628−27−9−224−23−5−18−12
2012−30−10−15−013−28−6268−24−22241711
213012−1114−13−027−26−6−723−22−2−3−16−10
22−87−296−2827−015−14−121−212018−59
23−9297286−26−15−01311211−19−174−8
24−29−98−2726614−13−0−10−2019116−37
25−2827−269−8712−11100−1817−16−1−26
26−43−252−2423−121−20−180−151412−95
27−5253242−22−21−11917150−13−118−4
28−25−54−2322220−19−1−16−1413010−73
29−2423−225−4318−17161−1211−10−0−62
3021−2019−1817−165−4329−876−0−1
3115−1413−1211−109−8765−432−1−1