Article

Quaternionic Multilayer Perceptron with Local Analyticity

by Teijiro Isokawa 1,*, Haruhiko Nishimura 2 and Nobuyuki Matsui 1
1 Graduate School of Engineering, University of Hyogo, 2167 Shosha, Himeji, Hyogo 671-2280, Japan
2 Graduate School of Applied Informatics, University of Hyogo, 7-1-28 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
* Author to whom correspondence should be addressed.
Information 2012, 3(4), 756-770; https://doi.org/10.3390/info3040756
Submission received: 11 September 2012 / Revised: 13 November 2012 / Accepted: 20 November 2012 / Published: 28 November 2012
(This article belongs to the Special Issue Brain like Computing, Communication and Machines)

Abstract: A multilayer perceptron type neural network is presented and analyzed in this paper. All neuronal parameters, such as inputs, outputs, action potentials and connection weights, are encoded by quaternions, a class of hypercomplex number system. A local analyticity condition is imposed on the activation function used to update the neurons' states, in order to construct a learning algorithm for this network. An error back-propagation algorithm is introduced for modifying the connection weights of the network.

1. Introduction

Processing multi-dimensional data is an important problem for artificial neural networks. A single neuron can take only one real value as its input, so a network must be configured with several neurons in order to accept multi-dimensional data. This type of configuration is sometimes unnatural in engineering applications of artificial neural networks, such as the processing of acoustic signals or of coordinates in the plane. Complex number systems have therefore been utilized to represent two-dimensional data elements as single entities. The application of complex numbers to neural networks has been extensively investigated, as summarized in the references [1,2,3].
Though complex values can treat two-dimensional data elements as single entities, how should data with more than two dimensions be treated in artificial neural networks? Although this problem can of course be solved by using several real-valued or complex-valued neurons, it is useful to introduce number systems with higher dimensions, the so-called hypercomplex number systems.
The quaternion is a four-dimensional hypercomplex number system introduced by Hamilton [4,5]. It has been employed extensively in several fields, such as modern mathematics, physics, the control of satellites, and computer graphics [6,7,8]. One of the benefits provided by quaternions is that affine transformations of geometric figures in three-dimensional space, especially spatial rotations, can be represented compactly and efficiently. The application of quaternions to neural networks has recently been explored in an effort to represent high-dimensional information, such as color or three-dimensional body coordinates, naturally by a single quaternionic neuron rather than by several complex-valued or real-valued neurons.
Thus, there has been a growing number of studies on the use of quaternions in neural networks. Multilayer perceptron (MLP) models have been developed in [9,10,11,12,13,14]. Quaternionic MLP models have been applied to several engineering problems, such as control problems [10], color image compression [12], color night vision [15,16], and the prediction of the outputs of chaotic circuits and of winds in three-dimensional space [13,14]. Other types of network models have also been explored, such as the computational ability of a single quaternionic neuron [17] and the existence condition for an energy function in continuous-time, continuous-state recurrent networks [18]. Several types of quaternionic Hopfield-type networks with discrete-time dynamics have been studied, with bipolar states [19,20], continuous states [21,22], and multistate neurons based on a phase representation [23,24]. Learning schemes for these networks have also been proposed [25].
One of the difficulties in constructing neural networks in the quaternionic domain is the introduction of suitable activation functions for updating the neurons' states. A typical choice is the so-called "split"-type function, in which a real-valued function is applied to each component of a quaternionic value separately [10]. Real-valued sigmoidal and hyperbolic tangent functions, which are analytic (differentiable) functions, are often used for this purpose. However, the split-type quaternionic function is not an appropriate activation function because it lacks analyticity. It is therefore necessary to define other types of differentiable functions in order to construct learning schemes such as the error back-propagation algorithm. The Cauchy-Riemann-Fueter (CRF) equation defines the analyticity condition for quaternionic functions, corresponding to the Cauchy-Riemann equations for complex-valued functions. The functions satisfying the CRF equation turn out to be only linear functions or constants; it is therefore impossible to introduce non-linearity into the updates of the neurons' states in this way.
Recently, another class of analyticity for quaternionic functions has been developed [26,27,28]. Called "local analyticity", this condition is derived at a quaternionic point with respect to its local coordinates, rather than in the quaternionic space with a global coordinate system. The derivation of the local analyticity condition shows that a quaternion in the local coordinate system is isomorphic to a complex number and can thus be treated as a complex value. A neural network with a locally analytic activation function was first proposed and analyzed as an MLP-type network in [13,14], where it was shown through several applications to outperform a network with a split-type activation function. Analytic activation functions have also been applied to quaternionic neural networks of Hopfield type [22]. In that work, the local analyticity condition is constructed following [28], and stability conditions are derived for the case in which a quaternionic tanh function is used as the activation function; the complex-valued tanh function used in [29] can thereby be employed as a quaternionic function.
This paper presents an MLP-type quaternionic neural network with a locally analytic activation function. All variables in this network, such as inputs, outputs, action potentials, and connection weights, are encoded by quaternions. A learning scheme, a quaternionic equivalent of the error back-propagation algorithm, is presented and theoretically explored. The derivation of the learning scheme in this paper adopts the Wirtinger calculus [30], originally developed in complex analysis, in which a quaternionic value and its conjugate are treated as independent of each other. This calculus makes the derivation more straightforward than the conventional, Cartesian description.

2. Quaternionic Algebra

2.1. Definition of Quaternion

Quaternions form a class of hypercomplex numbers consisting of a real number and three imaginary units, i, j, and k. Formally, a quaternion is defined as a vector x in a four-dimensional vector space,
x = x^{(e)} + x^{(i)} i + x^{(j)} j + x^{(k)} k    (1)
where x^{(e)}, x^{(i)}, x^{(j)}, and x^{(k)} are real numbers. The division ring of quaternions, H, constitutes a four-dimensional vector space over the real numbers with bases 1, i, j, and k. Equation (1) can also be written in 4-tuple or 2-tuple notation as
x = (x^{(e)}, x^{(i)}, x^{(j)}, x^{(k)}) = (x^{(e)}, \vec{x})    (2)
where \vec{x} = (x^{(i)}, x^{(j)}, x^{(k)}). In this representation, x^{(e)} is the scalar part of x, and \vec{x} forms the vector part. The quaternion conjugate is defined as
x^* = (x^{(e)}, -\vec{x}) = x^{(e)} - x^{(i)} i - x^{(j)} j - x^{(k)} k    (3)
Quaternion bases satisfy the following identities,
i^2 = j^2 = k^2 = ijk = -1    (4)
ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j    (5)
known as the Hamilton rule. From these rules, it follows immediately that multiplication of quaternions is not commutative.
Next, we define the operations between two quaternions p = (p^{(e)}, \vec{p}) = (p^{(e)}, p^{(i)}, p^{(j)}, p^{(k)}) and q = (q^{(e)}, \vec{q}) = (q^{(e)}, q^{(i)}, q^{(j)}, q^{(k)}). The addition and subtraction of quaternions are defined in the same manner as for complex numbers or vectors, i.e.,
p + q = (p^{(e)} + q^{(e)}, \vec{p} + \vec{q})    (6)
p - q = (p^{(e)} - q^{(e)}, \vec{p} - \vec{q})    (7)
The product of p and q is determined by Equation (5) as
pq = (p^{(e)} q^{(e)} - \vec{p} \cdot \vec{q}, \; p^{(e)} \vec{q} + q^{(e)} \vec{p} + \vec{p} \times \vec{q})    (8)
where \vec{p} \cdot \vec{q} and \vec{p} \times \vec{q} denote the dot and cross products, respectively, between the three-dimensional vectors \vec{p} and \vec{q}. The conjugate of a product is given by
(pq)^* = q^* p^*    (9)
The quaternion norm of x, denoted by |x|, is defined as
|x| = \sqrt{x x^*} = \sqrt{(x^{(e)})^2 + (x^{(i)})^2 + (x^{(j)})^2 + (x^{(k)})^2}    (10)
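For illustration, the algebra of Equations (1)-(10) can be sketched in a few lines of Python; the representation of quaternions as 4-tuples and the helper names (qadd, qconj, qmul, qnorm) are ours and not part of the paper.

```python
# A minimal sketch of the quaternion algebra in Equations (1)-(10); quaternions are
# plain tuples (x_e, x_i, x_j, x_k) and the helper names are illustrative only.
import math

def qadd(p, q):          # Equation (6)
    return tuple(a + b for a, b in zip(p, q))

def qconj(x):            # Equation (3)
    e, i, j, k = x
    return (e, -i, -j, -k)

def qmul(p, q):          # Hamilton product, Equation (8)
    pe, pi, pj, pk = p
    qe, qi, qj, qk = q
    return (pe*qe - pi*qi - pj*qj - pk*qk,
            pe*qi + pi*qe + pj*qk - pk*qj,
            pe*qj - pi*qk + pj*qe + pk*qi,
            pe*qk + pi*qj - pj*qi + pk*qe)

def qnorm(x):            # Equation (10)
    return math.sqrt(sum(c * c for c in x))

# Hamilton rules, Equation (5): ij = k but ji = -k, so multiplication is not commutative.
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
assert qmul(i, j) == (0, 0, 0, 1) and qmul(j, i) == (0, 0, 0, -1)
# Conjugate of a product, Equation (9): (pq)* = q* p*.
p, q = (1, 2, -1, 3), (2, -1, 4, 1)
assert qconj(qmul(p, q)) == qmul(qconj(q), qconj(p))
```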

2.2. Quaternionic Analyticity

It is important to introduce an analytic (differentiable) function to serve as the activation function in the neural network. This section describes the analyticity required of functions in the quaternionic domain, as a basis for constructing activation functions for quaternionic neural networks.
A quaternionic function f is differentiable at x if the following limit exists, independently of the direction along which \Delta x approaches zero:

f'(x) = \lim_{\Delta x \to 0} \left[ f(x + \Delta x) - f(x) \right] (\Delta x)^{-1}    (11)
The analytic condition for the quaternionic function, called the Cauchy-Riemann-Fueter (CRF) equation, yields:
\frac{\partial f}{\partial x^{(e)}} + i \frac{\partial f}{\partial x^{(i)}} + j \frac{\partial f}{\partial x^{(j)}} + k \frac{\partial f}{\partial x^{(k)}} = 0    (12)
This is an extension of the Cauchy-Riemann (CR) equations defined for the complex domain. However, only linear functions and constants satisfy the CRF equation [14,26,27].
An alternative approach to assure analyticity in the quaternionic domain has been explored in [26,27,28]. This approach is called local analyticity and is distinguished from the standard analyticity, i.e., global analyticity. In the following, we introduce local derivatives with Wirtinger representation and analytic conditions for quaternionic functions, with reference to [28].
A quaternion x can be alternatively represented as:
x = x^{(e)} + \vec{x} = x^{(e)} + u_x r    (13)
r = |\vec{x}| = \sqrt{(x^{(i)})^2 + (x^{(j)})^2 + (x^{(k)})^2}    (14)
u_x = \vec{x} / |\vec{x}|    (15)
From the definition in Equation (15), we deduce that u_x^2 = -1. If u_x commutes with a differential of x, then the system built on u_x can be regarded as locally isomorphic to the complex number system.
A quaternionic differential of x, denoted by dx = (dx^{(e)}, dx^{(i)}, dx^{(j)}, dx^{(k)}), can be decomposed as:
dx = dx_{\|} + dx_{\perp}    (16)
where,
dx_{\|} = \frac{1}{2}\left(dx - u_x\, dx\, u_x\right), \qquad dx_{\perp} = \frac{1}{2}\left(dx + u_x\, dx\, u_x\right)
Then, the following relations hold:
u_x\, dx_{\|} = dx_{\|}\, u_x, \qquad u_x\, dx_{\perp} = -\, dx_{\perp}\, u_x
When we set dx_{\perp} = 0, i.e., dx + u_x\, dx\, u_x = 0, it follows that u_x\, dx = dx\, u_x. This leads to u_x \times d\vec{x} = 0, because u_x is a quaternion without a real part. Thus, u_x and d\vec{x} are parallel to each other, so that d\vec{x} = u_x \delta, where \delta is a real-valued constant. From Equation (14), it follows that
r^2 = \vec{x} \cdot \vec{x}
Then,
r\, dr = \vec{x} \cdot d\vec{x}
Considering \vec{x} = u_x r and d\vec{x} = u_x \delta, we obtain
r\, dr = (u_x r) \cdot (u_x \delta) = r\, \delta
Hence, \delta = dr is derived and d\vec{x} = u_x\, dr is obtained, so that dx is represented as dx = dx^{(e)} + u_x\, dr.
Figure 1 shows a schematic coordinate system for defining a local complex plane. The component k is omitted (x^{(k)} = 0) in this figure because a four-dimensional vector space cannot be depicted directly. In this example, for a given quaternion x, its unit vector u_x is defined in the i-j plane. A complex plane is then defined by the span of the component x^{(e)} (real axis) and the axis along u_x (with coordinate r) in the quaternionic space, and the analyticity condition is imposed in this plane.
Figure 1. A schematic illustration of the local complex plane in a quaternionic space, where the component k is omitted for simplicity.
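As a concrete illustration of these local coordinates (a sketch of ours, with illustrative names), the following code extracts x^{(e)}, r and u_x from a quaternion and checks that u_x plays the role of an imaginary unit.

```python
# Sketch of the local coordinates of Equations (13)-(15): x = x_e + u_x * r, where
# r = |vec(x)| and u_x = vec(x)/r is a pure unit quaternion. Function names are ours.
import math

def local_coordinates(x):
    """Return (x_e, r, u_x) for a quaternion x = (x_e, x_i, x_j, x_k) with vec(x) != 0."""
    e, vec = x[0], x[1:]
    r = math.sqrt(sum(c * c for c in vec))
    if r == 0.0:
        raise ValueError("vector part is zero, so u_x is not uniquely defined")
    u = tuple(c / r for c in vec)
    return e, r, u

x = (1.0, 2.0, -2.0, 1.0)          # vec(x) = (2, -2, 1), so r = 3
e, r, u = local_coordinates(x)
assert abs(r - 3.0) < 1e-12 and abs(sum(c * c for c in u) - 1.0) < 1e-12
# By the product rule of Equation (8), a pure unit quaternion squares to
# (-u.u, u x u) = (-1, 0), i.e., u_x^2 = -1, so x sits in the plane spanned by 1 and u_x
# and can be identified with the complex number x_e + 1j*r.
z = complex(e, r)
```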
The local derivative operators are introduced, corresponding to the form of dx, as follows:
\frac{\partial}{\partial x} = \frac{1}{2}\left(\frac{\partial}{\partial x^{(e)}} - u_x \frac{\partial}{\partial r}\right), \qquad \frac{\partial}{\partial x^*} = \frac{1}{2}\left(\frac{\partial}{\partial x^{(e)}} + u_x \frac{\partial}{\partial r}\right)
where
\frac{\partial}{\partial r} = \frac{x^{(i)}}{r}\frac{\partial}{\partial x^{(i)}} + \frac{x^{(j)}}{r}\frac{\partial}{\partial x^{(j)}} + \frac{x^{(k)}}{r}\frac{\partial}{\partial x^{(k)}}
with the properties
\frac{\partial x}{\partial x} = \frac{\partial x^*}{\partial x^*} = 1, \qquad \frac{\partial x}{\partial x^*} = \frac{\partial x^*}{\partial x} = 0
Note that the variables x and x^* turn out to be independent of each other. These derivative operators are quaternionic counterparts of the well-known Wirtinger derivatives in the complex domain [30].
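These properties can be checked symbolically in the local complex plane, where u_x is represented by the ordinary imaginary unit (an illustration of ours using sympy):

```python
# Symbolic check that the Wirtinger-style operators make x and x* independent:
# dx/dx = 1, dx*/dx = 0, dx/dx* = 0, dx*/dx* = 1, working in the local complex plane
# with u_x represented by the complex imaginary unit. Variable names are ours.
import sympy as sp

xe, r = sp.symbols("x_e r", real=True)
x, xconj = xe + sp.I * r, xe - sp.I * r
d_dx     = lambda F: (sp.diff(F, xe) - sp.I * sp.diff(F, r)) / 2
d_dxconj = lambda F: (sp.diff(F, xe) + sp.I * sp.diff(F, r)) / 2

assert sp.simplify(d_dx(x)) == 1 and sp.simplify(d_dxconj(x)) == 0
assert sp.simplify(d_dx(xconj)) == 0 and sp.simplify(d_dxconj(xconj)) == 1
```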
F(x + dx) can be expanded using the above-mentioned representations as
F(x + dx) = F(x) + \frac{\partial F}{\partial x}\, dx_{\|} + \frac{\partial F}{\partial x^*}\, dx_{\|}^* + O(dx_{\perp}, |dx|^2)    (17)
where the parallel differential and its conjugate take the form

dx_{\|} = dx^{(e)} + u_x\, dr, \qquad dx_{\|}^* = dx^{(e)} - u_x\, dr
When dx_{\perp} = 0, the local derivative of F(x) is written as
F'(x) = \frac{\partial F}{\partial x} = \frac{1}{2}\left(\frac{\partial F}{\partial x^{(e)}} - u_x \frac{\partial F}{\partial r}\right)
and the local analytic condition for the function F(x) is given by

\frac{\partial F}{\partial x^*} = \frac{1}{2}\left(\frac{\partial F}{\partial x^{(e)}} + u_x \frac{\partial F}{\partial r}\right) = 0
in the corresponding local complex plane. This result corresponds to the one presented in [27], where dx_{\perp} = 0 always holds.
Moreover, if F is a function of the two arguments x and x^*, its differential becomes:
dF = \frac{\partial F}{\partial x}\, dx + \frac{\partial F}{\partial x^*}\, dx^*    (18)
with x and x^* treated as independent of each other. As a result, quaternionic functions can be handled in the same manner as complex-valued functions under the condition of local analyticity.
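To make this isomorphism tangible, the following sketch (our reading of the construction, not the authors' verbatim definition) evaluates a complex activation such as tanh on a quaternion by passing through its local complex plane:

```python
# Evaluate a complex-valued function on a quaternion through its local complex plane:
# write x = x_e + u_x * r, apply f to z = x_e + 1j*r, and map the real/imaginary parts
# back onto the axes 1 and u_x. This mirrors how a complex-valued tanh can serve as a
# quaternionic activation under local analyticity; names here are illustrative.
import cmath, math

def lift(f, x):
    e, vec = x[0], x[1:]
    r = math.sqrt(sum(c * c for c in vec))
    u = tuple(c / r for c in vec) if r > 0.0 else (0.0, 0.0, 0.0)
    w = f(complex(e, r))                   # act in the local complex plane
    return (w.real,) + tuple(w.imag * c for c in u)

q = (0.5, 1.0, -0.5, 0.25)
qtanh = lift(cmath.tanh, q)                # a quaternionic tanh evaluated locally
print(qtanh)
```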

3. Quaternionic Multilayer Perceptron

3.1. Network Model

The structure of the network assumed in this paper is shown in Figure 2. This network is a so-called multilayer perceptron network with one hidden layer, and the parameters in the network are encoded by quaternionic values.
The numbers of neurons in the input, hidden and output layers are set to M, N and K, respectively. A set of quaternionic signals, denoted by z, is fed to the neurons in the input layer of the network, and the outputs of the input-layer neurons are equal to these inputs. In the hidden layer, each neuron takes the weighted sum of the output signals from the input layer. The connection weight from the m-th neuron in the input layer to the n-th neuron in the hidden layer is denoted by v_{nm}. The output of the n-th neuron in the hidden layer, denoted by x_n, is determined by
x_n = g\left(\sum_{m=1}^{M} v_{nm}\, z_m\right)    (19)
where g is a quaternionic activation function that introduces non-linearity between the action potential and the output of the neuron. This function is required to satisfy the local analyticity condition of Section 2.2; writing s for its quaternionic argument,

\frac{\partial g(s)}{\partial s^*} = 0    (20)
Figure 2. The structure of the multilayer perceptron considered in this paper.
The processing of the neurons in the output layer is defined in the same manner as in the hidden layer. The output of the k-th neuron in the output layer, y_k, is defined as
y_k = h\left(\sum_{n=1}^{N} w_{kn}\, x_n\right)    (21)
where the function h is a quaternionic activation function from the action potential to the output, and w_{kn} is the connection weight from the n-th neuron in the hidden layer to the k-th neuron in the output layer. The function h also satisfies

\frac{\partial h(s)}{\partial s^*} = 0    (22)
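For concreteness, the forward pass of Equations (19) and (21) can be sketched as follows; the layer sizes, the random initialization, and the use of a tanh lifted through the local complex plane are illustrative assumptions of ours:

```python
# Forward pass of the quaternionic MLP, Equations (19) and (21): each layer takes the
# quaternionic weighted sum of its inputs and applies a locally analytic activation.
# The helpers repeat the earlier sketches; sizes and initialization are illustrative.
import cmath, math, random

def qmul(p, q):                                    # Hamilton product, Equation (8)
    pe, pi, pj, pk = p
    qe, qi, qj, qk = q
    return (pe*qe - pi*qi - pj*qj - pk*qk,
            pe*qi + pi*qe + pj*qk - pk*qj,
            pe*qj - pi*qk + pj*qe + pk*qi,
            pe*qk + pi*qj - pj*qi + pk*qe)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def lift(f, x):                                    # locally analytic activation (Section 2.2)
    e, vec = x[0], x[1:]
    r = math.sqrt(sum(c * c for c in vec))
    u = tuple(c / r for c in vec) if r > 0.0 else (0.0, 0.0, 0.0)
    w = f(complex(e, r))
    return (w.real,) + tuple(w.imag * c for c in u)

g = lambda s: lift(cmath.tanh, s)                  # quaternionic tanh via the local plane

def layer(weights, inputs):
    """Quaternionic weighted sum followed by the activation g; Equations (19)/(21).
    The non-commutative product is taken as weight * input; the paper fixes its own order."""
    out = []
    for w_row in weights:
        s = (0.0, 0.0, 0.0, 0.0)
        for w, z in zip(w_row, inputs):
            s = qadd(s, qmul(w, z))
        out.append(g(s))
    return out

random.seed(0)
rq = lambda: tuple(random.uniform(-0.5, 0.5) for _ in range(4))
M, N, K = 3, 4, 2                                  # numbers of input, hidden, output neurons
V = [[rq() for _ in range(M)] for _ in range(N)]   # input -> hidden weights v_nm
W = [[rq() for _ in range(N)] for _ in range(K)]   # hidden -> output weights w_kn
z = [rq() for _ in range(M)]                       # quaternionic input signals
x = layer(V, z)                                    # hidden outputs, Equation (19)
y = layer(W, x)                                    # network outputs, Equation (21)
```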
The connection weights should be modified by the so-called learning algorithms, in order to obtain the desired output signals with respect to the input signals. One of the learning algorithms for MLP-type networks is the error back-propagation (EBP) algorithm. The following section describes our derivation of this algorithm for the presented network.

3.2. Learning Algorithm

An EBP algorithm works so that the output error, computed from the outputs of the neurons in the output layer and the desired output signals, is minimized. For a three-layer network as shown in Figure 2, the connection weights between the hidden and output layers are modified first, and then the weights between the input and hidden layers. In general, for a network with n layers, an EBP algorithm first modifies the connection weights between the n-th layer (the output layer) and the (n-1)-th layer, then those between the (n-1)-th and (n-2)-th layers, and so on. This section describes only the three-layer case.
First, let d_k be the quaternionic desired signal for the k-th output neuron when the signals z are input to the network. The connection weights determine the output signals for a given set of input signals; thus the error E is regarded as a function of the arguments w_{kn} and w_{kn}^*. The output error E at time t is then defined as
E = \frac{1}{2} \sum_{k=1}^{K} |d_k - y_k|^2 = \frac{1}{2} \sum_{k=1}^{K} (d_k - y_k)(d_k - y_k)^*    (23)
The output error should be real-valued so that it can be minimized.
Suppose that the connection weights are updated at the time (t + 1) by
w_{kn}(t+1) = w_{kn}(t) + \Delta w_{kn}    (24)
where \Delta w_{kn} is the update quantity. The output error at time (t + 1) can then be written as
E(t+1) = E(t) + \mathrm{Re}\!\left[\sum_{k,n}\left(\frac{\partial E}{\partial w_{kn}}\, \Delta w_{kn} + \frac{\partial E}{\partial w_{kn}^*}\, \Delta w_{kn}^*\right)\right]    (25)
Note that the local analyticity condition in the quaternionic domain should be satisfied in calculating these derivatives. Thus, if we set \Delta w_{kn} as
\Delta w_{kn} = -\mu\, \frac{\partial E}{\partial w_{kn}^*}    (26)
where µ is a quaternionic constant, the temporal difference of the output error, ∆E, becomes
\Delta E = E(t+1) - E(t) = -2\, \mathrm{Re}(\mu) \sum_{k,n} \left|\frac{\partial E}{\partial w_{kn}^*}\right|^2    (27)
If the real part of \mu is positive, \Delta E \le 0 holds. This indicates that the output error decreases when the weights are updated according to Equations (24) and (26). To calculate the update quantity in Equation (26), the component \partial E / \partial w_{kn}^* is expanded by the chain rule, and \partial y_k / \partial w_{kn}^* = 0 from the local analyticity condition is applied:
\frac{\partial E}{\partial w_{kn}^*} = \frac{\partial E}{\partial y_k^*}\, \frac{\partial y_k^*}{\partial w_{kn}^*} = -\frac{1}{2}\, \delta_k\, x_n^*    (28)
where h' is the (local) derivative of the activation function h, and \delta_k is defined as \delta_k = (d_k - y_k)\,(h')^*.
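The descent property \Delta E \le 0 above rests on the fact that, for any quaternions q and \mu, the real part of q^* \mu q equals \mathrm{Re}(\mu)|q|^2. A quick numerical check of this identity (illustrative code, not from the paper):

```python
# Numerical check of the fact behind the descent property: for any quaternions q and mu,
# Re(q* mu q) = Re(mu) * |q|^2, so the error change is non-positive whenever Re(mu) > 0.
import random

def qmul(p, q):
    pe, pi, pj, pk = p; qe, qi, qj, qk = q
    return (pe*qe - pi*qi - pj*qj - pk*qk,
            pe*qi + pi*qe + pj*qk - pk*qj,
            pe*qj - pi*qk + pj*qe + pk*qi,
            pe*qk + pi*qj - pj*qi + pk*qe)

def qconj(x):
    return (x[0], -x[1], -x[2], -x[3])

random.seed(0)
for _ in range(1000):
    q  = tuple(random.uniform(-1, 1) for _ in range(4))
    mu = tuple(random.uniform(-1, 1) for _ in range(4))
    lhs = qmul(qmul(qconj(q), mu), q)[0]               # Re(q* mu q)
    rhs = mu[0] * sum(c * c for c in q)                # Re(mu) * |q|^2
    assert abs(lhs - rhs) < 1e-9
```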
Similarly, the updates for the connection weights v can be deduced. The output error function E is a function of the arguments v_{nm} and v_{nm}^*; thus the output error at time (t + 1) can be represented as
E(t+1) = E(t) + \mathrm{Re}\!\left[\sum_{n,m}\left(\frac{\partial E}{\partial v_{nm}}\, \Delta v_{nm} + \frac{\partial E}{\partial v_{nm}^*}\, \Delta v_{nm}^*\right)\right]    (29)
Hence, v_{nm} is updated with the quantity \Delta v_{nm},
v_{nm}(t+1) = v_{nm}(t) + \Delta v_{nm}    (30)
\Delta v_{nm} = -\mu\, \frac{\partial E}{\partial v_{nm}^*}    (31)
This leads to \Delta E \le 0 under the condition that the real part of \mu is positive. The component \partial E / \partial v_{nm}^* is expanded by the chain rule, and the local analyticity conditions \partial x_n / \partial v_{nm}^* = 0, \partial y_k / \partial x_n^* = 0, and \partial x_n^* / \partial v_{nm} = 0 are applied:
\frac{\partial E}{\partial v_{nm}^*} = \sum_{k=1}^{K} \frac{\partial E}{\partial y_k^*}\, \frac{\partial y_k^*}{\partial x_n^*}\, \frac{\partial x_n^*}{\partial v_{nm}^*}
Using the derivatives \partial y_k^* / \partial x_n^* and \partial x_n^* / \partial v_{nm}^*, obtained from Equations (21) and (19), respectively, we finally obtain
\Delta v_{nm} = \frac{\mu}{2} \sum_{k=1}^{K} (d_k - y_k)\, \frac{\partial y_k^*}{\partial x_n^*}\, \frac{\partial x_n^*}{\partial v_{nm}^*}    (32)
Once a set of network outputs {y_k} is obtained for a set of network inputs, the output error with respect to a target set {d_k} can be calculated by Equation (23). The connection weights between the hidden and output layers are then modified according to Equations (24)-(28), and the connection weights between the input and hidden layers are finally modified according to Equations (30)-(32).
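The overall procedure can be sketched end-to-end as follows. To keep the example short and independent of the exact closed-form gradients in Equations (28) and (32), the sketch estimates the gradient of E by central differences over the four real components of each weight; this numerical gradient is a stand-in of ours for the analytic EBP rules and only illustrates the flow input → output → error (23) → weight updates (24) and (30).

```python
# End-to-end sketch of training the quaternionic MLP. The gradient of E is estimated
# numerically (central differences per real component of each weight) as a stand-in for
# the closed-form EBP expressions (28) and (32); it is not the paper's derivation.
import cmath, math, random

def qmul(p, q):
    pe, pi, pj, pk = p; qe, qi, qj, qk = q
    return (pe*qe - pi*qi - pj*qj - pk*qk,
            pe*qi + pi*qe + pj*qk - pk*qj,
            pe*qj - pi*qk + pj*qe + pk*qi,
            pe*qk + pi*qj - pj*qi + pk*qe)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def lift(f, x):                                    # locally analytic activation (Section 2.2)
    e, vec = x[0], x[1:]
    r = math.sqrt(sum(c * c for c in vec))
    u = tuple(c / r for c in vec) if r > 0.0 else (0.0, 0.0, 0.0)
    w = f(complex(e, r))
    return (w.real,) + tuple(w.imag * c for c in u)

g = lambda s: lift(cmath.tanh, s)

def layer(weights, inputs):                        # Equations (19) and (21)
    out = []
    for w_row in weights:
        s = (0.0, 0.0, 0.0, 0.0)
        for w, z in zip(w_row, inputs):
            s = qadd(s, qmul(w, z))
        out.append(g(s))
    return out

def error(V, W, z, d):                             # output error, Equation (23)
    y = layer(W, layer(V, z))
    return 0.5 * sum(sum((dc - yc) ** 2 for dc, yc in zip(dk, yk)) for dk, yk in zip(d, y))

def step(V, W, z, d, lr=0.05, h=1e-5):
    """One gradient-descent sweep over all weight components (numerical gradient)."""
    for mat in (V, W):
        for a in range(len(mat)):
            for b in range(len(mat[a])):
                w = list(mat[a][b])
                grad = []
                for c in range(4):                 # central difference per component
                    w[c] += h; mat[a][b] = tuple(w); ep = error(V, W, z, d)
                    w[c] -= 2 * h; mat[a][b] = tuple(w); em = error(V, W, z, d)
                    w[c] += h; mat[a][b] = tuple(w)
                    grad.append((ep - em) / (2 * h))
                mat[a][b] = tuple(wi - lr * gi for wi, gi in zip(mat[a][b], grad))

random.seed(1)
rq = lambda: tuple(random.uniform(-0.5, 0.5) for _ in range(4))
M, N, K = 2, 3, 1
V = [[rq() for _ in range(M)] for _ in range(N)]   # input -> hidden weights
W = [[rq() for _ in range(N)] for _ in range(K)]   # hidden -> output weights
z, d = [rq() for _ in range(M)], [rq() for _ in range(K)]
for epoch in range(50):
    step(V, W, z, d)
print("final error:", error(V, W, z, d))
```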

3.3. Universal Approximation Capability

As an example of an activation function for the neurons (g and h), the quaternionic tanh function [22] can be used. Other types of activation functions are also available, because complex-valued functions can be used in the presented network, and the properties of several such functions have been explored as activation functions in [29]. It is important to consider the capability of the proposed quaternionic network with these activation functions, i.e., whether the proposed network can approximate given functions.
This concern is addressed by the universal approximation theorem [31,32]. For real-valued MLPs with a single hidden layer, universality has been proven with a so-called sigmoidal function used as the activation function of the neurons. Besides sigmoidal functions, bounded and differentiable functions are also suitable as activation functions.
The theorem has also been discussed for complex-valued networks [29]. The boundedness condition required for real-valued MLPs is not required in some complex-valued MLPs. Three types of activation functions are discussed in [29], categorized by the properties of complex-valued functions, and universal approximation is shown to be achievable for each of them.
The first type concerns functions without any singular points. These functions can be used as activation functions, and networks equipped with them are shown to be good approximators. Although some of these functions are unbounded, they can still be used by introducing a bounding operation on their domains. The second type concerns functions with bounded singular points, e.g., discontinuous functions. These singularities can be removed, so such functions can also be used as activation functions and achieve universality. The last type covers functions with so-called essential singularities, i.e., singularities that cannot be removed. These functions can also be used as activation functions, provided that their domains are restricted so as never to include the singularities.
In the proposed quaternionic MLP, for example, a quaternionic tanh function can be used as the activation function. This function is unbounded and may contain several kinds of singularities, as in the case of the complex-valued functions described above. The quaternionic MLP therefore faces the same problem, the existence of singularities, but it can be handled in the same way as for complex-valued MLPs, by removing or avoiding such singularities. It may be possible to show the universality of the proposed MLP, with such treatment of singularities, along the lines adopted for complex-valued MLPs [29]; this remains as future work.
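As an illustration of the kind of singularity involved (an example of ours in the spirit of the treatment in [29], not a result from the paper), the complex tanh has poles at z = i(π/2 + nπ); one simple safeguard is to keep the lifted argument inside a pole-free strip:

```python
# The complex tanh has poles at z = 1j*(pi/2 + n*pi); under the local-plane lifting,
# a quaternion whose local coordinates approach such a point would make the activation
# blow up. One simple safeguard (our illustration) is to clip the imaginary coordinate
# of the lifted argument into a pole-free strip before applying tanh.
import cmath, math

POLE_FREE = 0.9 * (math.pi / 2)          # stay strictly inside the first pole at pi/2

def safe_tanh(z):
    """Clip the imaginary part of z into [-POLE_FREE, POLE_FREE] before applying tanh."""
    y = max(-POLE_FREE, min(POLE_FREE, z.imag))
    return cmath.tanh(complex(z.real, y))

print(abs(cmath.tanh(complex(0.0, math.pi / 2 - 1e-8))))   # huge near the pole
print(abs(safe_tanh(complex(0.0, math.pi / 2 - 1e-8))))    # bounded after clipping
```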

4. Conclusions and Discussion

This paper has proposed a multilayer-type neural network in the quaternionic domain, together with an error back-propagation algorithm as its learning scheme. The neurons in this network adopt locally analytic activation functions. Quaternionic functions satisfying the local analyticity condition are isomorphic to complex functions, so several activation functions, such as the complex-valued tanh function, can be carried over to the quaternionic domain. The Wirtinger calculus, in which a quaternion and its conjugate are treated as independent of each other, makes the derivation of the learning scheme clear and compact.
Analyticity conditions for quaternionic functions are derived by defining a complex plane at a quaternionic point, which is a kind of reduction from the quaternionic domain to the complex domain. There exists another type of reduction of the quaternionic domain, namely the commutative quaternions, a four-dimensional hypercomplex number system with commutative multiplication [33,34,35,36]. A principal property is that a commutative quaternion can be decomposed into and represented by two complex numbers with two linearly independent bases (the so-called decomposed form with idempotent bases). Commutative quaternions have been applied to neural networks only in terms of Hopfield-type networks [24], but defining multilayer perceptron-type networks would also be straightforward. It will therefore be interesting to explore the relationship between commutative quaternion-based networks and networks with local analyticity.
Showing the universality of the proposed network is also an important issue. Quaternionic functions, such as the tanh function, may contain several kinds of singularities, i.e., regions where the values of the functions or of their derivatives are not defined. In complex-valued networks with fully complex-valued activation functions [29], the universality of the networks is shown by dealing with these singularities, which are removed or avoided by restricting the domains. It is expected that the universality of the proposed network can be shown in a similar way to the complex-valued case.
It is also necessary to investigate the performance of the proposed network, which could not be explored experimentally in this paper. The proposed network is similar to the networks proposed in [13,14], owing to the introduction of locally analytic functions in the quaternionic domain, so the performance of both types of networks may show similar tendencies. Performance comparisons can also be conducted between the proposed network and the networks in [29], because both adopt the same representation in their constructions, i.e., the Wirtinger calculus. There, a wide variety of activation functions has been investigated, including the split-type function, the phase-preserving function [37], and the circular-type function [38]; similar experiments should be conducted for quaternionic networks.
The application of the presented network to engineering problems is also a challenging direction. The processing of three- or four-dimensional vector data, such as color/multi-spectral image processing, the prediction of three-dimensional protein structures, and the control of motion in three-dimensional space, are candidate applications.

Acknowledgments

This study was financially supported by the Japan Society for the Promotion of Science (Grants-in-Aid for Young Scientists (B) 24700227 and for Scientific Research (C) 23500286).

References

1. Hirose, A. Complex-Valued Neural Networks: Theories and Applications; World Scientific Publishing: Singapore, 2003. [Google Scholar]
  2. Hirose, A. Complex-Valued Neural Networks; Springer-Verlag: Berlin, Germany, 2006. [Google Scholar]
  3. Nitta, T. Complex-Valued Neural Networks: Utilizing High-Dimensional Parameters; Information Science Reference: New York, NY, USA, 2009. [Google Scholar]
  4. Hamilton, W.R. Lectures on Quaternions; Hodges and Smith: Dublin, Ireland, 1853. [Google Scholar]
  5. Hankins, T.L. Sir William Rowan Hamilton; Johns Hopkins University Press: Baltimore, MD, USA, 1980. [Google Scholar]
  6. Mukundan, R. Quaternions: From classical mechanics to computer graphics, and beyond. In Proceedings of the 7th Asian Technology Conference in Mathematics, Melaka, Malaysia, 17-21 December 2002; pp. 97–105.
  7. Kuipers, J.B. Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace and Virtual Reality; Princeton University Press: Princeton, NJ, USA, 1998. [Google Scholar]
8. Hoggar, S.G. Mathematics for Computer Graphics; Cambridge University Press: Cambridge, UK, 1992. [Google Scholar]
  9. Nitta, T. An extension of the back-propagation algorithm to quaternions. In Proceedings of International Conference on Neural Information Processing (ICONIP’96), Hong Kong, China, 24-27 September 1996; 1, pp. 247–250.
10. Arena, P.; Fortuna, L.; Muscato, G.; Xibilia, M. Multilayer perceptrons to approximate quaternion valued functions. Neural Netw. 1997, 10, 335–342. [Google Scholar] [CrossRef]
11. Buchholz, S.; Sommer, G. Quaternionic spinor MLP. In Proceedings of the 8th European Symposium on Artificial Neural Networks (ESANN 2000), Bruges, Belgium, 26-28 April 2000; pp. 377–382.
  12. Matsui, N.; Isokawa, T.; Kusamichi, H.; Peper, F.; Nishimura, H. Quaternion neural network with geometrical operators. J. Intell. Fuzzy Syst. 2004, 15, 149–164. [Google Scholar]
  13. Mandic, D.P.; Jahanchahi, C.; Took, C.C. A quaternion gradient operator and its applications. IEEE Signal Proc. Lett. 2011, 18, 47–50. [Google Scholar] [CrossRef]
  14. Ujang, B.C.; Took, C.C.; Mandic, D.P. Quaternion-valued nonlinear adaptive filtering. IEEE Trans. Neural Netw. 2011, 22, 1193–1206. [Google Scholar] [CrossRef]
15. Kusamichi, H.; Isokawa, T.; Matsui, N.; Ogawa, Y.; Maeda, K. A new scheme for color night vision by quaternion neural network. In Proceedings of the 2nd International Conference on Autonomous Robots and Agents (ICARA2004), Palmerston North, New Zealand, 13-15 December 2004; pp. 101–106.
  16. Isokawa, T.; Matsui, N.; Nishimura, H. Quaternionic neural networks: Fundamental properties and applications. In Complex-Valued Neural Networks: Utilizing High-Dimensional Parameters; Nitta, T., Ed.; Information Science Reference: New York, NY, USA, 2009; pp. 411–439, Chapter XVI. [Google Scholar]
  17. Nitta, T. A solution to the 4-bit parity problem with a single quaternary neuron. Neural Inf. Process. Lett. Rev. 2004, 5, 33–39. [Google Scholar]
  18. Yoshida, M.; Kuroe, Y.; Mori, T. Models of hopfield-type quaternion neural networks and their energy functions. Int. J. Neural Syst. 2005, 15, 129–135. [Google Scholar] [CrossRef]
19. Isokawa, T.; Nishimura, H.; Kamiura, N.; Matsui, N. Fundamental properties of quaternionic hopfield neural network. In Proceedings of 2006 International Joint Conference on Neural Networks, Vancouver, BC, Canada, 30 October 2006; pp. 610–615.
20. Isokawa, T.; Nishimura, H.; Kamiura, N.; Matsui, N. Associative memory in quaternionic hopfield neural network. Int. J. Neural Syst. 2008, 18, 135–145. [Google Scholar] [CrossRef]
  21. Isokawa, T.; Nishimura, H.; Kamiura, N.; Matsui, N. Dynamics of discrete-time quaternionic hopfield neural networks. In Proceedings of 17th International Conference on Artificial Neural Networks, Porto, Portugal, 9-13 September 2007; pp. 848–857.
  22. Isokawa, T.; Nishimura, H.; Matsui, N. On the fundamental properties of fully quaternionic hopfield network. In Proceedings of IEEE World Congress on Computational Intelligence (WCCI2012), Brisbane, Australia, 10-15 June 2012; pp. 1246–1249.
  23. Isokawa, T.; Nishimura, H.; Saitoh, A.; Kamiura, N.; Matsui, N. On the scheme of quaternionic multistate hopfield neural network. In Proceedings of Joint 4th International Conference on Soft Computing and Intelligent Systems and 9th International Symposium on Advanced Intelligent Systems (SCIS&ISIS 2008), Nagoya, Japan, 17-21 September 2008; pp. 809–813.
  24. Isokawa, T.; Nishimura, H.; Matsui, N. Commutative quaternion and multistate hopfield neural networks. In Proceedings of IEEE World Congress on Computational Intelligence (WCCI2010), Barcelona, Spain, 18-23 July 2010; pp. 1281–1286.
25. Isokawa, T.; Nishimura, H.; Matsui, N. An iterative learning scheme for multistate complex-valued and quaternionic hopfield neural networks. In Proceedings of International Joint Conference on Neural Networks (IJCNN2009), Atlanta, GA, USA, 14-19 June 2009; pp. 1365–1371.
  26. Leo, S.D.; Rotelli, P.P. Local hypercomplex analyticity. 1997. Available online: http://arxiv.org/abs/funct-an/9703002 (accessed on 20 November 2012).
27. Leo, S.D.; Rotelli, P.P. Quaternionic analyticity. Appl. Math. Lett. 2003, 16, 1077–1081. [Google Scholar] [CrossRef]
  28. Schwartz, C. Calculus with a quaternionic variable. J. Math. Phys. 2009, 50, 013523:1–013523:11. [Google Scholar]
29. Kim, T.; Adalı, T. Approximation by fully complex multilayer perceptrons. Neural Comput. 2003, 15, 1641–1666. [Google Scholar] [CrossRef]
  30. Wirtinger, W. Zur formalen theorie der funktionen von mehr komplexen veränderlichen. Math. Ann. 1927, 97, 357–375. [Google Scholar] [CrossRef]
  31. Cybenko, G. Approximations by superpositions of sigmoidal functions. Math. Control Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
32. Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Netw. 1991, 4, 251–257. [Google Scholar]
  33. Segre, C. The real representations of complex elements and extension to bicomplex systems. Math. Ann. 1892, 40, 322–335. [Google Scholar]
34. Catoni, F.; Cannata, R.; Zampetti, P. An Introduction to commutative quaternions. Adv. Appl. Clifford Algebras 2006, 16, 1–28. [Google Scholar] [CrossRef]
  35. Davenport, C.M. A commutative hypercomplex algebra with associated function theory. In Clifford Algebra With Numeric and Symbolic Computation; Ablamowicz, R., Ed.; Birkhauser: Boston, MA, USA, 1996; pp. 213–227. [Google Scholar]
36. Pei, S.C.; Chang, J.H.; Ding, J.J. Commutative reduced biquaternions and their Fourier transform for signal and image processing applications. IEEE Trans. Signal Proc. 2004, 52, 2012–2031. [Google Scholar] [CrossRef]
  37. Hirose, A. Continuous complex-valued back-propagation learning. Electron. Lett. 1992, 28, 1854–1855. [Google Scholar] [CrossRef]
  38. Georgiou, G.M.; Koutsougeras, C. Complex domain backpropagation. IEEE Trans. Circuits Syst. II 1992, 39, 330–334. [Google Scholar] [CrossRef]
