Article

Mutual-Energy Inner Product Optimization Method for Constructing Feature Coordinates and Image Classification in Machine Learning

Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
Mathematics 2024, 12(23), 3872; https://doi.org/10.3390/math12233872
Submission received: 10 November 2024 / Revised: 29 November 2024 / Accepted: 3 December 2024 / Published: 9 December 2024
(This article belongs to the Special Issue Advances in Machine Learning and Graph Neural Networks)

Abstract

As a key task in machine learning, data classification relies on finding a suitable coordinate system to represent the data features of different classes of samples. This paper proposes the mutual-energy inner product optimization method for constructing such a feature coordinate system. First, by analyzing the solution space and eigenfunctions of the partial differential equations describing a non-uniform membrane, the mutual-energy inner product is defined. Second, by expanding the mutual-energy inner product in a series of eigenfunctions, it is shown to have a significant advantage over the Euclidean inner product: it enhances low-frequency features and suppresses high-frequency noise. Then, a mutual-energy inner product optimization model is built to extract the data features, and the convexity and concavity properties of its objective function are discussed. Next, by combining the model with the finite element method, a stable and efficient sequential linearization algorithm is constructed to solve it. This algorithm only requires solving positive definite symmetric matrix equations and linear programs with few constraints, and its vectorized implementation is discussed. Finally, the mutual-energy inner product optimization method is used to construct feature coordinates, and multi-class Gaussian classifiers are trained on the MNIST training set. Good prediction results of the Gaussian classifiers are achieved on the MNIST test set.

1. Introduction

In machine learning, data classification plays a very important role. Up to now, a large number of data classification methods have emerged and powered the development of machine learning and its practical applications in different domains [1], such as image detection [2], speech recognition [3], text understanding [4], disease diagnosis [5,6], and financial prediction [7].
Currently, popular data classification methods include the Support Vector Machine (SVM) [8,9], Decision Tree (DT) [10,11], Naive Bayes (NB) [12], K-Nearest Neighbors (KNN) [13], Random Forest (RF) [14], Deep Learning (DL) [15], and Deep Reinforcement Learning (DRL) [16]. SVM is based on optimization theory [17]. DL is implemented through a multilayer neural network trained with optimization techniques such as the stochastic gradient descent algorithm [18]. DRL combines DL with Reinforcement Learning and is effective in real-time scenarios [16]. The other methods fall under the heading of statistical methods [19,20].
Many comparative studies have evaluated these classification methods by analyzing their accuracy, time cost, stability, and sensitivity, as well as their advantages and disadvantages [21,22,23]. SVM is efficient when there is a clear margin of separation between the classes, but the choice of its kernel function is difficult, and it does not work well with noisy datasets [23,24]. DL is developing rapidly, but its training is very time-consuming because a large number of parameters need to be optimized through the stochastic gradient descent algorithm. In addition, some hyperparameters in DL are set empirically, such as the number of layers in the neural network, the number of nodes in each layer, and the learning rate, so its performance is highly sensitive to the hyperparameters and to the specific problem [25,26]. KNN and DT are easy to apply. However, KNN requires calculating the Euclidean distance between all pairs of points, leading to a high computational cost. DT is unsuitable for continuous variables and is prone to overfitting [23]. Other classical methods have also achieved great success [27,28]. In order to improve classification accuracy, an ensemble learning scheme, such as AdaBoost [29,30], Bagging [31], Stacking [32], or Gradient Boosting [33], is usually adopted to solve intricate or large-scale problems [34,35].
Inspired by the ability of our brain to recognize the musical notes played by any musical instrument in a noisy environment, this paper proposes an optimization method for constructing feature coordinates for data classification by simulating a non-uniform membrane structure model. No matter how complex a musical instrument's structure is, or how different its vibration patterns are, when we listen to a piece of music played by an instrument, our brain can extract the fundamental tone of its vibration at every moment and recognize the melody as time goes by. Mathematically, this can be explained clearly. The vibration of the musical instrument at every moment is adaptively expanded on its own eigenfunction system, and our brain grasps the lowest eigenvalue and the eigenfunction components corresponding to the musical notes at every moment, enjoying the melody over time. In order to extract the data features from complex samples, we simulate the adaptive generation of the eigenfunction coordinate system of a musical instrument and build a mapping from the data features to the low-frequency subspace of the eigenfunction system. Through analyzing the solution space and the eigenfunctions of the partial differential equations describing the vibration of a non-uniform membrane, which is a simple musical instrument, the mutual-energy inner product is defined and used to extract data features. The mutual-energy inner product not only avoids explicitly generating an eigenfunction system, which reduces the computational complexity, but also enhances the feature information and filters out data noise; furthermore, it simplifies the training of the data classifier.
The paper is divided into seven sections. Section 1 briefly introduces popular data classification methods and the research background. Section 2 analyzes the solution space of the partial differential equations describing a non-uniform membrane and defines the concept of the mutual-energy inner product. Section 3 expresses the mutual-energy inner product as a series of eigenfunctions by making use of the eigenvalues and eigenfunctions of the non-uniform membrane vibration equations, and points out its potential for enhancing feature information and filtering out data noise in data classification. Section 4 builds a mutual-energy inner product optimization model and discusses the convexity and concavity properties of its objective function. Section 5 designs a sequential linearization algorithm to solve the optimization model by combining it with the finite element method (FEM). In Section 6, the mutual-energy inner product optimization method for constructing feature coordinates is applied to a 2-D image classification problem, and numerical examples are given in combination with Gaussian classifiers and the handwritten digit MNIST dataset. Section 7 summarizes the paper and discusses future work.

2. Mutual-Energy Inner Product

Consider the linear partial differential equations
$$\begin{cases} L[u(x)] = f(x), & x \in \Omega \\ l[u(x)] = 0, & x \in \Gamma, \end{cases} \qquad (1)$$
where L [ u ( x ) ] is a homogeneous linear self-adjoint differential operator; f ( x ) is a piecewise continuous function; Ω is the domain of definition, with a boundary Γ ; and l [ u ( x ) ] is a homogeneous linear differential operator on the boundary Γ , describing the Robin boundary condition.
$$\begin{cases} L[u(x)] = -\displaystyle\sum_{i=1}^{n} \frac{\partial}{\partial x_i}\!\left( p(x)\,\frac{\partial u(x)}{\partial x_i} \right) + q(x)\,u(x) \\[2mm] l[u(x)] = p(x)\,\dfrac{\partial u(x)}{\partial n} + \sigma(x)\,u(x). \end{cases} \qquad (2)$$
Expression (2) can be regarded as static equilibrium equations of a simple elastic structure, such as a 1-D string and a 2-D membrane, and can be expanded to an n-dimensional problem. For a 2-dimensional problem, Ω is a domain occupied by a membrane with its boundary Γ ; p ( x ) and q ( x ) stand for the elastic modulus and distributed support elastic coefficient of the membrane, respectively; σ ( x ) is the support elastic coefficient on the boundary; f ( x ) is an external force acting on the membrane; u ( x ) is the deformations of the membrane due to f ( x ) , and has a piecewise continuous first-order derivative; u / n is the derivative of the deformations in the outward-pointing normal direction of Γ . In this research, it is required that, p ( x ) , q ( x ) , σ ( x ) are piecewise continuous functions, and p ( x ) > 0 , q ( x ) > 0 , σ ( x ) 0 .
A structure subjected to an external force f ( x ) will generate the deformation u ( x ) , and its deformation energy E [ u ( x ) ] can be expressed as
$$E[u(x)] = \frac{1}{2}\int_{\Omega} f(x)\,u(x)\,d\Omega. \qquad (3)$$
If the structure is simultaneously subjected to another external force g ( x ) , then it will generate an additional deformation v ( x ) . The total deformation u ( x ) + v ( x ) satisfies the superposition principle due to the linearity of Expression (1). The deformation v ( x ) can cause additional work performed by f ( x ) . Generally, the additional deformation energy U [ u ( x ) , v ( x ) ] is called the mutual energy between u ( x ) and v ( x ) or the mutual work between f ( x ) and g ( x ) . The mutual energy describes the correlation of the two external forces, and can be expressed as
$$U[u(x), v(x)] = \int_{\Omega} f(x)\,v(x)\,d\Omega. \qquad (4)$$
Substituting Expression (1) into Expression (4), by integrating by parts, we obtain
$$U[u(x), v(x)] = \int_{\Omega} \left( \sum_{i=1}^{n} \frac{\partial u(x)}{\partial x_i}\, p(x)\, \frac{\partial v(x)}{\partial x_i} + u(x)\,q(x)\,v(x) \right) d\Omega + \int_{\Gamma} u(x)\,\sigma(x)\,v(x)\,d\Gamma. \qquad (5)$$
Expression (5) is a bilinear functional. Comparing Expressions (3) and (4), we have
$$E[u(x)] = \frac{1}{2}\,U[u(x), u(x)]. \qquad (6)$$
Due to p(x) > 0, q(x) > 0, and σ(x) ≥ 0, according to Expressions (5) and (6), the mutual energy satisfies
$$U[u(x), u(x)] \ge 0 \qquad \left( U[u(x), u(x)] = 0 \ \text{if and only if}\ u(x) = 0 \right). \qquad (7)$$
Expression (7) describes a simple physical phenomenon: when the elastic modulus of the structural material is positive, if the structure deforms, deformation energy is generated; otherwise, the deformation energy is zero.
Expression (5) also shows that the mutual energy is symmetrical and satisfies the commutative law. Combined with Expression (7), it can be inferred that the mutual energy satisfies the Cauchy–Schwarz inequality
$$\left( U[u(x), v(x)] \right)^2 \le U[u(x), u(x)] \cdot U[v(x), v(x)]. \qquad (8)$$
Expressions (7) and (8) show that the mutual energy can be regarded as an inner product of the structural deformation functions. For simplicity, we use ⟨u, v⟩_U and ⟨u, v⟩ to represent the mutual-energy inner product and the Euclidean inner product, respectively; that is,
$$\begin{cases} \langle u, v \rangle_U = U[u(x), v(x)] \\ \langle u, v \rangle = \displaystyle\int_{\Omega} u(x)\,v(x)\,d\Omega. \end{cases} \qquad (9)$$
We define ‖u‖₂ as the norm derived from ⟨u, v⟩, and ‖u‖_U as the norm derived from ⟨u, v⟩_U. Based on Expression (6), ‖u‖_U satisfies
$$\| u \|_U = \sqrt{2\,E[u(x)]}. \qquad (10)$$
‖u‖_U is proportional to the square root of the deformation energy and is also called the energy norm. According to the Cauchy–Schwarz inequality (8), ‖u‖_U satisfies the triangle inequality
$$\| u + v \|_U \le \| u \|_U + \| v \|_U. \qquad (11)$$
Based on Expression (1), when a structure is subjected to a piecewise continuous external force, its deformation function has piecewise continuous first-order derivatives on the domain Ω and satisfies the boundary condition l [ u ( x ) ] = 0 . The set of these deformation functions can span a space V ( Ω ) , which can be equipped either with the Euclidean inner product u , v or with the mutual-energy inner product u , v U .
In addition, applying the variational principle, Expression (1) can also be rewritten as the minimum energy principle expression
$$\min_{u(x)} \ \pi[u] = \frac{1}{2}\langle u, u \rangle_U - \langle f, u \rangle. \qquad (12)$$
Here, the feasible domain of u ( x ) has piecewise continuous first-order derivatives on Ω , and does not need to satisfy homogeneous boundary conditions.

3. Signal Processing Property of Mutual-Energy Inner Product

The eigenequation of L [ u ( x ) ] can be written as
$$\begin{cases} L[u(x)] = \lambda\, u(x), & x \in \Omega \\ p(x)\,\dfrac{\partial u(x)}{\partial n} + \sigma(x)\,u(x) = 0, & x \in \Gamma. \end{cases} \qquad (13)$$
For Expression (13), its non-zero solutions φ(x) and the corresponding coefficients λ are called eigenfunctions and eigenvalues, respectively. These eigenfunctions and eigenvalues have the following properties due to p(x) > 0, q(x) > 0, and σ(x) ≥ 0 [36].
(1)
Expression (13) has infinitely many eigenvalues λ_n and eigenfunctions φ_n(x), n = 1, 2, …. If all the eigenvalues are ranked as λ₁ ≤ λ₂ ≤ … ≤ λ_n ≤ …, then they satisfy λ₁ > 0 and lim_{n→∞} λ_n = +∞. Meanwhile, λ_n has continuous dependence on p(x), q(x), and σ(x), and will increase with the increase in p(x), q(x), and σ(x).
(2)
Normalized eigenfunctions φ n ( x ) satisfy the orthogonality condition (14), and can form a set of orthogonal and complete basis functions to span the deformation function space V ( Ω ) .
$$\langle \varphi_m, \varphi_n \rangle = \begin{cases} 1, & m = n \\ 0, & m \ne n. \end{cases} \qquad (14)$$
Therefore, the solutions of Expression (1) can be expressed by φ n ( x ) . For u ( x ) V ( Ω ) , u ( x ) can be presented as a series of eigenfunctions satisfying absolute and uniform convergence, i.e.,
$$u(x) = \sum_{n=1}^{+\infty} c_n\, \varphi_n(x), \qquad c_n = \langle u, \varphi_n \rangle. \qquad (15)$$
Expression (15) has profound physical meaning. λ n and φ n ( x ) are the n t h order structural natural frequency and the n t h order vibration mode. If u ( x ) is regarded as a vibration amplitude function, it can be decomposed into a superposition of the vibration modes at each order natural frequency, where the coefficient c n is the vibration magnitude at φ n ( x ) . This is equivalent to spectral decomposition. Imagine such a scene. When we enjoy a piece of music, our brains constantly decompose the instantaneous vibration amplitude u ( x ) according to Expression (15), and meanwhile, perceive the vibration coefficients c n and mark them with λ n . For a musical instrument, λ 1 is its fundamental frequency (tone) and the remaining eigenvalues are overtones. Different musical instruments have different vibration patterns, and their eigenfunctions { φ n ( x ) , n = 1 , 2 } are also different. However, after tuning the tone of the different musical instruments, the fundamental frequency of each note is consistent.
The eigenfunctions and eigenvalues satisfy Expression (13), so we have
L [ φ n ( x ) ] = λ n φ n ( x ) .
Multiplying both sides of Expression (16) by φ_m(x) and integrating by parts, we obtain
$$\langle \varphi_n, \varphi_m \rangle_U = \begin{cases} \lambda_n, & m = n \\ 0, & m \ne n. \end{cases} \qquad (17)$$
Expression (17) shows that the eigenfunctions also satisfy the orthogonality condition with respect to the mutual-energy inner product. So, the functions { φ_n(x)/√λ_n, n = 1, 2, … } can also be used as basis functions to span the mutual-energy inner product space V(Ω).
Substituting Expression (15) into Expression (5) and applying Expression (17), we have
$$\langle u, u \rangle_U = \sum_{n=1}^{\infty} c_n^2\, \lambda_n. \qquad (18)$$
If u(x) satisfies the normalization condition ‖u‖₂² = 1, or equivalently Σ_{n=1}^{∞} c_n² = 1, then based on Expressions (14) and (18), the eigenvalue λ_n satisfies
$$\lambda_n = \min_{u \in V(\Omega)} \langle u, u \rangle_U \quad \text{s.t.} \quad \langle u, u \rangle = 1, \quad \langle u, \varphi_i \rangle = 0, \ \ i = 1, 2, \dots, n-1, \qquad (19)$$
where the optimal solution of u ( x ) is the eigenfunction φ n ( x ) .
Similarly, the deformation v ( x ) caused by g ( x ) can be expressed as
$$v(x) = \sum_{n=1}^{\infty} d_n\, \varphi_n(x), \qquad (20)$$
where d n is the amplitude coefficient and can be interpreted as the component of v ( x ) at the n t h vibration mode φ n ( x ) . Substituting Expression (20) into Expression (12) and using the orthogonal condition (17), we have
$$\min_{d_n,\ n = 1, 2, \dots} \ \pi[d_1, d_2, \dots] = \sum_{n=1}^{\infty} \left( \frac{1}{2} \lambda_n d_n^2 - g_n d_n \right), \qquad (21)$$
where the coefficient g n is the projection of g ( x ) on φ n ( x ) with respect to the Euclidean inner product
g n = g , φ n .
Enforcing the derivative of π [ d 1 , d 2 ] in Expression (21) with respect to d n to zero, we have
$$d_n = \frac{g_n}{\lambda_n}, \qquad n = 1, 2, \dots. \qquad (23)$$
According to the series representation of u ( x ) in Expression (15), if u ( x ) is the deformation caused by f ( x ) , the coefficient c n satisfies
$$c_n = \frac{f_n}{\lambda_n}, \qquad f_n = \langle f, \varphi_n \rangle, \qquad n = 1, 2, \dots. \qquad (24)$$
Substituting the Expressions (15), (20), (23) and (24) into Expression (5) and using the orthogonal condition (17), we have
$$\langle u, v \rangle_U = \sum_{n=1}^{\infty} \frac{f_n \cdot g_n}{\lambda_n}. \qquad (25)$$
Generally speaking, the external force f(x) ∉ V(Ω), because f(x) does not satisfy the homogeneous boundary conditions, i.e., l[f(x)] ≠ 0. In this case, Σ_{n=1}^{∞} f_n φ_n(x) is equal to the projection of f(x) on V(Ω), i.e., the optimal approximation of f(x) in V(Ω). Of course, in order to make f(x) ∈ V(Ω), we may expand the design domain and simplify the boundary condition. For example, after expanding the design domain, we can set a fixed boundary and let σ(x) = ∞, or set a mirror boundary and let σ(x) = 0. In these cases, l[f(x)] = 0 and f(x) = Σ_{n=1}^{∞} f_n φ_n(x). Then, applying the orthogonality condition (14) yields
$$\langle f, g \rangle = \sum_{n=1}^{\infty} f_n \cdot g_n. \qquad (26)$$
After f ( x ) and g ( x ) are expressed as a superposition of the eigenfunctions of the operator L , through comparing the mutual-energy inner product u , v U in Expression (25) and the Euclidean inner product f , g in Expression (26), it can be found that the mutual-energy inner product has the advantage of enhancing the low-frequency coordinate components ( λ n < 1 ) and suppressing the high-frequency coordinate components ( λ n > 1 ). In other words, if f ( x ) and g ( x ) are regarded as signals, the mutual-energy inner product can augment the low-frequency eigenfunction components and filter out the high-frequency eigenfunction components of the signals, with the help of a structural model.
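This filtering effect can be checked with a small numerical experiment. The following is a minimal 1-D sketch (an illustration added here, not taken from the paper): the operator L[u] = −(p u′)′ + q u is discretized by finite differences on (0, 1) with fixed ends, and the discrete analogues of Expressions (25) and (26) are compared for a reference signal and a version of it contaminated by a high-frequency component. Because λ_n grows roughly like p(nπ)² + q, the high-frequency term barely changes the mutual-energy inner product, while it noticeably changes the Euclidean one.

```python
import numpy as np

# Discretize L[u] = -(p u')' + q u on (0, 1) with fixed ends (1-D analogue of the membrane).
n, h = 200, 1.0 / 201
p, q = 1.0, 1.0
K = (np.diag((2 * p / h**2 + q) * np.ones(n))
     + np.diag((-p / h**2) * np.ones(n - 1), 1)
     + np.diag((-p / h**2) * np.ones(n - 1), -1))

x = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * x) + 0.5 * np.sin(40 * np.pi * x)   # signal with some high-frequency content
g = np.sin(np.pi * x)                                   # clean reference signal
g_noisy = g + np.sin(40 * np.pi * x)                    # high-frequency perturbation

euclid = lambda a, b: h * (a @ b)                       # discrete <a, b>, Expression (26)
mutual = lambda a, b: h * (a @ np.linalg.solve(K, b))   # discrete <u, v>_U, Expression (25)

print("Euclidean:     clean %.4f  perturbed %.4f" % (euclid(f, g), euclid(f, g_noisy)))
print("Mutual-energy: clean %.6f  perturbed %.6f" % (mutual(f, g), mutual(f, g_noisy)))
# The perturbation shifts the Euclidean product markedly but barely affects the
# mutual-energy product, because the n-th eigencomponent is weighted by 1/lambda_n.
```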

4. Mutual-Energy Inner Product Optimization Model for Feature Extraction

Assume that D = { ( X ( i ) ( x ) , y ( i ) ) } i = 1 N is a training dataset with N samples, and each sample is represented as X ( i ) ( x ) , while y ( i ) represents the class labels. For example, the samples are divided into two classes, and D includes two subsets D 1 = { X ( i ) ( x ) | ( X ( i ) ( x ) D , y ( i ) = 1 ) } i = 1 N 1 and D 0 = { X ( i ) ( x ) | ( X ( i ) ( x ) D , y ( i ) = 0 ) } i = 1 N 0 , where N = N 1 + N 0 . Generally, the samples in different classes are assumed to be random variables, which are independent and have identical distributions.
We hope to find an appropriate feature coordinate system to represent X ( i ) ( x ) D and use fewer coordinate components to classify the samples. If there is no further information, we may select the means of the probability distribution of D 1 and D 0 as reference features. In order to design a feature extraction model, two points should be considered: one to enhance the feature information, and the other to suppress the effect of random noise. We resort to a structural model and use the mutual-energy inner product to extract the features. Its main idea is to map the data features to a low-frequency eigenfunction space of the structural model.
If f ( x ) and g ( x ) are used to represent the means of the probability distribution of D 1 and D 0 , respectively, their unbiased estimates can be written as
$$f(x) = \frac{1}{N_1} \sum_{X^{(i)}(x) \in D_1} X^{(i)}(x), \qquad g(x) = \frac{1}{N_0} \sum_{X^{(i)}(x) \in D_0} X^{(i)}(x). \qquad (27)$$
We regard X ( i ) ( x ) , f ( x ) and g ( x ) as external forces acting on the structural model, and use d ( i ) ( x ) , u ( x ) and v ( x ) to represent their corresponding deformations, respectively. If we represent the selected reference feature in V ( Ω ) as α ( x ) , we can use the mutual-energy inner product d ( i ) , α U to extract the feature coordinate component of X ( i ) ( x ) . In order to construct the feature extraction optimization model, we first select u ( x ) as the reference feature α ( x ) and try to explore the physical meanings of the structural model when d ( i ) , u U is the maximum, the minimum or equal to zero.
In order to enhance the feature information of the samples in D 1 , a high statistical mean value μ 1 should be given
$$\mu_1 = \frac{1}{N_1} \sum_{X^{(i)}(x) \in D_1} \langle d^{(i)}, u \rangle_U = \left\langle \frac{1}{N_1} \sum_{X^{(i)}(x) \in D_1} d^{(i)},\ u \right\rangle_U = \langle u, u \rangle_U, \qquad (28)$$
with a primary objective
$$\max_{p(x),\, q(x)} \ \mu_1 = \langle u, u \rangle_U. \qquad (29)$$
In Expression (29), the mutual-energy inner product and deformations are functions of p ( x ) and q ( x ) , and its physical meaning is not intuitive. So, next, we will conduct a quantitative analysis to reveal the structural characteristics hidden in Expression (29).
According to the minimum energy principle (12), if an optimal solution of u ( x ) is obtained, the derivative of the objective at the optimal solution in any direction δ u ( x ) is zero, satisfying
$$\frac{d}{d\tau}\, \pi[u(x) + \tau \cdot \delta u(x)] \Big|_{\tau = 0} = 0 \qquad \forall\, \delta u(x) \in C(\Omega). \qquad (30)$$
Through calculating Expression (30), we obtain the relationship between u ( x ) and f ( x )
$$\langle u, \delta u \rangle_U - \langle f, \delta u \rangle = 0 \qquad \forall\, \delta u(x) \in C(\Omega). \qquad (31)$$
Expression (31) is a structural static equilibrium equation, and is also a constraint on u ( x ) in optimization problem (29). In Expression (31), letting δ u ( x ) = u ( x ) yields
u , u U = f , u .
Substituting Expression (32) into Expression (12) yields the optimal value π [ u ] of the objective
$$\pi[u] = -\frac{1}{2} \langle u, u \rangle_U. \qquad (33)$$
Through substituting Expressions (12) and (33) into the optimization problem (29), Expression (29) is transformed into an unconstrained optimization problem
$$\max_{p(x),\, q(x),\, u(x)} \ \mu_1[u, p, q] = 2\langle f, u \rangle - \langle u, u \rangle_U. \qquad (34)$$
If p(x) and q(x) are given, μ₁[u, p, q] in Expression (34) is a quadratic and concave functional with respect to u(x), due to p(x) > 0 and q(x) > 0. If u(x) is given, μ₁[u, p, q] is a linear function with respect to p(x) and q(x). Using the Univariate Search Method to solve Expression (34): if p(x) and q(x) are given, the maximum value of μ₁[u, p, q] can be found by solving Expression (31) for u(x); and if u(x) is given, the maximum value of μ₁[u, p, q] is reached on the lower bounds of p(x) and q(x). So, the lower bounds of p(x) and q(x) must be larger than zero to ensure that Expression (29) has a finite optimal solution. In addition, the upper bounds of p(x) and q(x) should also be constrained to avoid the trivial solution u(x) = 0. Therefore, when the optimization objective is to maximize the mutual-energy inner product, as shown in Expression (29), the optimal structural model is the minimum-stiffness structure, and the selected feature belongs to a low-frequency eigenfunction subspace. On the contrary, if the optimization objective is to minimize the mutual-energy inner product, the optimal structural model is the maximum-stiffness structure, and the selected feature is mapped to a high-frequency eigenfunction subspace.
In addition, when using the mutual-energy inner product to extract feature information f ( x ) of the samples in D 1 , the feature information g ( x ) of the samples in D 0 should be suppressed. So, a small statistical mean value μ 0 is given
$$\mu_0 = \frac{1}{N_0} \sum_{X^{(i)}(x) \in D_0} \langle d^{(i)}, u \rangle_U = \left\langle \frac{1}{N_0} \sum_{X^{(i)}(x) \in D_0} d^{(i)},\ u \right\rangle_U = \langle v, u \rangle_U. \qquad (35)$$
Here, we may set μ 0 to be zero or even negative, and impose constraints on the structural model
$$\langle v, u \rangle_U \le 0. \qquad (36)$$
In Expression (31), setting δ u ( x ) = v ( x ) yields u , v U = f , v . Replacing f ( x ) with g ( x ) , and exchanging u ( x ) , v ( x ) , we have
u , v U = f , v = g , u .
If Expression (36) satisfies v , u U = 0 , then u ( x ) and v ( x ) are required to be orthogonal with respect to the mutual-energy inner product. Although the means of the two classes of the samples are generally not orthogonal in the continuous function space C ( Ω ) , i.e., f , g 0 , the orthogonality of u ( x ) and v ( x ) can be easily realized according to Expression (37). For example, if setting p ( x ) = 0 and dividing the domain Ω into two sub-regions according to the same or opposite signs of f ( x ) and g ( x ) , we can adjust q ( x ) in the two sub-regions and control the positive and negative work performed by the external forces g ( x ) on the deformations u ( x ) , so as to make the total work g , u in Expression (37) zero. According to Expression (25), this can also be understood as designing a structural model and adjusting its eigenfunctions and eigenvalues, so as to use these eigenvalues as weights to achieve the weighted orthogonality of f ( x ) and g ( x ) . Further, v , u U 0 can be regarded as the relaxation of the orthogonal constraints on the mutual-energy inner product, which can be realized by adjusting p ( x ) and q ( x ) to make g , u < 0 . Geometrically, this means that the angle between u ( x ) and v ( x ) in the mutual-energy inner product space V ( Ω ) is not an acute angle. If μ 0 is required to be minimal
$$\min_{p(x),\, q(x)} \ \mu_0 = \langle v, u \rangle_U, \qquad (38)$$
based on Expression (12), similar to the discussion on Expression (29), the optimization problem (38) can be transformed into an unconstrained form
$$\min_{u(x),\, v(x),\, p(x),\, q(x)} \ \max_{z(x)} \ \mu_0[z, u, v, p, q], \qquad (39)$$
where z ( x ) C ( Ω ) is a slack variable introduced to relax the constraint, which is the constraint of the static equilibrium equation describing the structural deformation due to f ( x ) and g ( x ) acting on the structure simultaneously. The objective can be expressed as
$$\mu_0[z, u, v, p, q] = \frac{1}{2}\langle u, u \rangle_U + \frac{1}{2}\langle v, v \rangle_U - \frac{1}{2}\langle z, z \rangle_U - \langle f,\, u - z \rangle - \langle g,\, v - z \rangle. \qquad (40)$$
Obviously, if p ( x ) > 0 and q ( x ) > 0 are given, μ 0 [ z , u , v , p , q ] is a quadratic functional of u ( x ) , v ( x ) , and z ( x ) . μ 0 [ z , u , v , p , q ] is convex with respect to u ( x ) and v ( x ) , and is concave with respect to z ( x ) . If u ( x ) , v ( x ) and z ( x ) are given, μ 0 [ z , u , v , p , q ] is linear with respect to p ( x ) and q ( x ) .
In order to design a feature coordinate to classify the samples in D, the objective is first to maximize μ₁ − μ₀. By combining Expressions (28) and (35), the optimization objective can be expressed as
$$\min_{p(x),\, q(x)} \ \mu_0 - \mu_1 = \langle v - u,\ u \rangle_U. \qquad (41)$$
Then, to improve the classification accuracy, the distributions of the samples in D 1 and D 0 along the feature coordinate u ( x ) should also be considered, and their variances should be small. The variances of D 1 and D 0 are high-order functions of u ( x ) , v ( x ) , p ( x ) and q ( x ) , so putting them into the optimization objective function (41) will destroy its low-order characteristics.
In order to improve the computational efficiency, the sum of the absolute values of the sample deviations from the mean is used to replace the variances, and only some samples in D₁ and D₀ are selected for the calculation. In the subset D₁, we only select the M₁ samples S₁ = { X^(i)(x) | X^(i)(x) ∈ D₁, ⟨X^(i), u⟩ < μ₁ }, whose components on u(x) are less than μ₁, and calculate their mean absolute deviation δ₁. In the subset D₀, we only select the M₀ samples S₀ = { X^(i)(x) | X^(i)(x) ∈ D₀, ⟨X^(i), u⟩ > μ₀ }, whose components on u(x) are larger than μ₀, and calculate their mean absolute deviation δ₀. δ₁ and δ₀ can be expressed as
$$\begin{cases} \delta_1 = \dfrac{1}{M_1} \displaystyle\sum_{X^{(i)}(x) \in S_1} \langle f - X^{(i)}, u \rangle = \langle u, u \rangle_U - \dfrac{1}{M_1} \displaystyle\sum_{X^{(i)}(x) \in S_1} \langle X^{(i)}, u \rangle \\[3mm] \delta_0 = \dfrac{1}{M_0} \displaystyle\sum_{X^{(i)}(x) \in S_0} \langle X^{(i)} - g, u \rangle = \dfrac{1}{M_0} \displaystyle\sum_{X^{(i)}(x) \in S_0} \langle X^{(i)}, u \rangle - \langle v, u \rangle_U. \end{cases} \qquad (42)$$
Through using Expressions (41) and (42), and considering the means and the mean absolute deviations of the samples, the optimization objective can be written as
$$\min_{p(x),\, q(x)} \ J[p(x), q(x)] = \lambda (\mu_0 - \mu_1) + (1 - \lambda)(\delta_0 + \delta_1), \qquad (43)$$
where λ is a weight variable satisfying 0 ≤ λ ≤ 1. To simplify Expression (42), the auxiliary deformation function w(x) ∈ V(Ω) is defined by
$$\langle w, \delta w \rangle_U - \langle h, \delta w \rangle = 0 \qquad \forall\, \delta w(x) \in C(\Omega), \qquad (44)$$
where h ( x ) can be regarded as an external force corresponding to w ( x ) , satisfying
$$h(x) = \frac{1}{M_0} \sum_{X^{(i)}(x) \in S_0} X^{(i)}(x) - \frac{1}{M_1} \sum_{X^{(i)}(x) \in S_1} X^{(i)}(x). \qquad (45)$$
By substituting Expressions (41), (42), (44) and (45) into Expression (43), the optimization objective is simplified as
min p ( x ) , q ( x ) J [ p ( x ) , q ( x ) ] = c , u U .
Here, c ( x ) V ( Ω ) is a combination of the deformation functions, and can be expressed as
$$c(x) = (1 - 2\lambda)\left( u(x) - v(x) \right) + (1 - \lambda)\, w(x). \qquad (47)$$
In order to improve the generalization of the data classifier, regularizers should be added to the optimization model. Here, ‖p‖₁ and ‖q‖₁ stand for the 1-norms of p(x) and q(x), respectively, and are used as regularizers to avoid increasing the order of the optimization model. Meanwhile, these regularizers are treated as two constraints by directly setting the values of ‖p‖₁ and ‖q‖₁. Due to p(x) > 0 and q(x) > 0, ‖p‖₁ and ‖q‖₁ can be simply written as
$$\| p(x) \|_1 = \int_{\Omega} p(x)\, d\Omega, \qquad \| q(x) \|_1 = \int_{\Omega} q(x)\, d\Omega. \qquad (48)$$
It should be noted that objective (46) is built by taking the mean f(x) of D₁ as the reference feature and selecting the deformation u(x) as the reference feature coordinate axis. If other deformation functions α(x) are selected as the reference feature coordinate axis, the results are similar. For example, α(x) can be set as u(x), v(x), u(x) − v(x), or others. Through setting α(x) as the reference feature coordinate axis, the optimization model can be summarized as
$$\begin{aligned} \min_{p(x),\, q(x)} \ & J[p, q] = \langle c, \alpha \rangle_U \\ \text{s.t.} \ & \langle u, \delta u \rangle_U = \langle f, \delta u \rangle, \quad \langle v, \delta v \rangle_U = \langle g, \delta v \rangle, \quad \langle w, \delta w \rangle_U = \langle h, \delta w \rangle \\ & \langle u, v \rangle_U \le 0 \\ & \| p \|_1 = Tol_p, \quad \| q \|_1 = Tol_q \\ & p_{\min} \le p(x), \quad q_{\min} \le q(x). \end{aligned} \qquad (49)$$
Here, δ u , δ v and δ w are arbitrary continuous functions on Ω ; p min > 0 and q min > 0 , are lower bounds of p ( x ) and q ( x ) ; T o l p and T o l q are two constants; f ( x ) , g ( x ) , h ( x ) and c ( x ) are given in Expressions (27), (45) and (47). S 1 and S 0 should be determined according to the reference feature coordinate axis, and can be rewritten as
$$\begin{aligned} S_1 &= \left\{ X^{(i)}(x) \,\middle|\, X^{(i)}(x) \in D_1,\ \langle X^{(i)}, \alpha \rangle < \langle f, \alpha \rangle \right\}_{i=1}^{M_1} \\ S_0 &= \left\{ X^{(i)}(x) \,\middle|\, X^{(i)}(x) \in D_0,\ \langle X^{(i)}, \alpha \rangle > \langle g, \alpha \rangle \right\}_{i=1}^{M_0}. \end{aligned} \qquad (50)$$

5. Mutual-Energy Inner Product Feature Coordinate Optimization Algorithm

The FEM is used to solve the differential Equation (1) to realize the mapping from f(x), g(x), and h(x) to u(x), v(x), and w(x) in the optimization model (49). We divide the domain Ω into N_e elements Ω_s^(e) (e = 1, 2, …, N_e), and assume the e-th element Ω_s^(e) has N_d nodes. For the i-th (i = 1, 2, …, N_d) node in Ω_s^(e), its global coordinate in Ω, deformation value u(x_i^(e)), and interpolation basis function are denoted as x_i^(e), u_i^(e), and N_i(ξ), respectively, where ξ ∈ R^n is the local coordinate of the element Ω_s^(e). In this way, for an element, the relationship x^(e)(ξ) between its global and local coordinates and the element deformation function u^(e)(ξ) can be expressed as [37]
$$x^{(e)}(\xi) = \sum_{j=1}^{N_d} N_j(\xi)\, x_j^{(e)}, \qquad u^{(e)}(\xi) = \sum_{j=1}^{N_d} N_j(\xi)\, u_j^{(e)}. \qquad (51)$$
It is assumed that N is an N_d-dimensional row vector with the j-th component N_j(ξ); L is an n × N_d matrix with the entry L_{ij} = ∂N_j(ξ)/∂ξ_i, where ξ_i is the i-th component of the local coordinate ξ; and X is an N_d × n matrix with the entry X_{ij} = x_{ij}^(e), where x_{ij}^(e) is the j-th component of the element node coordinate x_i^(e). Applying Expression (51), the n × n Jacobi matrix J of the transformation between the global and local coordinates, the deformation function u^(e)(x), and its n-dimensional gradient vector ∇u^(e)(x) = [∂u^(e)/∂x₁, ∂u^(e)/∂x₂, …, ∂u^(e)/∂x_n]^T can be expressed in the concise and compact form
$$u^{(e)}(x) = N \cdot u^{(e)}, \qquad \nabla u^{(e)}(x) = B \cdot u^{(e)}, \qquad J = L \cdot X, \qquad B = J^{-1} L, \qquad (52)$$
where u ( e ) = [ u 1 ( e ) u 2 ( e ) u N d ( e ) ] T is a vector with the component u i ( e ) , which is the deformation value of the i t h node in the e t h element, and B is an n × N d matrix. In the optimization model (49), the design variables are p ( x ) and q ( x ) . We assume p ( x ) and q ( x ) in each element are constants p e and q e . So, the design variables can be expressed as p = [ p 1 p 2 p N e ] T and q = [ q 1 q 2 q N e ] T in Ω .
Substituting Expression (52) into the mutual-energy expressions (5) and (9) yields
u , u U = e = 1 N e u ( e ) T K s ( e ) u ( e ) , f , u = e = 1 N e u ( e ) T f ( e ) .
Here, K s ( e ) is an N d × N d element stiffness matrix, which is a positive semidefinite symmetric matrix and can be expressed as
K s ( e ) = p e K p ( e ) + q e K q ( e ) + K σ ( e ) .
In Expression (54), K s ( e ) is a linear function of p e and q e ; K p ( e ) and K q ( e ) are corresponding coefficient matrices; and K σ ( e ) is the contribution of the boundary constraint to the element stiffness matrix. If the element boundary does not overlap with the design domain boundary, then K σ ( e ) = 0 . Here, K p ( e ) , K q ( e ) , K σ ( e ) can be calculated by
$$K_p^{(e)} = \int_{\Omega_s^{(e)}} B^T B \, d\Omega, \qquad K_q^{(e)} = \int_{\Omega_s^{(e)}} N^T N \, d\Omega, \qquad K_\sigma^{(e)} = \int_{\Gamma_s^{(e)}} \sigma(x)\, N^T N \, d\Gamma. \qquad (55)$$
In Expression (53), f ( e ) is the equivalent node input vector, resulting from the equivalent action between the force f ( x ) on the element and the force f ( e ) on the node, and satisfies
f ( e ) = Ω s ( e ) f ( x ) N T d Ω .
It is assumed that the design domain Ω comprises M element nodes. We number these nodes globally, and use two M -dimension vectors u and f to denote the values of u ( x ) and f ( x ) at all the nodes. The components of u and f are u i and f i , where the subscript i is the global node number. The component f i can be calculated through Expression (56). Expression (56) is calculated for each element adjacent to the i t h global node, and f i is the superposition of the element node corresponding to the i t h global node.
Based on the relationship between the local and global node numbers, Expression (53) can be rewritten as
u , u U = u T K u = e = 1 N e u ( e ) T K s ( e ) u ( e ) , f , u = u T f = e = 1 N e u ( e ) T f ( e ) ,
where K is the global stiffness matrix, an M × M positive definite symmetric matrix. Substituting Expression (57) into Expression (12) yields
$$\min_{u} \ \pi[u] = \frac{1}{2}\, u^T K u - u^T f. \qquad (58)$$
Based on Expression (58), the solution of the differential Equation (1) satisfies
$$K u - f = 0. \qquad (59)$$
Similarly, assume that the input of Expression (1) is g ( x ) and the corresponding solution is v ( x ) ; v is the global node vector corresponding to v ( x ) on Ω , and v ( e ) is the element node vector corresponding to v ( x ) on Ω s ( e ) ; and g is the equivalent node input vector corresponding to g ( x ) . We have
$$K v - g = 0. \qquad (60)$$
Similarly to the derivation of Expression (57), through using Expressions (59) and (60), the mutual-energy expression of u ( x ) and v ( x ) can be derived
$$\langle u, v \rangle_U = u^T K v = \sum_{e=1}^{N_e} u^{(e)T} K_s^{(e)} v^{(e)}, \qquad \langle u, v \rangle_U = v^T f = u^T g. \qquad (61)$$
In Expression (61), the first equation is used for model optimization, and the second equation is used for data classifier training and prediction, avoiding the need to solve for the Expressions (59) and (60).
After discretizing the design domain by finite elements, the differential Equation (1) is converted into a system of linear equations, and the mutual-energy definition (5) can be expressed by the matrix and vector product. In this way, the optimization model (49) can be rewritten in the vector form
$$\begin{aligned} \min_{p,\, q} \ & J[p, q] = c^T K \alpha \\ \text{s.t.} \ & K u = f, \quad K v = g, \quad K w = h \\ & G[p, q] = u^T K v \le 0 \\ & \sum_{e=1}^{N_e} p_e = Tol_p, \quad \sum_{e=1}^{N_e} q_e = Tol_q \\ & p_{\min} \le p_e, \quad q_{\min} \le q_e, \quad e = 1, 2, \dots, N_e. \end{aligned} \qquad (62)$$
Here, α is the finite element node vector corresponding to the selected reference feature coordinate, and can be the statistical features of the sample sets or their combination; for example,
$$\alpha = u \quad \text{or} \quad \alpha = v \quad \text{or} \quad \alpha = u - v. \qquad (63)$$
Meanwhile, f , g , and h are the finite element node vectors corresponding to the mean and deviation of the samples, and c is the temporary node vector generated by the mean and deviation. Expression (47) can be rewritten as
$$c = (1 - 2\lambda)(u - v) + (1 - \lambda)\, w. \qquad (64)$$
The significant advantage of the optimization model (62) is that K is a positive definite symmetric matrix and is linear with respect to the design variables p and q , and meanwhile, the coefficient matrices corresponding to the components of the design variables are positive semidefinite matrices, convenient for the algorithm design. Intermediate variables u , v , w are functions of the design variables and can be calculated by using the linear equations, and the optimization model (62) can be solved by the sequential linearization algorithm. The objective J [ p , q ] and the constraint G [ p , q ] are nonlinear, and their derivatives with respect to the design variables need to be calculated. The derivative of G [ p , q ] with respect to p e is
$$\frac{\partial G}{\partial p_e} = \frac{\partial u^T}{\partial p_e} K v + u^T \frac{\partial K}{\partial p_e} v + u^T K \frac{\partial v}{\partial p_e}, \qquad e = 1, 2, \dots, N_e, \qquad (65)$$
where ∂u/∂p_e and ∂v/∂p_e are determined by taking the derivatives of K u = f and K v = g with respect to p_e
$$\frac{\partial K}{\partial p_e} u + K \frac{\partial u}{\partial p_e} = 0, \qquad \frac{\partial K}{\partial p_e} v + K \frac{\partial v}{\partial p_e} = 0. \qquad (66)$$
Substituting Expression (66) into Expression (65) yields
$$\frac{\partial G}{\partial p_e} = -\, u^T \frac{\partial K}{\partial p_e} v. \qquad (67)$$
Substituting Expression (54) into Expression (67) yields G / p e . Similarly, G / q e can also be computed
$$\frac{\partial G}{\partial p_e} = -\, u^{(e)T} K_p^{(e)} v^{(e)}, \qquad \frac{\partial G}{\partial q_e} = -\, u^{(e)T} K_q^{(e)} v^{(e)}. \qquad (68)$$
Expressions (63) and (64) show that c and α are linear combinations of u, v, and w. According to the superposition principle, ∂c/∂p_e and ∂α/∂p_e also satisfy equations similar to Expression (66), and the derivation is exactly the same as for Expression (68). So, we obtain
$$\frac{\partial J}{\partial p_e} = -\, c^{(e)T} K_p^{(e)} \alpha^{(e)}, \qquad \frac{\partial J}{\partial q_e} = -\, c^{(e)T} K_q^{(e)} \alpha^{(e)}. \qquad (69)$$
Optimization Algorithm 1: Mutual-energy inner product feature coordinate optimization algorithm
Based on Expressions (68) and (69), the optimization model (62) can be solved by the sequential linearization algorithm. The algorithm steps are summarized as follows:
(1)
Use vectors to represent the sample data
Convert the sample data X ( i ) ( x ) in the training subsets D 1 and D 0 into the finite element node vectors X ( i ) R M . Based on Expression (70), first calculate the element node vectors X ( i ) ( e ) R N d , and then use them to assemble the global node vector X ( i ) .
$$X^{(i)(e)} = \int_{\Omega_s^{(e)}} X^{(i)}(x)\, N^T \, d\Omega, \qquad e = 1, 2, \dots, N_e. \qquad (70)$$
(2)
Set the optimization constants and initial values of the design variables
Set the optimization constants
Set λ , the weight of the mean and deviation, with the requirement λ [ 0 , 1 ] ; set the total amount T o l p , T o l q and the lower bounds p min , q min of the design variables; set the moving limit Δ x max of the design variables for the linear programming; set the design variable minimum increment ε x and the objective function minimum increment ε J , which are used to determine if the optimization ends or not.
Set the initial values of the design variables
Set p e = p e ( 0 ) , q e = q e ( 0 )   ( e = 1 , 2 N e ) . Generally, set p e ( 0 ) = T o l p / M , q e ( 0 ) = T o l q / M .
(3)
Calculate the current value of the objective function
Calculate the element stiffness matrices and assemble the global stiffness matrix
Based on Expressions (54) and (55), calculate the element stiffness matrices K s ( e ) ( e = 1 , 2 N e ) . The element stiffness matrix is linear with respect to p e and q e , and the coefficient matrices are determined only by the element interpolation basis functions, so the calculation can be performed prior to the optimization to speed up the optimization process. Then, assemble the global stiffness matrix according to the node numbers. Since K is a positive definite symmetric matrix, through performing Cholesky decomposition on it, we can have K = L · L T , where L is a lower triangular matrix.
Compute the mean vectors u and v , and select the reference feature coordinate axis α
$$\begin{cases} u = L^{-T}\!\left( L^{-1} f \right) \\ f = \dfrac{1}{N_1} \displaystyle\sum_{X^{(i)} \in D_1} X^{(i)}, \end{cases} \qquad \begin{cases} v = L^{-T}\!\left( L^{-1} g \right) \\ g = \dfrac{1}{N_0} \displaystyle\sum_{X^{(i)} \in D_0} X^{(i)}, \end{cases} \qquad (71)$$
where f and g represent the means of the sample data in D 1 and D 0 ; N 1 and N 0 are the sample numbers in D 1 and D 0 ; α can be selected and calculated by Expression (63).
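As a concrete sketch of this step, the factorization and the two triangular solves of Expression (71) can be carried out with SciPy. The helper below is an assumption about how the data are stored (a dense K and sample node vectors stacked row-wise), not the paper's implementation; for large meshes a sparse factorization would be substituted.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def class_mean_deformations(K, X1, X0):
    """Step (3) sketch: factor K once (K = L L^T) and reuse the factor for
    u = L^{-T}(L^{-1} f) and v = L^{-T}(L^{-1} g), as in Expression (71).

    K  : (M, M) dense, symmetric positive definite global stiffness matrix
    X1 : (N1, M) node vectors of the class-"1" training samples
    X0 : (N0, M) node vectors of the class-"0" training samples
    """
    f = X1.mean(axis=0)            # mean force of class "1"
    g = X0.mean(axis=0)            # mean force of class "0"
    factor = cho_factor(K)         # Cholesky factorization of K
    u = cho_solve(factor, f)       # mean deformation of class "1"
    v = cho_solve(factor, g)       # mean deformation of class "0"
    return f, g, u, v
```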
Compute the deviation vector w and the intermediate vector c
$$\begin{cases} w = L^{-T}\!\left( L^{-1} h \right) \\ h = \dfrac{1}{M_0} \displaystyle\sum_{X^{(i)} \in S_0} X^{(i)} - \dfrac{1}{M_1} \displaystyle\sum_{X^{(i)} \in S_1} X^{(i)}, \end{cases} \qquad \begin{cases} S_1 = \left\{ X^{(i)} \mid X^{(i)} \in D_1,\ \alpha^T X^{(i)} < \mu_1 \right\}_{i=1}^{M_1} \\ S_0 = \left\{ X^{(i)} \mid X^{(i)} \in D_0,\ \alpha^T X^{(i)} > \mu_0 \right\}_{i=1}^{M_0}, \end{cases} \qquad (72)$$
where h is the deviation of the sample data and only the sample data in S 1 and S 0 are calculated. μ 1 = α T f and μ 0 = α T g represent the projections of the means of the sample data in D 1 and D 0 on α . After u , v , w are obtained, c can be obtained by Expression (64).
Calculate the current values of the objective function and the constraint
Based on the optimization model (62), the current values of J [ p , q ] and G [ p , q ] can be calculated by
J 0 = c T K α , G 0 = u T K v .
(4)
Calculate the gradient vectors of the objective function and the constraint
Apply Expressions (68) and (69) to calculate ∂J/∂p_e, ∂J/∂q_e, ∂G/∂p_e, and ∂G/∂q_e. Then, express them as the compact gradient vectors ∇_p J, ∇_q J, ∇_p G, and ∇_q G. Here, ∇_p J is defined as ∇_p J = [∂J/∂p₁, ∂J/∂p₂, …, ∂J/∂p_{N_e}]^T, and the other gradient vectors are defined similarly. In Expressions (68) and (69), K_p^(e) and K_q^(e) are only determined by the element interpolation basis functions and are constant matrices independent of the design variables. So, K_p^(e) and K_q^(e) can be calculated prior to the optimization, and the gradient vectors of J[p, q] and G[p, q] can be obtained through the mapping relationship between the local and global node numbers.
(5)
Obtain increments of the design variables by solving the sequential linearization optimization model
Construct the sequential linearization optimization model
$$\begin{aligned} \min_{x_p,\, x_q} \ & J[x_p, x_q] = J_0 + (\nabla_p J)^T x_p + (\nabla_q J)^T x_q \\ \text{s.t.} \ & G[x_p, x_q] = G_0 + (\nabla_p G)^T x_p + (\nabla_q G)^T x_q \le 0 \\ & \sum_{i=1}^{N_e} x_{p,i} = Tol_{xp}, \quad \sum_{i=1}^{N_e} x_{q,i} = Tol_{xq} \\ & x_{p\min,i} \le x_{p,i} \le \Delta x_{\max}, \quad x_{q\min,i} \le x_{q,i} \le \Delta x_{\max}, \quad i = 1, 2, \dots, N_e, \end{aligned} \qquad (74)$$
where the design variables p R M , q R M ; x p and x q are increments of the design variables, and their i t h components are x p , i and x q , i ; T o l x p , T o l x q , and x p min , i , x q min , i can be calculated by
$$\begin{cases} Tol_{xp} = Tol_p - \displaystyle\sum_{i=1}^{N_e} p_i \\ Tol_{xq} = Tol_q - \displaystyle\sum_{i=1}^{N_e} q_i, \end{cases} \qquad \begin{cases} x_{p\min,i} = \max\left( p_{\min} - p_i,\ -\Delta x_{\max} \right) \\ x_{q\min,i} = \max\left( q_{\min} - q_i,\ -\Delta x_{\max} \right). \end{cases}$$
Solve the sequential linearization optimization model (74) to obtain x p and x q
When solving Expression (74), slack variables are added to G[x_p, x_q] ≤ 0 to facilitate the construction of an initial feasible solution.
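A hedged sketch of step (5) using scipy.optimize.linprog follows; with the HiGHS solver, the construction of an initial feasible point is handled internally, so no explicit slack variables appear here. The function and argument names are assumptions introduced for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp_subproblem(gJp, gJq, gGp, gGq, G0, Tol_xp, Tol_xq,
                        xp_min, xq_min, dx_max):
    """Sketch of the linearized subproblem (74).

    gJp, gJq, gGp, gGq : (Ne,) gradients of J and G w.r.t. p and q
    xp_min, xq_min     : (Ne,) lower bounds of the increments (formulas above)
    Returns the design variable increments (x_p, x_q)."""
    Ne = gJp.size
    c = np.concatenate([gJp, gJq])                 # linearized objective coefficients
    A_ub = np.concatenate([gGp, gGq])[None, :]     # linearized constraint G <= 0
    b_ub = np.array([-G0])
    A_eq = np.block([[np.ones(Ne), np.zeros(Ne)],  # total-amount (1-norm) constraints
                     [np.zeros(Ne), np.ones(Ne)]])
    b_eq = np.array([Tol_xp, Tol_xq])
    bounds = ([(lo, dx_max) for lo in xp_min] +
              [(lo, dx_max) for lo in xq_min])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")   # HiGHS handles feasibility internally
    return res.x[:Ne], res.x[Ne:]
```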
(6)
Determine whether to end the optimization iteration
Store the design variables, the objective function, and the constraint function of the previous step of the sequential linearization optimization.
Store the design variables p o l d = p , q o l d = q , the objective function value J 0 o l d = J 0 , and the constraint function value G 0 o l d = G 0 .
Update the design variables and the objective function value.
Let p = p + x p , q = q + x q , then execute step (3) to update the objective function value J 0 .
Determine whether to end the iteration.
If |J₀ − J₀^old| ≤ ε_J or max(‖x_p‖_∞, ‖x_q‖_∞) ≤ ε_x, then end the iteration. Otherwise, if J₀ < J₀^old, go to step (4) to continue the iteration; if J₀ ≥ J₀^old, reduce the moving limits of the design variables by letting Δx_max = γ · Δx_max (here, γ = 0.5–0.85), and then go to step (5) to recompute the design variable increments x_p and x_q.
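The following skeleton strings steps (3)–(6) together. The callbacks `evaluate`, `gradients`, and `lp_step` are hypothetical helpers standing in for the computations described above (for instance, the linprog sketch in step (5)), so this is an outline of the control flow rather than a full implementation.

```python
import numpy as np

def optimize_feature_coordinate(p, q, evaluate, gradients, lp_step,
                                dx_max=0.08, eps_x=8e-4, eps_J=1e-7, gamma=0.7):
    """Control-flow skeleton of steps (3)-(6) of Optimization Algorithm 1.

    evaluate(p, q)   -> (J0, G0, state)             # step (3), hypothetical helper
    gradients(state) -> (gJp, gJq, gGp, gGq)        # step (4), hypothetical helper
    lp_step(grads, G0, p, q, dx_max) -> (x_p, x_q)  # step (5), e.g. the linprog sketch
    """
    J0, G0, state = evaluate(p, q)
    while True:
        grads = gradients(state)                                  # step (4)
        x_p, x_q = lp_step(grads, G0, p, q, dx_max)               # step (5)
        J_old, G_old = J0, G0                                     # step (6): store
        p_old, q_old, state_old = p.copy(), q.copy(), state
        p, q = p + x_p, q + x_q                                   # update design variables
        J0, G0, state = evaluate(p, q)                            # update objective
        if abs(J0 - J_old) <= eps_J or max(np.abs(x_p).max(), np.abs(x_q).max()) <= eps_x:
            break                                                  # converged
        if J0 >= J_old:                      # no improvement: shrink the move limits,
            dx_max *= gamma                  # retreat, and re-solve the subproblem
            p, q, J0, G0, state = p_old, q_old, J_old, G_old, state_old
    return p, q
```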

6. Algorithm Implementation and Image Classifier

Image classification is used to determine if an image has certain given features and can be realized by algorithms for extracting the feature information of the image. Applying the mutual-energy inner product to extract the image features has the advantage of enhancing the feature information and suppressing other high-frequency noise. If we select multiple features of an image, we can design multiple mutual-energy inner products, and each mutual-energy inner product can be regarded as one feature coordinate of the image. Using multiple mutual-energy inner products to characterize an image is equivalent to using multiple feature coordinates to describe the image, or equivalent to representing the high-dimensional image in a low-dimensional space, reducing the dimensionality of image data.
This part will discuss the implementation of Optimization Algorithm 1 and its application in 2-D grayscale image classification. Assume that each sample X^(i)(x) in the training datasets D₁ and D₀ is a 2-D grayscale image; the domain Ω occupied by the image is rectangular; each image is expressed by n₁ × n₂ pixels; and each pixel is a square with a side length of 1. In this case, x ∈ Ω ⊂ R² and Ω = { (x₁, x₂) | 0 ≤ x₁ ≤ n₁, 0 ≤ x₂ ≤ n₂ }.

6.1. Vectorized Implementation of Optimization Algorithm 1

While using FEM to discretize the design domain, we regard each pixel as a finite element and divide the domain Ω into n 1 × n 2 quadrilateral elements Ω s ( e ) , i.e., e = 1 , 2 N e and N e = n 1 × n 2 . In Ω , the global element numbering uses column priority, where the upper left corner element is numbered 1 and the lower right corner element is numbered N e . A planar quadrilateral element is used to interpolate the deformation functions. Each element has four nodes, so the total number of nodes is M = ( n 1 + 1 ) × ( n 2 + 1 ) , and the total number of boundary nodes is n Γ = 2 × ( n 1 + n 2 ) . The global node numbering also uses column priority, where the upper left corner node is numbered 1 and the lower right corner node is numbered M . The interpolation basis functions of the quadrilateral element are
$$N_i(\xi) = \frac{1}{4}\left( 1 + \xi_1^{(i)} \xi_1 \right)\left( 1 + \xi_2^{(i)} \xi_2 \right), \qquad i = 1, 2, 3, 4, \qquad (75)$$
where the domain of definition is the square Ω_ξ = { (ξ₁, ξ₂) | −1 ≤ ξ₁ ≤ 1, −1 ≤ ξ₂ ≤ 1 }. The element nodes are the four corner points of the quadrilateral. The node with the coordinate (−1, −1) is numbered 1, and, in counterclockwise order, the other nodes with the coordinates (+1, −1), (+1, +1), (−1, +1) are numbered 2, 3, and 4, respectively. The interpolation basis function N_i(ξ) corresponds to the i-th node, where (ξ₁^(i), ξ₂^(i)) is the corresponding node coordinate. The mapping relationship between the element node numbers and the global node numbers can be described by an N_e × 4 matrix Θ, whose i-th row corresponds to the i-th element. If Θ_{i,j} denotes its entry in the i-th row and the j-th column, then Θ_{i,1}, Θ_{i,2}, Θ_{i,3}, Θ_{i,4} are the global node numbers corresponding to the element node numbers 1, 2, 3, and 4 of the i-th element. So, we have
$$\Theta_{i,1} = i + 1 + r, \qquad \Theta_{i,2} = \Theta_{i,1} + n_1 + 1, \qquad \Theta_{i,3} = \Theta_{i,1} + n_1, \qquad \Theta_{i,4} = \Theta_{i,1} - 1, \qquad (76)$$
where r is a module when i is divided by n 1 . Since all the elements are same squares, the isoparametric transformation x ( e ) ( ξ ) in Expression (51) is actually a scaling transformation. Through substituting N i ( ξ ) into Expressions (52) and (55), we can find that the coefficient matrices K p ( e ) and K q ( e ) are independent of the element node numbers. So, we use K p and K q to express K p ( e ) and K q ( e ) , and calculate them directly by
$$K_p = \frac{1}{24} \begin{bmatrix} 4 & -1 & -2 & -1 \\ -1 & 4 & -1 & -2 \\ -2 & -1 & 4 & -1 \\ -1 & -2 & -1 & 4 \end{bmatrix}, \qquad K_q = \frac{1}{36} \begin{bmatrix} 4 & 2 & 1 & 2 \\ 2 & 4 & 2 & 1 \\ 1 & 2 & 4 & 2 \\ 2 & 1 & 2 & 4 \end{bmatrix}. \qquad (77)$$
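For reference, a small NumPy sketch of this element bookkeeping follows. It interprets r in Expression (76) as the integer quotient (i − 1) // n₁ (the number of complete element columns before element i), which reproduces corner nodes consistent with the column-priority numbering described above; this interpretation, and the hard-coded coefficient matrices, are assumptions that should be checked against one's own conventions.

```python
import numpy as np

def connectivity(n1, n2):
    """Element-to-node map Theta for an n1-by-n2 pixel grid (column-priority
    numbering). r is taken as (i - 1) // n1, i.e. the number of complete element
    columns before element i; verify this against your own numbering convention."""
    i = np.arange(1, n1 * n2 + 1)            # global element numbers
    r = (i - 1) // n1
    Theta = np.empty((n1 * n2, 4), dtype=int)
    Theta[:, 0] = i + 1 + r                  # local node 1
    Theta[:, 1] = Theta[:, 0] + n1 + 1       # local node 2
    Theta[:, 2] = Theta[:, 0] + n1           # local node 3
    Theta[:, 3] = Theta[:, 0] - 1            # local node 4
    return Theta

# Constant 4x4 coefficient matrices of Expression (77).
Kp = (1.0 / 24.0) * np.array([[ 4, -1, -2, -1], [-1,  4, -1, -2],
                              [-2, -1,  4, -1], [-1, -2, -1,  4]], dtype=float)
Kq = (1.0 / 36.0) * np.array([[4, 2, 1, 2], [2, 4, 2, 1],
                              [1, 2, 4, 2], [2, 1, 2, 4]], dtype=float)
```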
When a side of an element overlaps with the boundary of the domain Ω , the influence of the boundary conditions l [ u ( x ) ] = 0 in Expression (1) on K s ( e ) should be considered, so a 4 × 4 matrix K σ ( e ) should be calculated. Assume that the j t h side of the element overlaps with the boundary of Ω and the entry in the i t h row and j t h column of K σ ( e ) is K σ i , j ( e ) . Then, the non-zero entries in K σ ( e ) can be calculated by
$$K_{\sigma\, j,j}^{(e)} = K_{\sigma\, \hat{j},\hat{j}}^{(e)} = \frac{2}{3}\, \sigma_{j,\hat{j}}^{(e)}, \qquad K_{\sigma\, j,\hat{j}}^{(e)} = K_{\sigma\, \hat{j},j}^{(e)} = \frac{1}{3}\, \sigma_{j,\hat{j}}^{(e)}. \qquad (78)$$
In Expression (78), the subscripts j and j ^ stand for the starting and end points of the j t h side of the element, where the starting point is the element node numbered j and the end point is determined along the side in counterclockwise order; σ j , j ^ ( e ) is a constant, equal to the approximate value of σ ( x ) on the j t h side. In this paper, we handle the influence of l [ u ( x ) ] = 0 on K s ( e ) while assembling the global stiffness matrix. We just simply replace the subscripts j and j ^ of K σ ( e ) in Expression (78) with global node numbers, then directly use them to assemble the global stiffness matrix.
Because each element corresponds to a pixel, we can assume that its grayscale value is a constant X g r a y ( i ) ( e ) . In this way, a sample image X ( i ) ( x ) can be expressed as X g r a y ( i ) = [ X g r a y ( i ) ( 1 ) X g r a y ( i ) ( 2 ) X g r a y ( i ) ( e ) X g r a y ( i ) ( N e ) ] T . Through substituting Expression (75) into Expression (70), the relationship between element node vectors and image grayscale values can be obtained
X ( i ) ( e ) = X g r a y ( i ) ( e ) F ,
where F = 0.25 × [ 1 1 1 1 ] T , which can be regarded as mapping coefficients from the image grayscale to the element node vector.
While using element stiffness matrices K s ( e ) and element node vectors X ( i ) ( e ) to assemble the global stiffness matrix K and the global node vector X ( i ) , the functions for generating a sparse matrix in MATLAB R2020a or the Python 2.7 SciPy module can be used, and the input arguments include the row index vector, the column index vector, and the values of the non-zero entries. More importantly, these sparse matrix generation functions can sum the non-zero entries with the same indexes, which is consistent with the process of assembling K and X ( i ) .
In order to convert the image grayscale vector X g r a y ( i ) to the global node vector X ( i ) , a 4 × N e matrix F ( X g r a y ( i ) ) T should first be calculated, whose e t h column corresponds to the element node vector X ( i ) ( e ) . Then, F ( X g r a y ( i ) ) T is converted to a 4 N e -dimensional column vector V X in column-major order. Obviously, if we divide the components of V X into multiple groups in sequence and each group includes four components, then the e t h group corresponds to the element node vector X ( i ) ( e ) . V X can be calculated by
V X = r e s h a p e ( F ( X g r a y ( i ) ) T , 4 N e , 1 ) ,
where the function r e s h a p e ( A , m , n ) can convert the dimension of the matrix A into m × n while keeping the total number of the entries unchanged.
Through the mapping matrix Θ , the position indexes of components of V X in the global node vector X ( i ) can be obtained. We transpose the N e × 4 matrix Θ to the 4 × N e matrix Θ T , whose e t h column corresponds to global node numbers of the e t h element, and then convert Θ T to a 4 N e -dimensional column vector I X in the column-major order. I X can be figured out by
I X = r e s h a p e ( Θ T , 4 N e , 1 ) .
I X is the row index vector for generating X ( i ) by a sparse matrix generation function. Since X ( i ) has only one column, we use J X to denote a 4 N e -dimensional column index vector and set all the components of J X to 1. Through substituting V X , I X , J X into the sparse matrix generation function, we can yield X ( i ) .
Similarly, the global stiffness matrix K can be assembled by using the sparse matrix generation function. A vector V K ( p q ) related to K s ( e ) should be first calculated by
$$V_K^{(pq)} = \mathrm{reshape}\!\left( K_s^{(pq)},\ 16 N_e,\ 1 \right), \qquad K_s^{(pq)} = K_p \otimes p^T + K_q \otimes q^T, \qquad (82)$$
where the operator ⊗ denotes the Kronecker product of matrices; K_s^(pq) is a 4 × 4N_e matrix; and p and q are the design variables. If K_s^(pq) is divided into multiple blocks from left to right and each block is a 4 × 4 matrix, the e-th block is the calculation result of the first two terms of K_s^(e) in Expression (54), without including K_σ^(e). Therefore, if V_K^(pq) is divided into multiple blocks in sequence and each block includes 16 components, the e-th block corresponds to a 1-dimensional vector converted from the e-th element stiffness matrix in column priority. We set 1 = [1 1 1 1]^T, and use I_K^(pq), J_K^(pq) to denote the row indexes and column indexes of the entries in the global stiffness matrix. Then, I_K^(pq) and J_K^(pq) corresponding to the components of V_K^(pq) can be calculated by
$$I_K^{(pq)} = \mathrm{reshape}\!\left( \Theta^T \otimes \mathbf{1}^T,\ 16 N_e,\ 1 \right), \qquad J_K^{(pq)} = \mathrm{reshape}\!\left( \Theta^T \otimes \mathbf{1},\ 16 N_e,\ 1 \right). \qquad (83)$$
As mentioned above, the constraint on the design boundary Γ can generate additional stiffness K σ ( e ) for the adjacent elements. If we regard an element side overlapping with Γ as a 2-node line element, then its stiffness matrix will be a 2 × 2 matrix K Γ ( e ) , which can be figured out by Expression (78). Similarly, these line element stiffness matrices can be assembled into the global stiffness matrix. While designing an image classifier based on the mutual-energy inner products, we set a fixed boundary for Expression (1), i.e., u ( x ) = 0 ( x Γ ) . This boundary condition can be handled by adding a relatively large number σ 0 to the diagonal entries of K , where its diagonal entries correspond to the boundary node numbers. The sparse matrix generation function is used to implement this boundary condition. First, we set the dimension of the vector V K ( σ 0 ) as n Γ , which is the total number of the boundary nodes, and set all the components of V K ( σ 0 ) to σ 0 . Meanwhile, we let the n Γ -dimensional row and column index vectors be the same, i.e., I K ( σ 0 ) = J K ( σ 0 ) , and set their components to be the boundary node numbers. Finally, we combine V K ( p q ) and V K ( σ 0 ) , I K ( p q ) and I K ( σ 0 ) , J K ( p q ) and J K ( σ 0 ) , respectively, and input them into the sparse matrix generation function to obtain K .
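A sketch of this assembly in Python/SciPy is shown below. Rather than forming the Kronecker products of Expressions (82) and (83) explicitly, it builds the per-element 4 × 4 blocks by broadcasting and lets coo_matrix sum duplicate (row, column) pairs, which is the summation behavior described above; the function signature is an assumption.

```python
import numpy as np
from scipy.sparse import coo_matrix

def assemble_global_stiffness(p, q, Theta, Kp, Kq, boundary_nodes, sigma0, M):
    """Vectorized assembly sketch: element blocks p_e*Kp + q_e*Kq plus the
    fixed-boundary penalty sigma0 on the boundary diagonal. Theta holds
    1-based global node numbers (Expression (76))."""
    Ks = p[:, None, None] * Kp + q[:, None, None] * Kq   # (Ne, 4, 4) element blocks
    nodes = Theta - 1                                     # 0-based node numbers
    IK = np.repeat(nodes, 4, axis=1).ravel()              # row index of each entry
    JK = np.tile(nodes, (1, 4)).ravel()                   # column index of each entry
    VK = Ks.reshape(-1)                                   # matching entry values
    Ib = Jb = np.asarray(boundary_nodes) - 1              # boundary node diagonal
    Vb = np.full(Ib.size, float(sigma0))
    K = coo_matrix((np.concatenate([VK, Vb]),
                    (np.concatenate([IK, Ib]), np.concatenate([JK, Jb]))),
                   shape=(M, M)).tocsc()                  # duplicates are summed here
    return K
```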
Based on Expressions (68) and (69), the gradients of the objective and the constraint can be efficiently obtained by using Θ . For example, if we have two M -dimensional global node vectors c and α , we can adopt fancy indexing to generate two N e × 4 matrices N c = c ( Θ ) and N α = α ( Θ ) whose e t h rows correspond to the node vectors of the e t h element. According to Expression (69), the objective function gradients p J , q J can be calculated by
$$\nabla_p J = -\,\mathrm{sum}\!\left( N_c K_p \cdot N_\alpha,\ 2 \right), \qquad \nabla_q J = -\,\mathrm{sum}\!\left( N_c K_q \cdot N_\alpha,\ 2 \right), \qquad (84)$$
where · stands for multiplying the corresponding entries of the matrices, and sum(A, 2) sums the rows of a matrix to obtain a column vector. Mathematically, Expression (84) can be written as ∇_p J = −diag(N_c K_p N_α^T) and ∇_q J = −diag(N_c K_q N_α^T), where the function diag( ) extracts the main diagonal entries of a square matrix. Similarly, the constraint function gradients ∇_p G, ∇_q G can be calculated by replacing c, α with u, v.
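The fancy-indexing computation of Expression (84) can be sketched as follows; the sign convention matches Expressions (68) and (69), and the function name and argument layout are assumptions.

```python
import numpy as np

def objective_and_constraint_gradients(c, alpha, u, v, Theta, Kp, Kq):
    """Gradients of J = c^T K alpha and G = u^T K v with respect to the
    element-wise design variables p and q, via fancy indexing (Theta holds
    1-based node numbers, so 1 is subtracted for NumPy indexing)."""
    idx = Theta - 1
    Nc, Na = c[idx], alpha[idx]          # (Ne, 4) element node values
    Nu, Nv = u[idx], v[idx]
    grad_J_p = -np.sum((Nc @ Kp) * Na, axis=1)
    grad_J_q = -np.sum((Nc @ Kq) * Na, axis=1)
    grad_G_p = -np.sum((Nu @ Kp) * Nv, axis=1)
    grad_G_q = -np.sum((Nu @ Kq) * Nv, axis=1)
    return grad_J_p, grad_J_q, grad_G_p, grad_G_q
```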

6.2. Image Classifier

For a given training dataset D , in order to use Optimization Algorithm 1 to construct the mutual-energy inner product coordinate axes α m ( m = 1 , 2 N α ) , we select the subset D s of D as the reference training set and select the mean of samples of the class “0” or the class “1” in D s or a combination of these means as the reference feature α . The subset D s is gradually generated as the coordinate α m is generated. Prior to generating the coordinate α m + 1 , m mutual-energy inner product coordinate axes α 1 , α 2 α m have been generated and there are m subsets D s ( 1 ) , D s ( 2 ) D s ( m ) . One of the m subsets is selected as a subset D s to generate the coordinate α m + 1 . In order to explain how the generation of new axes work, we use a set S T to manage the m generated subsets, i.e., S T = { D s ( 1 ) , D s ( 2 ) D s ( m ) } 1 m . If D s ( i ) has M 0 ( i ) samples of the class “0” and M 1 ( i ) samples of the class “1”, the subset D s ( k ) in S T is taken as the reference training sample set D s to generate α m + 1 and its index k satisfies
k = argmax i min ( M 0 ( i ) , M 1 ( i ) ) .
After determining the subset D s and the reference feature α , α m + 1 can be obtained by Optimization Algorithm 1. Next, we divide D s into two subsets. First, for each sample X ( i ) in D s , we calculate its coordinate component z m + 1 ( i ) on the axis α m + 1 by z m + 1 ( i ) = α m + 1 , X ( i ) ; we calculate μ 0 , m + 1 and μ 1 , m + 1 , the means of samples of the class “0” and the class “1” and set a threshold z m + 1 t h = ( μ 0 , m + 1 + μ 1 , m + 1 ) / 2 . Second, according to z m + 1 ( i ) and z m + 1 t h , we divide D s into two subsets satisfying D s I = { X ( i ) | X ( i ) D s , z m + 1 ( i ) z m + 1 t h } and D s I I = { X ( i ) | X ( i ) D s , z m + 1 ( i ) > z m + 1 t h } . Finally, we add D s I and D s I I into S T , and delete D s ( k ) from S T . At this time, S T contains m + 1 training sample subsets, and one of them will be selected to calculate the coordinate axis α m + 2 .
The detailed steps for generating the mutual-energy inner product feature coordinates are summarized below; a compact code sketch of these steps follows the list.
Algorithm 2: Mutual-energy inner product feature coordinate generation
(1) Let $m=0$ and $S_T=\{D\}$;
(2) According to Expression (85), select the subset $D_s=D_s^{(k)}$ in $S_T$ used to generate the coordinate axis $\alpha_{m+1}$, and delete $D_s^{(k)}$ from $S_T$;
(3) Adopt Optimization Algorithm 1 to calculate $\alpha_{m+1}$ based on the determined reference subset $D_s$ and the selected reference feature $\alpha$;
(4) For each sample $X^{(i)}$ in $D_s$, calculate its coordinate component $z_{m+1}^{(i)}$ on the axis $\alpha_{m+1}$, the class "0" and class "1" means $\mu_{0,m+1}$ and $\mu_{1,m+1}$, and the threshold $z_{m+1}^{th}=(\mu_{0,m+1}+\mu_{1,m+1})/2$;
(5) According to $z_{m+1}^{(i)}$ and $z_{m+1}^{th}$, divide $D_s$ into the two subsets $D_s^{I}$ and $D_s^{II}$, and add them to $S_T$;
(6) If $m<N_\alpha$, set $m=m+1$ and go to Step (2); otherwise, stop.
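The following is a compact sketch of these steps in Python. Here `optimize_axis` stands in for Optimization Algorithm 1 and is an assumed callable, and the mutual-energy projection is computed as a Euclidean inner product with the optimized axis, in line with the remark in the Discussion that the two coincide once the axis is fixed.

```python
import numpy as np

def generate_feature_axes(D, labels, N_alpha, optimize_axis):
    """Sketch of Algorithm 2 (hypothetical helper names).

    D             : (M, n_pixels) training samples of the classes "0" and "1"
    labels        : (M,) array with entries 0 or 1
    N_alpha       : number of mutual-energy coordinate axes to generate
    optimize_axis : callable implementing Optimization Algorithm 1; given a
                    reference subset and a reference feature it returns one axis
    """
    axes = []
    subsets = [np.arange(len(D))]          # S_T initially holds the whole set D

    for m in range(N_alpha):
        # Step (2): pick the subset maximizing min(#class0, #class1), Expression (85).
        counts = [min(np.sum(labels[s] == 0), np.sum(labels[s] == 1)) for s in subsets]
        k = int(np.argmax(counts))
        Ds = subsets.pop(k)

        # Step (3): reference feature = difference of the class means on D_s, then Algorithm 1.
        u = D[Ds][labels[Ds] == 0].mean(axis=0)
        v = D[Ds][labels[Ds] == 1].mean(axis=0)
        alpha = optimize_axis(D[Ds], u - v)
        axes.append(alpha)

        # Steps (4)-(5): project, threshold at the midpoint of the class means, split D_s.
        z = D[Ds] @ alpha
        z_th = 0.5 * (z[labels[Ds] == 0].mean() + z[labels[Ds] == 1].mean())
        subsets.append(Ds[z <= z_th])
        subsets.append(Ds[z > z_th])

    return np.array(axes)
```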
After generating the mutual-energy inner product coordinate axes $\alpha_m$ ($m=1,2,\dots,N_\alpha$) with Algorithm 2, the coordinate components $z_m^{(i)}$ ($m=1,2,\dots,N_\alpha$) of each sample in $D$ can be calculated and collected in a feature vector $z^{(i)}=[z_1^{(i)}, z_2^{(i)},\dots,z_{N_\alpha}^{(i)}]^T$. Based on $z^{(i)}$, a simple Gaussian classifier is used to classify the images. We use $D_j$ to denote the training subset comprising the $M_j$ samples of class $j$. A Gaussian classifier can classify the samples into multiple classes: we write $y^{(i)}=j$ ($j=0,1,\dots,C-1$) for the class of a sample, where $C$ is the total number of classes. In $D$, the prior probability of the class $y^{(i)}=j$ is
$$p\big(y^{(i)}=j\big) = \frac{M_j}{\sum_{k=0}^{C-1} M_k}. \tag{86}$$
Furthermore, it is assumed that, for the samples in the same class, their feature vectors z ( i ) follow the Gaussian distribution
$$p\big(z^{(i)}\,\big|\,y^{(i)}=j\big) \sim \mathcal{N}\big(z^{(i)}\,\big|\,\mu_j, \Sigma_j\big), \tag{87}$$
where $\mu_j$ is the mean of $z^{(i)}$, $\Sigma_j$ is the covariance matrix of $z^{(i)}$, and the subscript $j$ corresponds to the class $y^{(i)}=j$. Using the training subset $D_j$, their maximum likelihood estimates can be calculated by [38]
$$\mu_j = \frac{1}{M_j}\sum_{i=1}^{M_j} z^{(i)}, \qquad \Sigma_j = \frac{1}{M_j}\sum_{i=1}^{M_j}\big(z^{(i)}-\mu_j\big)\big(z^{(i)}-\mu_j\big)^T.$$
Here, $z^{(i)}\in D_j$. Based on Expressions (86) and (87), given the feature vector of a sample, the posterior probability that the sample belongs to the class $y=j$ is
$$p(y=j\,|\,z) = \frac{e^{\beta_j(z)}}{\sum_{i=0}^{C-1} e^{\beta_i(z)}},$$
where p ( y = j | z ) is the posterior probability, and β j ( z ) can be expressed as
$$\begin{cases} \beta_j(z) = -\tfrac{1}{2}\, z^T H_j z + b_j^T z + c_j, \\[2pt] H_j = \Sigma_j^{-1}, \\[2pt] b_j = \Sigma_j^{-1}\mu_j, \\[2pt] c_j = -\tfrac{1}{2}\,\mu_j^T \Sigma_j^{-1}\mu_j - \tfrac{1}{2}\ln\big(|\Sigma_j|\big) + \ln\big(p(y=j)\big). \end{cases}$$
Finally, the class of the sample is determined based on the posterior probability
$$y = \arg\max_{j}\; p(y=j\,|\,z), \qquad j\in\{0,1,\dots,C-1\}.$$
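A minimal implementation of this classifier, assuming the feature vectors $z$ have already been computed, could look as follows. This is a Python sketch, not the author's code; the small ridge term added to the covariance is a numerical-safety assumption and is not part of the expressions above.

```python
import numpy as np

class GaussianClassifier:
    """Quadratic Gaussian classifier following the prior, maximum-likelihood
    estimates, and posterior maximization described above."""

    def fit(self, Z, y, ridge=1e-6):
        self.classes_ = np.unique(y)
        self.params_ = []
        M = len(y)
        for j in self.classes_:
            Zj = Z[y == j]
            mu = Zj.mean(axis=0)
            Sigma = (Zj - mu).T @ (Zj - mu) / len(Zj) + ridge * np.eye(Z.shape[1])
            Sigma_inv = np.linalg.inv(Sigma)
            H = Sigma_inv
            b = Sigma_inv @ mu
            c = (-0.5 * mu @ Sigma_inv @ mu
                 - 0.5 * np.linalg.slogdet(Sigma)[1]
                 + np.log(len(Zj) / M))          # ln p(y = j), Expression (86)
            self.params_.append((H, b, c))
        return self

    def predict(self, Z):
        # beta_j(z) = -1/2 z^T H_j z + b_j^T z + c_j, evaluated for every sample and class
        beta = np.column_stack([
            -0.5 * np.einsum('ni,ij,nj->n', Z, H, Z) + Z @ b + c
            for H, b, c in self.params_
        ])
        return self.classes_[np.argmax(beta, axis=1)]
```

In use, `fit` receives the matrix of feature vectors produced by the mutual-energy coordinates together with the class labels, and `predict` returns the class maximizing the posterior for each test feature vector.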

6.3. Numerical Examples

The MNIST dataset has become one of the benchmark datasets in machine learning. It comprises 60,000 sample images in the training set and 10,000 in the test set, each a 28-by-28-pixel grayscale image of a handwritten digit 0–9. In this section, we use the MNIST dataset to design Gaussian image classifiers based on Optimization Algorithm 1.
Before designing the Gaussian image classifiers, preprocessing is conducted to align the image centroids and normalize the sample images. In Optimization Algorithm 1, the selected parameters are $\lambda=0.3$, $p_{\min}=q_{\min}=10^{-3}$, $Tol_p=Tol_q=2.0$, $\sigma_0=10^{5}$, $\Delta x_{\max}=0.08$, $\varepsilon_x=8\times10^{-4}$, and $\varepsilon_J=10^{-7}$.
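The centroid-alignment step is not specified in detail; one straightforward version, given here as an assumption rather than the author's exact procedure, shifts each image so that its intensity centroid sits at the image center and scales pixel values to $[0,1]$:

```python
import numpy as np
from scipy.ndimage import shift

def preprocess(img):
    """Align the intensity centroid of a 28x28 grayscale image with the image
    center and normalize pixel values to [0, 1]."""
    img = img.astype(float) / 255.0
    total = img.sum()
    if total > 0:
        ys, xs = np.indices(img.shape)
        cy = (ys * img).sum() / total
        cx = (xs * img).sum() / total
        img = shift(img, ((img.shape[0] - 1) / 2 - cy,
                          (img.shape[1] - 1) / 2 - cx), order=1)
    return img
```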

6.3.1. Binary Gaussian Classifier: Identify Digits “0” and “1”

The MNIST training set comprises 6742 samples of "1" and 5923 samples of "0". We select the difference between the means of the "1" and "0" samples as the reference feature, i.e., $\alpha=u-v$. Optimization Algorithm 1 converges after 166 iterations. The means of the "1" and "0" samples, the design variables, and the reference feature coordinate $\alpha(x)$ are visualized in Figure 1, Figure 2 and Figure 3. Because the mean features differ markedly, digits "0" and "1" can be identified using only one mutual-energy inner product coordinate $\alpha$. Figure 4a shows the distribution of the training samples according to their components on $\alpha$. Figure 5a gives the Confusion Matrix of the classification results, where the horizontal and vertical axes correspond to the target class and the output class of the classifier, respectively. In the Confusion Matrix, the far-right column shows the precision for the examples predicted to belong to each class, and the bottom row shows the recall for the examples belonging to each class; the bottom-right entry shows the overall accuracy; the diagonal entries are the numbers of correctly classified digits "0" and "1", and the off-diagonal entries correspond to misclassifications. On the training set, this binary Gaussian classifier achieves a very high overall accuracy of 99.66%, shown at the bottom right of the Confusion Matrix.
The binary Gaussian classifier is then tested on the MNIST test set, which comprises 1135 samples of "1" and 980 samples of "0". The test results are visualized in Figure 4b and Figure 5b. The overall accuracy reaches 99.91%, higher than that on the training set.

6.3.2. Binary Gaussian Classifier: Identify Digits “0” and “2”

The MNIST training set comprises 5958 samples of "2" and 5923 samples of "0", and the MNIST test set comprises 1032 samples of "2" and 980 samples of "0". As for the previous classifier, the reference feature is selected as $\alpha=u-v$. The difference between the mean features of digits "2" and "0" is not as pronounced as that between digits "1" and "0": if only one mutual-energy inner product coordinate is used for classification, the accuracy is only 96.72% on the training set and 97.81% on the test set. To improve the classification accuracy, we use Algorithm 2 to generate 60 mutual-energy inner product coordinates from the training sample set and its subsets, and construct a 60-dimensional Gaussian classifier. The Confusion Matrices of the classification results are given in Figure 6a,b, showing an overall accuracy of 99.55% on the training set and a higher overall accuracy of 99.85% on the test set.

6.3.3. Binary Gaussian Classifier: Identify Digits “3” and “4”

The MNIST training set comprises 6131 samples of "3" and 5842 samples of "4", and the MNIST test set comprises 1010 samples of "3" and 982 samples of "4". Here, we select the means of the "3" and "4" samples as reference features, i.e., $\alpha=u$ and $\alpha=v$, and use Algorithm 2 to generate 50 mutual-energy inner product coordinates for each, giving 100 classification coordinates in total. Because these coordinates are not linearly independent, we use the matrix singular value decomposition to construct a 50-dimensional Gaussian classifier. Figure 6c,d shows the Confusion Matrices, with an overall accuracy of 99.67% on the training set and a higher overall accuracy of 99.80% on the test set.
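The SVD-based reduction mentioned above (and used again in the next example) can be realized, for instance, by keeping the leading right singular directions of the stacked coordinate axes; the sketch below is an illustration with the target dimension as a parameter, not the paper's exact routine.

```python
import numpy as np

def reduce_axes(axes, n_keep):
    """Compress a set of linearly dependent coordinate axes with the SVD.

    axes   : (n_axes, n_pixels) matrix whose rows are the generated axes
    n_keep : number of orthogonal directions to retain (e.g., 50 or 60)
    """
    # Rows of Vt are orthonormal directions of the pixel space spanned by the axes,
    # ordered by singular value; keeping the first n_keep rows discards redundancy.
    U, S, Vt = np.linalg.svd(axes, full_matrices=False)
    return Vt[:n_keep]
```

The sample feature vectors are then obtained by projecting the images onto the retained orthonormal directions.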

6.3.4. Multiclass Gaussian Classifier: Identify Digits “0”, “1”, “2”, “3” and “4”

In the training set, we select one digit from the samples "0", "1", "2", "3" and "4" as the first class and the remaining training samples of the five digits as the second class, and we take these two classes as a training sample set. We then select the difference between the means of the two classes as the reference feature, i.e., $\alpha=u-v$, and use Algorithm 2 to generate 120 mutual-energy inner product coordinates. In this way, five training sample sets are constructed and 600 coordinates are generated in total. Many of them are linearly dependent, so, to identify the digits "0", "1", "2", "3" and "4", we use the matrix singular value decomposition to reduce the dimension from 600 to 60 and construct a 60-dimensional multiclass Gaussian classifier. Figure 7 shows an overall accuracy of 98.22% on the training set and a higher overall accuracy of 98.83% on the test set.

7. Discussion

Based on the solution space of the partial differential equations describing the vibration of a non-uniform membrane, the concept of the mutual-energy inner product is defined. By expanding the mutual-energy inner product as a superposition of the eigenfunctions of the partial differential equations, an important property is found: compared to the Euclidean inner product, the mutual-energy inner product enhances the low-frequency eigenfunction components and suppresses the high-frequency eigenfunction components.
In data classification, if the reference data features of the samples belong to a low-frequency subspace of the set of the eigenfunctions, these data features can be extracted through the mutual-energy inner product, which can not only enhance feature information but also filter out high-frequency data noise. As a result, a mutual-energy inner product optimization model is built to extract the feature coordinates of the samples, which can enhance the data features, reduce the sample deviations, and regularize the design variables. We make use of the minimum energy principle to eliminate the constraints of the partial differential equations in the optimization model and obtain an unconstrained optimization objective function. The objective function is a quadratic functional, which is convex with respect to the variables that minimize the objective function, is concave with respect to the variables that maximize the objective function, and is linear with respect to the design variables. These properties facilitate the design of optimization algorithms.
The FEM is used to discretize the design domain, and the design variables are taken to be constant within each element. Based on these finite elements, the gradients of the mutual-energy inner product with respect to the element design variables are analyzed, and a sequential linearization algorithm is constructed to solve the mutual-energy inner product optimization model. The algorithm only requires solving symmetric positive-definite linear systems when calculating the intermediate variables and handling a few constraints in the nested linear programming module, which guarantees its stability and efficiency.
The mutual-energy inner product optimization model is applied to extract feature coordinates of the sample images and to construct a low-dimensional coordinate system representing them. Multiclass Gaussian classifiers are trained and tested to classify the 2-D images. Here, only the means of the training sample set and its subsets are selected as reference features in Optimization Algorithm 1, and the vectorized implementation of Optimization Algorithm 1 is discussed. Generating mutual-energy inner product coordinates via the optimization model and training or testing the Gaussian classifiers are two independent steps. When training or testing the Gaussian classifiers, calculating mutual-energy inner products can be converted into calculating Euclidean inner products between the reference feature coordinates and the sample data, so no computational complexity is added to the Gaussian classifiers.
On the MNIST dataset, the mutual-energy inner product feature coordinate extraction method is used to train a 1-dimensional two-class Gaussian classifier, a 50-dimensional two-class Gaussian classifier, a 60-dimensional two-class Gaussian classifier, and a 60-dimensional five-class Gaussian classifier, and good prediction results are achieved. In every case the overall accuracy on the test set is higher than that on the training set, indicating that the classifiers are underfitting rather than overfitting; this suggests that the achievable accuracy of the method has not yet been fully exploited.
From the viewpoint of theory and algorithm, this feature extraction method is clearly different from existing techniques in machine learning. Its limitation is that reference features must be given in advance. In this paper, only the mean features of a sample dataset and its subsets are selected as reference features to construct Gaussian classifiers. In the future, convolution operations can be adopted to construct other image reference features, such as edge features, local features, textures [39], and multi-scale features, and these image features can be combined to generate a mutual-energy inner product feature coordinate system. In addition, ensemble classifiers, such as Bagging and AdaBoost, can be introduced to improve the performance of the image classifiers. The feasibility of applying the mutual-energy inner product optimization method to neural networks will also be explored.

Funding

This research received no external funding.

Data Availability Statement

Data and code are available upon request from the author.

Acknowledgments

The author thanks Brian Barsky and Stuart Russell for their encouragement during difficult times.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef] [PubMed]
  2. Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Convolutional Neural Networks for Large-Scale Remote-Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 645–657. [Google Scholar] [CrossRef]
  3. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.R.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N. Deep Neural Networks for Acoustic Modeling in Speech Recognition. IEEE Signal Process. Mag. 2012, 29, 82–97. [Google Scholar] [CrossRef]
  4. Kowsari, K.; Jafari Meimandi, K.; Heidarysafa, M.; Mendu, S.; Barnes, L.; Brown, D. Text Classification Algorithms: A Survey. Information 2019, 10, 150. [Google Scholar] [CrossRef]
  5. Ramana, B.V.; Babu, M.S.P.; Venkateswarlu, N.B. A Critical Study of Selected Classification Algorithms for Liver Disease Diagnosis. Int. J. Database Manag. Syst. 2011, 3, 101–114. [Google Scholar] [CrossRef]
  6. Sisodia, D.; Sisodia, D.S. Prediction of Diabetes using Classification Algorithms. Procedia Comput. Sci. 2018, 132, 1578–1585. [Google Scholar] [CrossRef]
  7. Barboza, F.; Kimura, H.; Altman, E. Machine Learning Models and Bankruptcy Prediction. Expert Syst. Appl. 2017, 83, 405–417. [Google Scholar] [CrossRef]
  8. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  9. Jain, S.; Rastogi, R. Parametric non-parallel support vector machines for pattern classification. Mach. Learn. 2022, 113, 1567–1594. [Google Scholar] [CrossRef]
  10. Kotsiantis, S.B. Decision Trees: A Recent Overview. Artif. Intell. Rev. 2013, 39, 261–283. [Google Scholar] [CrossRef]
  11. Patel, H.H.; Prajapati, P. Study and Analysis of Decision Tree Based Classification Algorithms. Int. J. Comput. Sci. Eng. 2018, 6, 74–78. [Google Scholar] [CrossRef]
  12. Friedman, N.; Koller, D. Being Bayesian about Network Structure. A Bayesian Approach to Structure Discovery in Bayesian Networks. Mach. Learn. 2003, 50, 5–125. [Google Scholar] [CrossRef]
  13. Muja, M.; Lowe, D.G. Scalable Nearest Neighbor Algorithms for High Dimensional Data. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2227–2240. [Google Scholar] [CrossRef] [PubMed]
14. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  15. Hinton, G.; Salakhutdinov, R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  16. Yerramreddy, D.R.; Marasani, J.; Ponnuru, S.V.G.; Min, D. Harnessing deep reinforcement learning algorithms for image categorization: A multi algorithm approach. Eng. Appl. Artif. Intell. 2024, 136, 108925. [Google Scholar] [CrossRef]
  17. Wang, M.J.; Chen, H.L. Chaotic Multi-Swarm Whale Optimizer Boosted Support Vector Machine for Medical Diagnosis. Appl. Soft Comput. J. 2020, 88, 105946. [Google Scholar] [CrossRef]
  18. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  19. Sen, P.C.; Hajra, M.; Ghosh, M. Supervised Classification Algorithms in Machine Learning: A Survey and Review. In Emerging Technology in Modelling and Graphics: Proceedings of IEM Graph 2018; Advances in Intelligent Systems and Computing; Springer: Singapore, 2018; Volume 937, pp. 99–111. [Google Scholar]
  20. Schonlau, M.; Zou, R.Y. The Random Forest Algorithm for Statistical Learning. Stata J. 2020, 20, 3–29. [Google Scholar] [CrossRef]
  21. Zhang, C.; Liu, C.; Zhang, X.; Almpanidis, G. An Up-To-Date Comparison of State-of-the-art Classification Algorithms. Expert Syst. Appl. 2017, 82, 128–150. [Google Scholar] [CrossRef]
  22. Wang, P.; Fan, E.; Wang, P. Comparative Analysis of Image Classification Algorithms Based on Traditional Machine Learning and Deep Learning. Pattern Recognit. Lett. 2021, 141, 61–67. [Google Scholar] [CrossRef]
  23. Bansal, M.; Goyal, A.; Choudhary, A. A comparative analysis of K-Nearest Neighbor, Genetic, Support Vector Machine, Decision Tree, and Long Short Term Memory algorithms in machine learning. Decis. Anal. J. 2022, 3, 100071. [Google Scholar] [CrossRef]
  24. Sheykhmousa, M.; Mahdianpari, M.; Ghanbari, H.; Mohammadimanesh, F.; Ghamisi, P.; Homayouni, S. Support Vector Machine Versus Random Forest for Remote Sensing Image Classification: A Meta-Analysis and Systematic Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6308–6325. [Google Scholar] [CrossRef]
  25. Xie, D.; Zhang, L.; Bai, L. Deep Learning in Visual Computing and Signal Processing. Appl. Comput. Intell. Soft Comput. 2017, 2017, 1320780. [Google Scholar] [CrossRef]
  26. Srinidhi, C.L.; Ciga, O.; Martel, A.L. Deep Neural Network Models for Computational Histopathology: A survey. Med. Image Anal. 2021, 67, 101813. [Google Scholar] [CrossRef]
  27. Aher, S.B.; Lobo, L.M.R.J. Comparative Study of Classification Algorithms. Int. J. Inf. Technol. Knowl. Manag. 2012, 5, 239–243. [Google Scholar]
  28. Nachappa, T.G.; Piralilou, S.T.; Gholamnia, K.; Ghorbanzadeh, O.; Rahmati, O.; Blaschke, T. Flood Susceptibility Mapping with Machine Learning, Multi-Criteria Decision Analysis and Ensemble Using Dempster Shafer Theory. J. Hydrol. 2020, 590, 125275. [Google Scholar] [CrossRef]
  29. Creamer, G.; Freund, Y. Learning A Board Balanced Scorecard to Improve Corporate Performance. Decis. Support Syst. 2010, 49, 365–385. [Google Scholar] [CrossRef]
  30. Asteris, P.G.; Rizal, F.I.M.; Koopialipoor, M.; Roussis, P.C.; Ferentinou, M.; Armaghani, D.J.; Gordan, B. Slope Stability Classification under Seismic Conditions Using Several Tree-Based Intelligent Techniques. Appl. Sci. 2022, 12, 1753. [Google Scholar] [CrossRef]
  31. Breiman, L. Bagging Predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
  32. Ribeiro, M.H.D.M.; dos Santos Coelho, L. Ensemble approach based on bagging, boosting and stacking for short-term prediction in agribusiness time series. Appl. Soft Comput. 2020, 86, 105837. [Google Scholar] [CrossRef]
  33. Monego, V.S.; Anochi, J.A.; de Campos Velho, H.F. South America Seasonal Precipitation Prediction by Gradient-Boosting Machine-Learning Approach. Atmosphere 2022, 13, 243. [Google Scholar] [CrossRef]
  34. Nanni, L.; Brahnam, S.; Ghidoni, S.; Lumini, A. Toward a General-Purpose Heterogeneous Ensemble for Pattern Classification. Comput. Intell. Neurosci. 2015, 2015, 909123. [Google Scholar] [CrossRef] [PubMed]
  35. Dong, X.; Yu, Z.; Cao, W.; Shi, Y.; Ma, Q. A Survey on Ensemble Learning. Front. Comput. Sci. 2020, 14, 241–258. [Google Scholar] [CrossRef]
  36. Courant, R.; Hilbert, D. Methods of Mathematical Physics; John Wiley & Sons Incorporated: New York, NY, USA, 1991. [Google Scholar]
  37. Reddy, J.N. An Introduction to the Finite Element Method, 3rd ed.; McGraw-Hill Education: New York, NY, USA, 2005. [Google Scholar]
  38. Friedman, J.H.; Tibshirani, R.; Hastie, T. The Elements of Statistical Learning Data Mining, Inference, and Prediction; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  39. Liu, Z.; Qi, X.; Torr, P.H. Global Texture Enhancement for Fake Face Detection in the Wild. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
Figure 1. The means of the samples.
Figure 2. Design variables.
Figure 3. Reference feature coordinate.
Figure 4. Sample distribution.
Figure 5. Confusion Matrix.
Figure 6. Confusion Matrix.
Figure 7. Confusion Matrix.