
12 June 2024

Correction: Yi et al. SFS-AGGL: Semi-Supervised Feature Selection Integrating Adaptive Graph with Global and Local Information. Information 2024, 15, 57

1 School of Software, Jiangxi Normal University, Nanchang 330022, China
2 College of Computer Science, Shenyang Aerospace University, Shenyang 110136, China
3 College of Information Science and Technology, Northeast Normal University, Changchun 130117, China
* Authors to whom correspondence should be addressed.

Error in Figure/Table

In the original publication [1], Table 2 contained errors. Specifically, there were mistakes in the time complexity of five matrices and in the formula for F. The corrected version of Table 2 is provided below.
Table 2. The time complexity of each matrix in our proposed algorithm.
In the original publication [1], there were two errors in Table 3 as published. Specifically, there was a mistake in the algorithm complexity of SFS-AGGL and a mistake in the order of the reference for the FDEFS method. The corrected version of Table 3 is presented below.
Table 3. Computational complexity of each iteration for FS methods.
In the original publication [1], there were errors in Table 4 as published. Specifically, mistakes were made in the first- and second-order derivatives of F and in those of S. The corrected version of Table 4 is provided below.
Table 4. First- and second-order derivatives of each formula.
In the original publication [1], there were some mistakes in Figure 2 as published. Specifically, errors were made in the calculation and update process. The corrected version of Figure 2 is presented below.
Figure 2. Flow chart of SFS-AGGL algorithm.
In the original publication [1], there were some mistakes in the sub-images of Figure 7 as published. Specifically, errors were made in the feature dimensions of the sub-images of Figure 7. The corrected version of Figure 7 is presented below.
Figure 7. Clustering results of SFS-AGGL under different parameter values and different feature dimensions, where different colors represent different feature dimensions.

Equations Correction

There were errors in some equations in the original publication [1]. Specifically, mistakes were made in the definition of symbols, in matrix transposition operations, or through the omission of the matrix trace operation. The corrected equations are provided below.
$$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.}\ \ x = D \times \alpha.$$
$$\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.}\ \ x = D \times \alpha.$$
$$\min_{\alpha} \|\alpha\|_2 \quad \text{s.t.}\ \ x = D \times \alpha.$$
$$\min_{\alpha} \|\alpha\|_{2,1} \quad \text{s.t.}\ \ x = D \times \alpha.$$
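For readers who want to experiment with these sparse-representation formulations, the following is a minimal editorial sketch, not code from the original paper: it solves the $\ell_1$ case through its unconstrained Lasso relaxation with a plain ISTA (proximal-gradient) loop. The dictionary D, the regularization weight lam, and the toy data are hypothetical.

```python
# Illustrative sketch (not from the paper): the constrained problem
#   min_alpha ||alpha||_1  s.t.  x = D @ alpha
# is relaxed to  min_alpha 0.5*||x - D @ alpha||_2^2 + lam*||alpha||_1
# and solved with a simple ISTA loop.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (element-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code_ista(x, D, lam=0.1, n_iter=500):
    """Approximate sparse coefficients alpha with x ~= D @ alpha."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant of D^T D
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - x)            # gradient of 0.5*||x - D @ alpha||^2
        alpha = soft_threshold(alpha - step * grad, step * lam)
    return alpha

# Toy usage with a random dictionary (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
x = D @ np.where(rng.random(50) < 0.1, rng.standard_normal(50), 0.0)
alpha = sparse_code_ista(x, D)
```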
$$\min_{F \geq 0} \sum_{i=1}^{n}\sum_{j=1}^{n} \|f_i - f_j\|_2^2\, s_{ij} + \sum_{i=1}^{n} \|f_i - y_i\|_2^2\, u_{ii} = \operatorname{tr}(F^{T} L F) + \operatorname{tr}\big((F - Y)^{T} U (F - Y)\big)$$
$$\frac{\partial \varepsilon(W, \vartheta)}{\partial W} = 2XX^{T}W - 2XF + 2\beta XX^{T}W - 4\beta XS^{T}X^{T}W + 2\beta XSS^{T}X^{T}W + 2\theta HW + \vartheta = 0$$
(29)
$$\begin{aligned}
\min \varepsilon(F) &= \operatorname{tr}\big((X^{T}W - F)(X^{T}W - F)^{T}\big) + \alpha\operatorname{tr}(F^{T}LF) + \operatorname{tr}\big((F - Y)^{T}U(F - Y)\big) \\
&= \operatorname{tr}(X^{T}WW^{T}X - 2FW^{T}X + FF^{T}) + \alpha\operatorname{tr}(F^{T}LF) + \operatorname{tr}(F^{T}UF - 2F^{T}UY + Y^{T}UY) \\
&= \operatorname{tr}(X^{T}WW^{T}X - 2FW^{T}X + FF^{T} + \alpha F^{T}LF + F^{T}UF - 2F^{T}UY + Y^{T}UY)
\end{aligned}$$
(30)
$$\varepsilon(F, \mu) = \operatorname{tr}\big(X^{T}WW^{T}X - 2FW^{T}X + FF^{T} + \alpha F^{T}(D - S)F + F^{T}UF - 2F^{T}UY + Y^{T}UY\big) + \operatorname{tr}(\mu F)$$
(31)
$$\frac{\partial \varepsilon(F, \mu)}{\partial F} = -2X^{T}W + 2F + 2\alpha(D - S)F + 2UF - 2UY + \mu = 0$$
(32)
$$\big(-2X^{T}W + 2F + 2\alpha(D - S)F + 2UF - 2UY\big)_{ij} F_{ij} = 0$$
$$F_{ij} = F_{ij}\,\frac{[X^{T}W + UY + \alpha SF]_{ij}}{[F + \alpha DF + UF]_{ij}}$$
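For clarity (an editorial restatement, not part of the original correction), the multiplicative rule above follows directly from the complementary-slackness condition in Equation (32): dividing by 2 and using $L = D - S$ gives

$$F_{ij}\big[(F + \alpha D F + U F) - (X^{T}W + \alpha S F + U Y)\big]_{ij} = 0
\;\Longrightarrow\;
F_{ij} \leftarrow F_{ij}\,\frac{[X^{T}W + U Y + \alpha S F]_{ij}}{[F + \alpha D F + U F]_{ij}},$$

which is the update applied in Algorithm 1 below.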
(35)
$$\begin{aligned}
\min \varepsilon(S) &= \alpha\operatorname{tr}(F^{T}LF) + \beta\operatorname{tr}\big((W^{T}X - W^{T}XS)(W^{T}X - W^{T}XS)^{T}\big) + \lambda\operatorname{tr}(SE) \\
&= \alpha\operatorname{tr}(F^{T}LF) + \beta\operatorname{tr}(W^{T}XX^{T}W - 2W^{T}XS^{T}X^{T}W + W^{T}XSS^{T}X^{T}W) + \lambda\operatorname{tr}(SE) \\
&= \operatorname{tr}(\alpha F^{T}DF - \alpha F^{T}SF + \beta W^{T}XX^{T}W - 2\beta W^{T}XS^{T}X^{T}W + \beta W^{T}XSS^{T}X^{T}W + \lambda SE)
\end{aligned}$$
(36)
$$\varepsilon(S, \xi) = \operatorname{tr}(\alpha F^{T}DF - \alpha F^{T}SF + \beta W^{T}XX^{T}W - 2\beta W^{T}XS^{T}X^{T}W + \beta W^{T}XSS^{T}X^{T}W) + \lambda\operatorname{tr}(SE) + \operatorname{tr}(\xi S)$$
(52)
$$F_{ij}^{(iter+1)} = \arg\min_{F_{ij}} \varphi\big(F_{ij}, F_{ij}^{(iter)}\big) = F_{ij}^{(iter)} - F_{ij}^{(iter)}\,\frac{\psi'\big(F_{ij}^{(iter)}\big)}{[F + \alpha DF + UF]_{ij}} = F_{ij}^{(iter)}\,\frac{[X^{T}W + UY + \alpha SF]_{ij}}{[F + \alpha DF + UF]_{ij}}$$
(54)
$$\|u\|_2 - \frac{\|u\|_2^2}{2\|v\|_2} \leq \|v\|_2 - \frac{\|v\|_2^2}{2\|v\|_2}$$
(56)
$$\begin{aligned}
&\operatorname{tr}\big((X^{T}W^{(iter+1)} - F^{(iter+1)})(X^{T}W^{(iter+1)} - F^{(iter+1)})^{T}\big) + \alpha\operatorname{tr}\big((F^{(iter+1)})^{T}LF^{(iter+1)}\big) + \operatorname{tr}\big((F^{(iter+1)} - Y)^{T}U(F^{(iter+1)} - Y)\big) \\
&\quad + \beta\operatorname{tr}\big(((W^{(iter+1)})^{T}X - (W^{(iter+1)})^{T}XS^{(iter+1)})((W^{(iter+1)})^{T}X - (W^{(iter+1)})^{T}XS^{(iter+1)})^{T}\big) \\
&\quad + \theta\operatorname{tr}\big((W^{(iter+1)})^{T}H^{(iter)}W^{(iter+1)}\big) + \lambda\operatorname{tr}(S^{(iter+1)}E) \\
\leq\ &\operatorname{tr}\big((X^{T}W^{(iter)} - F^{(iter)})(X^{T}W^{(iter)} - F^{(iter)})^{T}\big) + \alpha\operatorname{tr}\big((F^{(iter)})^{T}LF^{(iter)}\big) + \operatorname{tr}\big((F^{(iter)} - Y)^{T}U(F^{(iter)} - Y)\big) \\
&\quad + \beta\operatorname{tr}\big(((W^{(iter)})^{T}X - (W^{(iter)})^{T}XS^{(iter)})((W^{(iter)})^{T}X - (W^{(iter)})^{T}XS^{(iter)})^{T}\big) \\
&\quad + \theta\operatorname{tr}\big((W^{(iter)})^{T}H^{(iter)}W^{(iter)}\big) + \lambda\operatorname{tr}(S^{(iter)}E)
\end{aligned}$$
(57)
$$\begin{aligned}
&\operatorname{tr}\big((X^{T}W^{(iter+1)} - F^{(iter+1)})(X^{T}W^{(iter+1)} - F^{(iter+1)})^{T}\big) + \alpha\operatorname{tr}\big((F^{(iter+1)})^{T}LF^{(iter+1)}\big) + \operatorname{tr}\big((F^{(iter+1)} - Y)^{T}U(F^{(iter+1)} - Y)\big) \\
&\quad + \beta\operatorname{tr}\big(((W^{(iter+1)})^{T}X - (W^{(iter+1)})^{T}XS^{(iter+1)})((W^{(iter+1)})^{T}X - (W^{(iter+1)})^{T}XS^{(iter+1)})^{T}\big) \\
&\quad + \theta\sum_{i=1}^{d}\frac{\|(W^{(iter+1)})_i\|_2^2}{2\|(W^{(iter)})_i\|_2} + \lambda\operatorname{tr}(S^{(iter+1)}E) \\
\leq\ &\operatorname{tr}\big((X^{T}W^{(iter)} - F^{(iter)})(X^{T}W^{(iter)} - F^{(iter)})^{T}\big) + \alpha\operatorname{tr}\big((F^{(iter)})^{T}LF^{(iter)}\big) + \operatorname{tr}\big((F^{(iter)} - Y)^{T}U(F^{(iter)} - Y)\big) \\
&\quad + \beta\operatorname{tr}\big(((W^{(iter)})^{T}X - (W^{(iter)})^{T}XS^{(iter)})((W^{(iter)})^{T}X - (W^{(iter)})^{T}XS^{(iter)})^{T}\big) \\
&\quad + \theta\sum_{i=1}^{d}\frac{\|(W^{(iter)})_i\|_2^2}{2\|(W^{(iter)})_i\|_2} + \lambda\operatorname{tr}(S^{(iter)}E)
\end{aligned}$$
(58)
$$\begin{aligned}
&\operatorname{tr}\big((X^{T}W^{(iter+1)} - F^{(iter+1)})(X^{T}W^{(iter+1)} - F^{(iter+1)})^{T}\big) + \alpha\operatorname{tr}\big((F^{(iter+1)})^{T}LF^{(iter+1)}\big) + \operatorname{tr}\big((F^{(iter+1)} - Y)^{T}U(F^{(iter+1)} - Y)\big) \\
&\quad + \beta\operatorname{tr}\big(((W^{(iter+1)})^{T}X - (W^{(iter+1)})^{T}XS^{(iter+1)})((W^{(iter+1)})^{T}X - (W^{(iter+1)})^{T}XS^{(iter+1)})^{T}\big) \\
&\quad + \theta\sum_{i=1}^{d}\|(W^{(iter+1)})_i\|_2 - \theta\Big(\sum_{i=1}^{d}\|(W^{(iter+1)})_i\|_2 - \sum_{i=1}^{d}\frac{\|(W^{(iter+1)})_i\|_2^2}{2\|(W^{(iter)})_i\|_2}\Big) + \lambda\operatorname{tr}(S^{(iter+1)}E) \\
\leq\ &\operatorname{tr}\big((X^{T}W^{(iter)} - F^{(iter)})(X^{T}W^{(iter)} - F^{(iter)})^{T}\big) + \alpha\operatorname{tr}\big((F^{(iter)})^{T}LF^{(iter)}\big) + \operatorname{tr}\big((F^{(iter)} - Y)^{T}U(F^{(iter)} - Y)\big) \\
&\quad + \beta\operatorname{tr}\big(((W^{(iter)})^{T}X - (W^{(iter)})^{T}XS^{(iter)})((W^{(iter)})^{T}X - (W^{(iter)})^{T}XS^{(iter)})^{T}\big) \\
&\quad + \theta\sum_{i=1}^{d}\|(W^{(iter)})_i\|_2 - \theta\Big(\sum_{i=1}^{d}\|(W^{(iter)})_i\|_2 - \sum_{i=1}^{d}\frac{\|(W^{(iter)})_i\|_2^2}{2\|(W^{(iter)})_i\|_2}\Big) + \lambda\operatorname{tr}(S^{(iter)}E)
\end{aligned}$$
(59)
$$\sum_{i=1}^{d}\|(W^{(iter+1)})_i\|_2 - \sum_{i=1}^{d}\frac{\|(W^{(iter+1)})_i\|_2^2}{2\|(W^{(iter)})_i\|_2} \leq \sum_{i=1}^{d}\|(W^{(iter)})_i\|_2 - \sum_{i=1}^{d}\frac{\|(W^{(iter)})_i\|_2^2}{2\|(W^{(iter)})_i\|_2}$$
(60)
$$\begin{aligned}
&\operatorname{tr}\big((X^{T}W^{(iter+1)} - F^{(iter+1)})(X^{T}W^{(iter+1)} - F^{(iter+1)})^{T}\big) + \alpha\operatorname{tr}\big((F^{(iter+1)})^{T}LF^{(iter+1)}\big) + \operatorname{tr}\big((F^{(iter+1)} - Y)^{T}U(F^{(iter+1)} - Y)\big) \\
&\quad + \beta\operatorname{tr}\big(((W^{(iter+1)})^{T}X - (W^{(iter+1)})^{T}XS^{(iter+1)})((W^{(iter+1)})^{T}X - (W^{(iter+1)})^{T}XS^{(iter+1)})^{T}\big) \\
&\quad + \theta\sum_{i=1}^{d}\|(W^{(iter+1)})_i\|_2 + \lambda\operatorname{tr}(S^{(iter+1)}E) \\
\leq\ &\operatorname{tr}\big((X^{T}W^{(iter)} - F^{(iter)})(X^{T}W^{(iter)} - F^{(iter)})^{T}\big) + \alpha\operatorname{tr}\big((F^{(iter)})^{T}LF^{(iter)}\big) + \operatorname{tr}\big((F^{(iter)} - Y)^{T}U(F^{(iter)} - Y)\big) \\
&\quad + \beta\operatorname{tr}\big(((W^{(iter)})^{T}X - (W^{(iter)})^{T}XS^{(iter)})((W^{(iter)})^{T}X - (W^{(iter)})^{T}XS^{(iter)})^{T}\big) \\
&\quad + \theta\sum_{i=1}^{d}\|(W^{(iter)})_i\|_2 + \lambda\operatorname{tr}(S^{(iter)}E)
\end{aligned}$$
(65)
$$\min_{F \geq 0,\, W}\ \operatorname{tr}(F^{T}LF) + \operatorname{tr}\big((F - Y)^{T}U(F - Y)\big) + \|X^{T}W + 1_n b^{T} - F\|_{2,p}^{p} + \lambda\|W\|_{2,p}^{p}, \quad p \in (0, 1].$$
(67)
$$\begin{aligned}
\min\ &\Big\{\gamma\operatorname{tr}(F^{T}LF) + \operatorname{tr}\big((F - Y)^{T}U(F - Y)\big) + \alpha\|S\|_F^2 + \operatorname{tr}(W^{T}XLX^{T}W) + \theta\operatorname{tr}(W^{T}AW) \\
&\quad + \|X^{T}W + 1_n b^{T} - F\|_F^2 + \lambda\|W\|_{2,1}\Big\} \\
&\text{s.t.}\ \ 0 \leq S_{ij} \leq 1,\ \ S_i 1_n = 1.
\end{aligned}$$
(68)
$$\begin{aligned}
\min\ &\Big\{\|X^{T}W - F\|_F^2 + \sum_{i,j}^{n}\|W^{T}(x_i - x_j)\|_2^2\, S_{ij} + \alpha\|S - A\|_F^2 + \operatorname{tr}(F^{T}LF) + \operatorname{tr}\big((F - Y)^{T}U(F - Y)\big) \\
&\quad + \|W^{T}X - W^{T}XZ\|_F^2 + \beta\|Z\|_{2,1} + \lambda\|W\|_{2,1}\Big\} \\
&\text{s.t.}\ \ 0 \leq S_{ij} \leq 1,\ \ S_i^{T}1_n = 1,\ \ \alpha, \beta, \lambda \geq 0.
\end{aligned}$$
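As a quick editorial sanity check of the inequality in Equation (54), which underpins the convergence argument and is equivalent to $0 \leq (\|u\|_2 - \|v\|_2)^2$, the following toy NumPy snippet verifies it on random vectors; the dimensions and tolerance are arbitrary choices for illustration.

```python
# Numerical check (added for illustration) of Equation (54): for any u, v with
# ||v||_2 > 0,  ||u|| - ||u||^2/(2||v||) <= ||v|| - ||v||^2/(2||v||).
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10000):
    u = rng.standard_normal(5)
    v = rng.standard_normal(5)
    lhs = np.linalg.norm(u) - np.linalg.norm(u) ** 2 / (2 * np.linalg.norm(v))
    rhs = np.linalg.norm(v) - np.linalg.norm(v) ** 2 / (2 * np.linalg.norm(v))
    assert lhs <= rhs + 1e-12   # equivalent to 0 <= (||u|| - ||v||)^2
```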

Error in Algorithm

In the original publication [1], there were errors in Algorithm 1 as published. Specifically, mistakes were made in the calculation and update of the matrices. The corrected version of Algorithm 1 is presented below.
Algorithm 1: SFS-AGGL
Input: Sample matrix: $X = [X_l, X_u] \in \mathbb{R}^{d \times n}$
   Label matrix: $Y = [Y_l; Y_u] \in \mathbb{R}^{n \times c}$
   Parameters: $\alpha \geq 0$, $\beta \geq 0$, $\theta \geq 0$, $\lambda \geq 0$
Output: Feature projection matrix $W$
  Predictive labeling matrix $F$
  Similarity matrix $S$
1: Initialization: initialize the non-negative matrices $W_0$, $F_0$, $S_0$, and set $iter = 0$;
2: Compute the matrices U and E according to Equations (12) and (19), and compute D and H from $S_0$ and $W_0$;
3:  Repeat
4: According to Equation (27), update $W^{(iter)}$ as
            $$W^{(iter)} \leftarrow W^{(iter)} \odot \frac{XF + 2\beta XS^{T}X^{T}W}{XX^{T}W + \beta XX^{T}W + \beta XSS^{T}X^{T}W + \theta HW};$$
5: According to Equation (33), update $F^{(iter)}$ as $F^{(iter)} \leftarrow F^{(iter)} \odot \dfrac{X^{T}W + UY + \alpha SF}{F + \alpha DF + UF}$;
6: According to Equation (39), update $S^{(iter)}$ as $S^{(iter)} \leftarrow S^{(iter)} \odot \dfrac{\alpha FF^{T} + 2\beta X^{T}WW^{T}X}{2\beta X^{T}WW^{T}XS + \lambda E}$ (here $\odot$ and the fraction bars denote element-wise multiplication and division);
7: According to S i t e r and W i t e r update matrices D and H;
8: Update i t e r = i t e r + 1 ;
9: Until convergence
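To make the corrected update order concrete, the following is a minimal NumPy sketch of one possible implementation of Algorithm 1. It is an editorial illustration, not the authors' code: the shapes, the random non-negative initializations, the small eps guarding the element-wise divisions, and the treatment of U and E as user-supplied inputs are all assumptions.

```python
# Hypothetical NumPy sketch of the corrected SFS-AGGL iteration (Algorithm 1).
# Shapes assumed: X is d x n, Y is n x c, W is d x c, F is n x c, S is n x n.
# U and E (Equations (12) and (19)) are passed in because the correction does not
# restate their definitions; eps is an implementation assumption, not from the paper.
import numpy as np

def sfs_aggl(X, Y, U, E, alpha, beta, theta, lam, n_iter=100, eps=1e-12):
    d, n = X.shape
    c = Y.shape[1]
    rng = np.random.default_rng(0)
    W = np.abs(rng.standard_normal((d, c)))   # step 1: non-negative initializations
    F = np.abs(rng.standard_normal((n, c)))
    S = np.abs(rng.standard_normal((n, n)))
    for _ in range(n_iter):                   # steps 3-9: repeat until convergence
        # steps 2/7: D and H from the current S and W
        D = np.diag(S.sum(axis=1))                              # d_ii = sum_j s_ij
        H = np.diag(1.0 / (np.linalg.norm(W, axis=1) + eps))    # h_ii = 1 / ||W_i||_2
        # step 4: multiplicative update of W (Equation (27))
        W *= (X @ F + 2 * beta * X @ S.T @ X.T @ W) / (
            X @ X.T @ W + beta * X @ X.T @ W
            + beta * X @ S @ S.T @ X.T @ W + theta * H @ W + eps)
        # step 5: multiplicative update of F (Equation (33))
        F *= (X.T @ W + U @ Y + alpha * S @ F) / (F + alpha * D @ F + U @ F + eps)
        # step 6: multiplicative update of S (Equation (39))
        S *= (alpha * F @ F.T + 2 * beta * X.T @ W @ W.T @ X) / (
            2 * beta * X.T @ W @ W.T @ X @ S + lam * E + eps)
    return W, F, S
```

A caller would supply its own data and auxiliary matrices and would typically replace the fixed iteration count with a convergence check on the objective, as in Equations (56)-(60).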

Text Correction

There were errors in the first paragraph of Section 2.1 in the original publication [1]. Mistakes were made regarding the sizes of matrices X and Y. The corrected content appears below.
Let $X = [X_l, X_u] = [x_1, \ldots, x_l, x_{l+1}, \ldots, x_{l+u}] \in \mathbb{R}^{d \times n}$ denote the training samples, where $x_i \in \mathbb{R}^d$ denotes the i-th sample. $Y = [Y_l; Y_u] \in \mathbb{R}^{n \times c}$ is the label matrix, and $Y_l$ denotes the true labels of the labeled samples. If sample $x_i$ belongs to class $j$, then its corresponding class label is $Y_{ij} = 1$; otherwise, $Y_{ij} = 0$. $Y_u$ denotes the true labels of the unlabeled samples. Since $Y_u$ is unknown during the training process, it is set to a zero matrix during training [49]. The main symbols used in this paper are presented in Table 1.
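As a concrete illustration of this labeling convention (an editorial toy example; the class indices and the None marker for unlabeled samples are hypothetical), the label matrix Y can be assembled as follows.

```python
# Hypothetical example of the label-matrix convention described above:
# labeled samples get a one-hot row, unlabeled samples get an all-zero row.
import numpy as np

labels = [0, 2, 1, None, None]   # None marks an unlabeled sample (toy data)
n, c = len(labels), 3
Y = np.zeros((n, c))             # Y in R^{n x c}
for i, y in enumerate(labels):
    if y is not None:            # Y_ij = 1 if sample i belongs to class j
        Y[i, y] = 1.0
```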
There was an error in the first paragraph of Section 2.2 in the original publication [1]. Specifically, a mistake was made in the symbol X. The corrected content appears below.
Given a sample $x \in \mathbb{R}^d$ and a target dictionary $D$, the goal is to find a coefficient vector $\alpha$ such that the signal $x$ can be represented as a linear combination of the basis elements of the target dictionary $D$.
There was an error in the first paragraph of Section 2.3 in the original publication [1]. Specifically, a mistake was made in the definition of matrix S. The corrected content appears below.
Then, the weight matrix formed by the L1 graph is expressed as S = [ α 1 , α 2 , , α n ] .
There was an error under Equation (11) in the first paragraph of Section 2.4 in the original publication [1]. Specifically, a mistake was made in the size of matrix U. The corrected content is provided below.
where $s_{ij}$ can be computed by Equation (9) or Equation (10). $U \in \mathbb{R}^{n \times n}$ is a diagonal matrix that effectively utilizes category information from all samples in SSL.
There was an error under Equation (23) in Section 3.2 in the original publication [1]. Specifically, there was a mistake in the definition of matrix H. The corrected content is provided below.
where $H \in \mathbb{R}^{d \times d}$ is a diagonal matrix whose diagonal elements are $h_{ii} = 1 / \|W_i\|_2$.
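In NumPy terms, this definition of H can be sketched as follows (editorial illustration; the small eps guarding against zero rows is an assumption, not part of the paper).

```python
# h_ii = 1 / ||W_i||_2, where W_i is the i-th row of W (d x c), giving H in R^{d x d}.
import numpy as np

def build_H(W, eps=1e-12):
    return np.diag(1.0 / (np.linalg.norm(W, axis=1) + eps))
```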
There was an error under the Equation (30) in Section 3.2 in the original publication [1]. Specifically, there was a mistake in the definition of matrix D. The corrected content is provided below.
where $D$ is a diagonal matrix whose diagonal elements are $d_{ii} = \sum_{j=1}^{n} s_{ij}$.
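Correspondingly, D and the graph Laplacian $L = D - S$ that appears in Equation (30) can be formed as follows (editorial illustration).

```python
# d_ii = sum_j s_ij; the graph Laplacian used in Equation (30) is L = D - S.
import numpy as np

def build_D_and_L(S):
    D = np.diag(S.sum(axis=1))
    return D, D - S
```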
There were errors in Section 3.4.1 in the original publication [1]. Specifically, errors were made in the matrix descriptions and in the total complexity of the SFS-AGGL algorithm. The corrected content is provided below.
Based on Algorithm 1, the computational complexity of the SFS-AGGL algorithm comprises two parts. The first part is the computation of the diagonal auxiliary matrices U and E in step 2, and the second part is the updating of the three matrices (W, F, and S) during each iteration. The cost of computing or updating each matrix is given in Table 2. Therefore, the total complexity of the SFS-AGGL algorithm is $O(dn^2 + iter \times \max(dcn, dn^2, cn^2, d^2c))$, where iter is the iteration count. Furthermore, the computational complexities of other related FS methods are also presented in Table 3.

Error in Citation

There was an error in the references of the original publication [1]. Specifically, the information given for reference [50] was incorrect. The corrected content is provided below.
50. Zhu, R.; Dornaika, F.; Ruichek, Y. Learning a discriminant graph-based embedding with feature selection for image categorization. Neural Netw. 2019, 111, 35–46.

Missing ORCID

An ORCID was missing for an author in the original publication [1]. Specifically, the ORCID of Gengsheng Xie was missing. The corrected content is provided below.
The ORCID of Gengsheng Xie is https://orcid.org/0000-0003-1224-6414.
The authors state that the scientific conclusions are unaffected. This correction was approved by the Academic Editor. The original publication has also been updated.

Reference

  1. Yi, Y.; Zhang, H.; Zhang, N.; Zhou, W.; Huang, X.; Xie, G.; Zheng, C. SFS-AGGL: Semi-Supervised Feature Selection Integrating Adaptive Graph with Global and Local Information. Information 2024, 15, 57.
