Search Results (14)

Search Parameters:
Keywords = new additive iteration rule

17 pages, 1589 KiB  
Article
An Efficient Quantized Message Passing Receiver Design for SCMA Systems
by Hao Cheng, Min Zhang and Ruoyu Su
Sensors 2025, 25(10), 3098; https://doi.org/10.3390/s25103098 - 14 May 2025
Viewed by 457
Abstract
Sparse code multiple access (SCMA) is considered an efficient technique for providing both massive connectivity and high spectrum efficiency in future machine-type wireless networks. However, conventional uniform quantization of the message passing algorithm (MPA) for SCMA detection induces significant bit error ratio (BER) performance degradation. To address this, we propose a new quasi-uniform quantization scheme that can efficiently handle the dynamic range of the exchanged messages. To accelerate the convergence of the conventional Max-log MPA, the Sub-log MPA is considered, using the latest updated messages at the current iteration. Simulation results show that, compared to the uniform quantizer, the proposed quasi-uniform quantization method significantly improves the BER performance of the SCMA decoder without modifying the resource nodes’ and variable nodes’ update rules, in both additive white Gaussian noise and Rayleigh frequency-selective channels. Full article
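
As a rough illustration of the quasi-uniform idea described above (uniform levels over a central range, coarser outer levels to absorb the wide dynamic range of exchanged messages), the sketch below quantizes log-domain messages. The level counts, inner range, and growth factor are illustrative assumptions, not the paper's design.

```python
import numpy as np

def quasi_uniform_quantize(x, n_bits=4, inner_range=8.0, growth=2.0):
    """Quantize log-domain messages with a quasi-uniform codebook:
    uniform steps inside [-inner_range, inner_range], plus exponentially
    spaced outer levels that absorb the large dynamic range of
    message-passing values.  (Illustrative; not the paper's exact design.)"""
    n_levels = 2 ** n_bits
    n_inner = n_levels // 2            # half the levels cover the inner range
    n_outer = n_levels - n_inner       # remaining levels stretch outward
    inner = np.linspace(-inner_range, inner_range, n_inner)
    outer_pos = inner_range * growth ** np.arange(1, n_outer // 2 + 1)
    levels = np.sort(np.concatenate([inner, outer_pos, -outer_pos]))
    # map each sample to the nearest codebook level
    idx = np.abs(x[..., None] - levels).argmin(axis=-1)
    return levels[idx]

msgs = np.random.standard_normal(8) * 20.0   # messages with a wide dynamic range
print(quasi_uniform_quantize(msgs))
```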

20 pages, 364 KiB  
Article
A Differential Datalog Interpreter
by Matthew James Stephenson
Software 2023, 2(3), 427-446; https://doi.org/10.3390/software2030020 - 21 Sep 2023
Cited by 1 | Viewed by 8557
Abstract
The core reasoning task for datalog engines is materialization, the evaluation of a datalog program over a database alongside its physical incorporation into the database itself. The de facto method of computing it is the recursive application of inference rules. Because this is a costly operation, it is essential for datalog engines to provide incremental materialization; that is, to adjust the computation to new data instead of restarting from scratch. One of the major caveats is that deleting data is notoriously more involved than adding it, since one has to take into account all data that has been entailed from what is being deleted. Differential dataflow is a computational model that provides efficient incremental maintenance, notably with equal performance between additions and deletions, and work distribution of iterative dataflows. In this paper, we investigate the performance of materialization with three reference datalog implementations, of which one is built on top of a lightweight relational engine, and the other two are differential-dataflow and non-differential versions of the same rewrite algorithm with the same optimizations. Experimental results suggest that monotonic aggregation is more powerful than merely ascending the powerset lattice. Full article
(This article belongs to the Topic Software Engineering and Applications)
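
For readers unfamiliar with materialization by recursive rule application, here is a minimal semi-naive fixpoint sketch for a single transitive-closure rule. It illustrates the recursive evaluation the abstract refers to, not the paper's three implementations or differential dataflow itself.

```python
def seminaive_tc(edges):
    """Semi-naive materialization of transitive closure: each round joins
    only the delta (newly derived facts) against the base relation, which is
    the recursive inference-rule application described in the abstract."""
    total = set(edges)           # path(x, y) :- edge(x, y).
    delta = set(edges)
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - total      # keep only facts not derived before
        total |= delta
    return total

print(sorted(seminaive_tc({(1, 2), (2, 3), (3, 4)})))
```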

12 pages, 797 KiB  
Article
Decentralized UAV Swarm Scheduling with Constrained Task Exploration Balance
by Runfeng Chen, Jie Li and Ting Peng
Drones 2023, 7(4), 267; https://doi.org/10.3390/drones7040267 - 13 Apr 2023
Cited by 3 | Viewed by 2309
Abstract
Scheduling is one of the key technologies used in unmanned aerial vehicle (UAV) swarms. Scheduling determines whether a task can be completed and when it will be completed. The distributed method is a fast way to realize swarm scheduling: it has no central node, and UAVs can freely join or leave, making it more robust and flexible. However, the two most representative methods, the Consensus-Based Bundle Algorithm (CBBA) and the Performance Impact (PI) algorithm, pursue the minimum cost impact of tasks, which limits optimization and easily causes task conflicts. In this paper, a new concept called “task consideration” is proposed to quantify the impact of tasks on scheduling and the regression of the task itself, balancing the UAV’s exploration of the minimum-impact task against the regression of neighboring tasks to improve the optimization and convergence of scheduling. In addition, the conflict resolution rules are modified to fit the proposed method, and the exploration of tasks is increased by a new removal method to further improve the optimization. Finally, extensive Monte Carlo experiments show that, compared with CBBA and PI, the proposed method performs better in terms of task allocation and total travel time, and as the average number of tasks per UAV increases, fewer iterations are required and convergence is faster. Full article

17 pages, 393 KiB  
Article
A New Adaptive Accelerated Levenberg–Marquardt Method for Solving Nonlinear Equations and Its Applications in Supply Chain Problems
by Rong Li, Mingyuan Cao and Guoling Zhou
Symmetry 2023, 15(3), 588; https://doi.org/10.3390/sym15030588 - 24 Feb 2023
Cited by 3 | Viewed by 2000
Abstract
In this paper, a new adaptive Levenberg–Marquardt method is proposed to solve nonlinear equations, including supply chain optimization problems. We present a new adaptive update rule, a piecewise function of the ratio between the actual and predicted reductions of the objective function, which tolerates a large number of unsuccessful iterations and avoids jumping within local areas. The global convergence and quadratic convergence of the proposed method are proved by using the trust region technique and the local error bound condition, respectively. In addition, we test the proposed algorithm on symmetric and asymmetric linear equations. Numerical results show that the proposed method has good numerical performance and development prospects. Furthermore, we apply the algorithm to solve fresh agricultural product supply chain optimization problems. Full article
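
Below is a minimal sketch of a Levenberg–Marquardt loop whose damping parameter is updated by a piecewise rule on the ratio of actual to predicted reduction, in the spirit of the abstract. The specific thresholds (0.25, 0.75), scaling factors, and the toy system are assumptions, not the paper's rule.

```python
import numpy as np

def levenberg_marquardt(F, J, x, lam=1.0, tol=1e-10, max_iter=100):
    """Basic Levenberg-Marquardt with a piecewise damping update driven by
    the ratio rho of actual to predicted reduction (illustrative only)."""
    for _ in range(max_iter):
        f, Jx = F(x), J(x)
        if np.linalg.norm(f) < tol:
            break
        # damped Gauss-Newton step: (J^T J + lam I) d = -J^T f
        A = Jx.T @ Jx + lam * np.eye(x.size)
        d = np.linalg.solve(A, -Jx.T @ f)
        actual = np.linalg.norm(f) ** 2 - np.linalg.norm(F(x + d)) ** 2
        predicted = np.linalg.norm(f) ** 2 - np.linalg.norm(f + Jx @ d) ** 2
        rho = actual / max(predicted, 1e-16)
        if rho > 0.75:       # very successful step: relax the damping
            lam *= 0.5
        elif rho < 0.25:     # poor step: increase damping, shrink the step
            lam *= 4.0
        if rho > 0:          # accept only steps that actually reduce ||F||
            x = x + d
    return x

# toy nonlinear system: x0^2 + x1 - 3 = 0, x0 + x1^2 - 5 = 0 (root at (1, 2))
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(levenberg_marquardt(F, J, np.array([1.0, 1.0])))
```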

18 pages, 402 KiB  
Article
Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
by Shailza Jolly, Pepa Atanasova and Isabelle Augenstein
Information 2022, 13(10), 500; https://doi.org/10.3390/info13100500 - 17 Oct 2022
Cited by 11 | Viewed by 3885
Abstract
Fact-checking systems have become important tools to verify fake and misleading news. These systems become more trustworthy when human-readable explanations accompany the veracity labels. However, manual collection of these explanations is expensive and time-consuming. Recent work has used extractive summarization to select a sufficient subset of the most important facts from the ruling comments (RCs) of a professional journalist to obtain fact-checking explanations. However, these explanations lack fluency and sentence coherence. In this work, we present an iterative edit-based algorithm that uses only phrase-level edits to perform unsupervised post-editing of disconnected RCs. To regulate our editing algorithm, we use a scoring function with components including fluency and semantic preservation. In addition, we show the applicability of our approach in a completely unsupervised setting. We experiment with two benchmark datasets, namely LIAR-PLUS and PubHealth. We show that our model generates explanations that are fluent, readable, non-redundant, and cover important information for the fact check. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence)
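
The iterative edit-based idea can be pictured as a hill-climbing loop that proposes phrase-level edits and keeps only those that improve a scoring function. The toy score below stands in for the paper's fluency and semantic-preservation components; the edit operations and data are illustrative assumptions.

```python
import random

def post_edit(sentences, score, n_iter=200, seed=0):
    """Hill-climbing post-editor in the spirit of the abstract: at each
    iteration propose an edit (here: drop or swap a fragment) and keep it
    only if the overall score improves."""
    rng = random.Random(seed)
    current = list(sentences)
    for _ in range(n_iter):
        candidate = list(current)
        op = rng.choice(["drop", "swap"])
        if op == "drop" and len(candidate) > 1:
            candidate.pop(rng.randrange(len(candidate)))
        elif op == "swap" and len(candidate) > 1:
            i, j = rng.sample(range(len(candidate)), 2)
            candidate[i], candidate[j] = candidate[j], candidate[i]
        if score(candidate) > score(current):
            current = candidate
    return current

# toy score: prefer short outputs that keep the word "evidence"
score = lambda s: ("evidence" in " ".join(s)) * 10 - len(s)
rcs = ["the claim cites no evidence", "unrelated remark", "another aside"]
print(post_edit(rcs, score))
```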

18 pages, 3436 KiB  
Article
Sustainable Recognition Methods of Modeling Design Features of Light and Micro Vehicle-Mounted UAV: Based on Support Vector Regression and Kano Model
by Hao Yang, Yunxiang Huo, Ruoyu Jia, Feng Sha, Naiqi Hu, Linglan Yu and Yueran Wang
Sustainability 2022, 14(13), 8210; https://doi.org/10.3390/su14138210 - 5 Jul 2022
Cited by 17 | Viewed by 2438
Abstract
In the era of Industry 4.0, intelligent algorithms provide an effective way to make design methods more sustainable through mining people’s demands continuously, especially in the field of evaluating and predicting the user preferences of phasic or interim design schemes. Vehicle-mounted unmanned aerial vehicles (UAVs) are of significance in mobility experience and traffic surveys. However, as a new type of product, UAVs lack general rules in modeling design and the process of development decision making presents some fuzzy characteristics, which make the evolution and iteration of modeling design more complex. Based on the theories of Kansei Engineering, this study utilized support vector regression (SVR) to establish a correlation model between design factors and preference degree. Because the perceptual evaluation knowledge is fuzzy and uncertain, the paper applied cross-validation and grid search methods to find the optimal parameters. The parameters of the SVR model were adjusted to meet the need for stable learning and for endurance of the noise from subjective experience data to improve the prediction effect and generalization ability. In addition, by means of the Kano model, the customers’ cognition of demand types was quantified to obtain the prioritization of UAV modeling design elements, as well as to compare with the preference scores to validate the feasibility of this research. It was found that the SVR model proposed in the study could effectively predict user preference (R2 = 0.763, RMSE = 0.057). For the UAVs with a higher preference score, the modeling characteristics were consistent with the attractive, one-dimensional or must-be quality elements in the results of the Kano model, which verified the reliability of the study. The conclusion is expected to provide a sustainable design method for vehicle-mounted UAVs commonly used in citizen travel and outdoor activities. Full article
(This article belongs to the Special Issue Sustainability in Industrial Design Engineering for Industry 4.0)
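
A minimal sketch of the tuning strategy mentioned in the abstract, cross-validated grid search over SVR hyperparameters, using scikit-learn. The synthetic data and parameter grid are placeholders for the study's encoded design factors and Kansei preference scores.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Placeholder data standing in for the paper's design-factor encodings and
# perceptual preference scores (the real study used Kansei evaluation data).
rng = np.random.default_rng(0)
X = rng.random((60, 5))                      # 5 encoded modeling-design factors
y = X @ np.array([0.4, 0.1, 0.3, 0.05, 0.15]) + 0.05 * rng.standard_normal(60)

# Cross-validated grid search to pick SVR hyperparameters, mirroring the
# tuning approach described in the abstract.
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1.0], "epsilon": [0.01, 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```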

16 pages, 3369 KiB  
Article
Height Prediction and 3D Visualization of Mining-Induced Water-Conducting Fracture Zone in Western Ordos Basin Based on a Multi-Factor Regression Analysis
by Huiyong Yin, Fangying Dong, Yiwen Zhang, Wenju Cheng, Peihe Zhai, Xuyan Ren, Ziang Liu, Yutao Zhai and Xin Li
Energies 2022, 15(11), 3850; https://doi.org/10.3390/en15113850 - 24 May 2022
Cited by 13 | Viewed by 2119
Abstract
The mining-induced water-conducting fracture zone (WCFZ) plays a critical role in roof water damage prevention and ecological protection. The measured heights of the WCFZ were collected from 52 working faces or boreholes in the Ordos Basin mining area. Four factors influencing the mining-induced height of the WCFZ, i.e., mining thickness, proportion coefficient of hard rock, working width, and mining depth, were analyzed. The optimal unitary function model of each factor and the height of the WCFZ were obtained through single-factor analysis. The grey correlation method and fuzzy ordered binary comparison method were used to determine the comprehensive weight, and the weighted improved multiple regression model was obtained by combination and iteration. The relative error of the model was basically controlled within 10%. Finally, taking the Qingshuiying Coalfield as an application case, we predicted the mining-induced height of the WCFZ by using the new prediction model. The spatial distribution characteristics of the WCFZ were analyzed by the geographic information system. In addition, Groundwater Modeling System (GMS) software was used to build a 3D structure model of WCFZ height to visualize the spatial distribution rules of the WCFZ. The results showed that the height of the WCFZ can be predicted quantitatively by this new method, and the visualization of the WCFZ can be realized. The proposed method effectively analyzes and predicts the mining-induced height of the WCFZ so that water gushing risks from overlying aquifers can be prevented or mitigated in mines. Full article
(This article belongs to the Topic Green Mining)
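
To make the multi-factor idea concrete, the sketch below fits a plain least-squares multiple regression of WCFZ height on the four factors named in the abstract. The data are synthetic placeholders rather than the 52 measured cases, and the grey-correlation/fuzzy weighting and iteration steps of the paper's improved model are omitted.

```python
import numpy as np

# Synthetic stand-ins for mining thickness M, hard-rock proportion b,
# working width L, and mining depth H (not the paper's measured data).
rng = np.random.default_rng(0)
M, b, L, H = (rng.uniform(2, 8, 52), rng.uniform(0.2, 0.8, 52),
              rng.uniform(100, 300, 52), rng.uniform(100, 600, 52))
height = 12 * M + 30 * b + 0.05 * L + 0.02 * H + rng.normal(0, 2, 52)

X = np.column_stack([np.ones(52), M, b, L, H])    # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, height, rcond=None)
pred = X @ coef
rel_err = np.abs(pred - height) / height
print(np.round(coef, 3), f"mean relative error: {rel_err.mean():.2%}")
```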

22 pages, 2315 KiB  
Article
Suspicion Distillation Gradient Descent Bit-Flipping Algorithm
by Predrag Ivaniš, Srdjan Brkić and Bane Vasić
Entropy 2022, 24(4), 558; https://doi.org/10.3390/e24040558 - 15 Apr 2022
Cited by 4 | Viewed by 3207
Abstract
We propose a novel variant of the gradient descent bit-flipping (GDBF) algorithm for decoding low-density parity-check (LDPC) codes over the binary symmetric channel. The new bit-flipping rule is based on the reliability information passed from neighboring nodes in the corresponding Tanner graph. The name SuspicionDistillation reflects the main feature of the algorithm—that in every iteration, we assign a level of suspicion to each variable node about its current bit value. The level of suspicion of a variable node is used to decide whether the corresponding bit will be flipped. In addition, in each iteration, we determine the number of satisfied and unsatisfied checks that connect a suspicious node with other suspicious variable nodes. In this way, in the course of iteration, we “distill” such suspicious bits and flip them. The deterministic nature of the proposed algorithm results in a low-complexity implementation, as the bit-flipping rule can be obtained by modifying the original GDBF rule by using basic logic gates, and the modification is not applied in all decoding iterations. Furthermore, we present a more general framework based on deterministic re-initialization of the decoder input. The performance of the resulting algorithm is analyzed for the codes with various code lengths, and significant performance improvements are observed compared to the state-of-the-art hard-decision-decoding algorithms. Full article
(This article belongs to the Special Issue Information Theory and Coding for Wireless Communications)
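
As background for the proposed variant, a minimal plain GDBF decoder over the binary symmetric channel is sketched below (bipolar domain, single-bit flips). It shows the baseline inversion-function rule, not the suspicion-distillation modification; the example code and channel flip are assumptions for illustration.

```python
import numpy as np

def gdbf_decode(H, y_bipolar, max_iter=50):
    """Plain gradient-descent bit-flipping over the BSC (bipolar +-1 domain).
    Each variable node computes an inversion function from its channel value
    and the parity checks it participates in; the least reliable bit is
    flipped.  Baseline GDBF only, not the suspicion-distillation variant."""
    x = y_bipolar.copy()
    for _ in range(max_iter):
        checks = np.array([np.prod(x[H[c] == 1]) for c in range(H.shape[0])])
        if np.all(checks == 1):          # all parity checks satisfied
            break
        # inversion function: channel agreement + sum of adjacent check values
        energy = x * y_bipolar + H.T @ checks
        worst = np.argmin(energy)        # flip the bit with the lowest energy
        x[worst] = -x[worst]
    return ((1 - x) // 2).astype(int)    # back to 0/1 bits

# (7,4) Hamming parity-check matrix, all-zero codeword with one flipped bit
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.ones(7); y[2] = -1                # bit 2 received in error (bipolar)
print(gdbf_decode(H, y))
```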

12 pages, 445 KiB  
Article
Higher-Order Belief Propagation Correction Decoder for Polar Codes
by Meng Zhang, Zhuo Li, Lijuan Xing and Xin Liao
Entropy 2022, 24(4), 534; https://doi.org/10.3390/e24040534 - 11 Apr 2022
Cited by 1 | Viewed by 2440
Abstract
Belief propagation (BP) decoding for polar codes has been extensively studied because of its inherent parallelism. However, its performance remains inferior to that of successive cancellation list decoding (SCL) due to the structure of the decoding graph. To improve the block error rate (BLER) performance, BP correction (BPC) decoding, a post-processing scheme that corrects the prior knowledge of an identified code bit, improves convergence by executing additional iterations on the failed BP decoder. Moreover, the BPC decoder demonstrates a better decoding performance than the BP-based bit-flipping decoder. Nevertheless, the additional decoding attempts lead to increased latency. In this article, a modified BPC decoder is proposed to reduce the number of decoding attempts by redefining the correction rules. A new metric is designed to effectively identify the corrected location. Numerical results show that the proposed modified BPC decoder achieves a slight improvement in BLER compared with the original BPC, with a dramatic reduction in average complexity. Furthermore, a higher-order version, named MBPC-Ω, is extended to further improve the performance, where Ω is the maximum correction order. Numerical results show that the higher-order modified BPC achieves a similar BLER performance to existing multiple bit-flipping BP decoders but has around half the latency overhead. In addition, the proposed MBPC-2 decoder performs better than the cyclic redundancy check-aided SCL (CA-SCL) decoder with list size 4 and is slightly worse than the CA-SCL with list size 8 in high signal-to-noise ratio (SNR) regions, but with a significant decoding latency reduction. Full article
(This article belongs to the Special Issue Information Theory and Coding for Wireless Communications)

18 pages, 27306 KiB  
Article
GAN-GL: Generative Adversarial Networks for Glacial Lake Mapping
by Hang Zhao, Meimei Zhang and Fang Chen
Remote Sens. 2021, 13(22), 4728; https://doi.org/10.3390/rs13224728 - 22 Nov 2021
Cited by 11 | Viewed by 3767
Abstract
Remote sensing is a powerful tool that provides flexibility and scalability for monitoring and investigating glacial lakes in High Mountain Asia (HMA). However, existing methods for mapping glacial lakes are designed based on a combination of several spectral features and ancillary data (such as the digital elevation model, DEM) to highlight the lake extent and suppress background information. These methods suffer from either the inevitable requirement of post-processing work or the high costs of additional data acquisition. Signifying a key advancement in deep learning models, a generative adversarial network (GAN) can capture multi-level features and learn the mapping rules in source and target domains using a minimax game between a generator and discriminator. This provides a new and feasible way to conduct large-scale glacial lake mapping. In this work, a complete glacial lake dataset was first created, containing approximately 4600 patches of Landsat-8 OLI images edited in three ways—random cropping, density cropping, and uniform cropping. Then, a GAN model for glacial lake mapping (GAN-GL) was constructed. The GAN-GL consists of two parts—a generator that incorporates a water attention module and an image segmentation module to produce the glacial lake masks, and a discriminator that employs the ResNet-152 backbone to ascertain whether a given pixel belongs to a glacial lake. The model was evaluated using the created glacial lake dataset, delivering a good performance, with an F1 score of 92.17% and IoU of 86.34%. Moreover, compared to the mapping results derived from the global–local iterative segmentation algorithm and random forest for the entire Eastern Himalayas, our proposed model was superior regarding the segmentation of glacial lakes under complex and diverse environmental conditions, in terms of accuracy (precision = 93.19%) and segmentation efficiency. Our model was also very good at detecting small glacial lakes without assistance from ancillary data or human intervention. Full article
(This article belongs to the Topic Water Management in the Era of Climatic Change)

21 pages, 2317 KiB  
Article
A Hybrid Machine Learning and Population Knowledge Mining Method to Minimize Makespan and Total Tardiness of Multi-Variety Products
by Yongtao Qiu, Weixi Ji and Chaoyang Zhang
Appl. Sci. 2019, 9(24), 5286; https://doi.org/10.3390/app9245286 - 4 Dec 2019
Cited by 7 | Viewed by 2753
Abstract
Nowadays, the production model of many enterprises is multi-variety customized production, and the makespan and total tardiness are the main metrics for enterprises to make production plans. This requires us to develop a more effective production plan promptly with limited resources. Previous research has focused on dispatching rules and algorithms, but the application of knowledge mining methods to multi-variety products is limited. In this paper, a hybrid machine learning and population knowledge mining method to minimize makespan and total tardiness for multi-variety products is proposed. First, through offline machine learning and data mining, attributes of operations are selected to mine the initial population knowledge. Second, an addition–deletion sorting method (ADSM) is proposed to reprioritize operations and then form the rule-based initial population. Finally, the nondominated sorting genetic algorithm II (NSGA-II) hybridized with simulated annealing is used to obtain the Pareto solutions. To evaluate the effectiveness of the proposed method, three other types of initial populations were considered under different iterations and population sizes. The experimental results demonstrate that the new approach performs well in solving multi-variety production planning problems, in terms of both the objective function values and the performance metrics of the acquired Pareto solutions. Full article

22 pages, 2241 KiB  
Article
A Convex Optimization Algorithm for Compressed Sensing in a Complex Domain: The Complex-Valued Split Bregman Method
by Kai Xiong, Guanghui Zhao, Guangming Shi and Yingbin Wang
Sensors 2019, 19(20), 4540; https://doi.org/10.3390/s19204540 - 18 Oct 2019
Cited by 12 | Viewed by 3156
Abstract
The Split Bregman method (SBM), a popular and universal CS reconstruction algorithm for inverse problems with both l1-norm and TV-norm regularization, has been extensively applied in complex domains through the complex-to-real transforming technique, e.g., in MRI imaging and radar. However, SBM still has untapped potential in complex applications for two reasons: Bregman Iteration (BI), employed in SBM, may not make good use of the phase information of complex variables, and the converting technique may consume more time. To address this, this paper presents the complex-valued Split Bregman method (CV-SBM), which theoretically generalizes the original SBM to the complex domain. The complex-valued Bregman distance (CV-BD) is first defined by replacing the corresponding regularization in the inverse problem. Then, we propose the complex-valued Bregman Iteration (CV-BI) to solve this new problem. The well-definedness and convergence of CV-BI are analyzed in detail according to the complex-valued calculation rules and optimization theory. These properties prove that CV-BI is able to solve inverse problems if the regularization is convex. Nevertheless, CV-BI needs the help of other algorithms for various kinds of regularization. To avoid the dependence on extra algorithms and simplify the iteration process simultaneously, we adopt the variable separation technique and propose CV-SBM for solving convex inverse problems. Simulation results on complex-valued l1-norm problems illustrate the effectiveness of the proposed CV-SBM. CV-SBM exhibits remarkable superiority over SBM with the complex-to-real transforming technique. Specifically, for a large signal scale of n = 512, CV-SBM yields 18.2%, 17.6%, and 26.7% lower mean square error (MSE) and takes 28.8%, 25.6%, and 23.6% less time than the original SBM in 10 dB, 15 dB, and 20 dB SNR situations, respectively. Full article
(This article belongs to the Section Intelligent Sensors)
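
A compact sketch of a complex-domain split Bregman iteration for an l1-regularized least-squares problem, using complex soft-thresholding that shrinks magnitudes while preserving phase. The parameter values and the toy recovery problem are assumptions for illustration, not the paper's CV-SBM implementation.

```python
import numpy as np

def complex_shrink(z, t):
    """Complex soft-thresholding: shrink the magnitude, keep the phase."""
    mag = np.abs(z)
    return np.where(mag > 0, z / np.maximum(mag, 1e-12), 0) * np.maximum(mag - t, 0)

def split_bregman_l1(A, f, mu=0.1, lam=1.0, n_iter=100):
    """Split Bregman for min_x mu*||x||_1 + 0.5*||Ax - f||_2^2 with complex
    A, x, f: a direct complex-domain analogue of the real-valued scheme."""
    n = A.shape[1]
    x = np.zeros(n, dtype=complex)
    d = np.zeros(n, dtype=complex)
    b = np.zeros(n, dtype=complex)
    M = A.conj().T @ A + lam * np.eye(n)
    Ahf = A.conj().T @ f
    for _ in range(n_iter):
        x = np.linalg.solve(M, Ahf + lam * (d - b))   # quadratic subproblem
        d = complex_shrink(x + b, mu / lam)           # l1 subproblem (shrinkage)
        b = b + x - d                                 # Bregman variable update
    return x

rng = np.random.default_rng(1)
A = (rng.standard_normal((32, 64)) + 1j * rng.standard_normal((32, 64))) / np.sqrt(64)
x_true = np.zeros(64, dtype=complex)
x_true[[3, 17, 40]] = [2 + 1j, -1.5, 1j]              # sparse complex signal
x_hat = split_bregman_l1(A, A @ x_true, mu=0.05)
print(np.round(x_hat[[3, 17, 40]], 2))
```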

17 pages, 1612 KiB  
Article
Construction of Open Innovation Ecology on the Internet: A Case Study of Xiaomi (China) Using Institutional Logic
by Jaime Ortiz, Hao Ren, Kei Li and An Zhang
Sustainability 2019, 11(11), 3225; https://doi.org/10.3390/su11113225 - 11 Jun 2019
Cited by 11 | Viewed by 13980
Abstract
The characteristics of collaborative innovation and interactions among core enterprises, users, and partners are critical. Research is lacking on how to construct open innovation ecology through institutional design. This study explores how core enterprises effectively motivate users and partners to participate in innovation activities on the Internet and successfully construct an open innovation ecology, using institutional logic theory and an exploratory case study of Xiaomi, a well-known Chinese Internet enterprise. The findings are as follows: (1) three main characteristics of innovation carried out by core enterprises, users, and partners in the open innovation ecology—iterative innovation, social innovation, and joint innovation; (2) three new institutional practices—following, leading, and symbiotic mechanisms—which provide an effective institutional guarantee for interaction and innovation; (3) two kinds of institutional logic—administrative logic and social logic—which core enterprises follow to construct open innovation ecology on the Internet, and which change with the innovation characteristics. Therefore, a full understanding of the changing rules of institutional logic is fundamental for successfully building open innovation ecology. This study enriches both the open innovation ecology theory and the institutional logic theory. In addition, Xiaomi's open innovation ecological development model proves that core enterprises can take the approach of building open innovation ecology. It provides a strong example for other enterprises innovating on the Internet. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)

16 pages, 3463 KiB  
Article
Research of SWNMF with New Iteration Rules for Facial Feature Extraction and Recognition
by Jing Zhou
Symmetry 2019, 11(3), 354; https://doi.org/10.3390/sym11030354 - 8 Mar 2019
Cited by 3 | Viewed by 2306
Abstract
Weighted nonnegative matrix factorization (WNMF) is a feature extraction technique: it extracts features from a face dataset, which can then be recognized by a classifier. To improve the performance of WNMF for feature extraction, a new iteration rule is proposed in this paper. In addition, the base matrix U is sparsified based on a threshold, and the new method is named sparse weighted nonnegative matrix factorization (SWNMF). The new iteration rules use smaller iteration steps, so the search is more precise and the recognition rate can be improved. Moreover, the threshold-based sparsification adopted to update the base matrix U makes the extracted features sparser and more concentrated, and thus easier to recognize. The SWNMF method is applied to the ORL and JAFEE datasets, and the experimental results show that the recognition rates are improved substantially by the new iteration rules proposed in this paper. The recognition rate of the new SWNMF method reached 98% on the ORL face database and 100% on the JAFEE face database, higher than those of the PCA, sparse nonnegative matrix factorization (SNMF), convex nonnegative matrix factorization (CNMF), and multi-layer NMF methods. Full article
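
A generic weighted-NMF sketch with multiplicative updates and threshold-based sparsification of the base matrix U, illustrating the kind of scheme the abstract describes. The update rule, threshold, rank, and data here are illustrative assumptions rather than the paper's SWNMF iteration.

```python
import numpy as np

def swnmf(X, W, r=10, n_iter=200, thresh=1e-3, eps=1e-9, seed=0):
    """Weighted NMF with standard multiplicative updates, followed by
    thresholding of the base matrix U to make it sparse.  A generic sketch;
    the paper's specific iteration rule and step sizes differ."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, r))
    V = rng.random((r, n))
    for _ in range(n_iter):
        WX = W * X
        WUV = W * (U @ V)
        U *= (WX @ V.T) / (WUV @ V.T + eps)        # update base matrix
        WUV = W * (U @ V)
        V *= (U.T @ WX) / (U.T @ WUV + eps)        # update coefficient matrix
        U[U < thresh] = 0.0                        # sparsify U by thresholding
    return U, V

# toy nonnegative data matrix with an all-ones weight mask
X = np.abs(np.random.default_rng(1).standard_normal((64, 40)))
W = np.ones_like(X)
U, V = swnmf(X, W, r=8)
print(U.shape, V.shape, float(np.mean(U == 0)))
```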
