Developing Support Vector Machine with New Fuzzy Selection for the Infringement of a Patent Rights Problem

Classification problems are very important issues for real enterprises. In the patent infringement issue, accurate classification could help enterprises to understand court decisions and to avoid patent infringement. However, general classification methods do not perform well on the patent infringement problem because there are too many complex variables. Therefore, this study attempts to develop a classification method, the support vector machine with new fuzzy selection (SVMFS), to judge the infringement of patent rights. The raw data are divided into training and testing sets; however, the data quality of the training set is not easy to evaluate, and effective data quality management requires a structural core that can support data operations. This study adopts a new fuzzy selection method based on membership values, which are generated by fuzzy c-means clustering, to select appropriate data to enhance the classification performance of the support vector machine (SVM). An empirical example shows that the proposed SVMFS can obtain a superior accuracy rate. Moreover, the results verify that the new fuzzy selection can effectively select the training dataset.


Introduction
Nowadays, classification models have various limitations related to the use of a single model. Data preprocessing plays a significant role for the entire dataset. In data preprocessing, detecting outliers, which appear not to belong in the data, is one of the important methods; outliers can be caused by human error, such as mislabeling, transposing numerals, and programming bugs. If outliers are not removed from the raw dataset, they corrupt the results to a small or large degree, depending on the circumstances. This study develops a fuzzy selection strategy that uses fuzzy membership to select data to be eliminated from datasets. New algorithms should provide high-quality, clean data (smart data) to treat the noise in Big Data analysis problems [1]. Therefore, this study attempts to handle noisy data by proposing a new algorithm for unstructured datasets. Some studies have developed outlier detection methods. For example, van der Gaag [2] used FDSTools noise profiles to obtain training datasets and a test set to analyze the impact of FDSTools noise correction for different analysis thresholds; this method was able to obtain a higher quality training dataset, leading to improved performance. Niu and Wang [3] proposed a combined model to achieve accurate prediction results; the combined model included a complete empirical mode decomposition ensemble, four neural network models, and a linear model. Cai et al. [4] adopted the Kalman filter to de-noise datasets; the filter is insensitive to non-Gaussian noise because it uses the maximum correntropy criterion, and numerical experiments have shown that this model outperforms others on four benchmark datasets for traffic flow forecasting. Liu and Chen [5] conducted a comprehensive review of data processing strategies in wind energy forecasting models; this research mentioned that existing data-driven forecasting models attach great significance to the proper application of data processing methods. Ma et al.
[6] developed an unscented Kalman filter (UKF) with the generalized correntropy loss (GCL), termed GCL-UKF, which has been used to estimate and forecast the power system state; numerical simulation results have validated the efficacy of the proposed methods for state estimation using various types of measurements. Wang et al. [7] developed a hybrid model combining wavelet de-noising (WD) and Rank-Set Pair Analysis (RSPA); RSPA takes full advantage of the combination of the two approaches to improve forecasts of hydro-meteorological time series. Florez-Lozano et al. [8] developed an intelligent system that combined classic aggregation operators with neural and fuzzy systems. These studies proposed preprocessing methods to effectively handle datasets for various forecasting methods. Therefore, this study develops a fuzzy selection strategy based on fuzzy membership. The new fuzzy selection operator, built on the fuzzy clustering algorithm, is carefully formulated to ensure that better members of the dataset enter the proposed classification system.
This study develops a support vector machine (SVM) classification model with new fuzzy selection to improve performance on the classification problem. Supervised classification is an essential technique for extracting quantitative information from a database, and the SVM is one of the most popular classifiers. Tang et al. [9] developed a joint segmentation and classification framework for sentence-level sentiment classification; their method simultaneously generates useful segmentations and predicts sentence-level polarity based on the segmentation results, and its effectiveness was verified by applying it to sentiment classification. Jiang et al. [10] proposed a method for the effective classification and localization of mixed sources; the advantage of this method is that it makes good use of known information to distinguish the distances of sources from mixed sources and to estimate the range parameters of near-field sources. Kasabov et al. [11] developed a new and efficient neuromorphic approach to complex, rich spatiotemporal brain data (STBD) and functional magnetic resonance imaging (fMRI) data. Shao et al. [12] developed SyncStream, a prototype-based classification model for evolving data streams; building upon the techniques of error-driven representativeness learning, P-Tree based data maintenance, and concept drift handling, SyncStream allows dynamic modeling of evolving concepts and supports good prediction performance. Wang et al. [13] developed Noise-resistant Statistical Traffic Classification (NSTC) to solve the traffic classification problem; NSTC could reduce the noise and improve reliability, thereby improving the classification performance. Phan et al. [14] developed a joint classification-and-prediction framework based on convolutional neural networks (CNNs) for automatic sleep staging; the CNNs divide the dataset into nontransition and transition sets, and the authors explored how different frameworks perform on them. Basha et al.
[15] proposed interval principal component analysis to detect faults in the Tennessee Eastman (TE) process with a higher degree of accuracy than other methods; the method is capable of maintaining a high performance rate even at low GLR window sample sizes and low interval aggregation window sizes. The main focus of these studies was to solve actual problems using new classification methods. Moreover, preprocessing methods, such as those presented in [12,13], could improve performance. Therefore, this study develops a fuzzy selection strategy in which fuzzy membership and the support vector machine (SVM) method provide an effective way to perform supervised classification. The recent SVM literature is summarized in Table 1. It can be observed that the UCI dataset [16] has been examined in many studies, and many studies have developed hybrid SVMs for improving classification performance.

Table 1. Summary of the recent SVM literature.

Author(s) | Year | Methods | Applied Fields
Ekong et al. [17] | 2016 | Interval type-2 fuzzy SVM | Epileptic seizure phases
Shen et al. [18] | 2016 | SVM with fruit fly optimization algorithm | Medical diagnosis
Wang et al. [19] | 2016 | Locally linear SVMs | UCI dataset
Zhong et al. [20] | 2016 | SVM with novel Bayesian inference | Industrial survey data
Qi et al. [21] | 2016 | New deep SVM | UCI dataset
Liu et al. [22] | 2017 | SVM | One-vs-one (OVO) strategy
Zhang et al. [23] | 2017 | Multiple birth SVM | UCI dataset
Utkin and Zhuk [24] | 2017 | One-class classification SVM | UCI dataset
Gonzalez-Abril et al. [25] | 2017 | Modified SVM | UCI dataset
Kusakci et al. [26] | 2017 | Competitive SVM teams | Autonomous human chromosome
Richhariy and Tanveer [27] | 2018 | Universum support vector machine | Electroencephalogram
Ougiaroglou et al. [28] | 2018 | SVM | KEEL-dataset repository
de Lima et al. [29] | 2018 | Least squares twin multi-class SVM | UCI dataset
Tang et al. [30] | 2019 | Regular simplex SVM | Benchmark datasets
Yang and Dong [31] | 2019 | SVM with generalized quantile loss | UCI dataset
Okwuashi and Ndehedehe [32] | 2020 | Deep SVM | Hyperspectral image

Furthermore, some literature has combined fuzzy c-means (FCM) with the SVM to improve classifier performance ([33][34][35][36][37][38]). These studies used the clustering labels of FCM as a preprocessing mechanism for improving the SVM classifier. This study adopts roulette wheel selection with the membership function of FCM to select appropriate data for the training set. Traditional roulette wheel selection uses probability to select candidates in genetic algorithms; this study applies the membership values of FCM in roulette wheel selection to determine possible eliminations from the training dataset.
The purpose of this study is to develop a new classification method that combines SVM with new fuzzy selection. A core component of this study is the development of a new fuzzy selection method, roulette wheel selection with a membership function, to select appropriate data for the training set. The proposed methodology draws on the advantages of fuzzy clustering, roulette wheel selection, and SVM to effectively handle the dataset, reduce outlier data, and improve classification performance. The remainder of this paper is organized as follows: Section 2 presents the proposed support vector machine with new fuzzy selection method and also introduces the new fuzzy selection method; Section 3 provides the research design of the support vector machine with new fuzzy selection (SVMFS) for the actual infringement of patent rights problem; and Section 4 offers conclusions and suggestions for further research.

Support Vector Machine with New Fuzzy Selection
Generally, the raw data are divided into training and testing sets. However, the data quality of the training set is not easily evaluated, and effective data quality management requires a structural core that can support data operations. The data quality of the training set is a very important issue in classification problems. This study adopts the new fuzzy selection method to select appropriate data to enhance the classification performance of the SVM. Figure 1 shows the flowchart of the proposed SVMFS method.

New Fuzzy Selection
First, the dataset is processed with the fuzzy clustering algorithm to determine the fuzzy memberships. In this study, we adopt the fuzzy c-means (FCM) algorithm ([39][40][41]). The distance function can be defined as a membership function of the clustering, so the closeness of the data to the multiple cluster centers can be expressed by the degree of membership derived from the distance function. The FCM objective function from [39] can be formulated as:

J = Σᵢ₌₁ᴺ Σⱼ₌₁ᶜ (μᵢⱼ)^ω d(xᵢ, Oⱼ)²

where c is the number of clusters to classify, μᵢⱼ is the membership of data point xᵢ in cluster j, ω is a parameter for updating the clustering membership functions, and l is the number of epochs to execute. In FCM, the initial memberships of the data xᵢ, i = 1, ..., N, with crisp input-output, to the clusters j (j = 1, ..., c) are generated randomly, and the initial l value is set to 0. FCM optimizes the objective function by continuously updating the membership functions and the centers of the clusters until the difference between successive membership functions is smaller than the tolerance for the solution accuracy. d is the distance between data point xᵢ and the center of cluster j, Oⱼ. The equations used to update the membership functions and the centers of the clusters can be expressed as follows:

μᵢⱼ⁽ˡ⁺¹⁾ = 1 / Σₖ₌₁ᶜ (dᵢⱼ / dᵢₖ)^(2/(ω−1))

Oⱼ⁽ˡ⁺¹⁾ = Σᵢ₌₁ᴺ (μᵢⱼ⁽ˡ⁺¹⁾)^ω xᵢ / Σᵢ₌₁ᴺ (μᵢⱼ⁽ˡ⁺¹⁾)^ω
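The FCM update loop described above can be sketched as follows. This is an illustrative Python sketch (the study's own implementation was coded in MATLAB), and the function and parameter names here are our own:

```python
import numpy as np

def fcm(X, c, omega=2.0, tol=1e-5, max_iter=100, seed=0):
    """Minimal fuzzy c-means sketch: returns the membership matrix U (N x c)
    and the cluster centers O (c x d)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((N, c))
    U /= U.sum(axis=1, keepdims=True)            # random initial memberships, rows sum to 1
    for _ in range(max_iter):
        Um = U ** omega
        O = (Um.T @ X) / Um.sum(axis=0)[:, None]  # update cluster centers
        d = np.linalg.norm(X[:, None, :] - O[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (omega - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)  # membership update
        if np.abs(U_new - U).max() < tol:          # stop when memberships stabilize
            U = U_new
            break
        U = U_new
    return U, O
```

Each row of the returned U sums to 1 and gives one data point's membership in every cluster, which is the quantity the fuzzy selection step uses later.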

Traditional roulette wheel selection is similar to a roulette wheel in a casino, where each possible selection is assigned a slot sized according to its probability. This study develops a new fuzzy roulette wheel selection based on the fuzzy membership function of FCM. Based on the final membership function, the fuzzy selection strategy determines which members in the current U⁽ˡ⁾ have a higher membership. The selection operator is carefully formulated to ensure that better members of the dataset have a greater probability of being selected, improving the accuracy of classification. Worse members of the dataset can still be selected, with small probability, which is important to ensure that the noise filtering process is reasonable.
Fitter data points are more likely to be selected for the training (Xtrain) and testing (Xtest) datasets. Algorithm 1, the new fuzzy roulette wheel selection, is displayed as follows:
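The listing of Algorithm 1 is not reproduced in this text. As one plausible reading of the description above, a point's selection probability can be made proportional to its largest FCM membership, so that well-clustered points are favored while weakly clustered points retain a small chance of selection. The function name and the exact weighting below are our assumptions, not the authors' specification:

```python
import numpy as np

def fuzzy_roulette_select(X, memberships, n_select, seed=0):
    """Select rows of X with probability proportional to their highest FCM
    membership, so well-clustered points are favored but weaker points keep
    a small chance of being chosen (assumed reading of Algorithm 1)."""
    rng = np.random.default_rng(seed)
    fitness = memberships.max(axis=1)        # closeness to the nearest cluster center
    probs = fitness / fitness.sum()          # normalize into a roulette wheel
    idx = rng.choice(len(X), size=n_select, replace=False, p=probs)
    return X[idx], idx
```

In this sketch, the roulette wheel is spun without replacement until the desired training-set size is reached; points left unselected play the role of the eliminated (noisy) data.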

SVMFS
The support vector machine (SVM) proposed by Vapnik [42] uses classification techniques drawn from statistical learning theory ([42,43]). The SVM performs binary classification using an optimal separating hyperplane in a high-dimensional feature space into which the data are nonlinearly mapped. The SVM trains a linear machine to obtain the best hyperplane, and the decision function is estimated through the nonlinear class boundary, which is based on the support vectors. The data are sorted according to the maximum distance between the hyperplane and the nearest training points; the training points closest to the optimal separating hyperplane are called support vectors. The kernel function can be expressed as the inner product in the feature space H:

K(x, z) = φ(x)·φ(z)

where x, z ∈ ℝⁿ and n_H is the dimension of H. Equation (4) depicts Mercer's required condition for any square integrable function g(x):

∫∫ g(x) K(x, z) g(z) dx dz ≥ 0    (4)

Therefore, having a positive semi-definite kernel implies that the kernel K can be expressed as an inner product. Several choices are possible for the kernel K(·). The kernel function enables the SVM to operate within very large dimensional feature spaces without making explicit computations in that space; computations are completed in another space after the application of this kernel trick. One begins with a formulation in the primal weight space with a high-dimensional feature space through the application of the transformation function φ(·). The problem cannot be solved in the primal weight space, but it can be handled in the dual space by applying the kernel functions.
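Mercer's condition can be checked numerically on a finite sample: for a valid kernel such as the RBF kernel, the Gram matrix built from any dataset is positive semi-definite, i.e., all of its eigenvalues are nonnegative up to floating-point rounding. A small numpy illustration (ours, not from the paper):

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / sigma^2) for an RBF kernel."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / sigma ** 2)

# The Gram matrix of a Mercer kernel is positive semi-definite on any sample:
X = np.random.default_rng(0).random((20, 3))
eigvals = np.linalg.eigvalsh(rbf_gram(X))
print(eigvals.min())   # nonnegative up to floating-point rounding
```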
In this way, we are able to implicitly compute the problem within a high-dimensional feature space. The extension from linear to nonlinear SVM classifiers can be completed by replacing x with φ(x) and applying the kernel method wherever possible. Originally, the SVM was designed for two-class classification. Based on the process of determining the separating boundary and the maximum distance to the closest points, the SVM derives a class choice from the support vectors (SVs) of the training dataset. The SVM can avoid potential misclassification of the testing data by minimizing the structural risk rather than the empirical risk; therefore, the SVM classifier demonstrates better generalization performance than traditional classifiers. First, we are given a training dataset D = {Xtrain, Ytrain}, where Xtrain ∈ ℝⁿ is the input vector selected by the fuzzy roulette wheel selection, with a known binary output label Ytrain ∈ {0, 1}. The classification function is specified by:

f(x) = w·φ(x) + b

where φ: ℝⁿ → ℝᵐ is the feature mapping of the input space to a high-dimensional feature space. The data points become linearly separable by a hyperplane defined by the pair (w ∈ ℝᵐ, b ∈ ℝ) [35]. The optimal hyperplane separating the data is expressed as Equation (8):

min (1/2)‖w‖² + C Σᵢ ξᵢ  subject to  yᵢ(w·φ(xᵢ) + b) ≥ 1 − ξᵢ,  ξᵢ ≥ 0    (8)

where ‖w‖ is the norm of the normal weight vector of the hyperplane. This constrained optimization problem is solved in the following primal Lagrangian form:

L(w, b, ξ; α, β) = (1/2)‖w‖² + C Σᵢ ξᵢ − Σᵢ αᵢ[yᵢ(w·φ(xᵢ) + b) − 1 + ξᵢ] − Σᵢ βᵢξᵢ    (9)

where αᵢ are the Lagrange multipliers between 0 and C. Applying the Karush-Kuhn-Tucker conditions, the solutions of the dual Lagrangian problem, αᵢ⁰, determine the parameters w⁰ and b⁰ of the optimal hyperplane. Next, the decision function is generated by Equation (10):

f(x) = sign(Σᵢ αᵢ⁰ yᵢ K(x, xᵢ) + b⁰)    (10)

K(x, xᵢ) is the kernel function and should satisfy Mercer's condition, as mentioned previously. In addition, the value of the kernel function is equal to the inner product of the two vectors x and xᵢ in the feature space, φ(x) and φ(xᵢ).
Note that in the case of the radial basis function (RBF) kernel, represented in Equation (11), one has only two additional tuning parameters, (C, σ):

K(x, xᵢ) = exp(−‖x − xᵢ‖² / σ²)    (11)

In addition, suitable parameters will effectively improve the performance of the SVM.
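As an illustration of the decision function of Equation (10) with the RBF kernel of Equation (11), the following sketch evaluates the kernel expansion over a set of support vectors. The toy support vectors, multipliers, and bias used in the example are hypothetical values, not fitted ones:

```python
import numpy as np

def rbf_kernel(x, z, sigma):
    """RBF kernel of Equation (11): K(x, z) = exp(-||x - z||^2 / sigma^2)."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(z)) ** 2) / sigma ** 2)

def svm_decide(x, support_vectors, sv_labels, alphas, b, sigma):
    """Decision function of Equation (10): sign of the kernel expansion over the SVs."""
    s = sum(a * y * rbf_kernel(x, sv, sigma)
            for a, y, sv in zip(alphas, sv_labels, support_vectors))
    return 1 if s + b >= 0 else -1
```

For instance, with two hypothetical support vectors at 0 and 2 carrying labels −1 and +1, a query near 0 falls on the negative side and a query near 2 on the positive side, showing how σ controls how quickly a support vector's influence decays with distance.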

Patent Infringement Problem
The patent infringement problem was examined with various classification methods: the standard SVM, principal component analysis (PCA) [44] + SVM, the least squares support vector machine (LSSVM), the back propagation neural network (BPNN), the least squares support vector machine with new fuzzy selection (LSSVMFS), and the proposed SVMFS. Patent infringement refers to the unlawful use of a patented invention without the permission of the patent holder. The definition of patent infringement may vary by jurisdiction, but it typically includes using or selling the patented invention. Patent infringement applications can adopt user modelling techniques: researchers use a classification model to model court decisions according to collected datasets. In this study, the patent infringement dataset came from Taiwan courts. This study attempts to improve classification accuracy on the patent infringement dataset.

Data
The data used in this investigation were gathered from law offices. This study applies the SVMFS to a real patent infringement classification problem, which could assist lawyers in judging patent infringement cases through data-driven techniques. In total, 93 original data points were divided into training and testing datasets. The distributions of the data are shown in Table 2. The key input variables include "Months" (M), "International Patent Classification-Main item" (IPC-M), "International Patent Classification-Subitem" (IPC-S), "Case category" (CC), "Case category-Main item" (CC-M), "Case category-Subitem" (CC-S), "Service fee" (SF), "Client" (CL), "Authorized capital" (AC), "Agent" (A), and "Expected judgment" (EJ), and the decision variable is "Court decisions" (CD), as shown in Table 2. The numerical expression is defined by the lawyer's options. Table 2 also gives a summary of the statistical analyses conducted, such as the maximum, minimum, average, and standard deviation values for the various variables. Table 2 shows that the variation in the authorized capital is large, which may affect the classification performance. The court decision was that a patent infringement had occurred in 69 of the 93 cases (about 74.2%). Table 3 shows an input data example of patent infringement cases for the various classification models. Moreover, normalization is employed in this study to reduce the effect of the numbers used for the categories. Table 4 displays the correlation matrix of the variables. It can be observed that the correlations of all variables are low and not statistically significant in the patent infringement example.
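The text does not state which normalization scheme is employed. Column-wise min-max scaling is one common choice that would damp the large variation in a variable such as authorized capital; the sketch below is an assumption, not the authors' exact procedure:

```python
import numpy as np

def min_max_normalize(X):
    """Column-wise min-max scaling to [0, 1]; an assumed normalization that
    reduces the dominance of large-magnitude variables such as authorized capital."""
    Xmin, Xmax = X.min(axis=0), X.max(axis=0)
    span = np.where(Xmax > Xmin, Xmax - Xmin, 1.0)   # guard against constant columns
    return (X - Xmin) / span
```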

Statistical Comparison with Different Numbers of Testing Datasets
This research coded the classification models using MATLAB R2019b (The MathWorks, Inc., Natick, MA, USA). The classification models can be divided into new fuzzy selection and least squares support vector machine (LSSVM)/SVM models. The various numbers of clusters used in FCM (c = 3, 4, and 5) were tested with Xtest numbers of 10, 20, and 30 in the least squares support vector machine with fuzzy selection (LSSVMFS) and SVMFS models. Figure 2 shows the training error of fuzzy clustering on the patent infringement datasets. The experimental results show that the training error of fuzzy clustering with different numbers of clusters converges with excellent stability. The two measuring indices employed are the true positive rate (TPR) and the accuracy, specified by:

TPR = TP / (TP + FN)

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP, TN, FP, and FN denote the numbers of true positives, true negatives, false positives, and false negatives, respectively.
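The two measuring indices can be computed directly from the confusion counts, with the infringement class coded as the positive label 1 (the sketch below is ours; the study's experiments were run in MATLAB):

```python
def tpr_and_accuracy(y_true, y_pred):
    """TPR = TP / (TP + FN); accuracy = (TP + TN) / total,
    with 1 the positive (infringement) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    acc = correct / len(y_true)
    return tpr, acc
```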

In order to identify the LSSVMFS/SVMFS model parameter (the number of clusters c) and obtain the average and variance of the performance, the LSSVMFS/SVMFS models were executed 20 times for each of c = 3, 4, and 5. Table 5 indicates that the SVMFS model with c = 3, 4, and 5 for Xtest numbers of 10, 20, and 30 has a higher accuracy than the LSSVMFS. Furthermore, the SVMFS with c = 3 performs best (the average and variance are 0.870 and 0.103, respectively) for an Xtest number of 10, the SVMFS with c = 5 performs best (average 0.875, variance 0.06) for an Xtest number of 20, and the SVMFS with c = 4 performs best (average 0.85, variance 0.045) for an Xtest number of 30. The best performances for the different Xtest numbers are bolded in Table 5. Figures 3-5 illustrate boxplots for the different numbers of testing datasets for the LSSVMFS/SVMFS models with various cluster values; in these box plots, the lowest point is the minimum accuracy and the highest point is the maximum accuracy for the different numbers of testing datasets.
As demonstrated in the sensitivity analysis provided in Table 5 and Figures 3-5, the SVMFS model has a small variance across cluster values for the different numbers of testing datasets. Therefore, with various cluster values, the SVMFS model can provide stable results for various numbers of Xtest and give an accurate classification.
Table 6 shows the results for the patent infringement datasets using the various classification methods. SVMFS obtained higher accuracy rates of 0.87, 0.86, and 0.83, with TPRs of 1, 1, and 1, for 10, 20, and 30 testing datasets, respectively, than the traditional SVM, PCA + SVM, LSSVM, and back propagation neural network (BPNN) methods. It can be observed that PCA did not improve the accuracy of the SVM in the patent infringement example. The reason may be that the correlations of the variables are low; hence, the variable reduction performed by PCA may not obviously enhance SVM classification in this example.
Our proposed new fuzzy selection algorithm obviously improves the SVM/LSSVM classification performance on the patent infringement datasets. The SVMFS can also effectively discover the structure of the patent infringement datasets. The reason may be that the membership function of the FCM method obtains more information from the training set for this uncertain classification problem.

The Results Analysis
From these experiments, this study concludes (1) that the new fuzzy selection and SVM method can improve classification performance, and (2) that the quality of the data for processing is improved by the new fuzzy selection method when the structure of the dataset is very complex.


Conclusions
This study first developed the SVMFS and examined it using patent infringement datasets. The results indicate that the SVMFS offers a promising alternative for classification. Overall, the SVMFS can provide more stable and better performance with a higher level of accuracy. The superior performance of the SVMFS can be attributed to several factors: (1) the new fuzzy selection can enhance the quality of the data for processing. In the fuzzy clustering algorithm, the closeness of the data to the multiple cluster centers can be expressed by the degree of membership derived from the distance function. Based on this degree of membership, the roulette wheel selection can select a higher-membership dataset to enhance the SVM classification model. In the experiments, the proposed new fuzzy selection method, which increases the membership, yields better classification rates; and (2) the SVM can effectively determine the structure of the patent infringement datasets.
In terms of future work, applying the SVMFS to other types of machine learning datasets would be a challenging issue to study, and one-hot encoding could be considered as a way to input the categorical variables. Future studies could also consider using other data preprocessing techniques to improve the SVM, such as interval PCA [45].

Conflicts of Interest:
The authors declare no conflict of interest.