Outlier Detection and Explanation Method Based on FOLOF Algorithm
Abstract
1. Introduction
2. Related Work
3. Methods
3.1. Local Outlier Detection Algorithm LOF
3.2. Elbow Rule
3.3. Golden Section
4. Local Outlier Detection Method Based on Objective Function
4.1. Pruning Algorithm Based on the Objective Function of FCM
Algorithm 1 Pruning algorithm based on objective function value
Input: dataset D. Output: outlier candidate set $D_0$.
Step 1: Apply the FCM algorithm to the entire dataset to obtain the objective function value OF, and add the members of any small clusters to the outlier candidate set.
Step 2: Remove the remaining points one by one, recomputing the objective function to obtain each $OF_i$; calculate each reduction $DOF_i = OF - OF_i$ and the average reduction AvgDOF.
Step 3: Compare each $DOF_i$ with the threshold $T \cdot AvgDOF$. If $DOF_i > T \cdot AvgDOF$, put the ith point into the outlier candidate set $D_0$.
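A minimal sketch of this pruning step is given below. It assumes a small self-contained FCM implementation; the helper names `fcm_objective` and `prune_by_objective`, the fuzzifier `m = 2`, and the iteration budget are our own illustration, not the paper's code.

```python
import numpy as np

def fcm_objective(X, c, m=2.0, n_iter=100, seed=0, eps=1e-10):
    """Run standard fuzzy c-means and return the final objective value
    J = sum_i sum_k u_ik^m * ||x_i - v_k||^2 (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                 # each row of U sums to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + eps
        # standard membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return float((U ** m * d2).sum())

def prune_by_objective(X, c, T=1.0):
    """Algorithm 1 sketch: flag points whose removal shrinks the FCM
    objective by more than T times the average reduction."""
    OF = fcm_objective(X, c)
    DOF = np.array([OF - fcm_objective(np.delete(X, i, axis=0), c)
                    for i in range(X.shape[0])])
    return np.where(DOF > T * DOF.mean())[0]          # candidate indices
```

With T around 1, the rule keeps exactly those points whose leave-one-out reduction exceeds the average reduction, mirroring Step 3; the fixed seed keeps the leave-one-out objective values comparable across calls.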
4.2. Weighted LOF Algorithm
Algorithm 2 Weighted LOF algorithm
Input: dataset D. Output: weight set $W = \{\omega_1, \omega_2, \ldots, \omega_m\}$; outlier score of each data object.
Step 1: Perform Z-score standardization on dataset D to obtain D′, and calculate the proportion $P_{ij}$ of the jth dimension attribute of the ith data object in D′ through Equation (22):
$P_{ij} = \dfrac{x'_{ij}}{\sum_{i=1}^{n} x'_{ij}}$, (22)
where $1 \le i \le n$, $1 \le j \le m$, and $x'_{ij}$ is the jth dimensional attribute value of the ith data object of D′.
Step 2: Calculate the information entropy $E_j$ of the jth dimension attribute in D′ with Equation (23):
$E_j = -\dfrac{1}{\ln n}\sum_{i=1}^{n} P_{ij}\ln P_{ij}$, (23)
where $0 \le E_j \le 1$.
Step 3: Calculate the weight $\omega_j$ of the jth dimensional attribute with Equation (24):
$\omega_j = \dfrac{1 - E_j}{\sum_{j=1}^{m}(1 - E_j)}$. (24)
After the above steps, the weight set $W = \{\omega_1, \omega_2, \ldots, \omega_m\}$ over the m attributes is obtained, where $0 \le \omega_j \le 1$ and $\sum_{j=1}^{m} \omega_j = 1$.
Step 4: After each one-dimensional attribute is weighted by the entropy weight method, the traditional LOF algorithm is executed separately on each dimensional attribute, and the per-dimension values are combined according to Equation (25) to obtain the final LOF value as the anomaly score of the whole data object:
$\mathrm{LOF}_i = \sum_{j=1}^{m} \omega_j \cdot \mathrm{LOF}_{ij}$, (25)
where $\mathrm{LOF}_{ij}$ denotes the LOF value of the jth dimension of the ith data object.
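A runnable sketch of Steps 1–4 follows, using scikit-learn's `LocalOutlierFactor` on each dimension separately. The positive shift before computing proportions is our assumption (Z-scored values can be negative, which would break the entropy), and the helper name `entropy_weighted_lof` is illustrative:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def entropy_weighted_lof(D, k=10, eps=1e-12):
    """Entropy weight method + per-dimension LOF (Algorithm 2 sketch)."""
    n, m = D.shape
    Z = (D - D.mean(axis=0)) / D.std(axis=0)            # Step 1: Z-score
    # Assumption: shift each column to be positive so P_ij is a valid proportion.
    Zp = Z - Z.min(axis=0) + eps
    P = Zp / Zp.sum(axis=0)                             # Eq. (22)
    E = -(P * np.log(P + eps)).sum(axis=0) / np.log(n)  # Eq. (23)
    w = (1.0 - E) / (1.0 - E).sum()                     # Eq. (24)
    # Step 4: run LOF on each single attribute, then combine with weights.
    lof_per_dim = np.empty((n, m))
    for j in range(m):
        model = LocalOutlierFactor(n_neighbors=k)
        model.fit(Z[:, [j]])
        lof_per_dim[:, j] = -model.negative_outlier_factor_  # LOF_ij
    return lof_per_dim @ w, w                           # Eq. (25)
```

Given scores from `scores, w = entropy_weighted_lof(D, k=10)`, the top-p outliers are simply `np.argsort(-scores)[:p]`, which is how Step 4 of Algorithm 3 consumes these values.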
4.3. Description of an Objective Function-Based Local Outlier Detection Algorithm
Algorithm 3 Outlier detection method based on the FOLOF algorithm
Input: dataset D, number of neighborhood queries k, pruning threshold T, number of outliers p. Output: top p outliers.
Step 1: Use the elbow rule from Section 3.2 to determine the optimal number of clusters c for dataset D, so as to obtain the best clustering effect.
Step 2: With c fixed, apply the FCM algorithm to cluster dataset D, and record the change in the objective function value when each data point is removed. Prune dataset D with the threshold $T \cdot AvgDOF$ to derive the outlier candidate set $D_0$.
Step 3: In the outlier candidate set $D_0$, weight each dimension attribute by Equations (22)–(24), and then calculate the outlier score LOF of each data object by Equation (25).
Step 4: Sort the LOF values in descending order and output the first p data objects in $D_0$ as the final outlier set of dataset D.
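Putting the pieces together, here is a sketch of the full FOLOF pipeline, reusing the `fcm_objective`, `prune_by_objective`, and `entropy_weighted_lof` helpers sketched above. The elbow step simply picks the c where the objective's marginal decrease falls off the most, which is one common reading of the elbow rule, not necessarily the paper's exact criterion:

```python
import numpy as np

def choose_c_elbow(X, c_range=range(2, 10)):
    """Elbow rule sketch: pick c where the objective's improvement slows most."""
    J = np.array([fcm_objective(X, c) for c in c_range])
    drops = -np.diff(J)                       # improvement from c to c+1
    return list(c_range)[int(np.argmax(drops[:-1] - drops[1:])) + 1]

def folof(D, k=10, T=1.0, p=5):
    """FOLOF sketch: elbow rule -> FCM pruning -> entropy-weighted LOF -> top p."""
    c = choose_c_elbow(D)                            # Step 1
    cand = prune_by_objective(D, c, T)               # Step 2: candidate indices
    scores, _ = entropy_weighted_lof(D[cand], k=k)   # Step 3
    order = np.argsort(-scores)[:p]                  # Step 4: top p by LOF
    return cand[order], scores[order]
```

Because LOF is only computed on the pruned candidate set rather than on all of D, the expensive neighborhood queries run on a much smaller set, which is consistent with the runtime gap between FOLOF and plain LOF reported in the experiments.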
5. Outlier Explanation Method Based on FOLOF Algorithm
6. Experimental Analysis
6.1. Synthetic Dataset
6.1.1. Outlier Detection in Synthetic Dataset
6.1.2. Explanation of the Outliers in the D1 Dataset
6.2. UCI Datasets
6.2.1. Outlier Detection in UCI Datasets
6.2.2. Explanation of the Outliers in the UCI Datasets
6.3. NBA Player Dataset
6.4. Analysis and Discussion
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
ID | OFi | DOFi | ID | OFi | DOFi |
---|---|---|---|---|---|
1 | 620.861 | 4.726 | 36 | 618.171 | 7.416 |
2 | 621.007 | 4.579 | 37 | 616.718 | 8.869 |
3 | 624.956 | 0.631 | 38 | 617.530 | 8.057 |
4 | 621.119 | 4.468 | 39 | 622.921 | 2.667 |
5 | 621.859 | 3.692 | 40 | 582.946 | 42.641 |
6 | 590.532 | 35.055 | 41 | 620.065 | 5.522 |
7 | 561.651 | 63.936 | 42 | 625.105 | 0.482 |
8 | 616.612 | 8.975 | 43 | 625.460 | 0.127 |
9 | 616.737 | 8.851 | 44 | 623.401 | 2.187 |
10 | 620.375 | 5.213 | 45 | 580.656 | 44.931 |
11 | 623.438 | 2.149 | 46 | 623.428 | 2.159 |
12 | 623.586 | 2.001 | 47 | 625.368 | 0.220 |
13 | 617.592 | 7.995 | 48 | 621.934 | 3.653 |
14 | 624.872 | 0.715 | 49 | 621.112 | 4.475 |
15 | 624.898 | 0.688 | 50 | 620.913 | 4.674 |
16 | 622.478 | 3.109 | 51 | 621.922 | 1.698 |
17 | 624.673 | 0.915 | 52 | 597.338 | 28.250 |
18 | 619.707 | 5.881 | 53 | 618.114 | 7.473 |
19 | 581.889 | 43.698 | 54 | 623.217 | 2.370 |
20 | 618.095 | 7.492 | 55 | 623.097 | 2.490 |
21 | 621.764 | 3.823 | 56 | 623.842 | 1.745 |
22 | 624.180 | 1.407 | 57 | 619.280 | 6.307 |
23 | 623.996 | 1.590 | 58 | 593.137 | 32.451 |
24 | 618.602 | 6.985 | 59 | 624.614 | 0.973 |
25 | 616.488 | 9.099 | 60 | 615.654 | 9.933 |
26 | 618.493 | 7.094 | 61 | 621.773 | 3.815 |
27 | 624.182 | 1.405 | 62 | 624.578 | 1.009 |
28 | 621.800 | 3.787 | 63 | 621.914 | 3.673 |
29 | 624.103 | 1.484 | 64 | 623.464 | 2.124 |
30 | 604.217 | 21.370 | 65 | 618.801 | 6.787 |
31 | 621.894 | 3.693 | 66 | 624.016 | 1.571 |
32 | 623.293 | 2.295 | 67 | 620.344 | 5.243 |
33 | 620.711 | 4.876 | 68 | 620.007 | 5.581 |
34 | 617.619 | 7.969 | 69 | 572.235 | 53.352 |
35 | 619.891 | 5.696 | 70 | 586.382 | 39.205 |
ID | LOF | LOF1 | LOF2 |
---|---|---|---|
58 | 2.069 | 0.965 | 3.180 |
69 | 2.021 | 1.034 | 3.013 |
60 | 1.720 | 0.951 | 2.495 |
6 | 1.404 | 1.764 | 1.041 |
40 | 1.166 | 1.384 | 0.948 |
19 | 1.141 | 1.284 | 0.997 |
7 | 1.096 | 1.160 | 1.031 |
52 | 1.070 | 1.019 | 1.121 |
30 | 1.000 | 0.970 | 1.031 |
45 | 0.993 | 1.021 | 0.966 |
70 | 0.944 | 0.966 | 0.922 |
Dataset | CH Index | Dunn Index | I Index | S Index |
---|---|---|---|---|
Original dataset D1 | 65.4948 | 0.0431 | 41.4020 | 120.4470 |
Dataset with outliers removed | 103.3029 | 0.0966 | 52.5155 | 131.2707 |
ID | lrdk | lrdk1 | lrdk2 |
---|---|---|---|
58 | 0.070 | 0.184 | 0.077 |
69 | 0.075 | 0.166 | 0.081 |
60 | 0.085 | 0.187 | 0.097 |
6 | 0.071 | 0.089 | 0.354 |
40 | 0.084 | 0.110 | 0.398 |
19 | 0.084 | 0.118 | 0.382 |
7 | 0.102 | 0.151 | 0.357 |
52 | 0.099 | 0.168 | 0.332 |
30 | 0.092 | 0.184 | 0.357 |
45 | 0.104 | 0.168 | 0.388 |
70 | 0.093 | 0.185 | 0.407 |
Dataset | Data Volume | Number of Attributes | Number of Classes
---|---|---|---|
Iris | 150 | 4 | 3 |
Wine | 178 | 13 | 3 |
Yeast | 1484 | 8 | 10 |
UKM | 403 | 5 | 4 |
Seeds | 210 | 7 | 3 |
Wdbc | 569 | 30 | 2 |
Speech | 3686 | 400 | 4 |
Dataset | FCM | DBSCAN | PMLDOF |
---|---|---|---|
Iris | 1 | 0.85 | 0.9 |
Wine | 0.9 | 0.90 | 0.8 |
Yeast | 1 | 0.90 | 1 |
UKM | 1 | 1 | 0.9 |
Seeds | 0.9 | 0.8 | 0.8 |
Wdbc | 1 | 0.9 | 0.9 |
Speech | 1 | 0.8 | 0.9 |
Dataset | FCM | DBSCAN | PMLDOF |
---|---|---|---|
Iris | 29 | 35 | 32 |
Wine | 47 | 52 | 45 |
Yeast | 472 | 530 | 497 |
UKM | 81 | 90 | 88 |
Seeds | 54 | 68 | 63 |
Wdbc | 102 | 125 | 118 |
Speech | 1016 | 1424 | 1386 |
 | FOLOF | | | | | LOF | | | | 
---|---|---|---|---|---|---|---|---|---|---|
Dataset | TP | FP | Pr | Nf | t | TP | FP | Pr | Nf | t |
Iris | 9 | 1 | 0.900 | 0.100 | 6.586 | 6 | 4 | 0.600 | 0.400 | 15.182 |
Wine | 6 | 2 | 0.750 | 0.250 | 55.221 | 5 | 3 | 0.625 | 0.375 | 290.532 |
Yeast | 18 | 2 | 0.900 | 0.100 | 387.552 | 15 | 5 | 0.750 | 0.250 | 1318.221 |
UKM | 4 | 1 | 0.800 | 0.200 | 187.462 | 4 | 1 | 0.800 | 0.200 | 562.113 |
Seeds | 9 | 1 | 0.900 | 0.100 | 61.325 | 7 | 3 | 0.700 | 0.300 | 310.234 |
Wdbc | 4 | 1 | 0.800 | 0.200 | 213.256 | 3 | 2 | 0.600 | 0.400 | 627.453 |
Speech | 8 | 2 | 0.800 | 0.200 | 965.523 | 7 | 3 | 0.700 | 0.300 | 2563.278 |
 | Complete Dataset | | | | Dataset After Removing Outliers | | | 
---|---|---|---|---|---|---|---|---|
Dataset | CH Index | Dunn Index | I Index | S Index | CH Index | Dunn Index | I Index | S Index |
Iris | 512.642 | 0.451 | 21.203 | 124.695 | 548.865 | 0.604 | 20.930 | 101.740 |
Wine | 50.390 | 0.214 | 4.963 | 144.688 | 50.647 | 0.218 | 5.295 | 122.652 |
Yeast | 194.378 | 0.024 | 0.013 | 2.759 | 189.394 | 0.025 | 0.013 | 2.862 |
UKM | 92.189 | 0.069 | 0.076 | 6.226 | 92.560 | 0.069 | 0.079 | 6.190 |
Seeds | 155.975 | 0.059 | 9.097 | 5.362 | 172.956 | 0.067 | 7.964 | 5.763 |
Wdbc | 232.061 | 0.024 | 4.298 | 4.535 | 276.768 | - | 4.328 | 4.856
Speech | 1698.421 | 0.017 | 4.597 | 6.458 | 1856.015 | 0.033 | 4.683 | 6.854 |
Datasets | FOLOF | LOF | IFOREST | BLDOD | PEHS |
---|---|---|---|---|---|
Iris | 0.9545 | 0.7873 | 0.9256 | 0.9377 | 0.9432 |
Wine | 0.8978 | 0.7781 | 0.8801 | 0.8356 | 0.8618 |
Yeast | 0.9896 | 0.8532 | 0.9254 | 0.9156 | 0.9635 |
UKM | 0.8789 | 0.8511 | 0.8412 | 0.8612 | 0.8563 |
Seeds | 0.9654 | 0.8735 | 0.9463 | 0.9487 | 0.9502 |
Wdbc | 0.8951 | 0.8423 | 0.8856 | 0.8651 | 0.8752 |
Speech | 0.9372 | 0.8947 | 0.9284 | 0.9301 | 0.9221 |
Methods | R+ | R− | p-Value | Null Hypothesis
---|---|---|---|---|
FOLOF vs. LOF | 28 | 0 | 0.0156 | reject |
FOLOF vs. IFOREST | 28 | 0 | 0.0156 | reject |
FOLOF vs. BLDOD | 28 | 0 | 0.0156 | reject |
FOLOF vs. PEHS | 28 | 0 | 0.0156 | reject |
Datasets | Weak Outliers | Ordinary Outliers | Strong Outliers |
---|---|---|---|
Iris | 103, 105, 108, 109, 110 | 101, 104, 106 | 107 |
Wine | 131, 132, 133, 134 | 138 | 136 |
Yeast | 1117, 1121, 1123, 1124, 1125, 1127, 1128 | 1119, 1122, 1126, 1133, 1134, 1135 | 1123, 1129, 1130, 1131, 1132 |
UKM | 226, 227 | 225 | 229 |
Seeds | 141, 143, 145, 146, 149, 150 | 142 | 147, 148
Wdbc | 572, 573 | 574 | 570 |
Speech | 3689, 3694, 3695 | 3687, 3690, 3696 | 3691, 3692 |
ID | LOF | LOF1 | LOF2 | LOF3 | LOF4 | LOF5 |
---|---|---|---|---|---|---|
6 | 1.779 | 1.331 | 4.512 | 1.057 | 1.000 | 1.061 |
16 | 1.636 | 1.021 | 1.113 | 3.862 | 1.000 | 1.100 |
5 | 1.605 | 1.530 | 1.030 | 3.348 | 1.000 | 1.061 |
21 | 1.602 | 0.952 | 4.105 | 0.988 | 1.000 | 1.111 |
79 | 1.489 | 1.431 | 1.545 | 1.099 | 2.429 | 0.961 |
146 | 1.485 | 1.051 | 0.952 | 3.076 | 0.958 | 1.330 |
17 | 1.457 | 0.983 | 3.292 | 0.967 | 1.000 | 1.038 |
1 | 1.436 | 2.664 | 1.112 | 1.396 | 1.000 | 1.028 |
3 | 1.424 | 1.805 | 1.767 | 1.510 | 1.029 | 1.061 |
55 | 1.400 | 1.403 | 1.946 | 1.601 | 1.000 | 1.040 |
Local Outlier Factors | Weak Outliers | Ordinary Outliers | Strong Outliers |
---|---|---|---|
LOF1 | 6, 16, 17, 21, 55, 146 | 5, 79, 3 | 1 |
LOF2 | 1, 3, 5, 16, 55, 79, 146 | 17 | 6, 21 |
LOF3 | 1, 3, 6, 17, 21, 79, 55 | - | 5, 16, 146 |
LOF4 | 1, 3, 5, 6, 16, 17, 21, 55, 146 | - | 79 |
LOF5 | 1, 3, 5, 6, 17, 55, 79 | 16, 21 | 146 |