# Tag Estimation Method for ALOHA RFID System Based on Machine Learning Classifiers


## Abstract


## 1. Introduction

## 2. Related Works

## 3. Materials and Methods

### 3.1. Machine Learning Classifiers for Tag Estimation

#### 3.1.1. Experimental Setup

#### 3.1.2. Neural Network Model

**Definition 1.**

#### 3.1.3. Random Forest Model

## 4. Results and Comparison

## 5. Mobile RFID Reader—Implementation Feasibility

### 5.1. Current State-of-the-Art

### 5.2. Experimental Setup

### 5.3. MCU Hardware

### 5.4. Discussion

### 5.5. Limitations to the Study

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
|---|---|
| IoT | Internet of Things |
| RFID | Radio Frequency Identification |
| WIPT | Wireless Information and Power Transfer |
| DFSA | Dynamic Framed Slotted ALOHA |
| TDMA | Time-Division Multiple Access |
| ILCM | Improved Linearized Combinatorial Model |
| ML | Machine Learning |
| DT | Decision Tree |
| k-NN | k-Nearest Neighbour |
| SVM | Support Vector Machine |
| RF | Random Forest |
| DL | Deep Learning |
| ANN | Artificial Neural Network |
| NN | Neural Network |
| ReLU | Rectified Linear Unit |
| SGD | Stochastic Gradient Descent |
| Adam | Adaptive Moment Estimation |
| RMSProp | Root Mean Square Propagation |
| MAE | Mean Absolute Error |
| CMSIS-NN | Cortex Microcontroller Software Interface Standard Neural Network |
| MCU | Microcontroller Unit |


**Figure 1.** An example of an interrogating frame of frame size $L={2}^{Q}$. $i$ represents the size of a particular part of the frame.

**Figure 4.** Comparison of absolute errors for the Neural Network, Random Forest and ILCM models for frame sizes (**a**) $L=8$, (**b**) $L=16$ and (**c**) $L=256$.

**Figure 5.** Comparison of throughput for the NN model, ILCM and Optimal model for realizations with frame size $L=32$. (**a**) Throughput for the NN model, ILCM and Optimal model; (**b**) throughput for the NN model, ILCM and Optimal model for a larger number of tags.

**Figure 6.** Devices used in the test: Teensy 4.0 (**left**), Arduino DUE (**center**) and Raspberry Pi 4 (**right**). Source: own photo.

| ML Classifier | Advantages | Limitations |
|---|---|---|
| DT | Solves multi-class and binary problems; fast; can handle missing values; easily interpretable | Prone to overfitting; sensitive to outliers |
| k-NN | Solves multi-class and binary problems; easy to implement | Sensitive to noisy attributes; poor interpretability; slow to evaluate on large training sets |
| SVM | Solves binary problems; high accuracy; robust to noise; models non-linear relations well | Training is slow; high complexity and memory requirements |
| RF | Solves multi-class and binary problems; higher accuracy compared to other models; robust to noise | Can be slow for real-time predictions; not very interpretable |
| Naive Bayes | Solves multi-class and binary problems; simple to implement; fast | Ignores the underlying geometry of the data; requires predictors to be independent |
| ANN | Solves multi-class and binary problems; handles noisy data; detects non-linear relations amongst data; fast | Prone to overfitting on small datasets; computationally intensive |

| Q | L | S | C | E | N (Number of Tags) |
|---|---|---|---|---|---|
| 2 | 4 | 2 | 1 | 1 | 6 |
| 2 | 4 | 0 | 3 | 1 | 15 |
| … | … | … | … | … | … |
| 8 | 256 | 79 | 122 | 55 | 401 |
| 8 | 256 | 18 | 229 | 9 | 943 |
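Each row of the table above pairs one frame observation (the counts of singleton slots $S$, collision slots $C$ and empty slots $E$) with the true number of tags $N$. A minimal sketch of how one such DFSA frame realization can be simulated (the function and variable names here are ours, purely illustrative):

```python
import random

def simulate_frame(n_tags, frame_size, rng=None):
    """One DFSA frame: every tag picks a slot uniformly at random.
    Returns (S, C, E): counts of singleton, collision and empty slots."""
    rng = rng or random.Random()
    slots = [0] * frame_size
    for _ in range(n_tags):
        slots[rng.randrange(frame_size)] += 1
    singles = sum(1 for k in slots if k == 1)    # exactly one reply: tag identified
    collisions = sum(1 for k in slots if k > 1)  # two or more replies: collision
    empties = frame_size - singles - collisions  # no reply
    return singles, collisions, empties

# e.g. one realization for N = 6 tags in a frame of L = 2**2 = 4 slots
print(simulate_frame(6, 4, random.Random(1)))
```

A classical closed-form baseline (Schoute's estimate) infers the backlog as $S + 2.39\,C$; the classifiers studied here instead learn the mapping from $(S, C, E)$ to $N$ from such simulated data.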

| Hyperparameter | Values |
|---|---|
| n_estimators | 50, 100, 200, 500 |
| criterion | gini, entropy |
| max_depth | 3, 5, 10, 20 |
| max_features | auto, sqrt |
| min_samples_split | 2, 4, 6, 10 |
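A grid like the one above can be searched exhaustively with scikit-learn's `GridSearchCV`. The sketch below uses synthetic stand-in data rather than the paper's simulated dataset, and trims the grid for brevity; note also that `max_features="auto"` has been removed in recent scikit-learn releases, so `"sqrt"` is used here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.integers(0, 32, size=(120, 3))  # stand-in (S, C, E) features
y = rng.integers(0, 4, size=120)        # stand-in tag-count classes

param_grid = {                          # trimmed version of the grid in the table
    "n_estimators": [50, 100],
    "criterion": ["gini", "entropy"],
    "max_depth": [3, 5],
    "max_features": ["sqrt"],
    "min_samples_split": [2, 4],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=3, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)              # best combination found by 3-fold CV
```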

| Frame Size | n_estimators | criterion | max_depth | max_features | min_samples_split |
|---|---|---|---|---|---|
| L = 4 | 50 | gini | 5 | auto | 2 |
| L = 8 | 50 | gini | 5 | auto | 2 |
| L = 16 | 100 | gini | 10 | auto | 4 |
| L = 32 | 100 | entropy | 20 | sqrt | 2 |
| L = 128 | 500 | gini | 20 | sqrt | 2 |
| L = 256 | 200 | gini | 20 | sqrt | 4 |

| Frame Size | NN Accuracy | ILCM Accuracy | RF Accuracy |
|---|---|---|---|
| L = 4 | 33.54% | 23.55% | 33.59% |
| L = 8 | 28.56% | 27.28% | 28.22% |
| L = 16 | 24.05% | 23.27% | 24.37% |
| L = 32 | 19.78% | 17.06% | 19.54% |
| L = 128 | 11.25% | 4.42% | 12.12% |
| L = 256 | 5.74% | 2.80% | 9.46% |

| Frame Size | NN MAE | ILCM MAE | RF MAE |
|---|---|---|---|
| L = 4 | 2.23 | 2.182 | 2.23 |
| L = 8 | 2.56 | 2.61 | 2.5 |
| L = 16 | 3.57 | 4.31 | 3.69 |
| L = 32 | 5.23 | 6.98 | 5.324 |
| L = 128 | 11.27 | 17.38 | 11.93 |
| L = 256 | 16.06 | 27.29 | 18.19 |
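The MAE figures above are the mean of the absolute differences between the true and the estimated tag counts. For clarity, a one-line sketch (the example numbers below are made up, not the paper's results):

```python
def mean_absolute_error(y_true, y_pred):
    """MAE between true and estimated tag counts."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# made-up example: three frames, true vs. estimated tag counts
print(mean_absolute_error([6, 15, 401], [8, 15, 395]))  # (2 + 0 + 6) / 3 ≈ 2.667
```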

| | Original Model | Quantized Model |
|---|---|---|
| Model L = 4 | 33.33% | 32.72% |
| Model L = 8 | 28.53% | 27.58% |
| Model L = 16 | 23.11% | 22.04% |
| Model L = 32 | 19.00% | 12.08% |
| Model L = 128 | 8.03% | 4.08% |
| Model L = 256 | 6.71% | 3.03% |
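The accuracy loss in the quantized column comes from representing weights with fewer bits. A library-free toy sketch of symmetric 8-bit weight quantization, the idea behind that conversion (real deployments would use TensorFlow Lite or X-CUBE-AI rather than this routine; all names here are ours):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to float32 approximations."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 3)).astype(np.float32)  # stand-in layer weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

The rounding error per weight is at most half the quantization step, which is why accuracy degrades only moderately for the smaller models while memory drops to a quarter of the float32 footprint.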

| Frame Size | Model Size (Bytes) | Teensy 4.0 (ms) | Arduino DUE (ms) | Raspberry Pi 4 (ms) |
|---|---|---|---|---|
| L = 4 | 4320 | 22 | 897 | 143 |
| L = 8 | 5152 | 32 | 1284 | 159 |
| L = 16 | 6592 | 48 | 1983 | 173 |
| L = 32 | 13,824 | 120 | 4928 | 187 |
| L = 128 | 75,776 | 692 | 29,615 | 270 |
| L = 256 | 283,264 | 1669 | 111,374 | 648 |
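Execution times like those in the table above are obtained by averaging repeated inference calls. A simple timing harness of that kind, sketched for illustration (on the MCUs themselves timing would use `micros()` or hardware ticks rather than Python's clock; the names below are ours):

```python
import time

def time_ms(fn, *args, repeats=100):
    """Average wall-clock execution time of fn(*args) in milliseconds."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - t0) * 1000.0 / repeats

# example: time a stand-in "inference" workload
elapsed = time_ms(sum, range(1000))
print(f"{elapsed:.3f} ms per call")
```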

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Dujić Rodić, L.; Stančić, I.; Zovko, K.; Perković, T.; Šolić, P.
Tag Estimation Method for ALOHA RFID System Based on Machine Learning Classifiers. *Electronics* **2022**, *11*, 2605.
https://doi.org/10.3390/electronics11162605
