Big Data and Cognitive Computing
  • Editor’s Choice
  • Article
  • Open Access

1 July 2022

Lightweight AI Framework for Industry 4.0 Case Study: Water Meter Recognition

1 CES Lab, ENIS, Sfax University, Sfax 3029, Tunisia
2 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Faculty of Engineering, Université de Moncton, Moncton, NB E1A3E9, Canada
4 Spectrum of Knowledge Production & Skills Development, Sfax 3027, Tunisia
This article belongs to the Special Issue Advancements in Deep Learning and Deep Federated Learning Models

Abstract

The evolution of applications in telecommunications, networking, computing, and embedded systems has led to the emergence of the Internet of Things and Artificial Intelligence. The combination of these technologies has improved productivity by optimizing consumption and facilitating access to real-time information. In this work, we focus on the Industry 4.0 and Smart City paradigms and propose a new approach to monitor and track water consumption using OCR together with artificial intelligence, in particular the YOLOv4 machine learning model. The goal of this work is to provide optimized results in real time. The recognition rate obtained with the proposed algorithms is around 98%.

1. Introduction

Nowadays, smart cities are conceived to integrate multiple information types and technologies to offer more and better services. Research progress in this domain has solved many complex problems in recent years, in areas such as pollution, e-health, transport, and remote data collection.
Moreover, the use of real-time data and Artificial Intelligence (AI) has increased efficiency and offered flexibility and ease of use. In fact, machine and deep learning enable systems to correctly interpret external data, learn from such data, and use this knowledge to achieve specific goals and tasks through flexible adaptation [1,2,3].
The Internet of Things (IoT) is used in many areas: not only smart cities, smart agriculture, and Industry 4.0 [4], but also sports and e-health (in which case it is called the IoMT [5,6,7]). When data are collected from different sensors via the IoT, artificial intelligence is used to process them, generate forecasts, and achieve considerable savings in consumption.
In this paper, we focus on remote automatic data treatment using artificial intelligence for water monitoring. This task will be performed automatically without an agent, which allows a low-cost solution. In addition, this real time data helps customers to obtain more comprehensive visibility of their water consumption.
In this work, we start from the real case of the water-consumption management infrastructure in Tunisia and apply AI to facilitate management and minimize consumption for users.
The contributions of this paper may be summarized as follows.
  • An AI- and OCR-based model is proposed to detect and extract water meter numbers.
  • The model is lightweight enough to be implemented on smartphones.
  • The model detects, extracts, and calculates pertinent data, such as consumption and date, and stores them in a database.
  • The accuracy obtained from the object detection model is about 98%.
In light of these results, several perspectives are proposed.
This paper is articulated around four parts:
- The state of the art, which reviews the application of AI in the context of smart cities, particularly in consumption management.
- The proposed approach, which facilitates data collection, storage, and approximation of consumption.
- The results of the implementation of the proposed approach.
- A conclusion and perspectives of the proposed work.

3. Proposed Approach

We present our proposed system, which is based on deep learning. The software architecture describes, in a symbolic and schematic way, the different elements of the computer system, their interrelations, and their interactions.
Our approach comprises three units (Figure 2):
Figure 2. Software architecture.
- Display unit: the mobile application;
- Image processing unit: the AI model, which will be integrated within the mobile application;
- Water providers' data storage unit: the database.

3.1. Specification

It is important to identify and specify the functionalities that will be implemented, as this determines what we expect from our application. Indeed, our system is modeled using diagrams that follow the UML modeling language.
To satisfy users' needs, our system must provide the main services illustrated in Figure 3. In Figure 4, we present the use case diagram of the online detection system.
Figure 3. General use case diagram.
Figure 4. Online detection use case diagram.

3.2. Image Processing

The image processing unit is a class containing methods to detect the contours of water meter numbers, extract these numbers, and calculate the monthly consumption of each meter. Once the outline of the meter is detected, an object named meter is created. This object is then processed by a fast OCR algorithm. Once the water meter number is obtained, the consumption of the meter is calculated automatically, and the image, the detected number, and the consumption figure, as well as the GPS location, the date, and the name of the field, are recorded in our database. Meter is a class; all meters are objects generated by the object detection process.
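The unit described above can be sketched as a small class. This is an illustrative sketch, not the authors' code: the class name, attributes, and example values are hypothetical, and consumption is assumed to be the difference between two successive readings.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Meter:
    """Hypothetical meter object produced by the object detection process."""
    meter_number: str          # digits extracted by the OCR step
    current_reading: int       # value read from the counter, in units
    previous_reading: int      # last value stored in the database
    gps_location: tuple        # (latitude, longitude) of the meter
    reading_date: date = field(default_factory=date.today)

    def monthly_consumption(self) -> int:
        # Consumption assumed to be the difference between readings
        return self.current_reading - self.previous_reading

m = Meter("04512367", current_reading=1250, previous_reading=1180,
          gps_location=(34.74, 10.76))
print(m.monthly_consumption())  # 70
```

Each detected meter would yield one such object, whose fields are then written to the database together with the photograph.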
The following diagram (Figure 5) is used to represent the triggering of events according to the system states and to model parallelizable behaviors. It is used to describe a workflow.
Figure 5. Flow chart of the Program.
In our approach, the image processing goes through several steps: first, the image is inserted; then, each image undergoes the two phases of the program: detection and number extraction for each meter.

3.2.1. Yolo Meter Detection

The object detection model will be integrated into a mobile application, so we need to choose the fastest, lightest, and most accurate one. The major reason why object detection cannot be handled by a standard convolutional network followed by a fully connected layer is that the length of the output layer is variable, since the number of occurrences of the objects of interest is not fixed. A straightforward approach would be to take different regions of interest from the image and use a CNN to classify the presence of the object within each region.
The problem with this approach is that the objects of interest might have different spatial locations within the image and different aspect ratios. Hence, a huge number of regions would have to be selected, which could blow up computationally. Therefore, algorithms such as R-CNN and YOLO have been developed to find these occurrences rapidly.
Moreover, in YOLOv4, features are predicted for each layer using a Feature Pyramid Network, which solves the problem of missing small objects by exploiting high-resolution features. The authors in [27] compare the precision of Faster R-CNN, YOLOv4, and SSD over several object detection experiments and conclude that YOLOv4 shows the highest precision, as shown in Figure 6.
Figure 6. Comparison of performance of Deep learning Models.

3.2.2. Yolo Implementation

The approach is based on the Darknet neural network framework for training and testing, targeting mobile applications. The framework uses multi-scale training, massive data augmentation, and batch normalization. It is an open-source neural network framework written in C and CUDA.
For deep learning detection, a dataset is needed. It generally integrates several data types (video files, images, texts, sounds, or even statistics), and their grouping forms a set that enables automatic learning and model creation. Thus, the first step is to collect images, exploiting data augmentation or image enhancement if necessary (1100 images). The next step is data annotation/labeling (Figure 7). Our dataset is in the Darknet YOLO format, so as to train YOLOv4 on Darknet with our custom dataset, with the data divided into three folders: training was performed on 70% of the images, validation on 10%, and testing on 20%.
Figure 7. Data Labeling.
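The 70/10/20 split of the 1100 annotated images can be sketched as follows. This is a minimal illustration, assuming the images sit in one folder and the split lists are consumed by Darknet; the file names and seed are hypothetical.

```python
import random

def split_dataset(image_paths, seed=0):
    """Shuffle once and split into 70% train, 10% validation, 20% test."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # fixed seed for reproducibility
    n = len(paths)
    n_train = int(0.70 * n)
    n_val = int(0.10 * n)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]
    return train, val, test

# Hypothetical file names for the 1100-image dataset
images = [f"data/obj/meter_{i:04d}.jpg" for i in range(1100)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 770 110 220
```

Each list would then be written to the train/valid/test file lists that the Darknet configuration points to.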
To be able to integrate the model in mobile applications, the weights are converted to TensorFlow Lite.

3.3. Meter Number Extraction

OCR methods use algorithms to recognize characters, and there are two main variants. In pattern recognition, the algorithm is trained with examples of characters in different fonts and then uses this training to recognize characters in the input. In feature recognition, the algorithm has a specific set of rules regarding the features of characters, for example the number of angles and crossed lines, and uses these rules to recognize the text [28,29,30].
In our approach, the open-source OCR engine Tesseract is used and deployed in the mobile application. The Tesseract process flow is presented in Figure 8.
Figure 8. Tesseract process flow.
The image processing starts by eliminating image noise with non-local means denoising and Gaussian blur. Next, four different thresholds are used to preprocess the images. This binarization is based on Niblack's algorithm, which creates a threshold image: a rectangular window glides across the image and computes the threshold value for the center pixel from the mean and variance of the gray values within the window.
Another method based on Otsu's histogram thresholding is also used. With this setup, we develop an effective thresholding technique for diverse test situations. The results are shown in Figure 9.
Figure 9. Image processing result.
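The two thresholding steps above can be sketched in plain NumPy so the logic is explicit. This is an illustrative re-implementation under stated assumptions (Niblack with threshold T = mean + k·std over a sliding window, and Otsu maximizing between-class variance); a real pipeline would use OpenCV's denoising and thresholding functions instead, and the window size and k are hypothetical.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def niblack_threshold(gray, window=15, k=-0.2):
    """Local binarization: T = mean + k * std over a sliding window."""
    pad = window // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    win = sliding_window_view(padded, (window, window))
    mean = win.mean(axis=(-2, -1))
    std = win.std(axis=(-2, -1))
    # Pixels brighter than the local threshold become white (255)
    return (gray > mean + k * std).astype(np.uint8) * 255

def otsu_threshold(gray):
    """Global threshold maximizing between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    total_mean = (np.arange(256) * hist).sum()
    best_t, best_var = 0, 0.0
    w0 = c0 = 0.0
    for t in range(256):
        w0 += hist[t]          # weight of the background class
        if w0 == 0 or w0 == total:
            continue
        c0 += t * hist[t]      # cumulative intensity of the background
        m0 = c0 / w0
        m1 = (total_mean - c0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a clean bimodal crop, Otsu's threshold falls between the two gray-level clusters, separating digits from background.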
After designing the architecture of our system, the next step is dedicated to the implementation and realization of the mobile application.

3.4. Mobile Application

To facilitate access to and recording of data by the agents who read the meters, it is important to use their smartphones. Since these smartphones have different characteristics, we proposed to use lightweight mobile applications and chose Android as the mobile platform.
Indeed, since the majority of smartphones used in this work are Android smartphones, we decided to use Android Studio as the development environment. We could have used cross-platform frameworks, but given the need for optimized lightweight software, we chose to build a native Android application.
On this platform, we used a lightweight AI framework to perform the digit recognition. The application then allows us to save the data on the phone.
As soon as the system is connected via 3G and/or WiFi, the data are saved in the main database. In the results part, screenshots display the result of the implementation.

4. Obtained Results

4.1. Counter Detection

In this part, we will present the results of the implementation of the application. Figure 10 illustrates the object detection result.
Figure 10. Object detection result.
Using a smartphone, we detect the counter number, as described in the proposed approach part.
After training the custom tiny-YOLOv4 object detector and saving the obtained weights, we repeated the process, re-modifying the configuration file to obtain the weights that achieved the highest mAP score on our training set.
Once the training was finished, we used our trained custom tiny-YOLOv4 detector to run inference on test images. When we ran this detector on a test image, we successfully obtained the bounding box of the detected water meter number.
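To arrive at a single bounding box per water-meter number, the detector's raw candidates are typically filtered by confidence and reduced with non-maximum suppression. The following is a generic sketch of that post-processing in plain NumPy; the boxes, scores, and thresholds are illustrative, not the values used in the paper.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    """Keep confident boxes, greedily suppressing heavy overlaps."""
    order = [int(i) for i in np.argsort(scores)[::-1]
             if scores[i] >= conf_thresh]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two overlapping candidates for one counter, plus a low-score spurious box
boxes = np.array([[10, 10, 110, 40], [12, 11, 112, 42], [200, 50, 300, 80]])
scores = np.array([0.92, 0.88, 0.30])
print(nms(boxes, scores))  # [0]
```

The duplicate of the highest-scoring box and the low-confidence box are both discarded, leaving one detection to crop and pass to the OCR stage.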
Finally, we converted the weights to TensorFlow's .pb representation and then to TensorFlow Lite to prepare the model for integration into the mobile application.

4.2. Overview Process

After implementing the water meter counter detection process, the obtained result is illustrated in Figure 11, which shows the number recognition. The proposed process is as follows:
Figure 11. Process overview.
  • Detect the requested area from the image: water meter counter;
  • Perform an image processing on the images;
  • Pass the images to Tesseract;
  • Store the results of Tesseract in the desired format.
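The four steps above can be chained as one reading function. This is a structural sketch only: detect_meter, preprocess, and run_tesseract are stubs standing in for the tiny-YOLOv4 detector, the preprocessing of the OCR section, and the Tesseract call, and all names and values are hypothetical.

```python
def detect_meter(image):
    # Stub: the detector would return the counter's bounding box
    return (10, 10, 110, 40)

def preprocess(crop):
    # Stub: denoising and thresholding, as in the image processing unit
    return crop

def run_tesseract(crop):
    # Stub: Tesseract OCR applied to the preprocessed crop
    return "04512367"

def read_meter(image):
    box = detect_meter(image)              # 1. detect the requested area
    crop = preprocess(image)               # 2. image processing on the crop
    digits = run_tesseract(crop)           # 3. pass the crop to Tesseract
    return {"number": digits, "box": box}  # 4. result in the desired format

result = read_meter(image=None)  # a real call would pass the photo array
```

Each stub would be replaced by the corresponding component of the pipeline; the returned dictionary mirrors the record stored per meter.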

4.3. Application Realization

After creating both the AI model and the Android mobile application and integrating the model within the application, we obtain a functional mobile application that fulfills all the project goals. The welcome interface (Figure 12) is the first window encountered after launching the mobile application. This interface welcomes the user and summarizes the role of the application. It lasts for 10 s, after which one of two scenarios arises:
Figure 12. The welcome interface.
  • If the user is not authenticated, they will be directed to the Sign in interface.
  • Otherwise, they will be directed to the Home interface.
The authentication interface, depicted in Figure 13, is the second window encountered after launching the application and seeing the welcome interface. Its role is to secure access to the application; it allows the entry of a login and a password. After the entered information is validated via the sign-in button, two scenarios are possible:
Figure 13. The sign-in interface.
  • If the information is validated, the user will be redirected to the main interface of the application, which is the Home interface.
  • If not, an error message will be displayed.
Figure 14 shows the general and main menu of the application. It has four sections: “Online detection”, “Take picture”, “Real time detection”, and “Show uploads”.
Figure 14. The Home interface.
Once the operator opens the "Online detection" interface, as shown in Figure 15, they can enter the field name and the zip code, and pick a picture either from the gallery or from the camera. Once the user has picked a picture, the water meter number is detected and extracted, and both the image and the number are displayed. The operator can then press the register meter button to go to the second interface, where they can enter the current water units, see the monthly water consumption and the date of the recording, and locate the water meter by clicking on the location view button. Finally, the operator presses the submit button to save all the water meter data in Firebase.
Figure 15. The Online detection interfaces.
When the user opens the “Real time detection” interface (Figure 16), they can test the Yolo detection model and its accuracy to know the limits of the AI model and to have the best experience possible with this mobile application to perform their job successfully.
Figure 16. The real time detection interface.
When the user opens the “Show uploads” interface (Figure 17), they will be able to both view and delete any saved record.
Figure 17. The show uploads interface.

5. Discussion

The present work was carried out within the framework of an industrial collaboration with the Company of Production and Management of Water in Tunisia. We used a dataset of 1100 images.
The training was performed on 70% of the images, validation on 10%, and testing on 20%. We then conducted a test on 150 real images photographed from water meters. The detection process was applied to the 150 images, including some cases with numbers between two positions, and resulted in 2 erroneous and 148 correct values.
Thanks to the learning approach, for such ambiguous cases the proposed system chose the lower value to store in the database. The obtained recognition rate was 98.67%.
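The reported recognition rate follows directly from the test counts:

```python
# 148 correct readings out of 150 real test images
correct, total = 148, 150
rate = 100 * correct / total
print(f"{rate:.2f}%")  # 98.67%
```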

6. Conclusions

The present work was carried out within the framework of an industrial collaboration with the Company of Production and Management of Water in Tunisia. This prototype will be developed and used in the context of the company digitization and governance.
The objective of this paper was to develop a model based on deep learning and OCR algorithms that allows us to detect and extract water meter numbers. Moreover, this model was integrated into an Android mobile application. In fact, the meter images are taken by the cameras of the operators' smartphones. Our application then allows them to detect, extract, and calculate the monthly consumption of each meter, and finally save all relevant information, such as the meter number, location, and date, in Firebase.
The accuracy obtained from the object detection model with tiny-YOLOv4 is 98%. The results obtained, together with the studies and experiments carried out, have also highlighted areas of improvement for our algorithm, such as enriching the dataset and optimizing the speed and efficiency of our system.
Despite the results obtained, several perspectives of this work are being developed.
The future work aims at making a diagnosis of intelligent consumption, on the one hand, and a secure data backup using blockchain technology, on the other hand.
The blockchain will create a system allowing traceability not only of the data but also of the different transactions that take place.

Author Contributions

This paper is the result of collaboration between different authors from different universities. Conceptualization, T.F., J.K.; methodology, H.H., T.F.; software; validation, J.K. and H.H.; formal analysis, M.H.; investigation, H.E.; resources, M.H.; data curation, H.H.; writing—original draft preparation, J.K.; writing—review and editing, T.F.; visualization, M.H.; supervision, H.H.; project administration, T.F.; funding acquisition, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R125), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

No data availability.

Acknowledgments

The authors thank Natural Sciences and Engineering Research Council of Canada (NSERC) and New Brunswick Innovation Foundation (NBIF) for the financial support of the global project. These granting agencies did not contribute to the design of the study and collection, analysis, or interpretation of data.

Conflicts of Interest

There is no conflict of interest.

References

  1. Balouch, S.; Abrar, M.; Abdul Muqeet, H.; Shahzad, M.; Jamil, H.; Hamdi, M.; Malik, A.S.; Hamam, H. Optimal Scheduling of Demand Side Load Management of Smart Grid Considering Energy Efficiency. Energy Res. 2022, 18, 861571. [Google Scholar] [CrossRef]
  2. Masood, B.; Guobing, S.; Nebhen, J.; Rehman, A.U.; Iqbal, M.N.; Rasheed, I.; Bajaj, M.; Shafiq, M.; Hamam, H. Investigation and Field Measurements for Demand Side Management Control Technique of Smart Air Conditioners located at Residential, Commercial, and Industrial Sites. Energies 2022, 15, 2482. [Google Scholar] [CrossRef]
  3. Asif, M.; Ali, I.; Ahmad, S.; Irshad, A.; Gardezi, A.A.; Alassery, F.; Hamam, H.; Shafiq, M. Industrial Automation Information Analogy for Smart Grid Security. CMC-Comput. Mater. Contin. 2022, 71, 3985–3999. [Google Scholar] [CrossRef]
  4. Boyes, H.; Hallaq, B.; Cunningham, J.; Watson, T. The industrial internet of things (IIoT): An analysis framework. Comput. Ind. 2018, 101, 1–12. [Google Scholar] [CrossRef]
  5. França, R.P.; Monteiro, A.C.B.; Arthur, R.; Iano, Y. An Overview of the Internet of Medical Things and Its Modern Perspective. In Efficient Data Handling for Massive Internet of Medical Things. Internet of Things (Technology, Communications and Computing); Chakraborty, C., Ghosh, U., Ravi, V., Shelke, Y., Eds.; Springer: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
  6. Frikha, T.; Chaari, A.; Chaabane, F.; Cheikhrouhou, O.; Zaguia, A. Healthcare and Fitness Data Management Using the IoT-Based Blockchain Platform. J. Healthc. Eng. 2021, 2021, 9978863. [Google Scholar] [CrossRef]
  7. Frikha, T.; Chaabane, F.; Aouinti, N.; Cheikhrouhou, O.; Ben Amor, N.; Kerrouche, A. Implementation of Blockchain Consensus Algorithm on Embedded Architecture. Secur. Commun. Netw. 2021, 2021, 9918697. [Google Scholar] [CrossRef]
  8. Kagermann, H.; Wahlster, W.; Helbig, J. Securing the Future of German Manufacturing Industry: Recommendations for Implementing the Strategic Initiative INDUSTRIE 4.0; Final Report of the Industrie 4.0 Working Group; Forschungsunion im Stifterverband fur die Deutsche Wirtschaft e.V.: Berlin, Germany, 2013. [Google Scholar]
  9. Duan, L.; Da Xu, L. Data Analytics in Industry 4.0: A Survey. Inf. Syst. Front. 2021, 1–17. [Google Scholar] [CrossRef]
  10. Perrier, N.; Bled, A.; Bourgault, M.; Cousin, N.; Danjou, C.; Pellerin, R.; Roland, T. Construction 4.0: A survey of research trends. J. Inf. Technol. Constr. 2020, 25, 416–437. [Google Scholar] [CrossRef]
  11. Serpanos, D.; Wolf, M. Industrial Internet of Things. In Internet-of-Things (IoT) Systems; Springer: Cham, Switzerland, 2018. [Google Scholar] [CrossRef]
  12. Qin, J.; Liu, Y.; Grosvenor, R. A Categorical Framework of Manufacturing for Industry 4.0 and Beyond. Procedia CIRP 2016, 52, 173–178. [Google Scholar] [CrossRef] [Green Version]
  13. Blanchet, M.; Rinn, T. The Industrie 4.0 Transition Quantified. Roland Berger Think Act, Munich. 2016. Available online: www.rolandberger.com/publications/publication_pdf/roland_berger_industry_40_20160609.pdf (accessed on 15 May 2022).
  14. Preuveneers, D.; Ilie-Zudor, E. The intelligent industry of the future: A survey on emerging trends, research challenges and opportunities in industry 4.0. J. Ambient. Intell. Smart Environ. 2017, 9, 287–298. [Google Scholar] [CrossRef] [Green Version]
  15. Schumacher, A.; Erol, S.; Sihn, W. A Maturity Model for Assessing Industry 4.0 Readiness and Maturity of Manufacturing Enterprises. Procedia CIRP 2016, 52, 161–166. [Google Scholar] [CrossRef]
  16. Fathalli, A.; Romdhane, M.S.; Vasconcelos, V.; Ben Rejeb Jenhani, A. Biodiversity of cyanobacteria in Tunisian freshwater reservoirs: Occurrence and potent toxicity—A review. J. Water Supply Res. Technol.-Aqua 2015, 64, 755–772. [Google Scholar] [CrossRef] [Green Version]
  17. Gallo, I.; Zamberletti, A.; Noce, L. Robust Angle Invariant GAS Meter Reading. In Proceedings of the 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Adelaide, SA, Australia, 23–25 November 2015. [Google Scholar] [CrossRef]
  18. Quintanilha, D.B.P.; Costa, R.W.S.; Diniz, J.O.B.; de Almeida, J.D.S.; Braz, G.; Silva, A.C.; de Paiva, A.C.; Monteiro, E.M.; Froz, B.R.; Piheiro, L.P.A.; et al. Automatic consumption reading on electromechanical meters using HoG and SVM. In Proceedings of the 7th Latin American Conference on Networked and Electronic Media (LACNEM 2017), Valparaiso, Chile, 6–7 November 2018. [Google Scholar] [CrossRef]
  19. Gonçalves, J.C.; Centeno, T.M. Utilização De Técnicas De Processamento De Imagens E Classificação De Padrões No Reconhecimento De Dígitos Em Imagens De Medidores De Consumo De Gás Natural. Abakos (Brasil) 2017, 5, 59–78. [Google Scholar] [CrossRef] [Green Version]
  20. Cerman, M.; Shalunts, G.; Albertini, D. A mobile recognition system for analog energy meter scanning. In International Symposium on Visual Computing; Springer: Cham, Switzerland, 2016. [Google Scholar] [CrossRef]
  21. Gomez, L.; Rusinol, M.; Karatzas, D. Cutting Sayre’s Knot: Reading Scene Text without Segmentation. Application to Utility Meters. In Proceedings of the 2018 13th IAPR International Workshop on Document Analysis Systems (DAS), Vienna, Austria, 24–27 April 2018. [Google Scholar] [CrossRef]
  22. Elrefaei, L.A.; Bajaber, A.; Natheir, S.; Abusanab, N.; Bazi, M. Automatic electricity meter reading based on image processing. In Proceedings of the 2015 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), Amman, Jordan, 3–5 November 2015. [Google Scholar] [CrossRef]
  23. Tsai, C.M.; Shou, T.D.; Chen, S.C.; Hsieh, J.W. Use SSD to Detect the Digital Region in Electricity Meter. In Proceedings of the 2019 International Conference on Machine Learning and Cybernetics (ICMLC), Kobe, Japan, 7–10 July 2019. [Google Scholar] [CrossRef]
  24. Yang, F.; Jin, L.; Lai, S.; Gao, X.; Li, Z. Fully convolutional sequence recognition network for water meter number reading. IEEE Access 2019, 7, 11679–11687. [Google Scholar] [CrossRef]
  25. Li, C.; Su, Y.; Yuan, R.; Chu, D.; Zhu, J. Light-weight spliced convolution network-based automatic water meter reading in smart city. IEEE Access 2019, 7, 174359–174367. [Google Scholar] [CrossRef]
  26. Salomon, G.; Laroca, R.; Menotti, D. Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020. [Google Scholar]
  27. Zuo, L.; He, P.; Zhang, C.; Zhang, Z. A robust approach to reading recognition of pointer meters based on improved mask-RCNN. Neurocomputing 2020, 388, 90–101. [Google Scholar] [CrossRef]
  28. Jeong-ah, K.; Ju-yeong, S.; Se-ho, P. Comparison of Faster RCNN, YOLO and SSD for Real time vehicle type recognition. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics—Asia (ICCE-Asia), Seoul, Korea, 1–3 November 2020. [Google Scholar]
  29. Forsberg, A.; Lundqvist, M. A Comparison of OCR Methods on Natural Images in Different Image Domains; Degree Project in technology; KTH Royal Institute of Technology: Stockholm, Sweden, 2020. [Google Scholar]
  30. Allouche, M.; Frikha, T.; Mitrea, M.; Memmi, G.; Chaabane, F. Lightweight Blockchain Processing. Case Study: Scanned Document Tracking on Tezos Blockchain. Appl. Sci. 2021, 11, 7169. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
