Lightweight AI Framework for Industry 4.0 Case Study: Water Meter Recognition

The evolution of applications in telecommunication, networking, computing, and embedded systems has led to the emergence of the Internet of Things and Artificial Intelligence. The combination of these technologies has improved productivity by optimizing consumption and facilitating access to real-time information. In this work, we focus on the Industry 4.0 and Smart City paradigms and propose a new approach to monitor and track water consumption using OCR together with an artificial intelligence algorithm, in particular the YOLOv4 machine learning model. The goal of this work is to provide optimized results in real time. The recognition rate obtained with the proposed algorithms is around 98%.


Introduction
Nowadays, smart cities are conceived to integrate multiple information types and technologies to offer more and better services. The research progress in this domain solved many complex problems in recent years, such as pollution, e-health, transport, and remote data collection.
Moreover, the use of real-time data and Artificial Intelligence (AI) has increased efficiency and offered flexibility and ease of use. In fact, machine and deep learning enable systems to correctly interpret external data, to learn from such data, and to use this knowledge to achieve specific goals and tasks through flexible adaptation [1][2][3].
The Internet of Things (IoT) is used in different areas: not only smart cities, smart agriculture, and Industry 4.0 [4], but also sports and e-health (called in this case IoMT [5][6][7]). When data are collected from different sensors via IoT, artificial intelligence is used to process them, to produce forecasts, and to achieve considerable savings in consumption.
In this paper, we focus on remote automatic data treatment using artificial intelligence for water monitoring. This task will be performed automatically without an agent, which allows a low-cost solution. In addition, this real time data helps customers to obtain more comprehensive visibility of their water consumption.
In this work, we start from the real case of the water consumption management infrastructure in Tunisia and we will apply AI to facilitate the management and minimize the consumption for users.
The contributions of this paper may be summarized as follows.
• An AI- and OCR-based model is proposed to detect and extract water meter numbers.
• This model can be implemented on smartphones.
• The model enables detecting, extracting, and calculating pertinent data, such as consumption and date, and storing them in a database.
• The accuracy obtained from the object detection model is about 98%.
In light of these results, several perspectives are proposed. This paper is articulated around four parts:
- The state of the art, which discusses the application of AI in the context of smart cities, particularly in the management of consumption;
- The proposed approach, which facilitates data collection, storage, and approximation of consumption;
- The different results of the implementation of the proposed approach;
- A conclusion and perspectives of the proposed work.

Industry 4.0
The concept of Industry 4.0 was launched in 2011 by the government of Germany. The perspective has been to increase and maintain the productivity and flexibility performance of the German manufacturing sector [8]. It is about promoting smart production by machines and humans communicating with each other.
Although Industry 4.0 is preceded by three industrial revolutions, it is considered disruptive since it aims at making the manufacturing system intelligent through factories, products, and services that are themselves intelligent and connected to each other. It is about making all the objects and stakeholders of a factory interconnected throughout the value chain. Industry 4.0, therefore, involves contemporary societies and organizations and is the subject of research in the academic and industrial worlds [9]. The transdisciplinarity of the concept, reflected by the strong interest given to it, leads to the emergence of a diversity of terminology, such as "future industry", "digital industry", "smart industry", "industrial internet", or "digital transformation" [10]. Some authors characterize Industry 4.0 as "systems that communicate and cooperate with each other, but also with humans, to decentralize decision making" [11].
The definition given to the term industrial internet by General Electric confirms the transdisciplinary nature of the Industry 4.0 concept. It describes the integration of machines, computers, and humans with sensors, connected objects, and software, enabling the prediction, planning, and control of industrial operations and generating transformational organizational results [12]. It is recognized that a long period of time is needed for a change, a restructuring, or even an industrial revolution to develop and adjust. Thus, Qin [11] states that, in parallel with the implementation of change, the definition of the Industry 4.0 concept will be refined and adapted to the advances of the field [13]. Indeed, for Blanchet [14], it is a new paradigm of inserting these technologies into industries. Companies are driven to invest in integrating new information technologies, automating processes through robotics, cyber-physical systems, and embedded systems, and coordinating supply chains [14]. This paradigm ranges from optimizing physical assets to optimizing how data and information are leveraged throughout the product lifecycle. This digital optimization is based on an information flow, represented by a "digital thread", that spans the entire product lifecycle.
To optimize the manufacturing ecosystem, it is important to use information well. The technologies used in Industry 4.0 provide the means for smart connected devices and sensors to better utilize data, which helps optimize productivity and efficiency [15]. For example, advanced analytics transform information into insights that support decision makers, 3D printing converts digital data into tangible parts, and captured information helps plan the ideal maintenance time. In other words, the key to seizing new opportunities and boosting performance is to actively manage information along the value chain to avoid information leakage [15]. These leakages represent lost information that may affect a stakeholder in the value chain. Moreover, machines and goods are a major cost category for manufacturing companies. Therefore, the optimal use of information from sensors and smart, connected devices will have a significant effect on optimizing productivity, lifecycle management, and organizational design.
The introduction of remote monitoring and steering to reduce downtime, by making the best use of all machine information, can improve asset utilization, and thus generate value. For example, for [16], "Industry 4.0 refers to recent technological advances in which the Internet and associated technologies (e.g., embedded systems) serve as a fulcrum for integrating physical objects, human actors, smart machines, production lines, and processes across organizational boundaries to form a new, more agile, intelligent, and connected value chain" [16].

Water Monitoring
One of the smart cities components is "Smart City Services". It includes the activities that sustain a city's population; these involve municipal tasks, such as supply of water, waste management, environmental control, and both monitoring and billing meters, etc. In this paper, we will apply the basics of industry 4.0 to the management of water consumption in Tunisia.
Tunisia is one of the Mediterranean countries with scarce water resources. The mobilizable potential is estimated at 4.6 billion m³, the regulatable resources amount to 4.1 billion m³, and the current mobilization rate is 74%. The volume currently available per capita per year is 450 m³, against 556 in Morocco, 776 in Syria, and 2200 in Turkey [17].
In this context, automatic meter reading is an important subject, which refers to automatically recording the consumption of electric energy, gas, and water for both monitoring and billing. Despite the existence of smart readers, they are not widespread in many countries, especially in the underdeveloped ones, and the reading is still performed manually on site by an operator who manually writes the meter number on a piece of paper, which could be easily lost, with no reading proof, such as an image.
Since this operation is subject to human errors, unfortunately there is no checking process to confirm correct data reading before saving them in a database, and even if there is a process, such as having two operators visit the site, this way of information checking is human effort and time consuming. Moreover, it shows low efficiency. Furthermore, due to a large number of meters to be evaluated, the inspection is usually done by another operator and errors might go unnoticed. Performing the meter inspection automatically would reduce mistakes introduced by the human factor and save manpower. Furthermore, the reading could also be executed automatically using a mobile application installed in the smartphone of the operator. In summary, image-based automatic meter reading has advantages, such as lower cost and fast installation, since it does not require renewal or replacement of existing meters.
The work carried out in this paper was realized within the framework of an industrial collaboration with the Company of Production and Management of Water in Tunisia. This prototype will be developed and used in the context of the company digitization and governance.
In the method currently used in Tunisia, errors and problems in data recording may result from several causes, such as the loss of the paper register on which the data are collected.
This system connects the water management company, the employees, and the customers. Therefore, it leads to the following outcomes:
- For the Tunisian water management company: to continuously obtain updated consumption values of the different customers.
- For the customers: to get access in real time to the consumption values as well as the saved invoices.
- For the collaborators: to avoid errors in the manual entry of the values.
To uncover the benefits of real time data in water monitoring, we look at how it is collected and analyzed and the kind of insights it can provide for water providers. In fact, intelligent access to consumption data is part of the end-to-end connectivity that is increasingly important to their business processes.
Before presenting our approach, it is important to briefly present object detection/recognition algorithms. They are based on image classification, object localization, and object detection/segmentation. Figure 1 shows these algorithms. Top performing deep learning models are R-CNN (region-based convolutional neural network), Fast R-CNN, Faster R-CNN, SSD (single shot multibox detector), and YOLO.

Several works have been carried out on the problems of AMR (automatic meter reading). In some cases, the authors integrate the process in a single step, while others divide it into three main steps: (1) the detection of the meters; (2) digit segmentation; (3) digit recognition.
The authors in Refs. [18][19][20] have limited themselves to images of counters with specific characteristics (position and colors of the digits, etc.). The major drawback of this technique is that it may only work on certain types of water meters under specific conditions.
The authors in Refs. [21,22] use deep learning approaches, which require a large image base to achieve efficient results.
In [23], the authors perform three steps of electric meter recognition: preprocessing, segmentation of individual digits, and reading recognition. The results were obtained with only 21 images.
Many works focus on a single step of the AMR pipeline [24,25], which makes it difficult to accurately evaluate the presented methods from end to end, and to compare them given the execution times of the proposed approaches as well as the hardware used. The authors in [26] focus on the problem of water meter recognition in smart city applications. Their experimental results show high accuracy while requiring fewer parameters and less computation.
They also implemented a system for real-time database management on a Cloud platform. The system sends the image of the meter index, which increases the amount of data to be stored.
In [27,28], the authors explore the angle between the pointer and the dial to perform the reading. Therefore, they do not work on digit counters but rather on dial counters.
Based on the different approaches used in the state of the art, and given the technological and infrastructural constraints (limited network connection and coverage, GSM and tablets with minimal resolution and resources, etc.), we propose in this paper a hybrid approach that allows recognition, computation, and minimization of consumption.

Proposed Approach
We represent our proposed system based on deep learning. The software architecture allows us to describe in a symbolic and schematic way the different elements of the computer system, their interrelations, and interactions.
Our approach comprises three units (Figure 2):
- Display unit: mobile application;
- Image processing unit: AI model, which will be integrated within the mobile application;
- Water provider's data storage unit: database.

Specification
It is important to identify and specify the functionalities that will be implemented. This will determine what we expect from our application. Indeed, our system will be modeled using diagrams that respect the UML modeling language.
So, to satisfy the needs of users, our system must provide these main services, illustrated in Figure 3. In Figure 4, we propose the use case diagram of the online detection system.
Big Data Cogn. Comput. 2022, 6, x FOR PEER REVIEW

Image Processing
The image processing unit is a class containing methods to detect the contours of water meter numbers, extract these numbers, and calculate the monthly consumption of each meter. Once the outline of the meter is detected, an object named meter is created. This object is then processed by a fast OCR algorithm. Once we have obtained the water meter number, the consumption of the meter is calculated automatically, and the image, the detected number, and the consumption figure, as well as the GPS location, the date of the day, and the name of the field, must be recorded in our database. Meter is a class; all meters are objects generated by the object detection process.
The following diagram ( Figure 5) is used to represent the triggering of events according to the system states and to model parallelizable behaviors. It is used to describe a workflow.
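The meter object and its consumption calculation described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation; the class fields and names (`meter_id`, `previous_reading`, etc.) are assumptions:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Meter:
    """A detected water meter (hypothetical sketch of the unit described above)."""
    meter_id: str
    current_reading: int   # index read by OCR, in m3
    previous_reading: int  # last stored index, assumed fetched from the database
    gps: tuple = (0.0, 0.0)
    read_date: date = field(default_factory=date.today)

    def consumption(self) -> int:
        """Monthly consumption is the difference between two successive indexes."""
        return self.current_reading - self.previous_reading

# Example: a meter whose index moved from 1250 m3 to 1283 m3
m = Meter("TN-0042", current_reading=1283, previous_reading=1250)
print(m.consumption())  # -> 33
```

In a real deployment, the object would also carry the cropped image so that the reading proof mentioned earlier can be stored alongside the number.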

In our approach, the image processing goes through several steps: first, we insert the image. Then, each image undergoes the two phases of the program: detection and number extraction of each meter.

Yolo Meter Detection
The object detection model will be integrated in a mobile application, so we need to choose the fastest, lightest, and most accurate one. The major reason why one cannot approach this object detection problem by building a standard convolutional network followed by a fully connected layer is that the length of the output layer is variable, not constant, because the number of occurrences of the objects of interest is not fixed. A straightforward approach to solve this problem would be to take different regions of interest from the image and use a CNN to classify the presence of the object within each region.
The problem with this approach is that the objects of interest might have different spatial locations within the image and different aspect ratios. Hence, one would have to select a huge number of regions, which could computationally blow up. Therefore, algorithms such as R-CNN, YOLO, etc., have been developed to find these occurrences, and to find them rapidly.
Moreover, YOLOv4 predicts features for each layer using a Feature Pyramid Network, which addresses the problem of missing small objects by exploiting high-resolution features. The authors in [27] compare the precision of Faster R-CNN, YOLOv4, and SSD after several object detection experiments, and conclude that YOLOv4 shows the highest precision, as shown in Figure 6.
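Detectors of this family emit many overlapping candidate boxes per object; a greedy non-maximum suppression pass keeps only the highest-confidence ones. The following is a generic sketch of that post-processing step (standard technique, not code from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep best-scoring boxes, drop overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Two boxes on the same meter plus one elsewhere: the duplicate is suppressed
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]
```

In practice this is what lets a single water meter counter yield exactly one bounding box for the OCR stage.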


Yolo Implementation
The approach is based on the Darknet neural network framework for training and testing mobile applications. The framework uses multi-scale training, massive data expansion and batch normalization. It is an open-source neural network framework written in C and CUDA.
For deep learning detection, a dataset is needed. It generally integrates several data types (video files, images, texts, sounds, or even statistics), whose grouping forms a set that enables automatic learning and model creation. Thus, the first step is to collect images and, if necessary, to apply data augmentation or image enhancement (1100 images). The next step is data annotation/labeling (Figure 7). Our dataset is in the Darknet YOLO format, used to train YOLOv4 on Darknet with our custom dataset, and the data are divided into three folders: training was performed on 70% of the images, validation on 10%, and testing on 20%. To be able to integrate the model in mobile applications, the weights are converted to TensorFlow Lite.
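The 70/10/20 split described above can be reproduced with a short helper. This is an illustrative sketch (the function name and fixed seed are assumptions, not from the paper):

```python
import random

def split_dataset(paths, ratios=(0.7, 0.1, 0.2), seed=42):
    """Shuffle image paths and split them into train/val/test lists (70/10/20)."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # fixed seed for a reproducible split
    n = len(paths)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]

images = [f"img_{i:04d}.jpg" for i in range(1100)]  # the paper reports 1100 images
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # -> 770 110 220
```

Each list would then be written into its own folder, matching the three-folder layout Darknet expects.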



Number Meter Extraction
OCR methods use algorithms to recognize the characters, of which there are two variants. Pattern recognition is where the algorithm is trained with examples of characters in different fonts and can then use this training to recognize characters from the input. Feature recognition is where the algorithm has a specific set of rules regarding the features of characters, for example the number of angles and crossed lines. The algorithm then uses this to recognize the text [28][29][30].
In our approach, the open-source OCR engine Tesseract is used and deployed in the mobile application. The Tesseract process flow is presented in Figure 8.
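Tesseract returns a raw text string, which still has to be turned into a numeric meter index. A minimal sketch of that cleanup step is shown below; the function name and the 4-8 digit plausibility range are illustrative assumptions, not values from the paper:

```python
import re
from typing import Optional

def parse_meter_reading(ocr_text: str) -> Optional[int]:
    """Extract the numeric index from raw OCR output.

    OCR often inserts stray spaces or symbols; we keep digits only and
    reject readings whose length is implausible for a water meter
    (assumed here: 4 to 8 digits), flagging them for manual review.
    """
    digits = re.sub(r"\D", "", ocr_text)
    if 4 <= len(digits) <= 8:
        return int(digits)
    return None  # flag for manual review

print(parse_meter_reading(" 0 1 2 8 3 "))  # -> 1283
print(parse_meter_reading("??"))           # -> None
```

A validation step like this is what allows the application to refuse an obviously wrong reading instead of storing it in the database.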
The image process starts with eliminating image noise with non-local means denoising and Gaussian blur. Next, four different thresholds are used to preprocess our images. This binarization is based on Niblack's algorithm, which creates a threshold image: a rectangular window glides across the image and computes the threshold value for the center pixel using the mean and the variance of the gray values in this window.
Another method is used based on Otsu's histogram thresholding. Using this setup, we develop an effective thresholding technique for diverse test situations. The results are provided in Figure 9.
After having realized the architecture of our system, the next step will be dedicated to the implementation and realization of the mobile application.
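Otsu's method picks the gray level that maximizes the between-class variance of the histogram. A compact NumPy sketch of the idea follows (in practice OpenCV's `cv2.threshold` with `THRESH_OTSU` would be used; this standalone version is for illustration):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                # cumulative class probability
    mu = np.cumsum(p * np.arange(256))  # cumulative mean
    mu_t = mu[-1]
    # Between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Synthetic bimodal image: dark digits (~30) on a light dial (~200)
img = np.concatenate([np.full(500, 30), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img.reshape(25, 40))
print(30 <= t < 200)  # -> True: the threshold separates the two modes
```

On a meter photo, thresholding the digit region this way yields the clean black-on-white input that Tesseract handles best.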

Mobile Application
To facilitate the access and recording of data in relation to the agents who read the meters, it is important to use their smartphones. Since these smartphones have different characteristics, we proposed to use light mobile applications. We chose to use Android as the mobile platform.
Indeed, since the majority of smartphones used in this work are Android smartphones, we decided to use Android Studio as the development software. We could have used cross-platform systems, but given the need for an optimized, lightweight application, we chose to build a native Android application.

On this platform, we used a lightweight AI framework to perform the digit recognition. The application then allows us to save the data on the phone.
As soon as the system is connected via 3G and/or Wi-Fi, the data are saved in the main database. In the results part, screenshots will display the result of the implementation.
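This store-and-forward behavior — buffer readings on the phone, upload once connectivity returns — can be sketched with a small local queue. The schema and the `send` callback below are illustrative assumptions, not the authors' design:

```python
import json
import sqlite3  # standard-library local buffer, as on-device storage

class ReadingBuffer:
    """Store readings locally; flush them to the main database when connected."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS pending (payload TEXT)")

    def save(self, reading: dict):
        """Persist one reading on the phone, even while offline."""
        self.db.execute("INSERT INTO pending VALUES (?)", (json.dumps(reading),))
        self.db.commit()

    def flush(self, send):
        """Call `send` for each buffered reading; drop the ones that succeed.

        Returns the number of readings still pending."""
        rows = self.db.execute("SELECT rowid, payload FROM pending").fetchall()
        for rowid, payload in rows:
            if send(json.loads(payload)):  # True once the 3G/Wi-Fi upload worked
                self.db.execute("DELETE FROM pending WHERE rowid = ?", (rowid,))
        self.db.commit()
        return self.db.execute("SELECT COUNT(*) FROM pending").fetchone()[0]

buf = ReadingBuffer()
buf.save({"meter": "TN-0042", "index": 1283})
print(buf.flush(lambda r: True))  # -> 0 pending once the upload succeeds
```

Failed uploads stay queued, so a reading taken in an area without coverage is never lost.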

Counter Detection
In this part, we will present the results of the implementation of the application. Figure 10 illustrates the object detection result.
Using a smartphone, we detect the counter number, as described in the proposed approach part.
After training the custom tiny-YOLOv4 object detector and saving the obtained weights, we repeated the process, re-modifying the configuration file to obtain the weights that achieved the highest mAP score on our training set.
Once the training was finished, we used our trained custom tiny-YOLOv4 detector to make inferences on test images. When we ran this detector on a test image, we successfully obtained the bounding box of the detected water meter number.
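For reference, the final bounding box is typically obtained from YOLO's raw candidates by confidence filtering followed by greedy non-maximum suppression (NMS). The sketch below assumes the boxes have already been decoded to [x1, y1, x2, y2] pixel coordinates; in practice this step is handled by the Darknet/TensorFlow runtime:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    """Keep the highest-scoring boxes, greedily dropping overlaps."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thresh]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(int(i))
    return keep

# Example: two overlapping candidates for the counter plus a distant box.
boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))   # -> [0, 2]: the overlap at index 1 is suppressed
```

The thresholds shown (0.5 confidence, 0.45 IoU) are common defaults, not values reported by the paper.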
Finally, we converted the weights to TensorFlow's .pb representation, then converted the TensorFlow weights to TensorFlow Lite to prepare the model to be integrated into the mobile application.

Overview Process
After implementing the water meter counter detection process, we illustrate the obtained result in Figure 11, which contains the number recognition. The proposed process is as follows:
• Detect the requested area from the image: the water meter counter;
• Perform image processing on the images;
• Pass the images to Tesseract;
• Store the results of Tesseract in the desired format.
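The four steps above can be sketched as a single function. The OCR engine is passed in as a parameter so the sketch stays runnable without Tesseract installed; with Tesseract available, `ocr` would be, for example, `pytesseract.image_to_string`. The preprocessing shown (a simple global threshold) is a stand-in for the Otsu-based binarization discussed earlier:

```python
import numpy as np

def preprocess(counter_region):
    """Binarize the cropped counter region so the digits stand out.
    (Simple global threshold; the real pipeline may also deskew/denoise.)"""
    t = counter_region.mean()
    return (counter_region > t).astype(np.uint8) * 255

def read_counter(image, box, ocr):
    """Crop the detected counter box, preprocess it, run OCR, and
    return the reading in the desired storage format."""
    x1, y1, x2, y2 = box
    region = preprocess(image[y1:y2, x1:x2])
    text = ocr(region)                  # e.g. pytesseract.image_to_string
    digits = "".join(c for c in text if c.isdigit())
    return {"reading": digits}
```

Keeping only digit characters guards against stray OCR output such as unit labels printed on the dial.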

Application Realization
After creating both the AI model and the Android mobile application and integrating the model within the application, we have a functional mobile application that meets all the project goals. The welcome interface (Figure 12) is the first window encountered after launching the mobile application. This interface welcomes the user and summarizes the role of the application. It lasts for 10 s, then one of these two scenarios may arise:

•
If the user is not authenticated, he will be directed to the Sign in interface.

•
If not, he will be directed to the Home interface.

The authentication interface, as depicted in Figure 13, is the second window encountered after launching the application and seeing the welcome interface. Its role is to secure access to the application; it allows the entry of a sign-in and a password. After the validation of the information entered via the sign-in button, two scenarios are presented:

•
If the information is validated, the user will be redirected to the main interface of the application, which is the Home interface.

•
If not, an error message will be displayed.

Figure 14 shows the general and main menu of the application. It has four sections: "Online detection", "Take picture", "Real time detection", and "Show uploads".
Once the operator opens the "Online detection" interface, as shown in Figure 15, they can enter the field name and the zip code, and pick a picture either from the gallery or from the camera. Once the user has picked a picture, the water meter number will be detected and extracted. Finally, both the image and the number will be displayed. The operator can then press the register meter button to go to the second interface, where they can enter the current water units, see the monthly water consumption, see the date of the recording, and locate the water meter by clicking the location button. Finally, the operator presses the submit button to save all the water meter data in the Firebase database.
When the user opens the "Real time detection" interface (Figure 16), they can test the Yolo detection model and its accuracy to know the limits of the AI model and to have the best experience possible with this mobile application to perform their job successfully.

Discussion
The present work was carried out within the framework of an industrial collaboration with the Company of Production and Management of Water in Tunisia. We used a dataset of 1100 images.
The training was performed on 70% of the images, validation on 10%, and testing on 20%. We then conducted a test on 150 real images photographed from water meters, including some cases with digits lying between two positions. The detection process applied to these 150 images resulted in 2 erroneous and 148 correct values.
Thanks to the learning approach, when a digit lies between two positions the proposed system chooses the lower value, which is then stored in the database. The obtained recognition rate was 98.67%.
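The reported figures are internally consistent and can be checked with a few lines of arithmetic:

```python
# Dataset split reported in the Discussion: 70/10/20 of 1100 images.
total = 1100
train, val, test = int(0.7 * total), int(0.1 * total), int(0.2 * total)

# Recognition rate on the 150 real test photographs: 148 correct, 2 erroneous.
rate = 100 * 148 / 150

print(train, val, test)   # -> 770 110 220
print(round(rate, 2))     # -> 98.67
```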

Conclusions
The present work was carried out within the framework of an industrial collaboration with the Company of Production and Management of Water in Tunisia. This prototype will be developed and used in the context of the company digitization and governance.
The objective of this paper was to develop an AI model, based on deep learning and OCR algorithms, that detects and extracts water meter numbers. Moreover, this model was integrated into an Android mobile application. The meter images are taken by the cameras of the operators' smartphones; our application then allows them to detect and extract the reading, calculate the monthly consumption of each meter, and finally save all relevant information, such as the meter number, location, and date, in Firebase.
The accuracy obtained from the object detection model with tiny-YOLOv4 is 98%. The results obtained and the studies and experiments carried out have also enabled us to highlight certain areas of improvement for our algorithm, such as dataset enrichment and optimization of the system's speed and efficiency.
Despite the results obtained, several extensions of this work are being developed. Future work aims at intelligent consumption diagnosis, on the one hand, and secure data backup using blockchain technology, on the other hand.
The blockchain will create a system allowing traceability not only of the data but also of the different transactions that take place.