1. Introduction
With the constant development of technology and the growing use of automobiles, vehicular traffic has become a major obstacle to the movement of people. This is mainly due to the disorderly increase in the number of automobiles in cities and the lack of quality public transport. According to [1,2], the number of vehicles registered in Brazil increased by 12% in 2023 (4,108,041 vehicles) compared to 2022 (3,667,325 vehicles). Road infrastructure, in turn, did not develop proportionally in these cities, causing major traffic jams and preventing people from moving in a timely manner. According to the US Federal Highway Administration, in 2016, each automobile in the US accumulated 53 h of delay, totaling about 8.8 billion hours of delay nationwide [3].
Such traffic congestion can cause several problems, such as loss of time, increased CO2 emissions, stress, accidents, high fuel consumption, and noise pollution [4]. Studies, such as those conducted by Santos et al., show that the deployment of smart traffic lights can reduce CO2 emissions by up to 40% while significantly improving traffic flow [5].
To address these problems, a much-discussed concept in the current literature is that of smart cities, in which several technologies have been developed and employed to solve urban problems, particularly those related to traffic congestion. Smart cities aim to promote economic growth, improve sustainability, and, consequently, the quality of life of citizens [6]. To achieve this goal, these cities must maintain a balance between human capital and investments in technology [7]. According to [8], solutions for urban mobility are among the most important topics concerning smart cities because they contribute significantly to the daily movement of citizens. The integration of blockchain and computer vision has proven effective in reducing congestion and improving traffic management in smart cities [9].
Smart traffic lights constitute one of these solutions, primarily to replace fixed-time traffic lights. This type of traffic light determines the green time of a given phase based on the actual number of vehicles approaching on a given stretch. Studies that propose smart traffic lights usually seek to improve vehicular flow. The first studies found in the literature aimed to improve the vehicular flow approaching a given intersection, as addressed in [10,11], and adaptive systems have been shown to reduce travel times significantly; for example, Aleko and Djahel achieved a 39% reduction [12].
Smart traffic lights can also be used to reduce fuel consumption and minimize greenhouse gas (GHG) emissions, as developed in [13,14].
For example, ref. [5] developed a model in a simulated environment, using dedicated simulation software, to analyze whether the implementation of smart traffic lights contributes to reducing CO2 emissions in a small Portuguese city. The simulation showed possible CO2 reductions of 32–40%, a significant increase in average speed and, consequently, a reduction in waiting time.
In [15], an efficient algorithm for smart traffic lights was developed, aiming to reduce vehicular fuel consumption and GHG emissions. The study reduced both consumption and emissions at each intersection and, consequently, the average waiting time when compared to previous scenarios.
In some studies, video cameras in conjunction with a system developed in Python using the OpenCV library have been used to support vehicle counting in smart traffic lights. Additionally, machine learning models have demonstrated high accuracy in predicting traffic flow and optimizing light timings. For example, Navarro-Espinoza et al. developed machine learning algorithms to predict traffic flow and enhance smart traffic light efficiency, achieving notable improvements in traffic management [16]. These supporting tools have been used because they are cheaper solutions, as there is no need to install hardware such as sensors. Similarly, the study in [17] used smart cameras at intersections to count vehicles, employing a distributed control algorithm to control the traffic lights and providing a significant reduction in average waiting time. This study yielded better results than those obtained in [18,19].
Notably, such a smart system must show high accuracy in counting vehicles. In [20], a system was developed to minimize vehicular congestion on roads via a smart traffic light without installing any type of hardware. This system employed live images from surveillance cameras to perform vehicle counting and then decide on the green time based on traffic density using OpenCV functions. The vehicle detection accuracy obtained was 83.33%.
In [21], a smart traffic light system was developed using video cameras and computer vision. This system uses several OpenCV functions (background subtraction, blob detection, and object tracking) to estimate the number of vehicles in each lane and their density relative to the other lanes. The application of this system reduced the average waiting time, minimized traffic congestion, and improved road safety.
In the study in [22], a computer vision model was also used to control smart traffic lights through OpenCV functions and live-feed surveillance cameras. In the counting process, the system obtained 93.11% accuracy, but this value decreases significantly under heavy traffic and where there is a flow of rickshaws.
We must also emphasize that the present work provides a cost-effective alternative for the implementation of smart traffic lights compared to the commercial technologies widely available on the market, in addition to offering other economic benefits. This is of utmost importance for cities like Limeira, which are located in developing countries; thus, an economic analysis of the contribution of this technology becomes indispensable. Smart traffic lights, especially when integrated with IoT technologies and energy-efficient systems, can lead to significant cost reductions for municipalities. For example, research shows that intelligent traffic light systems can save up to 80% in energy costs compared to traditional systems by adjusting lighting and operations based on traffic density and timing [23]. These systems also allow for remote monitoring and management, reducing the need for frequent maintenance and thus lowering labor costs.
Operational costs are quite relevant in places like Limeira. The implementation of smart traffic lights reduces operational and maintenance costs, as highlighted by Neis et al. [24], who evaluated low-cost sensors for smart systems, showing that these sensors are not only economically efficient but also practical for municipalities. Furthermore, implementing smart traffic control systems with adaptive capabilities can decrease wear and tear on infrastructure and minimize the need for road closures due to maintenance [25]. Additionally, the use of smart sensors to remotely manage traffic lights reduces the physical footprint required for maintaining these systems [12].
From the economic perspective, the use of Python and OpenCV further reduces costs. As outlined by Dilek and Dener [26], computer vision technologies have become a key component in modern intelligent transportation systems, offering high accuracy in vehicle detection and traffic monitoring, especially when integrated with software solutions like OpenCV. The use of Python with OpenCV represents a cost-effective solution, as it avoids the need for expensive sensors installed at intersections. In many cases, commercial systems utilize sensors such as magnetometers or inductive loops, which are costly and require frequent maintenance. In contrast, a camera-based computer vision system offers a viable and low-cost alternative for vehicle detection. Using OpenCV for vehicle counting is widely adopted due to its simplicity and low implementation costs, especially when compared to proprietary commercial systems that use physical sensors [27,28]. Additionally, the cameras already installed at many intersections can be reused, resulting in significant savings for municipalities.
Finally, Deshpande and Hsieh [29] highlight how cyber–physical systems for traffic light control can enhance traffic management by integrating real-time data and machine learning models, which not only improves traffic flow but also reduces the overall delay at intersections. Thus, the implementation of smart traffic control systems using Python and OpenCV not only offers accurate vehicle counting but also improves traffic flow without the high initial costs associated with commercial systems that rely on specialized hardware. Furthermore, as demonstrated by Navarro-Espinoza et al. [16], machine learning models can significantly improve traffic flow prediction, enabling smart traffic lights to adjust in real time and effectively reduce congestion.
Analysis of the current literature shows that there are few recent studies employing a system that combines the OpenCV library and video cameras. These technologies should be used both because they improve the vehicle counting accuracy rate and because they enable a low-cost smart traffic light system, an attractive solution for countries like Brazil.
This work aims to develop code in the Python programming language to identify and count vehicle flow, in addition to a second program implementing a vehicle-flow-based smart traffic light; in other words, it involves the development of a low-cost smart traffic light prototype. The first part of the objective is to develop Python code using the OpenCV library for motion detection. The second part is the development of the smart traffic light based on the results of monitoring and counting vehicles; that is, when there are many vehicles, the green light is activated, and when there are none, the red light comes on.
This study is mainly justified by the need for this type of technology: it develops new Python code that is easy to apply and highly effective. Moreover, this is a case study using real data. To the best of our knowledge, there are no similar studies in Brazil, a country that currently belongs to the group of countries with the worst traffic in the world. This study therefore fills a gap in the current literature.
This paper is organized as follows:
Section 2 presents the materials and methods used;
Section 3 presents the results and discussion; and, finally,
Section 4 presents the conclusions and summary of the main findings.
2. Materials and Methods
2.1. Case Study
The city of Limeira, in the interior of the state of São Paulo, with an estimated population of 291,869 inhabitants in 2022 [30], has a high rate of vehicle traffic at specific points and times. One of these points lies between two campuses of the State University of Campinas, the School of Technology and the School of Applied Sciences, 1400 m apart. One of the intersections with heavy traffic is the intersection of Dr. Fabrício Vampré Avenue and Cônego Manoel Alves Avenue, where vehicle speeds are very high, hindering the movement of pedestrians, including university students. This point is controlled by several traffic lights covering different directions and routes, one of them for pedestrians. The pedestrian green time is very short, making it impossible to cross within the programmed time; as an alternative, pedestrians tend to run or to cross during the red light, both of which are dangerous. Therefore, this work aims to develop a smart system to monitor traffic flow at the intersection of Dr. Fabrício Vampré Avenue and Cônego Manoel Alves Avenue and thus make it possible to adjust or control traffic light timing at this intersection.
2.2. Vehicular Traffic at the Intersection
The avenues under study carry several directions and routes through the intersection: vehicles can follow Dr. Fabrício Vampré Avenue before and after the intersection point, or they can turn onto Cônego Manoel Alves Avenue. If they are going toward the Jardim São Paulo district, drivers can enter Cônego Manoel Alves Avenue toward the School of Applied Sciences, and if they are going toward the central area, they can enter the avenue toward the School of Technology. This can be observed through the blue lines in Figure 1 and will be called the 1st phase.
Vehicles traveling along Cônego Manoel Alves Avenue toward the School of Applied Sciences can continue on it after the intersection point, but they can also enter Dr. Fabrício Vampré Avenue, going either toward Jardim São Paulo or toward the center. This can be observed through the green lines in Figure 1 and will be called the 2nd phase.
Finally, drivers traveling along Cônego Manoel Alves Avenue toward the School of Technology can enter Dr. Fabrício Vampré Avenue toward Jardim São Paulo or toward the center. This can be observed through the red lines in Figure 1 and will be called the 3rd phase. Currently, the analyzed intersection operates with a fixed-time traffic signal consisting of these three phases.
2.3. Pedestrian Movement at the Intersection
For pedestrians who want to move from one point to another at the intersection, the situation is more complicated, since the directions of the routes shown in Figure 1 change constantly. For example, on the stretch of Dr. Fabrício Vampré Av. toward the center, there are two pedestrian crossings; a person wishing to cross these stretches has a programmed time of approximately 5 s, and, given the width of the lanes, this time makes it unfeasible to cross at a walking pace, so pedestrians must hurry or stop halfway.
Alternatively, people tend to cross the road during red lights, and because of the forms of signaling shown in Figure 1, pedestrians must pay attention to three directions, making the movement of cars extremely confusing for them.
As the focus of this work is to analyze the flow of vehicles and subsequently implement traffic control, the movement of pedestrians will not be addressed in detail. However, a possible improvement in traffic would benefit not only vehicles but also the pedestrians who access this area.
2.4. Identifying and Counting Vehicles
Several videos were recorded with different cameras at different times and under different weather conditions to test and improve the algorithm. After some tests, three videos were selected to validate the prototype, each filmed from a different position and angle in order to capture as many vehicles as possible traveling on the road under analysis. The videos were recorded with an iPhone XR smartphone on a selfie stick, which made it easier to keep the phone steady while shooting. Because these were only test recordings, there were some inaccuracies in camera positioning and some external movement, but nothing that would greatly affect the final result.
Figure 2 shows a frame from one of the videos of a rainy day.
Considering the need to differentiate between pedestrians, cars, trucks, buses, and bicycles, one solution capable of providing this differentiation was the use of a motion detection algorithm. To this end, the OpenCV library was used in this project. Furthermore, a new video was shot at the same location in order to test a different flow of vehicles.
It is important to highlight that the OpenCV library is employed for motion detection, but additional image processing algorithms are necessary for effective real-time vehicle monitoring and counting. This process begins with creating a VideoCapture object, which opens the recorded video file and reads it frame by frame.
Next, we create an object to perform background subtraction on the video frames using the BackgroundSubtractorMOG2 method. This step is crucial for isolating moving objects from the static background.
A loop is then implemented to read each frame of the video. If reading fails, the loop is interrupted. Background subtraction is applied to each frame, facilitating the detection of motion against the static backdrop.
Following this, we perform an image processing operation to segment the image into regions of interest based on pixel intensity values. This segmentation is achieved using a thresholding technique, resulting in a binarized image; the threshold function is applied to the output of the background subtraction to accomplish this.
We then utilize the findContours function to identify contours in the binarized image, storing them in a list. After collecting the contours, we create another loop to assess the area of each contour using the contourArea function. If a contour's area exceeds 1000 pixels, we draw a rectangle around the detected object in the original frame.
Finally, the original frame, with rectangles drawn around the moving objects, is displayed in a preview window, showcasing the results of our motion detection and vehicle counting system.
The idea of the code is to follow the flowchart represented in
Figure 3, making use of all OpenCV dependencies. This flowchart shows the whole idea of programming the identification and counting of vehicles for the implementation of the smart traffic light prototype.
Within the whole cycle, multiple operations are applied to process each frame of the video. Multiple contours can be detected within each frame, so these contours are filtered and analyzed. If a contour's area is smaller than 500, it is disregarded, so very small boxes are discarded by the code; above 500, the contour is processed so that a reference point is generated at its center. If this point, which lies on a moving object, crosses the reference line in the video, one more is added to the vehicle count.
The code contains the coordinates chosen for the endpoints of both line segments, in addition to a variable called tolerance, created to define an additional margin of pixels for point detection, which is needed later in the code (detection accuracy drops sharply without this value). Within the main loop, a variable called fgmask subtracts the background from the frame so that only moving objects are counted. A for loop runs through each delimited contour, calculates its area for filtering, computes the center of the object, creates the counting point, and verifies whether the distance between the point and the segment is less than or equal to the tolerance; if it is, the count is incremented by 1.
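The counting rule above can be sketched in plain Python. The line endpoints, the tolerance value, and the per-object centroid observations below are illustrative assumptions, not the values used in the study; the sketch only shows the distance-to-segment test and the once-per-object counting.

```python
import math

# Illustrative counting line and tolerance (not the study's values).
LINE_A, LINE_B = (100, 300), (500, 300)
TOLERANCE = 8

def point_segment_distance(p, a, b):
    """Distance from centroid p to the counting segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

count = 0
counted_ids = set()
# Hypothetical (object id, centroid) observations, one per frame:
# object 1 approaches the line, object 2 stays far from it.
observations = [(1, (300, 290)), (1, (300, 298)),
                (2, (50, 120)), (2, (52, 130))]
for obj_id, centroid in observations:
    if obj_id in counted_ids:
        continue  # each object is counted at most once
    if point_segment_distance(centroid, LINE_A, LINE_B) <= TOLERANCE:
        count += 1
        counted_ids.add(obj_id)

print(count)  # only object 1 comes within tolerance of the line
```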
2.5. Smart Traffic Light Operation
After all the tests with the code on the videos and the counter operating at a high accuracy rate, the next step consists of developing a smart traffic light prototype. To this end, the post-processing step will be divided into two parts: preparing traffic light signaling times based on traffic flow with the aid of vehicle counting and the visual representation of the traffic light together with a timer. The post-processing will be performed only on the last video recording mentioned above since it focuses on the road stretch of Cônego Manoel Alves Avenue toward the School of Technology.
2.6. Preparing Signaling Times
The first decision before modifying the code was to define the times of each signal: the green signal needs to stay on for a certain time; after that, the yellow signal comes into action; and the red signal time was determined as the green signal time plus the yellow signal time.
The idea of the green signal is that the traffic light remains green as long as there is vehicular flow on the road segment in question; if a certain time passes without any vehicle passing (using the video counter as the reference), the traffic light changes to yellow and then to red. The green time was set to 17 s since, after analyzing the stretch of the avenue, this time proved adequate for the vehicular flow and counting speed observed during the recordings. If, within this 17 s window, a vehicle passes on the road and the counter detects it, incrementing the count by one, the timer resets and counts to 17 again, ensuring that the traffic light remains green as long as there is vehicular flow on the stretch in question.
However, this approach alone has a flaw: with a constant flow in which the count keeps advancing in under 17 s, the green light could remain on forever, which would be a problem for vehicles waiting at the red light on other roads. To solve this, a second limit of 60 s was set: if the green light remains active for 60 s, regardless of whether the count is still increasing, it must turn yellow and then red. Thus, the maximum green time was 60 s and the minimum was 17 s, varying according to the traffic flow.
The yellow signal time was set to 5 s, within the range of 3 to 5 s recommended for this traffic light state according to the Traffic Engineering Commission [1]. The red signal time is therefore the sum of the 17 s of green plus the 5 s of yellow; the red time will always be 22 s, as it does not vary with the green time, since, in this specific project, only the green time varies on this road.
With the times defined, we created code blocks with the function of creating variables for the traffic light times and making the green traffic light signaling time work according to the counting; after that, it may be possible to make the visual representation in the video to visualize the state of the traffic light prototype in operation.
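The timing rules can be condensed into a small state machine. This is a sketch of the logic as described (17 s idle limit, 60 s cap, 5 s yellow, 22 s red); the one-vehicle-every-10-s traffic pattern in the driver loop is a made-up scenario to exercise the 60 s cap, not data from the study.

```python
# Signal times as defined above: green idle limit, green cap, yellow, red.
GREEN_IDLE, GREEN_MAX, YELLOW, RED = 17, 60, 5, 22

def next_state(state, elapsed, since_last_vehicle):
    """Return the next light state given seconds spent in the current
    state and seconds since the counter last detected a vehicle."""
    if state == "green":
        if elapsed >= GREEN_MAX or since_last_vehicle >= GREEN_IDLE:
            return "yellow"
        return "green"
    if state == "yellow":
        return "red" if elapsed >= YELLOW else "yellow"
    return "green" if elapsed >= RED else "red"

# Hypothetical scenario: a vehicle every 10 s keeps resetting the idle
# timer, so only the 60 s cap forces the change to yellow.
state, elapsed, idle = "green", 0, 0
log = []
for second in range(1, 75):
    elapsed += 1
    idle = 0 if second % 10 == 0 else idle + 1
    new = next_state(state, elapsed, idle)
    if new != state:
        log.append((second, new))
        state, elapsed = new, 0

print(log)  # green capped at 60 s, then 5 s of yellow
```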
2.7. Visual Representation
The visual representation of the traffic light on the video screen must also be highlighted. We had previously planned to use colored LEDs in conjunction with the ESP32-CAM, exactly as tested in the simulation on the Wokwi website, but this plan was discarded. One alternative was to display the traffic light colors as text on the video screen itself, based on the signaling time of each one.
A better alternative, which made the visual representation easier, was to implement a traffic light icon positioned in the right corner of the video beside the counter. It highlights the green color based on the pre-set time of 17 s (or the maximum of 1 min), then 5 s highlighted in yellow, and then 22 s in red.
In addition, a timer will also be placed below the counter, which will show on the screen the time of each traffic light change. It will be introduced with the same intention of facilitating the visual representation of the changes.
2.8. Flowchart of Code for Implementation
The code was enriched with programming logic to include the representation of the different traffic light states, in addition to adding on-screen time duration management, and, finally, there was a simplification and refinement of the functions present within the code to make the created program more effective and less demanding on computer processing (data structuring techniques were used for this purpose).
Figure 4 shows the flowchart with a general explanation of the code functioning.
The main modifications were made within the main loop, with the code blocks responsible for code visualization; however, the implementation logic of the times appears in the third step of the flowchart, and through the main loop, the variant state of the green traffic light will update according to the traffic flow. After testing, the present results were satisfactory for the representation of smart traffic lights using Python.
3. Results and Discussion
The results are divided into three sections: vehicle identification and counting through video recordings, the simulated implementation of the smart traffic light based on these recordings, and considerations of financial aspects.
3.1. Vehicle Identification and Counting Results
According to Figure 3, the results indicate that, during Step 2, a total of 27 vehicles of various sizes, including cars, buses, trucks, and vans, were manually counted. It is important to note that the recording session commenced under rainy conditions. Against these 27 vehicles, the code detected a total of 28 across the two lines. Thus, the success rate for this code was 96%, with an error rate of only 4%, the best result obtained until then (noting that this is the gross value obtained by dividing the total number of vehicles passing through the lines by the total number of vehicles detected).
The success rate we obtained was higher than that of [20], which obtained 83.33%, and higher than that of [22], which reached 93.11% and faced many difficulties in places with heavy traffic and rickshaw flow.
The vehicles on each line were also counted. The direction toward the School of Technology was called the "Left Line", while the direction toward the School of Applied Sciences was called the "Right Line". On the Left Line, accuracy was 100%, with 20 vehicles observed with the naked eye and 20 detected by the code. On the Right Line, 7 vehicles passed but 8 were detected, resulting in an accuracy of 88% and an error rate of 12%.
Finally, the vehicles were separated by size. In the set containing trucks and buses, four vehicles were seen with the naked eye and three were counted, a 75% success rate and 25% error. Such an outcome is mostly related to camera angle and positioning; the code had difficulty detecting the entire structure of a bus, for example. In the set containing cars, 19 vehicles were seen with the naked eye and 21 were detected, so the success rate was 90%, with an error rate of only 10%. Finally, 4 two-wheeled vehicles, such as mopeds and motorcycles, were seen with the naked eye, with 4 detected, reaching 100% accuracy.
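The reported rates are consistent with taking the smaller of the manual and detected counts over the larger, so that over-counting and under-counting both lower the score. This formula is our inference from the figures, not stated explicitly in the text; a small helper reproduces them:

```python
def accuracy(manual, detected):
    """Percentage agreement between manual and detected counts,
    penalizing over-counting and under-counting symmetrically."""
    return round(100 * min(manual, detected) / max(manual, detected))

print(accuracy(27, 28))  # overall
print(accuracy(20, 20))  # Left Line
print(accuracy(7, 8))    # Right Line
print(accuracy(4, 3))    # trucks and buses
print(accuracy(19, 21))  # cars
print(accuracy(4, 4))    # mopeds and motorcycles
```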
Therefore, it is concluded that, in this case, the smaller the vehicle, the greater the accuracy. Figure 5 and Figure 6 illustrate this.
Based on the results of this test and its high accuracy rate, we decided to perform another test with a new recording on a different day and time, changing the line position in the code for the new video, and to observe the results obtained. This new test considered only the vehicles arriving via the avenues Cônego Manoel Alves and Dr. Fabrício Vampré, both toward the School of Technology, unlike the first test, which also contemplated those headed to the School of Applied Sciences (Figure 7).
The video was recorded on a Wednesday near 5 pm, a time of heavy traffic, as 48 vehicles passed the recorded point during a 10 min recording, which is a significantly higher number compared to the previous recording. It is worth noting that there was no rain during this recording. With this code, the accuracy achieved was high, reaching 96%, as the code’s counter recorded a total of 50 vehicles. The red line was inserted to ensure that the car actually crossed the pedestrian crossing and continued along the road, increasing the accuracy.
Regarding the code, the only changes concerned the counting lines, which were reduced to a single line adjusted to the appropriate position for the new video. In addition, a subtle change was made to the tolerance value, since the camera position in the new recording was unfavorable; this slight increase in tolerance helped correct the camera placement problem.
The result of this test was satisfactory, as the counting worked correctly and the different weather conditions did not interfere with the outcome, providing a solid basis for starting the post-processing tests, which, in this case, were performed within the Python code itself.
3.2. Implementation Simulation Results
A simulation was conducted in WOKWI as a validation for the implementation of smart traffic lights. This simulation was based on the last recorded video, that is, as a post-processing to determine traffic light color times according to the flow of vehicles identified and counted.
The traffic light times responded well to pulses that were received through the delimited times, which enabled the progression to Step 3 of the project, so a smart traffic light prototype could be assembled.
The results obtained are presented in Figure 8, in which each image represents an activated traffic light color, starting with the green state, passing to the yellow state, and finally to the red state (the standard sequence of a traditional traffic light), together with the vehicle counter and the timer.
3.3. Considerations of Financial Aspects
This section presents a preliminary financial perspective of the proposed system, focusing on its potential as a cost-effective alternative to traditional commercial solutions. Unlike AI-based traffic sensors available on the market, which typically cost between USD 2500 and USD 3000, the experimental approach explored in this study employs an ESP32 Camera Module Kit. This module, priced between USD 10 and USD 20, offers a low-cost option for traffic monitoring, particularly relevant for developing countries with limited budgets.
Given that this is an experimental study, the financial considerations reflect the equipment used in the prototype and may not account for the full cost of a large-scale implementation. However, the initial findings indicate that the proposed system can significantly reduce operational costs by utilizing open-source technologies and repurposing existing infrastructure, such as video cameras already installed at intersections.
The adoption of such a solution can be particularly beneficial for small to medium-sized cities, where budget constraints often limit the adoption of high-cost technologies. Furthermore, reduced maintenance costs are anticipated, as the use of software-based vehicle counting minimizes the need for specialized hardware, lowering the need for frequent maintenance.
While it is not possible to directly compare the development costs of the experimental system with the fully developed commercial technologies, this study suggests that the use of computer vision and the ESP32 module can offer a financially viable solution. By enabling remote monitoring and management, this approach may further decrease labor and maintenance expenses.
In conclusion, this system highlights the potential for significant cost savings, though future studies will be necessary to validate the long-term financial sustainability of the proposed solution. Additional research could explore real-time implementation in diverse urban contexts to further assess both its financial and operational impact.
4. Conclusions
After carrying out all the necessary tests and validations of the Python code and the methods used in this work, a smart traffic light prototype was developed, demonstrating a viable implementation on the stretch between the two avenues in question. This prototype worked very successfully, and vehicle counting was achieved with high accuracy. The WOKWI simulation of the smart traffic light confirmed the validation of the prototype and of all the programming, as expected in the objectives.
Monitoring with a counter enabled obtaining data on traffic flow at different times and on how they impact local congestion. With a high counting accuracy, the second step was post-processing, effectively introducing a smart traffic light based on a timer determined by vehicle counting, and thanks to that, it can be demonstrated how vehicular flow at given moments can influence the decision-making of the smart system.
Another factor to be considered is that the code worked on different recordings at different angles, with both recordings achieving accuracy rates above 95%. Thus, these different fields of view proved the feasibility of future projects or research involving other intersection areas.
According to the results obtained, this study enables new studies based on the development of smart traffic lights, thus aiming at an increasing improvement in vehicular traffic and flow in the area, as well as an increase in accessibility involving both drivers and pedestrians who flow daily through these roads. The use of traffic counting through video recordings developed in this study significantly contributes to the economic viability of the project, as it reduces costs dramatically compared to traditional AI-based traffic sensors, resulting in potential savings of up to 99.71%. This affordability allows for broader implementation and research opportunities, making smart traffic management more accessible, especially in countries like Brazil.
The next step is the implementation of this prototype and the expansion of road monitoring for detailed traffic analysis. In addition, the prospects for improving the code for implementation in intelligent traffic management systems include real-time traffic flow assessment, queue formation analysis, and accurate travel time estimation. In this regard, additional research is already underway to optimize the prototypes, with a particular focus on real-time data processing. A significant positive aspect of this research is the active involvement of undergraduate students, who have participated in the development process from coding using open-source software to hardware modeling while also highlighting the low-cost nature of the implementation. The main challenge to be addressed concerns the infrastructure required for the assembly of the prototypes, as it necessitates stable power connections and secure, protected locations for computer installations.