Article

Inter-Urban Analysis of Pedestrian and Drivers through a Vehicular Network Based on Hybrid Communications Embedded in a Portable Car System and Advanced Image Processing Technologies

by
Eduard Zadobrischi
1,2,* and
Mihai Dimian
1
1
Department of Computers, Electronics and Automation, Faculty of Electrical Engineering and Computer Science, “Stefan cel Mare” University, No.13. Str. Universitatii, 720229 Suceava, Romania
2
Department of Computer Science, Technical University of Cluj-Napoca, Gh. Baritiu St. 26-28, 400027 Cluj-Napoca, Romania
*
Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(7), 1234; https://doi.org/10.3390/rs13071234
Submission received: 19 January 2021 / Revised: 18 March 2021 / Accepted: 21 March 2021 / Published: 24 March 2021
(This article belongs to the Special Issue Advanced Artificial Intelligence and Deep Learning for Remote Sensing)

Abstract
Vehicle density and technological development increase the need for road and pedestrian safety systems. Identifying problems and addressing them through systems that reduce the number of accidents and the loss of life is imperative. This paper proposes the analysis and management of dangerous situations with the help of systems and modules designed for this purpose. The classification of situations that can cause accidents is another feature analyzed in this paper, including the detection of elements of a psychosomatic nature: analysis and detection of the conditions a driver goes through, pedestrian analysis, and maintaining a preventive approach, all embedded in a modular architecture. The versatility and usefulness of such a system come from its ability to adapt to context and to communicate with traffic safety systems such as V2V (vehicle-to-vehicle), V2I (vehicle-to-infrastructure), V2X (vehicle-to-everything), and VLC (visible light communication). All these elements are reflected in the operation of the system and its ability to become a portable device dedicated to road safety based on hybrid radio frequency and visible light communication (RF-VLC).

Graphical Abstract

1. Introduction

According to World Health Organization reports, road accidents are a leading cause of death, responsible for more than 1.35 million deaths annually. These reports also identify the vulnerable categories found among the victims, such as pedestrians, cyclists, motorcyclists, and other drivers. Another important conclusion is that more than 93% of road deaths worldwide occur in low- and middle-income countries, which often lack adequate road infrastructure and are unable to provide the best conditions for road users, although these countries account for only approximately 60% of the world's vehicles. Many deaths are also caused by non-compliance with safety measures and failure to wear a seat belt; between 20 and 50 million people suffer non-fatal injuries, and some of these accidents cause serious bodily injuries followed by lifelong disability [1,2]. Beyond human suffering, there is also economic loss. Whether we analyze society, families, or nations as a whole, these events cause losses in both directions: the costs of caring for and treating victims, and the loss of labor; the economy loses capable people, and families lose members who provided stability. Road accidents resulting in casualties and loss of life cost about 3% of the gross domestic product of almost any country in the world [3]. Current approaches are meant to reduce some of the risk factors to which drivers are exposed, but many elements remain difficult to control, most notably the unpredictable human factor.
Human error is a major cause of road accidents, followed by excessive speed and failure to adapt to traffic conditions, driving under the influence of alcohol and psychoactive substances, lack of preventive driving, unsafe road infrastructure, old and unsafe vehicles, inadequate medical services, poorly enforced road laws and regulations, and emotional states or neurological diseases. Analyzing all this information, we can see that the proposed topic requires an approach and a debate both at the structural level and at the level of practical action, implementing viable solutions that properly address this issue and reduce the number of victims and the loss of human lives caused by road accidents.
The literature includes studies and analyses in this field since the 1970s, in particular the Histogram of Oriented Gradients (HOG), an efficient way to extract important features from an image in order to obtain a model that can be classified and later used to recognize a series of objects; here we can refer to some of the works of Badler and Smoliar [4]. Later approaches and future directions appeared in Badler's article [5] and those of Gavrila [6,7]. In addition, Mikolajczyk et al. [8] contributed analyses and practical demonstrations, and detection techniques were later refined by Dalal and Triggs [9,10]. Part of the presented literature belongs to the category of works that laid the foundations of image detection and analysis, without which there would not have been the qualitative evolution and homogeneous development of hardware and software components seen today. This complex field offers satisfaction in terms of results and predictability, but the detection of pedestrian or driver movements is difficult to achieve because the human factor is highly unpredictable; in 60–70% of cases movements are difficult to detect, automatically increasing the detection error. Technological advancement and market pressures have forced car manufacturers, most of them after 2014, to incorporate various efficient driving systems, safety systems, or assisted driving modules [11]. These systems are dedicated to car drivers because the human factor is often not attentive enough to meet the requirements of pre-installed applications. The human factor decides and initiates the actions that lead to certain movements in order to solve problems. All existing techniques are investigated and validated in intense experimental evaluations in external environments and uncontrolled conditions.
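To make the HOG idea above concrete, the following sketch computes the orientation histogram of a single cell in pure NumPy. It is a minimal illustration of the technique introduced by Dalal and Triggs, not the detector used in this paper; the function name and the 9-bin unsigned-orientation layout are our assumptions for the example.

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Simplified HOG: magnitude-weighted orientation histogram for one cell."""
    patch = patch.astype(float)
    # Finite-difference gradients (Dalal & Triggs use simple [-1, 0, 1] masks).
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees.
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist = np.zeros(n_bins)
    bin_width = 180.0 / n_bins
    for mag, ang in zip(magnitude.ravel(), orientation.ravel()):
        hist[int(ang // bin_width) % n_bins] += mag
    # L2 normalization, as used for HOG blocks.
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A patch whose intensity increases down the rows has a purely vertical
# gradient, i.e. orientation 90 degrees, so one histogram bin dominates.
patch = np.tile(np.arange(8.0)[:, None], (1, 8))
h = hog_cell_histogram(patch)
```

Stacking such histograms over a grid of cells yields the descriptor that an SVM then classifies as pedestrian or not.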
Therefore, in the design and development of a system dedicated to road safety, adaptability to all vehicles is an important contribution and a decisive step in the realization of new devices. The analysis and creation of modules for behavioral, emotional, and visual analysis, as well as for pedestrian predictability and safety in the passenger compartment of the vehicle or infrastructure, according to the strict requirements imposed by the competent authorities, outline another important contribution. In addition, a further contribution is the use of hybrid RF-VLC communications to transmit information to the external environment or to other traffic participants.
The remainder of this article is organized as follows. Section 2 addresses existing problems and solutions in the field of road safety and vehicle applications, highlighting the challenges and solutions set out in the literature. Section 3 presents the proposed system and the architecture on which it is based, including hardware and software components. Section 4 details the software architecture and presents experimental tests of the proposed road safety solutions. Section 5 provides a discussion of the experimental results obtained in processing and analyzing the developed sensory modules, confirming their usefulness and necessity in production, while Section 6 provides conclusions.

1.1. Automotive Safety and Future Perspectives

We can arrive at a future perspective on the further development of the system, and on its implementation in a vehicular network using V2V, V2I, V2X, or VLC communications, offering a complete solution to those who use the presented system. Based on the information obtained and processed, priority will be given to constraining certain elements or basic functions of vehicles, or to voice warnings to drivers about dangers or traffic events, thus reducing the number of road accidents. Although the last decade has seen a new shift in technology and the automotive industry, the field of communications between vehicles (V2V) or between infrastructure and vehicles (I2V) offers enormous potential for reducing traffic accidents and increasing road safety. According to existing studies in the area of intelligent transportation systems (ITS) and the reports of the National Highway Traffic Safety Administration (NHTSA), these communications have the capacity to positively influence traffic events by up to 75–80% [11].
The development of intelligent systems in the automotive field has a solid history with over a decade of activity, with several companies as pioneers in this segment: Volvo, Mercedes-Benz, Audi, Volkswagen, BMW, Toyota, Tesla, and others. These car manufacturers have managed to develop their own communication systems based on various technologies such as DSRC (dedicated short-range communications), implementing physical models through solid hardware components, sensors, cameras, and augmented reality. Existing applications demonstrate their necessity and usefulness by increasing road safety, especially through assisted driving equipment that supports the human factor in critical moments of reaction time or negligence. Each system can be stopped in extreme situations, without taking control of the car [12].
However, developing road safety systems based on various types of communications is a difficult process in terms of maintaining the high requirements and standards that guide the field. Applications developed for vehicle systems used in road safety and based on optical communications require low latencies; latency must be kept below 20 ms in order to leave useful time to transmit a detected collision or identified event. The literature likewise highlights the need to deliver data packets at distances of up to 300 m for prevention and the communication of information. Technological efforts so far have produced platforms and equipment based on sensory modules, as well as systems that use LiDAR sensors to build an overview of the area in which cars move. Car manufacturers have also implemented systems that project holograms onto the asphalt with different messages in order to warn pedestrians or other traffic participants about problems encountered. The multitude of road and pedestrian safety systems and methods continues to grow and develop the field on a trend beneficial to safe driving, yet one factor remains difficult to control: the human factor, which is unpredictable, difficult to model, and prone to unexpected conditions and reactions [13,14].
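The 20 ms latency budget and 300 m range above can be combined into a simple back-of-the-envelope safety margin: how much of the warning range is consumed by link delay and driver reaction before the vehicle can respond. The function below is an illustrative calculation of ours, not part of the described system; the 1.5 s driver reaction time is an assumed typical value.

```python
def warning_margin(speed_kmh, latency_ms, reaction_s, range_m=300.0):
    """Distance margin left from the warning range after the link latency
    and the driver's reaction time have elapsed (illustrative sketch)."""
    v = speed_kmh / 3.6                               # convert to m/s
    consumed = v * (latency_ms / 1000.0 + reaction_s)  # distance travelled
    return range_m - consumed

# At 130 km/h, a 20 ms link delay costs ~0.7 m, while a 1.5 s human
# reaction costs ~54 m; most of the 300 m range is spent on the human.
m = warning_margin(130, 20, 1.5)
```

This is why the paper's emphasis falls on the human factor: the communication link consumes a negligible share of the available warning distance compared with the driver.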

1.2. Motivation in the Field of Road Safety

This paper presents practical simulations of predictability in the field of assisted driving, analyzing the behavior of pedestrians, drivers, and preventive driving. The practical simulations are based on a mixed augmented reality platform and visible light communications, but also on road safety applications. The developed system is based on sensory modules installed both on the outside of the car and inside, with on-board cameras, exterior cameras, and communication modules between the vehicle interface and the developed system. The detected and processed elements are presented on a universal display that can be installed on the center console and communicates directly with the developed modules or through RF communications. Later, these can be upgraded by using VLC communications.
The analysis of pedestrians and the predictability of their next movements, as well as the analysis of the emotional states that drivers go through, are essential components presented in this paper. Identifying whether drivers are wearing their seat belts and analyzing all such elements leads to a model that produces messages which are then sent to the center console, and also to the car network, where they can be interpreted by other cars. This direction helps to resolve traffic events caused by carelessness, delayed reactions, or distraction by other activities on the part of traffic participants or pedestrians. Future research prospects offer the opportunity to develop the field of road safety, given the exponential growth in the number of car users and the large number of cars that benefit from pre-installed systems dedicated to safe driving. The development of applications and systems dedicated to the automotive sector has wide applicability and creates a series of different challenges that put both industry and the research community to the test. The analysis of pedestrians, drivers, and traffic events is diverse and involves different characteristics, whether of appearance, movement, height, speed, color, unpredictability, or other factors that can influence a situation. Over the years, however, these challenges have been addressed by automotive research groups and others, who have undertaken to analyze and characterize behaviors, movements, and states, to completely identify the profiles of pedestrians and drivers, and to predict future movements [15,16].
The motivation and necessity of the study in this direction are based on technological advances: in the next decade, vehicles will be built at another level and will address a much larger segment of customers. Cars will have built-in Artificial Intelligence (AI), Internet of Things (IoT), and Big Data systems. All these steps will shape a new lifestyle and will transform ordinary cars into autonomous vehicles. This has already been put into practice in short-term tests. The latest autonomous prototype designed without seats and a steering wheel in the front is the Rolls-Royce Vision Next 100. This vehicle already operates with a virtual assistant and built-in artificial intelligence known as Eleanor. Current systems are progressing with natural language processing (NLP), which aims to ensure direct interaction between the driver and dedicated intelligent systems. This element provides the interconnection between Computer Vision and the user, whose final goal is the identification of other vehicles, people, signs, and traffic management systems. IoT technologies outline a well-defined path through the technology's ability to interact with sensors and cameras and to connect with existing devices. Another aspect relates to LiDAR (Light Detection and Ranging) technologies, which are based on laser sensors installed in the upper part of the vehicle performing a 360° scan [17]. This process allows the vehicle to build a three-dimensional model of the area in which it moves, identifying objects within its range. We consider the evolution of this field significant, and we try to incorporate into intelligent systems all the technologies of the future, whether V2V, V2I, V2R, V2X, or VLC [18,19].
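For a single elevation ring, the 360° LiDAR scan described above reduces to a polar-to-Cartesian conversion of the measured ranges. The sketch below is our illustrative reconstruction of that step, assuming equally spaced azimuth beams; it is not code from any vehicle system.

```python
import math

def scan_to_points(ranges_m, elevation_deg=0.0):
    """Convert one 360-degree LiDAR sweep (equally spaced azimuth beams at a
    fixed elevation angle) into 3-D Cartesian points around the sensor."""
    n = len(ranges_m)
    el = math.radians(elevation_deg)
    points = []
    for i, r in enumerate(ranges_m):
        az = 2 * math.pi * i / n  # azimuth of the i-th beam
        points.append((r * math.cos(el) * math.cos(az),
                       r * math.cos(el) * math.sin(az),
                       r * math.sin(el)))
    return points

# Four beams at 0, 90, 180, and 270 degrees, each seeing a target at 10 m.
pts = scan_to_points([10.0, 10.0, 10.0, 10.0])
```

Stacking the rings produced at each elevation angle yields the three-dimensional point cloud from which objects around the vehicle are segmented.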

2. Related Work

2.1. Vehicular Communication and Driving Safety Applications: Issues, Related Work, and Methods

2.1.1. Current Stages in DSRC, V2V, V2I, and V2X Communications

The current situation shows that most vehicles benefit from sensor-based equipment that can participate in crowdsourcing applications. They can implement vehicular ad hoc networks (VANET) based on DSRC-type technologies for data transport. There is also multi-hop transmission for data collection between roadside units (RSU) in VANET, but under these conditions transmission may be delayed and extremely difficult [20]. To address these issues, academia is trying to build hybrid networks that incorporate these technologies, transforming them into new communication protocols dedicated to the timely transport of data between vehicles and infrastructure. The most intensively studied mechanisms use centralized, single-hop clustering in order to maximize the data supply. However, there is a compromise between reducing the cost of cellular bandwidth and the delay incurred. Thus, data delivery is managed in the agreed system through multi-hop (V2V) transmissions or cellular networks. The lack of ideal protocols and technologies has brought attention to C-V2X (Cellular Vehicle-to-Everything), aimed at autonomous and dedicated driving in intelligent transport systems. Its variety and flexibility allow an evolution from LTE (Long Term Evolution) V2X to NR (New Radio) V2X. These elements indicate complementarity and a capacity to work together in order to obtain low latencies and ideal reliability, and studies have highlighted the high efficiency of such hybrid communications. At the heart of C-V2X communications are cells that carry vital information, periodically broadcast from vehicle to vehicle through cooperative awareness messages (CAM) [21,22].
The research highlights several scheduling schemes, notably sensing-based semi-persistent scheduling (SB-SPS), in which a vehicle senses its environment and reserves resources in time and frequency among the channels available for CAM transmission. Another aspect the academic community has in mind is real-time channel sensing and the adjustment of transmission power in order to avoid interference with other vehicles in the network. These aspects are the purpose of all research on systems dedicated to transport and road safety, regardless of the communication protocol used. The recent literature highlights that V2V and V2I communication systems are decisive and significant elements in traffic optimization and the development of road safety systems. Therefore, new standards have emerged in this direction, ITS-G5 (IEEE 802.11p) and C-V2X (3GPP), highlighted and presented above [23,24]. The performance of both standards in terms of the physical and associated MAC layers is presented in [25], including link capacity and their use in obtaining notable performance in a loaded network. It also shows that the performance advantage offered by C-V2X at low densities shrinks as congestion grows, until ITS-G5 closes the gap and exceeds C-V2X. Latency and reliability are also analyzed in [26] in the context of a new generation of communications that can raise the standard of requirements, decrease latency, and increase reliability so that the delivery process is completed in a timely manner. Expectations of fifth-generation (5G) networks are extremely diverse in terms of the requirements they have to meet.
Although C-V2X has progressed to LTE V2X Release 14, this is only in the context of improving reliability and low-latency performance to meet even the most demanding V2X applications. Critical applications in dynamic environments are the most promising subjects of research studies because they highlight the limitations of a technology or protocol. Most communication models dedicated to the road sector and to traffic safety, as well as to non-safety purposes (traffic fluidization, prevention, and analysis), are based on radio technologies. Whether we are talking about direct C-V2X communications or the IEEE 802.11p standard [27], all these radio resources become saturated in an urban environment characterized by congestion and vehicle density, and problems arise. These issues are analyzed throughout the literature, which, however, does not focus on a mix of emerging technologies functioning as a hybrid platform. That is why we consider the hybrid RF-VLC communication approach useful in road safety and traffic management systems [28,29].

2.1.2. Debate on the Directions of Development in Road and Pedestrian Safety

Studies and research conducted to eliminate or significantly reduce the number of road accidents show that it is important to address pedestrian intentions and the predictability of their movements. The emotional states of drivers, which condition or impose certain barriers and produce behavioral reactions, are often responses conditioned by what the driver is going through, subsequently requiring the analysis and detection of preventive behavior and of seat belt use. According to studies, the factors presented above underlie many road accidents and adverse traffic events. Uncontrolled reactions or unpredictable movements of drivers or pedestrians can, in a fraction of a second, produce an unfortunate event that leads to loss of life. A large part of reaction and driving style is directly influenced by the emotional states a driver or pedestrian goes through, changing their degree of concern, related activities, attention, confusion, nervousness, and other factors that distract them and make them vulnerable. According to the medical literature and analyses performed by Dr. Peter Noel Murray, emotions play a role and behavior is directly influenced by them. This study was performed using magnetic resonance imaging (MRI), observing in detail the brain activity and emotions produced by states, feelings, deeds, and reactions [30,31].
Therefore, emotions force human beings to act and decide in a timely manner, especially if we exemplify this through a case study of angry people who in a state of reflex just want to get out of that area or to ‘face the opponent’. Such reactions are also provoked in the case of happy or unhappy drivers; others exult and emanate positive states that make them focus only in that direction and others do the opposite, but the conclusion is that both states cause altered movements and lead to inconsistency whether in driving or walking.
We can say that the role played by emotions and other mental states, including fatigue while driving a car, is essential. These become extremely important factors in influencing decisions affecting types of reaction, perception and organization of memory [32,33], details, classification, preferences, evaluation, decisional analysis [34,35], concentration, attention, performance, and communication [36,37]. The efforts made by academia and research teams, including the development of intelligent systems to analyze the human factor, have progressed in a certain direction, but insufficiently in terms of the number of road accidents. The analysis of the driver with each use of the vehicle is not enough because conditions can change quite often and unpredictably. A driver’s emotional state must be uniform and support his/her abilities without disturbing certain activities. Thus, we consider extremely important the direction of research developed by exposing the problems in practical cases and solving them with the help of systems dedicated to traffic safety and pedestrian safety. These developed and developing components are meant to analyze and process all the necessary information from the external environment and to transmit it to drivers, vehicles, pedestrians, but also to intelligent traffic systems, all within a hybrid-mixed platform to reduce the problems previously mentioned.

2.1.3. Debating the Current State of RF-VLC Hybrid Communications in Road Safety Applications

The current proposal and direction in which our research group's applications are developing can only be compared with certain modules currently existing in the literature. This is due to the parallel development of several systems based on different communication principles and protocols. Some of them work on the basis of RF communications, some with VLC communications, and this paper incorporates these communications, in the first stage, in a hybrid RF-VLC form. The main goal is to highlight the capacity and portability of the developed system, as well as the amount of information that can be analyzed, processed, and transmitted through such a system, whether via RF or VLC communications. In this first stage, the development of the system is presented for demonstration purposes, in order to highlight the need for a system that addresses the deficiencies that currently exist in production, as well as issues that have not been addressed by other research groups. Improvements are constant and maintain a direction that solves as many of the problems society faces as possible. Future steps also include devices to be installed on the surface of cars to transmit and receive information from other vehicles at a considerable distance. Current studies offer feasible perspectives because the realized VLC links can support the transmission and reception of information at distances of up to 300 m under ideal conditions [38].
The analyses and research carried out were aimed at performing tests in real scenarios and at the behavioral investigation of pedestrians and drivers in an external environment and an existing infrastructure; sensory and video analysis made these elements true sources of information and factors in decision-making aimed at preventing loss of life. In the first phase, the development of intelligent systems dedicated to the automotive sector is based on modules designed for adaptability to different situations and on universal design, in order to have a greater impact on users. Part of the developed components focuses on prevention and detection inside the vehicle, and another part on the external environment, which is very volatile and full of challenges and insecurity. The novelty and adaptability of the developed system, which is in a continuous process of improvement, lie in its ability to identify and transmit information through several types of communication, whether visible light, 4G, or Wi-Fi. These communication technologies facilitate extremely high data traffic, high-quality video calls, online television, and the video applications that the fourth generation (4G) offers. The frequency bands lie between 1800 and 2600 MHz, which can facilitate information transfer at speeds of about 3 Gb/s [39,40]. Although 4G communications, also available to the public, were used in the scenarios and applications, the fifth generation runs at speeds between 10 and 50 Gb/s, initially using the 1800 MHz and 2600 MHz bands, with subsequent extensions into the 30–300 GHz range. Studies conducted by private research centers show that, in laboratory tests under ideal conditions, the technology reaches speeds of about 1 Tb/s.
To consolidate these claims and present them to the public, in 2019 the company Einride, which develops applications for Ericsson, managed to control a truck remotely over a distance of about 1200 km, between Gothenburg, Sweden, and Barcelona, Spain. The results and the way the tests were performed made the experiment a success, and the conclusion was that, in the field of vehicle applications, 5G communications can manage and communicate information in a timely manner with extremely low latencies [41,42].

3. Methods

3.1. Research Trends in Automotive Safety Applications and Hardware Design and Implementation

Given the previous context and the usefulness of 4G and 5G communications in in-vehicle applications, the degree of complementarity between RF and VLC communications is extremely high, so we can rely on a mix of technologies to obtain an ideal platform for communication between vehicles and infrastructure. Visible Light Communication (VLC) proposes the use of visible light (380–780 nm) as a data carrier; this gives us the possibility of obtaining a bandwidth of approximately 400 THz, which facilitates an increase in communication speed, reaching transfer rates of hundreds of Gb/s [43,44]. By comparison, DSRC technology is divided into seven channels of about 10 MHz each; nevertheless, the technologies discussed can be complementary and can streamline certain branches of transport, especially the development of systems dedicated to intelligent traffic management [45]. As a first step, the system is based on hardware elements developed as needed, filling certain gaps with commercial modules for data acquisition and transfer and for communication between vehicles. Information is collected inside the vehicle through the CAN port with the help of an OBD dongle, and information about the passenger compartment is obtained through sensory and video analysis. The information is exposed individually and distinctly depending on the prioritization of events and their severity, including sounds and voice messages perceptible to the user. The information is presented on an interface developed in Angular and .NET for the acquisition of data and their timely display after processing from sensors or cameras [46,47]. The software application allows multiple network connections and external access, even from mobile devices, for better exposure to users who are not currently in traffic.
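The ~400 THz figure quoted above follows directly from converting the wavelength limits of visible light to frequencies, and contrasts starkly with the seven 10 MHz DSRC channels. A quick check, written by us purely as an illustration:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def band_hz(lambda_short_nm, lambda_long_nm):
    """Frequency span of an optical band given its wavelength limits."""
    f_high = C / (lambda_short_nm * 1e-9)  # shortest wavelength -> highest frequency
    f_low = C / (lambda_long_nm * 1e-9)
    return f_high - f_low

visible = band_hz(380, 780)  # the VLC band used in the text, ~4e14 Hz
dsrc = 7 * 10e6              # seven 10 MHz DSRC channels, 70 MHz total
```

The visible band is roughly six orders of magnitude wider than the entire DSRC allocation, which is the physical basis for the hybrid RF-VLC argument.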
A universal navigation unit was designed to display the information, adaptable to any type of car dashboard, with dimensions suitable for fitting on the middle console. Its design and implementation were guided by a detailed analysis of the dimensions available on the center console of cars.
The designed device is based on an intensive study of the available space in the passenger compartment and the middle console area, so it can be incorporated either on the console or inside it, for a better user browsing experience and a high comfort factor. Following these studies, 3D elements were designed and later built into a portable navigation unit with a touchscreen of about 17 cm diagonal, a height of 10 cm, and a depth of 7 cm, sufficient to incorporate the electronic components inside. The current stage of the system requires a new analysis of the dimensions and a redesign of the whole assembly in order to obtain more generous space for the necessary upgrades.
In line with the previous considerations on mobility and future flexibility, circuits dedicated to the connections and interconnections between the data acquisition modules and shields for the sensor packages were designed. Raspberry Pi 3 and Pi 4 boards with a 7-inch touchscreen and a tablet or mobile phone control module were used for the interface. In terms of optimization and processing, the final configuration was upgraded with a Raspberry Pi 4 board, with a 1.5 GHz processor and 4 GB of RAM, supplemented by an Intel Coral neural accelerator USB stick to achieve increased processing power and accuracy in the measurements performed. Figure 1 shows the designed assembly with all the components and modules contained by the system in the first stage of development. Another microcontroller used for control and data acquisition was the Teensy 3.6, which is well suited to processing information and transmitting it to other devices in the dedicated system.
According to the simplified architecture in Figure 2 the main module is based on ARM Cortex-M4 with the main purpose of Debug-Interface and process control and parsing instructions from the other module, based on a Samsung K42G324ED processor with 2 GB-DRAM memory to instantiate the video elements and their interconnection with the message decoding and construction module. The input of images is made from several sources, whether we are talking about the cameras installed in the passenger compartment of the car or about a camera mounted outside for a much higher degree of comparison and increased accuracy in obtaining conclusive and real measurements usable in the research process [48,49].
For communication with the system, an improved semi-commercial module with OBD II connectivity through the vehicle's CAN port is used. It is designed to communicate via 3G, 4G, LTE, Wi-Fi, or Bluetooth, the platform being built on a Raspberry Pi W board able to provide sufficient processing time and sufficiently secure, stable connectivity for quality communication under ideal conditions. This can be improved in the future by mounting a dedicated antenna to increase stability and decrease latency. All these components are exposed through a dedicated interface developed, as previously mentioned, in Angular and .NET with some elements built in React JS, the data being collected through an API embedded in a cloud platform. This first stage is strictly addressed to the analysis and detection of risk factors leading to road accidents; the system must then be developed in the direction of vehicular and inter-vehicle communications using VLC, V2I, V2V, or V2X in order to provide a solution dedicated to problems faced worldwide. These changes and improvements must be made in accordance with the legislation in force, without adjustments that would disrupt or affect people's lives. The main concern is to obtain relevant information and to signal aspects that can have a negative impact, but without endangering the lives of drivers or pedestrians through constraints that restrict their rights.

3.1.1. Description of Prototypes and Practical Scenarios

Considering the complexity of road accidents and the way in which more and more of them result in the loss of human lives, we consider every aspect of the analysis and management of traffic events important, including events in adjacent areas or along the route that a car follows. Thus, over the course of a month, trials and scenarios concerning traffic events were run at different times and in different locations in order to highlight as many elements as possible that can negatively influence the general situation. In the first stage, the busiest routes are analyzed, both from the point of view of traffic and of areas heavily transited by pedestrians. Many accidents are caused by pedestrians who simply neglect traffic signs or misjudge certain movements that later make them vulnerable to cars and to the reactions of drivers. We consider that by investigating and describing certain characteristics and warning the driver about a possible non-compliant movement of pedestrians, an accident can largely be avoided. Events outside the car can therefore count in equal proportion with those inside, so we classify the practical scenarios and simulations according to these priorities. Another aspect treated is the analysis of the road infrastructure, traffic lights, and markings, as well as traffic participants; all these elements are identified and prioritized according to the severity or potential of each possible event to lead to a road accident. The distances and times that cars travel in certain scenarios are analyzed, and vehicles that do not maintain ideal behavior in traffic are catalogued by their main risk factor and highlighted.
All these elements are considered risk factors and increase the complexity of an unforeseen event in traffic. To complete the list of factors that negatively influence the smooth running of events, aspects inside the passenger compartment were also taken into account, namely the safety elements that the driver is obliged to use and that protect those around him. Perhaps the most important component, not exposed to the general public, is one that drivers may consider a restriction or violation of their human rights: the situation in which we analyze a driver behaviorally and emotionally and, through this analysis, highlight features related to the mental state of the users of a car. We know very well that psychological tests are mandatory in order to acquire the right to become a driver and then to renew a permit or move to another vehicle category. These psychological tests are performed superficially and do not address all behavioral deviations or deficiencies that can have an extremely serious impact on the act of driving. Each behavioral deviation or impaired reaction can disrupt the driver's cognitive process, and automatically there is the likelihood that certain maneuvers are performed too late.
Another aspect analyzed inside the passenger compartment, treated in the first phase and also in the simulations performed, is the observance of the safety measures that a car provides, more precisely the seat belt, which in 60–70% of cases can alleviate the impact of the car with an obstacle, reducing the likelihood that the driver will suffer serious injuries. The seat belt is a measure of prevention and protection for both drivers and passengers; although safety systems have been improved over the years, when the belt is not buckled the driver is warned but not forced to use it. An element of restraint easily becomes, for a driver, a reason to consider himself under unjustified pressure or to feel that his rights are violated, so many drivers look for ways to evade the use of the seat belt or other means of restraint. Thus, in the analyses and studies performed, either through image analysis or frontal sensors, elements related to the use of the safety systems can be identified, and the driver is compelled to comply. The constraints are mechanical in nature, so that the vehicle is not usable until all safety measures have been taken, thus obtaining a high degree of prevention. All the systems developed so far are meant to offer comfort to users, but no definite obligations are imposed on them, even though observing these obligations could save their lives.
To complete the system, the same mode of reception is used with the help of photodiodes, installed in traffic light areas to capture the signal and perform the process of data acquisition, amplification, processing, and decoding. As mentioned above, intelligent traffic lights have been developed over the years, and remarkable results have been achieved in data transmission, even over distances of 100–150 m in favourable weather conditions, while maintaining a BER on the order of 10^−7 [50,51]. At a future stage, these prototypes will have the role of communicating with the traffic management systems, but also with the vehicle control and road-object detection systems, behaving as a hybrid network in which the communications support the creation of an infrastructure dedicated to safe driving. Studies and technological developments in production have focused more on user-based safety systems meant to protect the driver and prevent unforeseen events, which is extremely useful but has not kept pace with the systems presented in this paper, and there are many factors that require thorough analysis. The technological mix presented is feasible from all points of view, and its integration into existing systems is only a matter of time. The information exists, and the existing infrastructure allows the necessary changes to revolutionize traffic and management systems without incurring additional economic costs.
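As a side note on the link-quality figure above, a BER estimate is simply the fraction of corrupted bits between the transmitted and received streams. A minimal sketch (our own helper for offline evaluation, not part of the hardware described):

```python
def bit_error_rate(tx_bits, rx_bits):
    """Fraction of received bits that differ from the transmitted ones."""
    if len(tx_bits) != len(rx_bits):
        raise ValueError("bit streams must have equal length")
    return sum(t != r for t, r in zip(tx_bits, rx_bits)) / len(tx_bits)
```

At a target of 10^−7, on the order of ten million bits must be compared before a single error is expected, which is why such figures are quoted for long captures under favourable conditions.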
We can say that the systems can match reality much better by using the perception sensors needed for depth analysis, which could reproduce the stereo vision specific to human beings, or by using LiDAR sensors as an alternative method to compare the identification results with those of camera-based systems and image processing. All the elements presented are under development and testing in order to provide a viable solution in the evolution of road safety systems dedicated to safe, accurate driving and the protection of pedestrians and other road users.

3.1.2. Description Pedestrian Setup and Practical Scenarios

Automotive technology analysis and vision systems gravitate toward computerized management and driverless implementation, often omitting some extremely important elements. In the last decade, studies have continued to increase computing power, and implementations based on convolutional neural networks have developed rapidly, leading to notable results in the field of computer-vision driving assistance. The most commonly used methods are the R-CNN series (Region-Based Convolutional Neural Networks) and SSD (Single Shot MultiBox Detector), but also YOLO, developed by a team operating within the Darknet platform. YOLO (You Only Look Once), the algorithm proposed by Redmon and Girshick [52], detects objects by regression, performing localization and classification from one end of the image to the other in a single pass. This places it at the top in terms of speed, but with a low degree of accuracy for small objects and a higher error rate in pedestrian scenes with a high degree of complexity [53,54].
Therefore, the YOLO algorithm has been constantly improved, and the latest version, known as YOLOv3, uses the K-means method for the automatic grouping and selection of the best regression boxes from the initial stage to outline the data set. Another mechanism, the multi-scale anchor mechanism [48], was adopted to overcome past shortcomings and increase the detection accuracy for objects of a certain size. For better, higher-quality results, Liu, together with Anguelov and other collaborators, proposed in [55] the SSD (Single Shot MultiBox Detector) algorithm, which uses regression for position detection and classification in a single network. The SSD network was also built on VGG16 [56], replacing the fully connected layers with extracted convolutional layers. Each convolutional layer produces a feature map used as an input for predictions, resulting in feature maps at several regression scales. The low-level feature map contains much more useful information for preserving details and obtaining training sets, improved in quality and accuracy by back-propagating errors. The improved YOLO algorithm obtains much better accuracy for small objects, and its detection rate and speed have increased substantially in real-time applications. To use such an algorithm in traffic safety or assisted-driving applications we need extremely good real-time behavior and performance far superior to existing algorithms. Thus, there is a tension between the characteristics of the algorithms in the trade-off between speed and accuracy, as well as many other factors that could favorably adjust the detection process, such as the feature extractor and the size of the input images.
Therefore, the use of SSD and Faster R-CNN (Region-based Convolutional Neural Network) architectures, together with MobileNet or Inception feature extractors, helps complete the detection process proposed for the simulation scenarios while also increasing processing speed. For increased accuracy, COCO-trained SSD models should be used, which increase fast-processing capabilities, but in terms of efficiency and accuracy some false detections were still obtained. Following the analyses and studies performed on the existing models, the algorithms were improved and subsequently configured for integration into the simulated scenarios, yielding satisfactory results in the first simulations, consistent with the previously proposed hardware architecture. The processing and analysis of input blocks were based on an architecture built on MobileNetV2 and YOLOv3, improved by increasing the analysis capacity over several blocks and by using the Intel Coral neural accelerator to reduce processing time [57,58].
From the graphical representation and collected data, one can observe the distribution of reference detections according to the models presented and used. Each configuration has a meta-architecture selection model with a model-specific feature extractor (ResNet and Inception-ResNet), including input resolutions, or a number of proposals to increase R-CNN and R-FCN processing. Another method used to obtain results efficiently and with the needed parameterization is the cascade operating mode, whereby traversal time decreases by completing a much more efficient subset of data [59,60]. Tests conclude that R-FCN and SSD models offer better speeds, as opposed to R-CNN, which tends toward slower models but at the same time offers significantly higher accuracy, even if the processing time sometimes reaches 100 ms per image. If the analyzed elements are found in a single region, subsequently limited to a single tracked object, the processing time increases and the regions proposed for analysis are automatically limited. Image detection and analysis systems come with flaws in terms of the restrictive detection threshold that sometimes cannot be exceeded for various reasons. All these results have led to the development of a mixed, hybrid solution that can overcome the obstacles mentioned above and meet all the requirements and unforeseen elements of the external environment, with heavy emphasis on adaptation to context. The changes to the model aim to outline an improved algorithm for whole-image analysis in order to increase the degree of predictability, with the possibility of analyzing the overall context of the input images. Generally speaking, the prediction time is much better than the standard of existing models, and the feature-analysis algorithm can reach a speed of about 100x that of Fast R-CNN and 1000x that of T-CNN [61,62,63].
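Since the number of region proposals directly drives the processing times discussed above, a standard way to prune them is non-maximum suppression (NMS), which keeps only the highest-scoring box among strongly overlapping candidates. The following library-free sketch is our own illustration, not the exact code of the models compared:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_thr=0.5):
    """detections: list of (box, score); keep high scorers that do not
    overlap an already-kept box by more than iou_thr."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, k) < iou_thr for k, _ in kept):
            kept.append((box, score))
    return kept
```

Raising `iou_thr` keeps more overlapping detections, while lowering it prunes more aggressively, trading recall in crowded pedestrian scenes for speed.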

3.1.3. Discussion on the Architecture and Algorithms

The choice of using anchor boxes is also found in Faster R-CNN and SSD, but there the size of the box is set manually, which causes the entire network to converge quite slowly when preparing the training set and subsequently to reach only a local optimum. Thus, YOLOv3 draws on the anchor-box mechanism of Faster R-CNN, the novel element being the K-means grouping method applied to the experimental data set for the optimal grouping of the anchor boxes. The classic K-means grouping method uses the Euclidean distance function, which means that larger boxes produce larger errors in the detection clusters; thus erroneous groupings occur and can deviate from the standard. Therefore, we adopted the IOU [64] (the overlap ratio between the generated candidate boxes and the original marked boxes) as the measure evaluated after grouping, so that the error caused by the scale of the box can be avoided. With the elements presented above, the distance function can be calculated according to Equation (1) below:
d(box, centroid) = 1 − IOU(box, centroid)    (1)
where box is the sample, centroid is the central point of the cluster, and IOU(box, centroid) represents the overlap ratio between the candidate box and the center box.
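The distance in Equation (1) can be plugged directly into a minimal K-means routine for anchor clustering. The sketch below is our own illustration, assuming boxes are given as (width, height) pairs aligned at a common corner, as in the YOLOv2/v3 formulation; it is not the exact code used in the system.

```python
import random

def iou_wh(box, centroid):
    """IOU of two boxes given as (w, h), aligned at a common corner."""
    w1, h1 = box
    w2, h2 = centroid
    inter = min(w1, w2) * min(h1, h2)
    return inter / (w1 * h1 + w2 * h2 - inter)

def kmeans_iou(boxes, k, iters=100, seed=0):
    """K-means over box shapes using d = 1 - IOU (Equation (1)) as distance."""
    random.seed(seed)
    centroids = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            nearest = min(range(k), key=lambda j: 1 - iou_wh(b, centroids[j]))
            clusters[nearest].append(b)
        updated = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
            if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
        if updated == centroids:
            break  # converged: assignments no longer change the centroids
        centroids = updated
    return centroids
```

Because identical boxes have IOU = 1 and therefore distance 0, large and small boxes contribute on an equal footing, which is exactly the scale invariance the Euclidean distance lacks.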
For the pedestrian detection and predictability component, the essential data sets were extracted so as to outline the ideal models when grouped, and the IOU scores for different values of k were subsequently compared. Thus, only the results with K values of x are chosen in the end, considering the complexity of the model given for analysis. The limitations imposed on pedestrian-identification predictability were augmented with a series of parameterizable and automatic reactions, and the results obtained were much clearer and more concise with respect to the problems exposed in the paper. Regarding the YOLO algorithm, it is an end-to-end network, so the loss used throughout the calculation process is a sum of squared errors [65,66], a simple addition of differences: coordinate errors, IOU errors, and, in the most unwanted cases, classification errors. This loss function can be expressed by the following Equation (2):
loss = ∑_{i=0}^{s²} (coordErr + IOUErr + clsErr)    (2)
When the loss function terms are added, the share of each component in the total expected loss must also be taken into account. A coordinate error weighted the same as the classification error makes the model unstable, and during the simulations it will diverge. When calculating the IOU error, objects may or may not be present in a given grid cell, with different contributions to the loss rate of the detection network. If the objects carry a similar amount of information, they can produce a high degree of confidence in the analysis process; otherwise the confidence is approximately zero, which distorts the process and the influence of the errors in calculating the parameterizable gradient of the network. The improved YOLO largely solves these problems by taking the square root of the size-related elements (w and h) of the analysed object, but it does not completely solve the problems of detection and contextual analysis of pedestrians. In order to outline a software structure able to expose the information as accurately as possible, a set of detection models was built and added, containing elements with characteristics and parameters usable in the proposed approach. The detection models are created strictly for the analysis and predictability of pedestrians, with several ways of classification and identification in different settings, whether indoor or outdoor environments, human characteristics, or elements that can disrupt the process (see Figure 3).
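The weighting and the square-rooting of w and h described here can be made concrete for a single grid cell. The sketch below is a simplified illustration using the commonly quoted weights λ_coord = 5 and λ_noobj = 0.5, which are assumptions and not values taken from the system's training code:

```python
import math

def cell_loss(pred, target, lambda_coord=5.0, lambda_noobj=0.5):
    """Simplified YOLO-style loss for one grid cell.

    pred/target: dicts with keys x, y, w, h, conf and a class-probability
    list 'classes'. The square root of w and h damps the influence of
    large boxes relative to small ones, as discussed in the text.
    """
    obj = target["conf"] > 0
    coord_err = 0.0
    if obj:
        coord_err = lambda_coord * (
            (pred["x"] - target["x"]) ** 2 + (pred["y"] - target["y"]) ** 2
            + (math.sqrt(pred["w"]) - math.sqrt(target["w"])) ** 2
            + (math.sqrt(pred["h"]) - math.sqrt(target["h"])) ** 2
        )
    # Confidence (IOU) error is down-weighted when no object is present,
    # so the many empty cells do not swamp the gradient.
    w_conf = 1.0 if obj else lambda_noobj
    iou_err = w_conf * (pred["conf"] - target["conf"]) ** 2
    cls_err = 0.0
    if obj:
        cls_err = sum((p - t) ** 2
                      for p, t in zip(pred["classes"], target["classes"]))
    return coord_err + iou_err + cls_err
```

Summing `cell_loss` over all s² cells gives the total of Equation (2); the unequal weights are precisely what prevents the instability described above.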
The implemented architecture is based on standard elements, without complicating the analysis process, classifying the information as the process advances in the desired direction and outlining each identified pedestrian with a transparent gradient in one of two shades, red or green. These shades identify a pedestrian who may resort to an uncontrolled movement while being at a short distance from the car. Pedestrians who have a buffer distance of approximately 15–20 m between themselves and the vehicle are not identified as possible hazards, because that distance allows a maneuver that does not endanger anyone. Based on the proposed architecture, future changes can be made to optimize the software so that it identifies a larger set of data and classifies, in an organized way, other elements of major importance for traffic and its safety. The degree of complexity and accuracy that this system has shown in the first stage offers solid prospects for future research and full development. The process of delimiting and contouring the images filters them, performs detection, and manages each identified feature using a logistic regression model. When the height and width points (h and w) are well defined, we calculate the middle of each grid cell, obtaining a prediction point within that object. If the use of independent logistic classifiers offers an extremely high degree of accuracy and efficiency, we are guaranteed a better prediction class, and the results automatically expose characteristics and analyses with a high degree of accuracy. To complete a data set, the parsed objects go through a step in which each image in the prepared set of models is compared point by point, so that the complexity of movement in the parameterized data set becomes much easier to handle.
This makes the process much simpler: analysis by overlap becomes possible, and pedestrian detection is performed at the same time, without interleaving and without problems in the detection process.
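The red/green gradient and the 15–20 m buffer described above reduce to a simple decision rule. A toy sketch follows (the function and parameter names are ours, and the real system fuses more cues than a single boolean):

```python
def pedestrian_alert(distance_m, moving_toward_road, buffer_m=15.0):
    """Return 'red' for a potential hazard, 'green' otherwise.

    A pedestrian is flagged only when closer than the safety buffer AND
    moving in a way that may intersect the vehicle's path; beyond the
    buffer there is time to maneuver safely, so no alert is raised.
    """
    if distance_m < buffer_m and moving_toward_road:
        return "red"
    return "green"
```

In the real pipeline the motion flag would come from the tracker's trajectory estimate rather than being supplied directly.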
Through the process of analyzing and tracing the overlapping and detection characteristics in two layers, we succeed in building feature maps, which are later needed in the concatenation process. This process becomes complex and requires a combination of factors demanding high computing power, so properties such as detection speed are automatically affected; a tempting method would be to use Tiny-YOLOv3. This version, however, does not provide the expected results, being a simplified version of YOLOv3. The mini-network contains only seven convolutional layers and approximately six pooling layers, and it is about five times slower in the sampling required to obtain ideal detection processes [67,68]. The structure of this network outlines, through its annexes, a mini-network that can increase speed but loses accuracy. The tests performed for the purpose of this paper improved on this solution with about three further convolutional layers, improving the feature-extraction process, but this was not enough. Using the improved YOLOv3 base and implementing analysis and detection on two layers in parallel automatically increased both the accuracy and the degree of parsing of each detection procedure, consolidating the analysis performed for each concatenated feature map. Using this method, the results obtained are largely the best of the entire process undertaken, and we consider this study useful in the field of road safety and pedestrian detection. In the end, the method adopted managed to obtain semantic information with an increased degree of fine granularity, in more plastic terms bringing the smoothing of the images that the whole detection process needed.
The addition of new convolutional layers considerably increases quality, but it also demands computing and memory resources; therefore, at this moment, in the second stage, elements are used that improve performance through computing power better suited to nonlinear execution processes, subsequently obtaining network feature-extraction processes with increased detection accuracy.

4. Results

4.1. Experimental Evaluation and Results

The simulated scenarios were drawn up and run several times on routes stretching over a distance of about 6 km, set according to the flow of vehicles and traffic congestion and where there was a greater flow of pedestrians (see Figure 4). In creating the scenarios, unforeseen events such as blockages were taken into account, so the measurements lasted about 25 days, with most results obtained on weekends. The crowded times were around noon, between 11:00 and 14:00. The distance of approximately 6 km was travelled on the crowded streets of Suceava near the campus of Ștefan cel Mare University, more precisely on Ștefan cel Mare Boulevard and on the connecting George Enescu and Universității Streets, with a flow and transit capacity of between 800 and 1300 cars up to the maximum degree of congestion [61]. Out of all the outlined scenarios, approximately 10 were selected, those most loaded with information and lasting more than 180 s, in order to analyze a larger road sector and highlight as many elements as possible. In all these scenarios, difficult conditions are tested, e.g., sunlight falling perpendicular to the road surface, which automatically disrupts the measurements, the speed of the vehicle, the interleaving of objects in the detection plane, and the clothing of pedestrians.
All these features are analysed while gradually increasing both the flow of pedestrians and the speed of movement, in order to gauge the progress of the first version of the proposed system. The first four scenarios, presented in Figure 5, are performed on secondary alleys and streets that do not contain many hard-to-identify elements, ideal for outlining training sets. The speed in these scenarios is between 15 km/h and 20 km/h, ideal for detection; under these conditions any negative event can be remedied in a timely manner without endangering the lives of traffic participants or pedestrians.
After this stage was completed, largely as expected, three scenarios follow on a route with an increased density of vehicles, pedestrians, and traffic lights, and with people who do not pay attention and rush into crossing the streets carelessly. At this stage, the results were as expected and there were no technical problems, apart from those created by sunlight, which in some areas obscured the measurements and the identification of pedestrians by covering them with light reflected either from the asphalt or from car windshields. The last set of scenarios increases the degree of difficulty because of the complexity of the route and the density of vehicles and pedestrians. The travel speed increased, reaching 65 km/h in some sections, at which point the identification and prediction process shows intermittencies and deficiencies in finding the necessary characteristics. The first part of this scenario takes place on a lightly circulated secondary alley. The moment the car travels on the main road, the first disturbing factor is sunlight, which constantly falls on the surface of the windshield, hood, and road.
This factor disrupts both the movement and the detection process, during which time no pedestrians were identified. After turning the car onto an ideal route, we notice that as the speed increases the analysis and detection of pedestrians is delayed; the speed was between 55 km/h and 62 km/h (see Figure 6). In some cases detection was made, but the identification in the image appears delayed, a problem caused by the hardware. The elements that created problems can be remedied both by adjusting the hardware components and by adding optical filters to reduce the FOV (field of view) and extract parasitic light, but also by adding new video cameras, both at the front of the car and in the side mirrors, for a wide spectrum of observation [69]. All these elements bring significant gains in quality and accuracy, with more sources and more comparison areas.
The analysis of all scenarios is summarized in Table 1, which details the number of frames, travel speed, errors, and accuracy shown by the proposed module. Another aspect that is not negligible is that HOG (Histogram of Oriented Gradients) features can be implemented for image processing. This would greatly increase the processing performance, and it is imperative to analyze the gradient several times as a linear operator. The accuracy of pedestrian detection may have decreased because of the total number of frames per second and the amount of information to be processed, as can be seen in the table below.
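The HOG features mentioned above summarize a cell of pixels as a histogram of gradient orientations weighted by gradient magnitude. A minimal pure-Python sketch of one unsigned-orientation cell follows (our own illustration; production implementations add block normalization, bin interpolation, and a classifier stage):

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Unsigned-gradient orientation histogram for one cell.

    cell: a small grayscale patch as a list of pixel rows. Border pixels
    are skipped because central differences need both neighbours.
    """
    hist = [0.0] * bins
    height, width = len(cell), len(cell[0])
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # horizontal gradient
            gy = cell[y + 1][x] - cell[y - 1][x]   # vertical gradient
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned 0..180
            hist[int(angle // (180.0 / bins)) % bins] += magnitude
    return hist
```

A vertical edge, for instance, puts all of its mass into the 0° bin, which is what makes the descriptor responsive to the upright contours typical of pedestrians.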
The current features being worked on improve the detection area of the algorithms by performing an image analysis based on contours and delimitations between the pedestrian area and the road. This step is made to exclude conflicts of any kind through sequencing and is much more eloquent; based on the allocated spaces, the classifications improved, with over 20 different classes of static objects. In the first phase, the localization and reprocessing errors were in line with expectations, but the accuracy and precision were improved by overlapping the convolutional layers, obtaining results about 10% more conclusive than in the early stages. The major advantage of the YOLOv3 algorithm is that it uses independent classifiers for class analysis with a binary cross-entropy loss, a factor that identifies the class much more objectively [70,71]. When such independent logistic classifiers are used, an object created in its own model can be detected among a wide range of other objects. Therefore, the studies carried out led to the use of this algorithm, and the adjustments made it possible to identify over 80 categories of objects while scaling the computing power accordingly.
According to the analyses carried out in the literature, other elements are needed to increase the effectiveness of the approach with respect to clothing variation and accuracy [72,73]. Identifying key objects carried by vulnerable people is also an aspect that we are developing. The graph in Figure 7 shows that, with small data packets and detailed segments, a viable accuracy is obtained through the detection process.

4.2. Description of External Environment and Practical Scenarios

According to the literature and the elements presented in previous chapters, the future of road safety systems is taking shape and is based heavily on the analysis of unpredictability in and around traffic, namely pedestrians, infrastructure, cyclists, and objects that can distract attention and lead to a negative event.
The predictability of these traffic elements is increasingly difficult to establish due to the flow of cars and the very unpredictable, extremely hurried human character of certain reactions. Based on the structure of the system presented in Section 2, a system dedicated to external data processing and to its transmission to a mini traffic-management system is being developed in that direction, more precisely an intelligent traffic light that can intercept information promptly and provide a viable, well-motivated alternative. The proposal under development has the same characteristics as the other modules developed, but it addresses the external environment and gathers different information.
Therefore, the use of data exchange via VLC or V2X can provide traffic information at a relevant time without disturbing the proper functioning of the devices. When bidirectional communications are used there are advantages not only in informing the driver of the system, but also other receivers in the nearby area that are equipped with VLC receivers. The existing infrastructure allows the interconnection of these devices without major changes affecting their functionality.

4.2.1. Description of External Environment and Practical Scenarios

The algorithm currently used relies strictly on convolutional DNNs (Deep Neural Networks) to achieve the classification (see Figure 8). The created model is based on the classification of a much larger high-resolution training set in the manner of AlexNet, using several efficient graphics processing units for its development through convolution and parallel-processing operations [74,75]. The basic nonlinearity through which this algorithm operates is the rectified linear unit (ReLU) function expressed by Equation (3):
f(x) = max(0, x)    (3)
Unlike the standard neuron model in Equation (4), the ReLU provides much more efficient and faster results when experimenting on large data packets.
$$ f(x) = \left( 1 + e^{-x} \right)^{-1} \qquad (4) $$
Therefore, this type of neuron does not require normalization of its input parameters to manage saturation. Nevertheless, a local normalization scheme is used: denoting by $a_{x,y}^{i}$ the activity of a neuron computed by applying kernel $i$ at position $(x, y)$ and then the ReLU nonlinearity, the normalized response $b_{x,y}^{i}$ is given by Equation (5):
$$ b_{x,y}^{i} = a_{x,y}^{i} \bigg/ \left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left( a_{x,y}^{j} \right)^{2} \right)^{\beta} \qquad (5) $$
The role of Equation (5) is to sum over the $n$ kernel maps adjacent to kernel $i$ at the same spatial position, where $N$ is the total number of kernels in the layer. Unlike the standard model of the algorithm, ours uses a much smaller number of classes, retaining only those of interest such as cars, buses, and bicycles [76,77]. In this direction, there are niche studies and cyber-physical systems dedicated to the analysis of bicycles, mopeds, and other devices in this category.
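A minimal NumPy sketch of the normalization in Equation (5); the parameter values k, n, α, and β follow the usual AlexNet choices and are assumptions, not values given in the paper:

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Equation (5): AlexNet-style local response normalization.

    a: activations of shape (N_kernels, H, W); for channel i the sum runs
    over the n neighbouring kernel maps at the same spatial position.
    """
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo = max(0, i - n // 2)
        hi = min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

acts = np.random.default_rng(0).random((8, 4, 4))
print(local_response_norm(acts).shape)  # (8, 4, 4)
```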
Vehicle detection plays an important role regardless of the area of application, civilian or military, as well as in urban traffic management and planning. Incorporating all the basic functions into a feature dedicated to detection helps to build image analysis tools tailored to need [78,79,80].

4.2.2. Experimental Evaluation and Experimental Results

The simulations were modeled using the same scenarios as for the previous module, except that the detection method and the amount of information differed. According to the simulations, we obtain simultaneous detections of several elements of interest belonging to the categories that ensure safety in a traffic management system. Visual tracking aims to deal with non-stationary image streams that change over time. Most existing algorithms can track objects in controlled environments, but in areas with significant variations or fluctuating lighting they do not provide the expected feedback. This is caused by the use of fixed analysis models that do not enumerate further details and characteristics, which limits the apparent detection range. Detection based on incremental algorithms includes several components that correctly analyze the sample and add an ample modeling factor based on the underlying observations. These adjustments demonstrate the capacity for consistent improvements with respect to position, lighting, and area [81,82].
Figure 9 shows that several objects can be identified at the same time, with the information subsequently transmitted to the car interface or the traffic management system. The detection process acts independently as a trigger, determining the transmission, and is maintained throughout the process. Once a class shape is detected according to the presented code sequence, a box highlights each detected element, the message specific to the detected class is prepared, and it is then sent to a prioritization list. According to the experimental data, the classes mentioned in the detection process have a maximum inference time of between 55 and 120 milliseconds, as presented in Table 2 [83].
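The paper does not list the code sequence itself; the following sketch only illustrates how detected classes could be pushed to the prioritization list before transmission (the class ranking and box format are hypothetical):

```python
import heapq

# Hypothetical ranking: lower number = more safety-critical message.
CLASS_PRIORITY = {"pedestrian": 0, "bicycle": 1, "car": 2, "bus": 2, "traffic light": 3}

def enqueue_detections(detections, queue):
    """Push (priority, class, box) entries onto a min-heap so the most
    safety-critical detections are dispatched first."""
    for cls, box in detections:
        heapq.heappush(queue, (CLASS_PRIORITY.get(cls, 9), cls, box))

queue = []
enqueue_detections([("car", (10, 20, 50, 80)),
                    ("pedestrian", (5, 5, 30, 90))], queue)
print(heapq.heappop(queue)[1])  # pedestrian is dispatched first
```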
Being a deep-neural-network method for detecting objects in images and video sequences, it discretizes the output space of bounding boxes into a set of default boxes at different scales, tied to the locations of the feature map [55]. At prediction time, the network provides scores for each object category that matches the model shape. A graphical simulation illustrates the results obtained in the presented scenarios (Figure 10).

4.3. Description of Driver Emotion Detection Setup and Practical Scenarios

According to the theory and literature, the algorithm used combines classification and detection through a position-regression procedure for the processed area, analyzing the region and the detection framework and turning everything into a regression problem. The detection problem is thus concentrated in a convolutional neural network that concatenates the output data with the inputs from the original image by parsing the location of the features. The detection frames of the faces depend on the received coordinates and outline the final prediction frame for all frames in the created grid; the coordinate values of the anchor box are randomly reset and the network is fully examined to obtain a precise relation between the detection distance and the predictability area. This differs from the standard defined by Equation (6), where IOU is the ratio between the intersection and the union of the predicted box and the real area:
$$ d(x) = \sum_{i} \sum_{j} \left( 1 - \mathrm{IOU}(\mathrm{box}_{i}, \mathrm{truth}_{j}) \right) \qquad (6) $$
Therefore, according to the literature, a qualitative and conclusive analysis and detection process requires carefully following certain steps regarding the interactions, given their maximum number and the speed of each conjunction v = 1, and using the K-means algorithm to group the data and identify the central point m of each cluster. According to the calculations of the individual sharing degree, the elements are grouped through a common function, with the distance computed by Equation (6): the shorter the distance, the higher the shared value. Taking into account the degree of sharing for each element, the degree of matching can be adjusted to analyze each zonal conjunction through Equation (7):
$$ F_{i} = \sum_{i=1}^{m} d_{i}, \qquad i = 1, 2, 3, \ldots, m \qquad (7) $$
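Equations (6) and (7) suggest a YOLO-style clustering of anchor boxes with a 1 − IOU distance; the following is a minimal sketch under that assumption (the box sizes, cluster count, and width/height-only IOU are illustrative choices, not the paper's data):

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IOU between boxes and centroids given as (w, h) pairs, assuming a
    shared top-left corner, as in anchor-box clustering."""
    inter = (np.minimum(boxes[:, None, 0], centroids[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centroids[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
        + centroids[None, :, 0] * centroids[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes, m, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), m, replace=False)]
    for _ in range(iters):
        d = 1.0 - iou_wh(boxes, centroids)   # Equation (6): d = 1 - IOU
        assign = d.argmin(axis=1)
        centroids = np.array([boxes[assign == j].mean(axis=0)
                              if np.any(assign == j) else centroids[j]
                              for j in range(m)])
    return centroids

boxes = np.array([[10, 12], [11, 13], [50, 60], [52, 58]], dtype=float)
print(kmeans_anchors(boxes, 2))  # one small and one large anchor
```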

4.3.1. Feature Matching and Selection Procedure

The next step consists of ordering the results in ascending order according to their degree of matching with the analysed characteristics, comparing the final analysis elements with the first n individuals analysed (n < m). We thus obtain an operation that performs proportional selection and later crossover, computing the variation P(t) uniformly and obtaining the variation P_i(t). By combining the n and a individuals scanned and permuted in memory, insertion is achieved through a new clustering process of the form n + t. When comparing individuals and groups according to their degree of fit, a filtering function F_min(x_i, x_j) can also be implemented. The previous step is repeated and Formula (7) is applied to update the memory, with an evolutionary update of the form e = e + 1, until a large number of iterations is reached, which tends to yield a smaller number of features, but of higher quality.
Concatenating information at the cluster level and refining the existing elements by outlining the best matches and similarities yields a detection box with an ideal IOU. The direct benefits of the algorithm stem from its 53 convolutional layers, which create three feature maps at different scales carrying strong semantic information, as shown in Figure 11 [84,85]. According to studies, six emotional states have been identified that can change the condition of drivers and lead to unforeseen traffic events. Depending on the area and the characteristics of the population, the most common states identified and used as models are: excited, tender, sad, happy, scared, and angry.

4.3.2. Experimental Evaluation and Practical Results

The extractions were performed through the detection module, using detection and processing scales with four complex analysis points, combining feature sampling for larger images based on scales and anchor boxes, in a predictive analysis of the next target to be analysed, with standardization scales and feature grids.
Since emotional states emanate from externalized facial expressions, an in-depth and detailed analysis is needed (Figure 12), computing the areas and points of interest formed by the positions of the mouth, nose, and eyes in order to extract the characteristic vector; from these aspects, a final profile is outlined. By sampling the 32 layers of the processing field, superficial filtering information is received, and the losses from the convolutional and sampled traversal of the multilayer can distort the results. Several anchor boxes and predictive analysis clarify these issues through standardization scales and boundary grids. Dividing the face into several regions is thus a viable solution: it identifies the oral and ocular regions, outlines areas with a large number of features, delimits them from the auxiliary region, and extracts information about shape, contour, geometry, and facial movements. The training data are augmented through iterations in which the reference model and the created one are compared to identify similarities and display the degree of validation between characteristics matching an emotional state according to the processed models. When this vectorization process does not return a valid response, a confusion-matrix class plots the whole process, the iterations are resumed, and a new comparative data set is created. The identified and classified areas were extracted and transmitted in lists that were subsequently reconsolidated and revalidated together with the generated classes, new instances, and verification of the introduced data.
A classifier report is presented with information based on precision, recall, and support. This report highlights a degree of confusion regarding some emotional states that were not identified and were validated as non-compliant, i.e., an erroneous answer. The current system requires continuous development, because facial expressions must be identified across multiple variables of the human face: color, posture, and even expression or orientation. Therefore, iterations dedicated to facial movements and the muscles under the eyes and nose were added for further classification by comparing the characteristics with the training set. An important aspect that must not be neglected is the naturalness of each emotion displayed by the subject, especially in warning and road safety systems in which reactions are involved. To analyze the characteristics of facial emotions robustly under natural conditions, the model considers nonlinear classification: the discriminative capacity of the learning process is combined with precise classification in natural environments through a local spatio-temporal descriptor [86,87].
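A compact sketch of how the confusion matrix and the precision/recall/support figures of such a classifier report can be computed; the label sequences below are invented for illustration and deliberately confuse the tender and sad classes:

```python
import numpy as np

STATES = ["excited", "tender", "sad", "happy", "scared", "angry"]

def confusion_matrix(y_true, y_pred, n):
    # cm[t, p] counts samples of true class t predicted as class p
    cm = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def precision_recall(cm):
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # column-wise
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # row-wise
    return precision, recall

# Hypothetical labels: tender (1) and sad (2) are sometimes swapped.
y_true = [0, 1, 1, 2, 2, 3, 4, 5]
y_pred = [0, 1, 2, 2, 1, 3, 4, 5]
cm = confusion_matrix(y_true, y_pred, len(STATES))
p, r = precision_recall(cm)
print(cm[1, 2], cm[2, 1])  # off-diagonal tender/sad confusions
```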
These negative events come from the area of facial features and expressions that do not pervade the information. The final conversion and extraction also benefited from image smoothing and filtering to eliminate multiple detections across different frames, as presented in Table 3.
The recognition problems mentioned previously were solved by repeating the detection instances. The models were standardized, although some facial expressions still differ in their final exposure. The final corrections and identified characteristics do not guarantee that the extracted vector contains a single emotion specific to a certain condition. Through the concordance between the convolutional neural network and a deep neural network, the developed models can approach facial expressions across several directions and states. The deep neural architectures employed expose up to six hidden layers in the detection process. The detection and extraction of emotional traits are also based on active appearance models and the facial action coding treated by Ekman. The final results show the recognition accuracy achieved by the proposed methods, expressed in the graphical illustration in Figure 13, based on the data obtained [88,89].
Facial grimaces and uncontrolled reactions can lead to confusing responses and invalid or non-compliant identification. A first conclusive data set was extracted from several processed sequences, and all six emotional states were identified, with repeated iterations needed for the tender and sad states, which share common elements and characteristics that can be confused in facial analysis (exposures are shown in Figure 14).
Human states and emotions represent feelings and manifestations of the body under different conditions and situations that depend on each person's personality. This underlines the complexity of the phenomenon and clearly shows that a standard designed detection model may encounter different pattern characteristics in other people. This complicates the detection process and decreases accuracy, requiring classification over a larger number of subjects to outline a map of data sets and features. According to the filtering procedures performed, the facial elements extracted in outlining a facial histogram were also composed into feature vectors. Identifying the similarities between the training sets and the processed frames led to a comparative classification that provides an extensive calculation of the recognition rate and ensures a valid data set and an automatically viable response.

4.4. Description of Seatbelt Setup and Practical Scenarios

In a complete and ideal system, identifying the use of seat belts becomes a priority and is necessary to obtain clear results. According to automotive manufacturers and the studies conducted, most vehicles after 2006–2007 come with audible seat-belt warning systems, because seat belts have managed to prevent serious injury to drivers in more than 80% of accidents. We thus intend to identify seat-belt use and notify drivers of this issue, including, where appropriate, coercing them into using this protective measure. In the representations made, seat belts appear as straight lines with an inclination of approximately 45 degrees, found mostly to the left or right of the driver. Sometimes the detection process is obstructed either by the color of clothing accessories or by rapid movements of the driver, requiring delimitation and contouring of the global edges and drawing a line perpendicular to the analysed area. A precautionary and safety measure is the analysis of the movements preceding the fastening of the belt into the buckle positioned between the seats. Working pixel-wise, all the elements analysed in this detection process handle the probabilities and hypotheses that define the gradient orientation, distributed in the range from 45 to 225 degrees; this is only the first stage. Gradual extraction and gradient analysis give the detection area, which can be identified as the first iteration and first data set, eliminating the other positions and leaving only the ideal and conclusive angles (see the architecture in Figure 15).
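The first-stage gradient-orientation filter described above (keeping only pixels whose gradient orientation falls between 45 and 225 degrees) can be sketched as:

```python
import math

def in_belt_orientation(gx, gy, lo=45.0, hi=225.0):
    """Return True only if the gradient orientation at a pixel falls in
    the 45-225 degree range described for the belt edge."""
    angle = math.degrees(math.atan2(gy, gx)) % 360.0
    return lo <= angle <= hi

print(in_belt_orientation(1.0, 1.0))   # 45 degrees -> True
print(in_belt_orientation(1.0, -1.0))  # 315 degrees -> False
```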

4.4.1. Discussion of Architecture

Another aspect is how ideal conditions are imposed by applying smoothing filters to each pixel before the image is processed, according to Equation (8):
x , y < 0
The gradient orientation characteristics are identified by presenting the edges, contouring each pixel individually. This can be identified as a concluding factor and delimits the main feature describing the area of incidence between the driver's seat and the passenger's seat, also analyzing the area of the support pillar on which the belt rests. The value of this gradient along the X and Y axes can be calculated according to the degree of orientation at the pixel location, where the X-axis template is defined in Equation (9):
$$ G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad (9) $$
and the standard Y-axis template is defined in Expression (10):
$$ G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \qquad (10) $$
Using these methods in the smoothing process, we obtain a first stage that increases noise resistance, and we can additionally sum the diagonal pixels to be analysed.
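A self-contained sketch of applying the X and Y templates of Equations (9) and (10) to obtain the gradients; a standard Sobel pair is assumed, and the minimal "valid" correlation below avoids external image-processing dependencies:

```python
import numpy as np

GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)  # X-axis template, Equation (9)
GY = GX.T                                  # Y-axis template, Equation (10)

def correlate2d_valid(img, k):
    # Minimal 'valid'-mode 2-D correlation (kernel not flipped).
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

img = np.tile(np.arange(6, dtype=float), (6, 1))  # horizontal intensity ramp
gx = correlate2d_valid(img, GX)
gy = correlate2d_valid(img, GY)
print(gx[0, 0], gy[0, 0])  # strong horizontal gradient, zero vertical gradient
```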
According to the studies and models created, the detectable element is defined as a small constant in a distant plane, measured in radians. Subsequently, the methods are constrained to identify the model and to eliminate interference from the detection process. When no detectable elements appear, the process enters a loop to extract new coordinates, analyze the inclination, and outline a new section for analysing the above conformations. If the detection and the linear analysis converge at long distances, a plot is made until the element is detected and compared with previous identifications [90]. The result of this comparison is introduced in a contoured and highlighted framework according to the generated model and based on the mathematical expression:
$$ T = \begin{bmatrix} 1 & -1 & \cdots & -1 \\ -1 & 1 & \cdots & -1 \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & 1 \end{bmatrix} $$
Each entry of the expression corresponds to a pixel on the contour map, encoding its position and degree of inclination relative to the mapped position. The area with a high degree of fit, which automatically indicates a higher similarity value, is identified by a positive score and contains the element being searched for.
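The matching step can be sketched with a small template in the spirit of matrix T: +1 along the diagonal corresponding to the belt inclination and −1 elsewhere, so a positive correlation score marks a match (the 5×5 size and ideal test patches are illustrative assumptions):

```python
import numpy as np

# Hypothetical template: +1 on the 45-degree diagonal, -1 elsewhere.
T = 2.0 * np.eye(5) - 1.0

def match_score(patch, template=T):
    """Correlate a binary edge patch with the template; a positive score
    indicates the patch contains the sought diagonal belt edge."""
    return float(np.sum(patch * template))

diag_patch = np.eye(5)          # an ideal diagonal edge
flat_patch = np.zeros((5, 5))   # no edge at all
print(match_score(diag_patch), match_score(flat_patch))  # 5.0 0.0
```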

4.4.2. Experimental Evaluation and Practical Results

The experimental results and simulations were carried out during the day under low-light conditions, using the camera of the basic system presented in the introductory section. Some weather conditions can influence the detection rate or decrease accuracy, but the ambient lights in the passenger compartment remediate these problems by providing much better exposure. Several frames and image blocks were processed with different movements and ways of fastening the seat belt, in an attempt to obtain the most conclusive results. The contouring of the detected elements is performed by a dotted line that sections the graded area, which is subsequently analysed together with the positions of the pillar and the shoulder to draw an area of incidence between the two regions [91,92].
To exclude invalid results, simulations were also performed without a seat belt present in the analysed data sets; this case is presented in Figure 16, in which no gradient or belt-related elements are identified. The whole analysis and detection process was shaped by the straight gradient with the predetermined inclination of the training models. The methods used and the sequential algorithm designed returned a high degree of accuracy and provide a complete analysis of the requests made. The analysis performed and the experimental data are set out in Table 4.
According to the results and methods presented, the seat belt detection module offers a viable and useful answer. Although its architecture is not of high complexity, it outlines and completes the whole system through this much-needed facility under the conditions presented in this paper.
Semantic image segmentation can be a solution for future approaches in computer vision and deep learning. Most applications need mechanisms that provide accurate and efficient segmentation, especially for in-vehicle navigation and virtual reality systems. The concepts for revising the applied models and procedures are constantly evolving and pose new challenges. Finally, the results are not quantitative with respect to the methods described, and the evaluated data sets are auxiliary components still in a development phase. We therefore emphasize that future work in this direction will aim at learning and image processing methods leading to an ideal solution for implementation in production. Based on the obtained results, a representative graphical simulation was generated in Figure 17 with the data extracted from the procedures [93,94].

5. Final Discussion Concerning the Proposed System

The proposed system, based on image processing and incorporating mixed communications at a future stage, was evaluated at the infrastructure level and in various scenarios, variables, distances, and adverse weather conditions, proving that it can behave ideally and be useful in in-vehicle communication applications. Based on the research results, with a context-adaptive system based on VLC capable of frequency modulation and demodulation over distances of more than 150 m, we believe these results can inform future systems. For better development of future VLC-V2V-V2X systems in the automotive field, attention should focus on specific capabilities such as noise resistance, communication range, transfer speeds, mobility, and adaptability in unstable environments [95,96].
The article aimed to highlight the complementarity and usefulness of VLC-DSRC-V2X systems, combining methods for image detection and processing, coding techniques, and system adjustability according to requirements. In the practical part, four modules were built to identify and predict possible traffic events with a negative impact, especially the loss of human lives; together they form a hybrid system capable of creating a vehicular network between vehicles and infrastructure that excludes human errors and the delayed reactions that can cause extreme human damage. As future directions, several aspects of testing and system integration are considered: the main VLC application must work with the developed system to provide reliable connections and prompt transmission of information, without delays and with an extremely low bit error rate (BER). Experimental tests will also assess the system's ability to meet a low-latency standard; they must demonstrate latencies below 20 ms for traffic-related messages of around 400 bits transmitted by the system [97,98].
Another aspect to be taken seriously is demonstrating complementarity when incorporating an IEEE 802.11p 5.9 GHz DSRC system, which traffic density and congestion studies show may have difficulty transmitting data. Moreover, studies show cases of complementarity between the two modes of communication [99], suggesting that DSRC and VLC should work together to achieve the best possible reliability. In such circumstances, VLC technology provides low-latency communications between nearby vehicles, while the DSRC channel contributes secure point-to-point information transmission, making the combination ideal for traffic safety applications [100]. The methods and algorithms developed can run on any device providing the necessary computing power and a minimum storage capacity for the processing and identification tasks.

6. Conclusions

The premise on which the study was based, and which outlines future directions, is the development of intelligent systems dedicated to the automotive sector and to pedestrian and road safety. Through this work, the deficiencies and needs of the field were analyzed, along with how these shortcomings can be managed as efficiently as possible. An autonomous system dedicated to the stated problems was developed, and several traffic scenarios based on the highlighted deficiencies were tested. The solid novelty elements of the work relate to the way the system communicates and its ability to interact through the cloud platform with other devices in the network, based on RF technologies. One direction of the study is enabling the system to communicate over mixed VLC and RF channels, as presented in the manuscript, based on a VLC transmitter and receiver, an optical filter, and a transimpedance circuit, all under test in VLC-based systems. The analyses and simulations were based on thorough studies of current deficiencies and of what has not yet been implemented in driver and pedestrian safety systems. Emotional analysis and driver unpredictability become extremely important aspects in identifying and effectively controlling a vehicle.
The performance of the solution is highlighted by the results presented in the article, noting that no processing-performance enhancement modules or high-performance sensing modules were used. The architecture and components of the system are largely commercial, except for the control devices, which make their own contribution to the modular interconnection of the subsystems. Based on the performance of the algorithms, there are premises for further quality development and system improvement by incorporating new communication protocols that offer scalability and behaviour adaptable to needs. A future direction is equipping the current system with ground-connected sensing modules so that navigation can intercept and decode the area in which the vehicle is moving.
The three-dimensional design of the travel perimeter highlights a clear direction in the development of autonomous guidance and assisted driving systems. Another direction in which research is being conducted is maintaining distances between vehicles by any means. Aspects to consider in future developments concern latencies and the dependence on a decongested spectrum without high information densities that disrupt data communications. All these elements lead to the conclusion that the studies and simulations performed were relevant and motivate changes to the initial proposal. To develop the system further, we consider it useful to implement it on a fleet of vehicles, perform traffic measurements over a few months, and finally present data showing the reliability and efficiency of the system in preventing road traffic accidents without human intervention. All research and development efforts of the authors and research teams are channeled toward traffic efficiency, intelligent traffic systems, automotive equipment and systems dedicated to the automotive sector, and communications based on VLC, V2X, or DSRC.

Author Contributions

Conceptualization, E.Z. and M.D.; methodology, M.D.; software, E.Z.; validation, E.Z. and M.D.; formal analysis, E.Z.; investigation, E.Z.; resources, M.D.; data curation, E.Z.; writing—original draft preparation, E.Z. and M.D.; writing—review and editing, E.Z. and M.D.; visualization, E.Z.; supervision, M.D. and E.Z.; project administration, M.D.; funding acquisition, M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by a grant of the Romanian Ministry of Research and Innovation, CCCDI-UEFISCDI, project number PN-III-P1-1.2-PCCDI-2017-0917, contract 21 PCCDI/2018, within PNCDI III. This work was also funded by the Ministry of Research and Innovation within Program 1-Development of the national research and development system, Subprogram 1.2-Institutional Performance-RDI excellence funding projects, contract no. 18PFE/16.10.2018.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

Key Acronyms Used in this Paper
HOG: Histogram of Oriented Gradients
V2V: Vehicle-to-Vehicle
V2I: Vehicle-to-Infrastructure
V2X: Vehicle-to-Everything
VLC: Visible Light Communication
I2V: Infrastructure-to-Vehicle
ITS: Intelligent Transportation System
NHTSA: National Highway Traffic Safety Administration
DSRC: Dedicated Short Range Communication
LiDAR: Light Detection and Ranging
RF: Radio Frequency
AI: Artificial Intelligence
IoT: Internet of Things
NLP: Natural Language Processing
V2R: Vehicle-to-Road
MRI: Magnetic Resonance Imaging
CAN: Controller Area Network
OBD: On-Board Diagnostics
LTE: Long-Term Evolution
API: Application Programming Interface
R-CNN: Region-Based Convolutional Neural Networks
SSD: Single Shot MultiBox Detector
YOLO: You Only Look Once
COCO: Common Objects in Context
R-FCN: Region-Based Fully Convolutional Networks
T-CNN: Tube Convolutional Neural Network
IOU: Intersection over Union
FOV: Field of View
ReLU: Rectified Linear Unit

References

  1. World Health Organization. Fact Sheet 358—Road Traffic Injuries; World Health Organization: Geneva, Switzerland, 2016. [Google Scholar]
Figure 1. Illustration of the proposed system. The visible light communication (VLC) receiver and emitter in outdoor and indoor simulations. Navigation system with modules installed on the central console.
Figure 2. Schematic of the proposed communication system architecture: a module based on an ARM Cortex-M4 and a Raspberry Pi 4, with the communication interface.
Figure 3. Schematic of the processing algorithm for pedestrian/object detection.
Figure 4. The route on which the scenario simulations took place.
Figure 5. Pedestrian detection and movement predictability. Red grids mark dangerous locations; green grids mark safe locations or positions for pedestrians.
Figure 6. Unidentified elements and pedestrians, due to driving speed, stray light, or distance from the vehicle.
Figure 7. Graphical analysis of Table 1: identification accuracy, frames and pedestrians detected out of the total number of people in the images, frames lost, and detected travel speed.
Figure 8. The AlexNet architecture: convolution, max-pooling, LRN (local response normalization), and fully connected (FC) layers.
Figure 9. Object detection results for outdoor environments, inference time, and detection percentages.
Figure 10. Graphical analysis of Table 2: detection accuracy, total blocks decoded, and frames and objects detected out of the total elements present in the image.
Figure 11. Schematic of the processing algorithm for driver emotion detection.
Figure 12. Position of features in the eye and mouth regions, and facial expressions that can reflect mental state.
Figure 13. Graphical analysis of Table 3: detection accuracy, total blocks decoded, and precision for emotional states.
Figure 14. Experimental results on the detection of emotional states.
Figure 15. Schematic architecture for seat belt detection.
Figure 16. Experimental results regarding the presence of the seat belt.
Figure 17. Graphical analysis based on the table showing detection accuracy details for seat belt identification.
Table 1. Identification accuracy details, frames, and pedestrians detected from total people existing in images, frames lost, and detected travel speed.
| Video Duration (s) | Total Blocks Decoded | Frames Detected | Frames Lost | Total Pedestrians | Pedestrians Detected | Travel Speed (km/h) | Detection Accuracy (%) |
|---|---|---|---|---|---|---|---|
| 180 | 10,562 | 4432 | 42 | 96 | 86 | 15–20 | 89.50 |
| 220 | 9803 | 3876 | 12 | 125 | 91 | 15–25 | 72.80 |
| 260 | 5992 | 2461 | 22 | 128 | 84 | 15–20 | 65.60 |
| 180 | 9762 | 5803 | 31 | 101 | 56 | 35–40 | 55.40 |
| 220 | 7893 | 2876 | 54 | 133 | 76 | 30–40 | 57.10 |
| 280 | 3942 | 1461 | 45 | 98 | 67 | 35–45 | 68.30 |
| 280 | 3162 | 803 | 76 | 176 | 56 | 40–50 | 31.80 |
| 280 | 2403 | 476 | 87 | 105 | 48 | 55–60 | 45.70 |
| 350 | 1242 | 461 | 123 | 178 | 61 | 60–65 | 34.20 |
| 280 | 743 | 324 | 134 | 144 | 54 | 60–65 | 37.50 |
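The detection accuracy column in Table 1 is the share of pedestrians detected among those present in the recorded frames. A minimal sketch of that computation (the `detection_accuracy` helper name is ours, for illustration only):

```python
def detection_accuracy(detected: int, total: int) -> float:
    """Percentage of pedestrians detected out of all pedestrians present."""
    return round(100.0 * detected / total, 1)

# Second row of Table 1: 91 of 125 pedestrians detected at 15-25 km/h
print(detection_accuracy(91, 125))  # 72.8
```

Applied row by row, this reproduces the reported percentages to within rounding (e.g., 84 detected of 128 gives 65.6%, matching the 65.60% entry).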
Table 2. Detection accuracy details, total blocks decoded, frames and objects detected from total identified elements existing in the image.
| Video Duration (s) | Scenario | Frames Detected | Frames Lost | Total Objects | Objects Detected | Travel Speed (km/h) | Detection Accuracy (%) |
|---|---|---|---|---|---|---|---|
| 180 | 1 | 8964 | 12 | 196 | 146 | 15–20 | 74.40 |
| 220 | 2 | 7895 | 19 | 225 | 111 | 15–25 | 49.30 |
| 260 | 3 | 6785 | 27 | 328 | 184 | 15–20 | 56.10 |
| 180 | 4 | 8465 | 45 | 201 | 126 | 35–40 | 62.60 |
| 220 | 5 | 5683 | 32 | 333 | 176 | 30–40 | 52.80 |
| 280 | 6 | 6433 | 67 | 198 | 97 | 35–45 | 48.90 |
| 280 | 7 | 3453 | 146 | 444 | 104 | 60–65 | 23.40 |
Table 3. Identification accuracy details, total blocks decoded, precision for emotional states.
| Emotional State | Precision | Recall | Alg. Score | Total Frames |
|---|---|---|---|---|
| Excited | 0.68 | 0.69 | 0.71 | 127 |
| Tender | 0.37 | 0.96 | 0.38 | 77 |
| Sad | 0.24 | 0.98 | 0.29 | 98 |
| Happy | 0.77 | 0.89 | 0.84 | 67 |
| Scared | 0.55 | 0.58 | 0.33 | 89 |
| Angry | 0.78 | 0.78 | 0.71 | 67 |
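The per-state precision and recall in Table 3 can be summarized by their harmonic mean (F1); the table's "Alg. Score" column is the system's own metric and is close to, but not identical with, this value. A small illustrative sketch (the `f1_score` helper is ours, not from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# "Happy" row of Table 3: precision 0.77, recall 0.89
print(round(f1_score(0.77, 0.89), 2))  # 0.83
```

For the "Happy" state this yields about 0.83, versus the reported Alg. Score of 0.84.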
Table 4. Seat belt identification accuracy details, blocks decoded.
| | Seat Belt Detected | Seat Belt Undetected | Total | Accuracy |
|---|---|---|---|---|
| Seat Belt Frame | 181 | 146 | 237 | 61.7% |
| Seat Belt Fasten | 237 | 170 | 317 | 53.6% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
