An Intra-Vehicular Wireless Multimedia Sensor Network for Smartphone-Based Low-Cost Advanced Driver-Assistance Systems

Advanced driver-assistance systems (ADAS) are more prevalent in high-end vehicles than in low-end vehicles. Wired vision-sensor solutions for ADAS already exist, but they are costly and do not cater for low-end vehicles. Conventional ADAS use wired harnessing for communication; a wireless approach eliminates the need for cable harnessing, and therefore the practicality of a novel wireless ADAS solution was tested. A low-cost alternative is proposed that extends a smartphone's sensor perception using a camera-based wireless sensor network. This paper presents the design of a low-cost ADAS alternative that uses an intra-vehicle wireless sensor network structured by a Wi-Fi Direct topology, with a smartphone as the processing platform. The proposed system makes ADAS features accessible to cheaper vehicles and investigates the possibility of using a wireless network to communicate ADAS information in an intra-vehicle environment. Other smartphone ADAS approaches make use of a smartphone's onboard sensors; this paper, in contrast, demonstrates essential ADAS features developed in the smartphone's ADAS application, carrying out both lane detection and collision detection on a vehicle by using wireless sensor data. The smartphone's processing power was harnessed as a generic object detector through a convolutional neural network, using the sensor network's video streams. The network's performance was analysed to ensure that detection could be carried out in real time. In summary, a low-cost CMOS camera sensor network and a smartphone, connected through Wi-Fi Direct, create an intra-vehicle wireless network that serves as a low-cost advanced driver-assistance system.


Introduction
Vision from inside vehicles is becoming more common, especially in autonomous vehicles. Sensory networks used within vehicles, and how their applications improve the awareness of vehicles on the road, were investigated. Smartphones are used in forward, lateral, and inside-assistance ADAS applications, in object and lane detection, tracking, and traffic sign detection. Forward assistance includes autonomous cruise control (ACC), which assists drivers in automatically keeping a safe driving distance, and forward collision avoidance (FCA), which provides a warning to the driver in the event of a potential accident. Forward assistance methods use radar and LiDAR. FCA also uses sensors, such as radar and LiDAR, but car manufacturers are using video in conjunction with radar, which has opened up a wide range of research topics, i.e., using image processing in object detection, while sensor fusion techniques can be used to complement sensor devices for vehicle detection [1-4].
Smartphones are cheaper alternatives to forward assistance ADAS. Many methods of monitoring road and traffic conditions, using smartphones, have been proposed; the first approaches used sensors, three-axis accelerometers, and GPS [5][6][7][8]. Vehicle detection has been conducted using image processing, as well as alternatives to radar and LiDAR by using local binary patterns (LBP) and Haar-like features to train AdaBoost classifiers [9].
Smartphone-embedded hardware and its cost-efficiency benefits have been leveraged to monitor driving behaviour [10,30-33]. Algorithms detect driving events while the front camera of the smartphone monitors the driver's facial expressions and gestures, so that intervention can be triggered before an accident occurs [34]. CarSafe was the first driver-safety smartphone application to use dual cameras, where the front camera monitored the driver and the rear camera detected the following distance and lane drifting [10].
ADAS applications on smartphones have been implemented, using iOS, as well as Android operating systems, including CarSafe, SideEye, and DeepLane [9,10,13,27]. Android smartphones have an extensive range of hardware combinations, but as shown by DeepLane, smartphones with better GPUs run at higher frame rates, making real-time applications reachable [13].
Wired architectures in vehicles are the most traditional, but new wireless network alternatives, such as IVWSNs, are being considered by the automotive industry. With the help of wireless technologies, the vehicle's harnessing is reduced, thereby lowering costs and fuel consumption [23]. The controller area network (CAN) bus has been shown to have bandwidth limitations with the introduction of camera-based ADAS [35]. Other wired networks used in vehicles exist with varying bitrates, some of which have higher bandwidths and are capable of transmitting vision information, such as media-oriented system transport (MOST) [36]. IEEE 802 and ultra-wideband (UWB) solutions are currently being investigated, but because they still require electrical power sources from the vehicles, their advantages are mitigated [35]. However, the vehicle interior is a challenging environment for radio propagation because of metal objects and passengers inside the vehicle [37]. As a result of this complex environment, design considerations should be weighed carefully when developing a wireless sensor network inside a vehicle, where the reliability of the network diminishes as traffic on the network increases, causing delays in information transfer [38,39]. Currently, different network-compatible devices are available, e.g., Wi-Fi, Bluetooth, UWB, and Zigbee.
An IoT-IVWSN network consisting of end-devices, control unit, and a display that has a large number of end-device sensors, was shown to be a good alternative in vehicles to control and manage sensors [39]. Zigbee technology was shown to be useful for sensors that do not require high data transmission rates. However, when transferring multimedia and real-time video streaming, it would not be a practical solution. UWB and Wi-Fi are better solutions for high data rate implementations, such as multimedia video, because of the high data rate and low power [40]. A network of wirelessly interconnected devices that retrieve video and audio streams is known as wireless multimedia sensor networks (WMSN), where low-cost CMOS cameras, supplying multiple media streams, can provide a multiresolution description of scenes. Sensors that are capable of collecting multimedia data require computation-intensive processing, which may require processing to be conducted on the sensor nodes or computational hubs where WMSNs enable a new approach to perform distributed computations on nodes [41].
Vision-based sensors have become very popular in ADAS because of their low costs compared to radar and LiDAR, but they perform poorly in unfavourable weather conditions. Measuring the distance of an object using vision sensors is also less accurate than with LiDAR and radar, and stereo vision (dual cameras) requires a wide baseline between cameras for triangulation purposes. A radar's robustness against weather and its ability to determine distance more accurately can be harnessed instead, and has been shown to work well in multi-sensor scenarios that lower false positives in detection [2,42,43].
ADAS depend on sensor information from the surrounding environment, processed by the electronic control unit (ECU) in high-end vehicles. These sensors include radar, LiDAR, ultrasonic sensors, and IR sensors, and they accommodate ADAS features such as lane departure warning, parking assistance, and blind spot monitoring [44,45]. Ultrasonic sensors are among the cheapest sensors in ADAS and are primarily used to find distances of static objects at short range and at slower speeds [44]. Ultrasonic sensors have a shorter range than radar, limited to about 2 metres, but could serve as a cheaper, less power-consuming alternative for a low-cost ADAS.
In this paper, we propose using a wireless ADAS, as shown in Figure 1; the ECU and display unit in a high-end ADAS are merged into FU 2, consisting of a smartphone. The sensors, FU 1 and FU 3, communicate with the smartphone through a wireless network where the ADAS techniques and processing are carried out (FU 2.4). By using the standard connection capabilities, the network can consist of Wi-Fi and Bluetooth protocols. Wi-Fi is a better-suited protocol for transferring information from camera sensors where an IP network is required to establish communication. The smartphone in FU 2.2 becomes an access point (AP), and the camera sensor's hosted video server allows the smartphone to access video through the Wi-Fi network (FU 1.2).
The network also consists of blind spot sensors that are not as data-intensive as the camera sensors. The blind spot sensors only require the distances to be sent to the smartphone and require minimal data throughput. FU 3.2 is used by the blind spot sensor to transfer data to the smartphone. Finally, the display and audio of the smartphone are used to interact with the driver, warning about potential risks. There is no active research on WMSNs for ADAS applications, and their combination with smartphones is non-existent. Complementary metal-oxide semiconductor (CMOS) camera sensors have become more affordable and have been shown to be helpful in ADAS, but they only exist as wired solutions; a wireless network of vision sensors has research potential [46]. Ultra-wideband (UWB) and Wi-Fi have high transmission rates that can be used for video transfer in smartphones, making them excellent additions to vision sensor networks. UWB would be the ideal choice because of its low power usage, high transmission rate, and low cost. However, UWB modules in smartphones have only recently become commercially available in high-end models, such as the iPhone 11, subsequently leading to new research avenues. An intra-vehicular wireless sensor network (IVWSN) can expand a smartphone's limited sensor perception to a cost-efficient, camera-based ADAS that uses real-time object detection to detect potential hazards around the vehicle's rear, blind spots, and front spatial areas.
The ADAS is wholly dependent on the data received by the IVWSN. Wireless technologies available on smartphones are considered, namely Wi-Fi and Bluetooth. This paper is organised as follows: in Section 2, these two wireless technologies are explored to exploit their capabilities and technology stacks, which sensors then implement in the IVWSN. Network topologies for multiple sensor uses are examined, followed by detailed communication methods and protocols to communicate with different sensors in the network. Detailed designs of the proposed camera nodes that stream video through the network are presented. Additional sensory data are also provided via a detailed design of the blind spot nodes. A comprehensive explanation of the developed smartphone ADAS application follows the hardware development, realising the complete solution. The different ADAS techniques implemented on the smartphone are shown with detailed explanations.
In the concluding sections, the systems designed and developed are discussed. The network's results are divided into a simulated environment, an intra-vehicle environment, and power consumption. Results from controlled environments are compared with their actual intra-vehicle counterparts. The performance results of the ADAS techniques implemented on the smartphone are illustrated and discussed for lane, collision, and blind spot detection.

Materials and Methods
Using an intra-vehicle wireless sensor network, structured by a Wi-Fi Direct and Bluetooth topology, a low-cost ADAS alternative extends a smartphone's sensory perception through a camera-based wireless sensor network. As previously shown in Figure 1, an IVWMSN for an intra-vehicle environment, using low-cost camera nodes, blind spot nodes, and a smartphone device, is discussed in the following section. By using smartphone-implemented Wi-Fi technologies, such as Wi-Fi Direct, the smartphone can act as the network host onto which video sensors connect directly, without adding a physical router to the architecture. By using the Wi-Fi protocol, higher transfer rates can carry sensory data from a video sensor to enable advanced object detection around a vehicle. Three camera node prototypes and an Android phone are shown in Figure 3.
Video streaming from the network can then be used for real-time object detection applications for an ADAS on a smartphone device. Detection has successfully been carried out on video sourced from a simulated video stream as well as from network-sourced video streams. Because large models cannot run on smaller embedded devices, such as smartphones, a faster and lighter CNN, such as MobileNet, was used instead.
Lane detection and collision avoidance have been implemented for the ADAS, running on Android. It has been shown that a low-cost ADAS, using a smartphone, can carry out image processing techniques capable of detecting lanes on a real-time video stream. It has also been shown that a low-cost ADAS, using a smartphone, is able to carry out object detection techniques where distance estimation and collision risk can be calculated on a real-time video stream. The real-time video stream being sourced from the intra-vehicular wireless video network, is used to assist the driver by detecting lanes and warning the driver when he/she is moving closer to the centre of that lane. The implemented system helps the driver by warning him/her visually on the screen, i.e., about lane divergence to the left or right of the vehicle. The real-time video stream also assists the driver by warning the driver when his/her vehicle is moving closer to hazardous objects. The implemented system helps the driver by warning him/her visually on the screen about a possible collision in the front or rear of the vehicle.

Network Topology
The network's main objective is to supply the video feed to the processing unit, which is the Android device in the topology previously described. Wi-Fi Peer-to-Peer (P2P), also known as Wi-Fi Direct, is used in the design architecture to avoid having a physical router in the topology. A P2P 1:n topology is used, where multiple clients connect to one group owner with a single SSID and a single security domain [47]. Not all devices support Wi-Fi Direct, but group owners support legacy client devices that fall outside directional multi-gigabit (DMG) support. The transfer still operates at IEEE 802.11g or newer at 2.4 GHz, supporting maximum physical bit rates of 54 Mbit/s. A legacy device supporting Wi-Fi identifies the group owner as a standard AP, as long as the smartphone supports Wi-Fi Direct. Most off-the-shelf Wi-Fi modules do not support Wi-Fi Direct, so the legacy approach is more attainable. The basic topology is shown graphically in Figure 4.
The system also contains a Bluetooth network used by both the blind spot sensors and the vision sensors. The primary purpose of the Bluetooth network is discussed in the following sections. The camera nodes wait for the AP to start and then connect to the group owner. Depending on whether the network has been created before in the Android application's life-cycle, the network can be re-established with the previously created SSID and passcode; however, when the Android API creates a new network, a randomly generated SSID and passcode are created. This caveat of Wi-Fi Direct prevents hard coding of the AP credentials on the camera node. A means of communication is needed to update the credentials on the camera nodes whenever the network security details change. Bluetooth pairing between the sensors and the smartphone is used for this communication. Bluetooth has shown unsatisfactory results when used for video transfer, but many low-cost Wi-Fi modules include Bluetooth communication as well [40]. The Bluetooth protocol is used as a serial communicator between devices, leaving the Wi-Fi protocol exclusively for video streaming. Using the Bluetooth serial communicator, information, such as the aforementioned randomly generated AP credentials, can be passed securely to the Bluetooth-paired camera nodes from the Android device. Figure 5 provides an overview of the communication between the camera node and the Android smartphone.

Bluetooth Communication
The Bluetooth protocol RFCOMM, on top of the L2CAP protocol, emulates an RS-232 serial port. Serial communication between the Android device and the camera node uses a simple data stream and a registered serial port service universally unique identifier (UUID) to allow communication with this service [48]. After the Bluetooth connection to the camera node is established, the IP addresses are saved in the local cache of the Android application, to be retrieved when streaming is initiated.

Wi-Fi Communication
The network is managed and hosted by the Android application as the group owner. The camera nodes are treated as legacy devices, as they do not support Wi-Fi Direct, but this does not negatively affect the architecture, since the more intricate Wi-Fi Direct features are not needed. The camera node works on 2.4 GHz networks only; if the group owner establishes a network on the newer 5 GHz band, as seen on high-end Android devices, the scan phase initiated by the camera node fails to find the network. For this reason, the Android application sets the network, on creation, to 2.4 GHz channel 1, in the frequency range 2401-2423 MHz. Other channels, 1 to 14, can be explored if the need arises. The Wi-Fi network is hosted in a background thread of the Android application.

Video Streaming
The camera nodes host an HTTP server that serves JPEG images from the CMOS camera. Once the camera node is connected to the network, the Android application can request data from the camera node, using the IP address acquired from the previously mentioned Bluetooth engagement. The images taken by the CMOS camera are streamed through TCP on the Wi-Fi network that is hosted by the group owner. A port and HTTP endpoint are opened on the camera node HTTP server, allowing GET requests with multipart content-type "multipart/x-mixed-replace" to stream the image frames to the mobile phone. A boundary parameter is passed to the content-type as a delimiter to separate body parts of data, coming from frames [49]. The connection is kept open as long as the client requests packets, allowing the motion-JPEG (M-JPEG) stream to be processed by the Android device. The M-JPEG is not handled by the Android device as a video file, but is received as an image bitmap instead. A raw bitmap supports future object detection implementation without stripping frames from a video file, thereby requiring extra processing on the Android application.
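The client side of this M-JPEG arrangement can be sketched in a few lines. The following Python fragment, modelled on the paper's description (the boundary and header layout are illustrative, not taken from the actual firmware), extracts individual JPEG frames from a multipart/x-mixed-replace byte stream by scanning for the JPEG start (0xFFD8) and end (0xFFD9) markers, which avoids parsing the boundary delimiter explicitly:

```python
def iter_mjpeg_frames(stream):
    """Yield complete JPEG frames from a multipart/x-mixed-replace byte stream.

    `stream` is any object with a read(n) method (e.g. an HTTP response body).
    JPEG payloads start with 0xFFD8 and end with 0xFFD9, so frames can be
    recovered by scanning for those markers instead of the multipart boundary.
    """
    buffer = b""
    while True:
        start = buffer.find(b"\xff\xd8")
        end = buffer.find(b"\xff\xd9", start + 2) if start != -1 else -1
        if start != -1 and end != -1:
            # A complete frame is buffered: emit it and keep the remainder.
            yield buffer[start:end + 2]
            buffer = buffer[end + 2:]
            continue
        chunk = stream.read(4096)
        if not chunk:
            return
        buffer += chunk
```

Each yielded byte string can then be decoded directly into a bitmap, matching the paper's approach of handling the stream as raw image frames rather than as a video file.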
An ADAS requires real-time capabilities, as delays can be hazardous to the driver. The safety of the driver is an uncompromising factor that must take priority. Acceleration and deceleration reaction time (ADRT) applies when the driver adjusts his/her speed; the delay of the video feed needs to be shorter than the time between the driver receiving the visual signal and reacting, otherwise the feed would be of no use to the driver [40]. For the video streams to be helpful in a real-time application, the frame rate experienced by the driver on the ADAS display should have minimal delay. In previous testing, competing tasks, such as Bluetooth communication, caused the Android video output to stagger due to shared processing. To improve on this, the camera capture and HTTP server were placed on a dedicated core of the camera node's microcontroller to prioritise the video stream, while other interactions, such as Bluetooth communication, were placed on the other core with lower priority. Prioritisation and core pinning were implemented using FreeRTOS, a real-time operating system kernel for embedded devices, together with the microcontroller's dual-core capability. Each camera node implements the state machine shown in Figure 6.

Camera Node
The functional diagram shown in Figure 7 illustrates the design of the camera node. A low-cost OV2640 CMOS image sensor (FU1.1) captures images with an image array capable of maximum image transfer rates of 30 fps at SVGA and up to 60 fps in CIF [5]. The OV2640 camera chip also gives users control over image quality and formatting for future results optimisations (FU1.

Design Schematics and PCB Layouts
A breakout board for the ESP32-CAM development board has been designed that allows the camera node to be mounted to a vehicle; the PCB is shown in Figure 8. The camera node should be able to withstand outdoor elements in order to take proper field readings. The PCB is concealed in a protective container with ingress protection (IP), protected from dust and water. Figure 9 shows the designed schematic, where the FT232RL IC is a TTL/RS232 converter used to program the ESP32 through a USB connection. The camera node consumes power through the USB connection, but jumpers have been provided to switch to battery power for future field testing.

Camera Node Embedded Logic
The camera node's software was written in C for the ESP32 microcontroller. Before the camera node hosts the HTTP server, the Bluetooth controller is engaged by starting the Bluetooth serial service and broadcasting the device name. The device name is prefixed with "ADAS_CAMERA", which the Android application uses to filter paired devices for the ADAS application. The camera node remains in this state until the Android application sends the aforementioned SSID and passcode, each terminated by a 0x23 hex value to indicate the successful transfer of both the SSID and passcode.
After credentials are received successfully, the camera is initialised, and the Wi-Fi module attempts to connect to the Android-hosted network. If a successful connection to the network is complete, the local IP of the camera node is transmitted to the Android application, and the node's HTTP server starts. The lightweight HTTP server creates a listening socket on TCP for HTTP traffic, where user-registered handlers are invoked, and sends back HTTP response packets to the Android application. The server has one purpose, which is to feed the image feed from the camera, where the server's port socket is set at 80, and the user-registered handler is an HTTP GET method at the root "/". The Android application then requests the feed from the URL "http://<camera node ip>:80". When the user-registered handler is invoked, an image is requested from the camera and converted to JPEG compression, which is then sent to the Android application. The invoked handler continues this process indefinitely until the connection is closed by the smartphone client.
Upon booting, the camera node initialises Bluetooth and Wi-Fi communication. This happens only once during the camera node's life cycle, but it falls back to a restart if a failure occurs, such as a brownout or system lock. Added handshaking is implemented to force the video streaming to restart on the Android device command "CMD=9"; this mechanism of "CMD" followed by a number is reserved to carry out other functionalities in the future. After initialisation, FreeRTOS is used to initiate a task pinned to core 0 of the microcontroller, which deals with Bluetooth communication only. The state machine on core 0 receives the group owner's (GO) SSID and password from the Android application via the handshaking mechanism shown in Figure 10.
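The credential transfer described above can be illustrated with a small parser. This Python sketch assumes, as one reading of the 0x23-terminated framing, that the SSID and passcode arrive in a single payload with each field terminated by a '#' (0x23) byte; the exact framing on the real firmware may differ:

```python
TERMINATOR = 0x23  # '#', the terminator byte described in the handshaking scheme

def parse_credentials(payload: bytes):
    """Split an incoming Bluetooth payload into (ssid, passcode).

    Assumed framing: b"MyNetwork#secret123#" -> ("MyNetwork", "secret123").
    Raises ValueError if either terminator is missing, signalling an
    incomplete transfer that should trigger a retry of the handshake.
    """
    fields = payload.split(bytes([TERMINATOR]))
    if len(fields) < 3:  # two terminators produce at least three parts
        raise ValueError("incomplete credential transfer")
    return fields[0].decode("utf-8"), fields[1].decode("utf-8")
```

On the firmware side, a failed parse would leave the node waiting in the credential-receiving state rather than attempting a Wi-Fi connection with partial data.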

Blind Spot Node
The functional diagram shown in Figure 11 illustrates the design of the blind spot node. The node consists of three main components, namely the proximity sensor (FU1), microcontroller (FU2), and Bluetooth communication (FU3). To keep the entire ADAS at a low cost, an ultrasonic sensor is used as the proximity sensor, which provides a 2 to 400 cm non-contact measurement with a ranging accuracy of 3 mm [50]. The ultrasonic sensor uses an echo and trigger mechanism read by the microcontroller (MCU), where the distance is calculated using the speed of sound. The same ESP32 development board used for the camera node in previous investigations was used as the MCU of the blind spot node. The 32-bit ESP32 microcontroller runs the Bluetooth controller (FU3.1), which sends the distance calculated from the ultrasonic transceiver measurements over an RFCOMM Bluetooth serial link.
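The echo-and-trigger distance calculation mentioned above is a standard time-of-flight computation, sketched here in Python (the firmware performs the equivalent arithmetic in C on the ESP32; the constant assumes roughly 20 °C air):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at about 20 degrees Celsius

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert a round-trip ultrasonic echo time (microseconds) to distance (cm).

    The pulse travels to the obstacle and back, so the one-way distance
    is half the round-trip time multiplied by the speed of sound.
    """
    return (echo_us * SPEED_OF_SOUND_CM_PER_US) / 2.0
```

For example, an echo of about 5830 microseconds corresponds to roughly 1 metre, which is consistent with the sensor's 2 to 400 cm measurement window.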
The ultrasonic sensors pointing out from the side of the casing are directed to the blind spot area of the vehicle. The final prototype is shown in Figure 12.

Lane Detection
As an experimental implementation, the lane detection was first written in Python and then ported to Java on Android. OpenCV, a well-known library focused on real-time computer vision, was used in this implementation [51]. The same library is ported and available in the Android environment, so the same implementation logic was used for the Android development once the simulation experiment was successful. The implemented Python streamer uses a previously developed simulator to simulate the oncoming road video stream. Each frame is captured and passed through the process shown in Figure 13.
An edge detection mechanism is used to detect lanes. The Hough transform assists in detecting imperfect straight lines, which provides further assistance when dashed lane markings are present.
In this implementation, the preprocessed frame is passed through a Hough transformation that returns an array of lines. A gradient is then calculated for each line in the array, where a slope corresponding to less than a set acute angle is rejected. This slope is expressed by m = (y2 − y1)/(x2 − x1). By rejecting angles that are not within the set bounds, errors such as horizontal lines, which cannot be road lanes, are avoided.
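The slope-rejection step can be sketched as follows. The line tuples match the (x1, y1, x2, y2) shape produced by OpenCV's probabilistic Hough transform; the 25° threshold is illustrative, since the paper does not state its exact bound:

```python
import math

MIN_ANGLE_DEG = 25.0  # illustrative threshold; the paper's exact bound is not given

def filter_lane_lines(lines):
    """Keep only Hough lines whose slope exceeds a minimum acute angle.

    `lines` is a list of (x1, y1, x2, y2) tuples, as returned by
    cv2.HoughLinesP after reshaping. Near-horizontal lines (small slope
    angle) are rejected, since they cannot be road lanes.
    """
    kept = []
    for x1, y1, x2, y2 in lines:
        if x2 == x1:
            # Vertical line: maximal slope, always a lane candidate.
            kept.append((x1, y1, x2, y2))
            continue
        slope = (y2 - y1) / (x2 - x1)
        angle = math.degrees(math.atan(abs(slope)))
        if angle >= MIN_ANGLE_DEG:
            kept.append((x1, y1, x2, y2))
    return kept
```

A 45° diagonal passes the filter while a horizontal edge (e.g. a shadow or kerb line) is discarded before any lane-position reasoning takes place.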
Furthermore, to estimate where the vehicle is positioned relative to the lanes, a simple approach was taken using the x-intercept of the detected lane and the centre of the frame width. The intercept of the lane with the bottom edge of the image was calculated as x = (I − c)/m, where I is the image height in pixels, c is the line's intercept, and m its slope, assuming an image coordinate plane whose (0, 0) point is at the top left corner. This x-intercept is then used to calculate the distance from the centre, using half the width of the image. Essential lane assistance is then added, which triggers a warning when the driver exceeds a certain tolerance towards the left or right side lanes. The tolerance can be set as a percentage of half the width. Once the driver exits the safe zone, the driver is warned that the vehicle is diverging from its lane.
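The x-intercept and tolerance check can be combined into one small routine. This is a sketch of the logic as described, under the assumption that the lane line is y = m·x + c in image coordinates (y growing downward) and that the warning zone is a fraction of the half-width around the frame centre; the 0.3 default tolerance is illustrative, not from the paper:

```python
def lane_warning(slope, intercept, img_w, img_h, tolerance=0.3):
    """Return 'left', 'right', or None for a detected lane line.

    The line is y = slope*x + intercept in image coordinates (origin at
    the top-left corner, y grows downward). Its crossing with the bottom
    edge (y = img_h) gives the lane position; a warning fires when that
    crossing falls within `tolerance` * half-width of the frame centre.
    """
    x_bottom = (img_h - intercept) / slope  # x-intercept at the image bottom
    half_w = img_w / 2.0
    offset = x_bottom - half_w              # signed distance from centre
    if abs(offset) <= tolerance * half_w:
        return "left" if offset < 0 else "right"
    return None
```

Lanes crossing far from the centre produce no warning, while a lane drifting into the central band triggers a left or right divergence alert, mirroring the safe-zone behaviour described above.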

Collision Detection
A multi-scale approach that optimises detection by using different detection windows to detect objects in proximity to the driver's vehicle was shown to deliver good results [9]. In this paper, however, the windows are not used as detection windows; instead, they serve as collision potentials, because detection is already performed by a convolutional neural network. The different regions are used as collision levels, namely near, intermediate, and far, to warn drivers that a collision could occur. The far region wraps the vanishing point area at the horizon, whereas the closer regions wrap larger areas of the image. The overlapping region is calculated for each detection to determine whether a detected box falls within a particular region. The two metrics Overlap_x and Overlap_y are used to compute the OverlapArea as their product:

Overlap_x = max(0, min(A_x(right), B_x(right)) − max(A_x(left), B_x(left)))
Overlap_y = max(0, min(A_y(bottom), B_y(bottom)) − max(A_y(top), B_y(top))) (4)
OverlapArea = Overlap_x × Overlap_y
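The Overlap_x/Overlap_y construction is the standard axis-aligned rectangle intersection, sketched here in Python with boxes expressed as (left, top, right, bottom) pixel tuples:

```python
def overlap_area(a, b):
    """Axis-aligned overlap area between detection box `a` and region `b`.

    Boxes are (left, top, right, bottom) in pixels. Along each axis the
    overlap is the positive part of min(high edges) - max(low edges);
    the overlap area is the product of the two axis overlaps.
    """
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    overlap_x = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    overlap_y = max(0.0, min(ay2, by2) - max(ay1, by1))
    return overlap_x * overlap_y
```

A detection is attributed to the near, intermediate, or far collision level according to which region rectangle it overlaps, with disjoint boxes yielding an area of zero.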

Distance Estimation
A manual calibration process is required to determine the focal length before distance estimation can be carried out. After the focal length is calculated through calibration, the distance of an object can be estimated using the same equation, extended with a right-angle calculation to determine the deviation from the centre of the image, as illustrated in Figure 14. The figure shows how the distance is recalculated when a detected object deviates from the centroid of the image. The distance to the centre of a detected box is calculated using the aforementioned pinhole method; if the detected box deviates from the centre, the distance is adjusted using Equation (6).
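A sketch of this two-step estimate is given below. The pinhole relation d = (real width × focal length) / pixel width is standard; the off-centre adjustment shown (dividing by the cosine of the deviation angle) is our reading of the right-angle construction in Figure 14, since Equation (6) itself is not reproduced in the text, so the exact adjusted form is an assumption:

```python
import math

def estimate_distance(real_width_m, pixel_width, focal_px, box_cx, image_cx):
    """Pinhole distance estimate with an assumed off-centre adjustment.

    d = (real_width * focal) / pixel_width gives the distance along the
    optical axis. When the detection centre deviates from the image
    centre, the line-of-sight distance is recovered by dividing by the
    cosine of the deviation angle (an interpretation of Equation (6),
    not a verbatim reproduction).
    """
    d = (real_width_m * focal_px) / pixel_width
    theta = math.atan2(abs(box_cx - image_cx), focal_px)  # deviation angle
    return d / math.cos(theta)
```

A centred detection reduces to the plain pinhole estimate, while an off-centre detection yields a slightly larger line-of-sight distance, consistent with the geometry in Figure 14.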

Simulated Environment
Tests were carried out using different environments. Firstly, the nodes were simulated using simulator nodes hosting streams at different resolutions. The streams consisted of low, medium, high, and ultra-high resolutions, tested at concurrent connection combinations of one to six streams. The simulator ensures that each stream acts as a camera node by looping the same video at different resolutions. Network performance was then measured for the actual hardware camera nodes in a controlled environment, to illustrate how the network and camera nodes performed outside an intra-vehicle environment. Lastly, an intra-vehicle environment was used, where camera nodes were placed at the vehicle's front and rear to take the same readings as in the controlled environment.
Runs of one to six concurrent streams at different resolutions were carried out, and the average frame rates per second were calculated, as shown in Table 1. At the lowest resolution, frame rates were the highest, with frame rates decreasing as the number of streams increased. As the resolutions increased, frame rates dropped to rates that do not accommodate real-time applications.
At high resolutions, the current solution cannot support an ADAS. Improved transmission methods, such as HTTP live streaming (HLS) and real time streaming protocol (RTSP), would improve this by incorporating improved encoding and decoding strategies, but would require more demanding hardware and increased costs.

Intra-Vehicle Environment
The intra-vehicle environment was tested by placing the sensors externally at the front and rear of the vehicle. The tests were carried out on a stationary vehicle. An extra camera node was also placed on the passenger seat for comparison with the controlled environment. Readings were then taken over a 10 min period. The smartphone was mounted inside the vehicle at the driver's seat, near the vehicle's dashboard. Figure 15 shows the throughput of all three sensors, and Figure 16 reveals the density plot. The throughput of the network hovers under 300 kbit/s. Figure 17 shows the frame rate of all three camera nodes as fed to the Android device.

Android Battery Consumption
Multiple streams, as well as different resolutions, were tested to record current draws at different combinations. Figure 18 shows the matrix of current draw averages at different stream to resolution combinations. As expected, as streams and resolutions increased, the current draw increased, due to more processing on the smartphone.

Collision Avoidance
Collision detection uses a MobileNet SSD v1 model, pre-trained on the COCO dataset, to detect vehicles from the network's video stream. The detector uses three different areas to inform the driver about a potential collision. The performance of the detector was tested using the Bosch Boxy dataset, which consists of annotated vehicles for training and evaluating object detection methods for self-driving cars on freeways [52]. By comparing the annotated frames with the detections produced by the collision detector, the confusion matrix shown in Figure 19 was constructed. The matrix was divided into three regions to illustrate the different binary outcomes.

Distance Detection
A test vehicle was used to evaluate the distance estimation algorithm. Markers were placed on the road at 3 metre intervals. The focal length was calibrated at a measured 9 metres, and the distance was then estimated at ground-truth distances of 3, 6, and 9 metres, respectively. Table 2 shows the different intervals and the accuracies of the readings after distance estimation. Distance errors grow as the distance deviates from the original 9 metre calibration point.
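Monocular distance estimation with a single calibration point typically follows the pinhole-camera relation, which is consistent with the 9 m calibration described above. The 1.8 m vehicle width and the pixel widths below are illustrative assumptions:

```python
# Pinhole-model distance estimation from a single calibrated focal
# length. The known vehicle width (1.8 m) and pixel measurements are
# illustrative assumptions; the 9 m calibration distance matches the
# experiment described in the text.

def calibrate_focal_length(known_width_m: float,
                           known_distance_m: float,
                           pixel_width: float) -> float:
    """F = (P * D) / W, from one measurement at a known distance."""
    return (pixel_width * known_distance_m) / known_width_m

def estimate_distance(known_width_m: float,
                      focal_length: float,
                      pixel_width: float) -> float:
    """D = (W * F) / P for a new detection of the same object class."""
    return (known_width_m * focal_length) / pixel_width

# Calibrate at 9 m: suppose a 1.8 m wide car spans 120 px.
f = calibrate_focal_length(1.8, 9.0, 120.0)   # -> 600.0
# The same car spanning 360 px is estimated at 3 m.
d = estimate_distance(1.8, f, 360.0)          # -> 3.0
```

Because the model assumes a fixed real-world width and an ideal lens, errors grow away from the calibration distance, matching the degradation reported in Table 2.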

Lane Detection
Forming part of the ADAS, the driver is warned when the vehicle is about to drift into the lane on the left or right. If the warning tolerance is too wide, every detected lane fires a warning to the driver. This is not ideal, because only lanes close to the vehicle, which indicate that the vehicle is drifting, should trigger a warning. On the other hand, if the tolerance is too narrow, the detected lanes will never enter the tolerance area that should trigger a warning. The tolerance is therefore set to an estimate of the width of the ADAS-equipped vehicle. Figure 20 shows the hatched areas within which a lane's x-intercepts fire a warning to the driver. Accuracy is the number of correctly classified lanes divided by the total number of examples in the test set. Accuracy is helpful, but does not account for class imbalance or for the relative weighting of false negatives and false positives. The F-score addresses this by placing more emphasis on recall or precision. Setting β = 2 makes recall twice as important as precision, on the grounds that missing a lane is much worse than giving a false alert for a non-existent lane.

Blind Spot Detection
To evaluate the accuracy and performance of the developed blind spot nodes, a distance experiment was set up with intervals of one metre, starting from the vehicle. Raw Bluetooth distance samples were captured at every metre interval, using the blind spot node and a Bluetooth Android terminal application. Figure 21 shows the distribution plots at the 1, 2, and 2.5 metre intervals. The number of samples decreases at distances beyond 2.5 metres because the ultrasonic sensor's echo returns diminish. The accuracy also decreases as the distance increases. This degradation is expected, as the sensor is only specified up to 2 metres.
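The distance reading an ultrasonic blind spot node reports is typically derived from the echo's round-trip time; a minimal sketch, assuming a nominal speed of sound (the function name and constant are illustrative, not taken from the paper):

```python
# Converting an ultrasonic echo round-trip time into a distance, as
# a blind spot node would do before sending readings over Bluetooth.
# The speed-of-sound constant assumes ~20 degrees C air; this helper
# is an illustrative assumption, not the node's actual firmware.

SPEED_OF_SOUND_M_S = 343.0

def echo_to_distance_m(round_trip_s: float) -> float:
    """The pulse travels to the obstacle and back, so halve the path."""
    return round_trip_s * SPEED_OF_SOUND_M_S / 2.0
```

Beyond the sensor's rated range the echo weakens below the detection threshold, so fewer valid round-trip times are captured, which is consistent with the drop in samples past 2.5 m.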

Android Application
By using the Wi-Fi protocol, the higher transfer rates allow sensory data from a video sensor to be sent, enabling advanced object detection around a vehicle. Video streaming from the network can then be used for real-time object detection applications for an ADAS on a smartphone device. Detection was successfully carried out on video sourced from both a simulated video stream and network-sourced video streams. Because large models cannot run on smaller embedded devices, such as smartphones, a faster and lighter CNN, MobileNet, was used instead.
Lane detection and collision avoidance were implemented for the ADAS running on Android. It has been shown that a low-cost smartphone-based ADAS can apply image processing techniques capable of detecting lanes on a real-time video stream, and can carry out object detection techniques from which distance estimation and collision risk are calculated on a real-time video stream. The real-time video stream, sourced from the intra-vehicular wireless video network, is used to detect lanes and to warn the driver when the vehicle moves closer to a detected lane. The implemented system warns the driver visually on the screen about lane divergence to the left or right of the vehicle, as shown in Figure 22. The real-time video stream also assists the driver by warning when the vehicle moves closer to hazardous objects; the implemented system warns the driver visually on the screen about a possible collision at the front or rear of the vehicle.
Blind spot detection was added to the implemented ADAS running on Android. It has been shown that a low-cost smartphone-based ADAS can carry out blind spot detection, where ultrasonic sensors measure the distance of a vehicle in the driver's blind spot in order to prevent a collision. The Bluetooth devices form an addition to the already developed network that provides the real-time video stream sourced from the intra-vehicular wireless video network. The blind spot addition assists the driver with visual warnings on the screen when a possible collision in the vehicle's blind spot is imminent, as shown in Figure 22. The collision and lane detection processing speeds were measured for each frame before and after detection. The sample rate of the blind spot detector was calculated from the number of packets received per second. Figure 23 shows the density plot of the lane, collision, and blind spot inference times for the ADAS running on a mid-range Android smartphone. Collision detection has the most significant average delay on the mid-range smartphone running the collision avoidance module. The network delay and the collision detection delay cause frames to be dropped, but the system remains usable in a real-time application. A summary of the proposed system's processing times, including the corresponding frame rates, is shown in Table 3. The network delay is the most significant contributor to dropped frames.
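The frame rates that accompany the processing times in Table 3 follow directly from the per-frame delay; a minimal sketch, with an illustrative delay rather than the paper's measured values:

```python
# Relation between per-frame processing time and the achievable
# frame rate, as summarised alongside Table 3. The 50 ms figure
# below is an illustrative assumption, not a measured value.

def frame_rate_hz(processing_time_ms: float) -> float:
    """A frame pipeline taking t ms per frame caps throughput at 1000/t fps."""
    return 1000.0 / processing_time_ms

fps = frame_rate_hz(50.0)   # a 50 ms per-frame delay caps the stream at 20 fps
```

When the network delay and inference delay are serialised, their sum sets the per-frame time, which is why the network delay dominates the dropped-frame count.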

Conclusions
ADAS are prevalent in high-end vehicles, and low-cost ADAS have not been widely available. Most drivers have smartphones that are capable of complex processing and network communication. Smartphones are useful in ADAS, but their sensors are limited to the peripherals on the device. In this paper, a low-cost smartphone-based ADAS was developed using a high-speed IVWMSN as the communication backbone, thereby extending the smartphone's onboard sensors and using the smartphone as the processing platform. It was demonstrated how the proposed system consumes sensory data received from the IVWMSN and performs accurate lane detection, collision detection, and blind spot detection in real-time, while Bluetooth communication was used for lower data rate sensors, such as proximity sensors. The data transmission rates required for camera-based sensors in a WMSN were achieved. The transmission rates and the performance of each individual ADAS function were analysed and found to be adequate to enable a functional low-cost smartphone-based ADAS.
It was found that an inexpensive advanced driver-assistance system alternative can be realised by using object detection techniques, processed on a smartphone, on multiple streams sourced from an intra-vehicle wireless sensor network composed of camera sensors. For the smartphone application to be used by a driver in real-time, the frame rate must be high enough for the driver to react to an ADAS warning in time. Table 3 summarises the average processing times of the proposed system, which uses the IVWMSN. The Wi-Fi network can reach high throughput rates, but real-time processing is a trade-off that can be improved by using encoding and compression techniques for transferring video streams at higher resolutions. Even though the detectors prefer lower resolution images, other WSN applications might benefit from transferring video streams at higher resolutions. More efficient management of video streams could be implemented to alleviate the processing strain and power usage of the smartphone. A switching algorithm could accommodate multiple camera streams by focusing on detection areas of higher priority, depending on driving scenarios determined by the object detection.

Data Availability Statement: Publicly available datasets were analysed in this study. These data can be found here: https://github.com/TuSimple/tusimple-benchmark and https://boxy-dataset.com/boxy/, accessed on 25 November 2021.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: