A Multi-Tier Security Analysis of Official Car Management Apps for Android

Abstract: Using automotive smartphone applications (apps) provided by car manufacturers may offer numerous advantages to the vehicle owner, including improved safety, fuel efficiency, anytime monitoring of vehicle data, and timely over-the-air delivery of software updates. On the other hand, the continuous tracking of vehicle data by such apps may also pose a risk to the car owner if, say, sensitive pieces of information are leaked to third parties or the app is vulnerable to attacks. This work contributes the first, to our knowledge, full-fledged security assessment of all the official single-vehicle management apps offered by major car manufacturers operating in Europe. The apps are scrutinised statically with the purpose of identifying not only excesses, say, in terms of the permissions requested, but also weaknesses from a vulnerability assessment viewpoint. On top of that, we run each app to identify possible weak security practices in the owner-to-app registration process. The results reveal a multitude of issues, ranging from an over-claim of sensitive permissions and the use of possibly privacy-invasive API calls, to numerous potentially exploitable CWE- and CVE-identified weaknesses and vulnerabilities. They also include the, in some cases, excessive employment of third-party trackers, as well as a number of other flaws related to the use of third-party software libraries, unsanitised input, and weak user password policies, to mention just a few.


Introduction
Nowadays, cars get increasingly smarter and have already become part of the Internet of Things (IoT). According to Statista [1], during the last decade there has been a significant increase in the sales of cars with embedded telematics, while the relevant market is estimated to reach about USD 166 billion by 2025, after recovering from the adverse impact of the COVID-19 pandemic. In this context, the term "connected cars" refers to cars which are connected bidirectionally to one or more external networks in some way. Naturally, this functionality provides one with the ability to manage and even, to some extent, control cars remotely, say, by using an app on their smartphone. There exists a wide variety of apps for connected cars, which generally can be split into single-vehicle and fleet use. The focus of this work is on official single-vehicle management apps. Amongst others, such an app continuously gathers vehicle usage and service data to help the owner keep up with the vehicle's status in real-time. For instance, the user can be informed about the fuel and oil levels, the estimated driving range, and the tire pressure; monitor the distance, fuel consumption, and driving efficiency per route; lock and unlock the vehicle remotely; keep an eye on maintenance needs and schedule a service appointment; be notified when remote (over-the-air) software updates are available for download; locate a parked vehicle; and more.
On the other hand, no standardization or software development best practices exist regarding the building, vetting, and maintenance of such apps. Thus far, considerable work has been devoted to addressing security and privacy in vehicular network systems for intelligent transportation system (ITS) usages [8]. Nevertheless, while connected car apps are a popular subject in this ecosystem, little research has been conducted so far towards evaluating them under the security and privacy prisms. Mandal et al. [9] presented a static analysis approach to discover software vulnerabilities in Android auto infotainment apps [10]. They examined more than 20 infotainment apps available in Google Play at that time, and concluded that nearly 80% of them were potentially vulnerable. Panarotto et al. [11] examined the OpenXC library (http://openxcplatform.com/ (accessed on 20 February 2021)), which provides Android apps with an API to interact with the car's hardware, and showed how this library can be exploited in the context of injection attacks. Furthermore, they proposed a static analysis approach which, according to the authors, nips such attacks in the bud. Recently, Wen et al. [12] proposed a cost-effective and automatic, i.e., requiring no human intervention, system called CANHUNTER, for reverse engineering CAN bus commands using just connected car apps. These apps fundamentally rely on CAN bus commands to engage with the vehicle and achieve compatibility with existing in-vehicle systems. The authors evaluated CANHUNTER by testing it on more than 200 on-board diagnostics (OBD) dongle and in-vehicle infotainment car apps acquired from both the official iOS and Android app markets.
Apps are sorted first per automotive group and then per app name within that group. The same sorting order is used for the rest of the tables and figures across all the sections of this work.
As shown in Figure 1, two axes of analysis were followed. The first scrutinises each app statically, while the second examines the app by manually running it. Specifically, the static axis incorporates several stages of analysis: sensitive permissions and API calls, and third-party trackers, which mainly target the privacy of the end-user; and misconfigurations, weaknesses, and vulnerabilities, which focus on the security of the app. For instance, the latter stage inspects the code of each app to possibly identify CWEs, CVEs, misconfigurations attributed to the use of shared libraries, and so on. This stage also includes a basic taint analysis.
Static analysis employed three different tools. Two of them, namely Androtomist [20] and MobSF [21] are open-source, while the other, namely Ostorlab [22], utilised only for outdated software component analysis and taint analysis, is a software-as-a-service (SaaS) product. Details on these tools are given in the respective sections. When looking for weaknesses and vulnerabilities, we also relied on the methodology set out in the OWASP mobile security testing guide [23].
On the other hand, the dynamic analysis axis concentrates on any kind of possibly weak or misconfigured feature that pertains to the user-to-app registration phase, namely the phase in which a new user installs and runs the app with the aim of creating a user account and registering their vehicle. As already mentioned, this type of analysis was carried out by hand.
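To illustrate the kind of client-side check this phase looks for, the following Python sketch contrasts a weak, length-only password policy with a stricter one. Both regular expressions are hypothetical examples for illustration, not rules taken from any examined app.

```python
import re

# Hypothetical policies: a weak, length-only rule of the kind several
# registration forms were observed to enforce, versus a stricter rule
# requiring length, mixed case, a digit, and a symbol.
WEAK_POLICY = re.compile(r".{6,}")  # length >= 6 only
STRONG_POLICY = re.compile(
    r"(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^\w\s]).{10,}"
)

def accepted(policy: re.Pattern, password: str) -> bool:
    """Return True if the given policy would accept the password."""
    return policy.fullmatch(password) is not None

# A trivially guessable password passes the weak policy but not the strict one.
weak_ok = accepted(WEAK_POLICY, "123456")
strong_ok = accepted(STRONG_POLICY, "123456")
```

A length-only policy of this kind admits the most common leaked passwords, which is exactly the weakness the manual registration tests aim to surface.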

High-Level Static Analysis
This section outlines all noteworthy results per app in regard to a first-tier, coarse static analysis. Extracting the security permissions listed in the AndroidManifest.xml file of an app is a key step towards understanding its overall behavior [55]. Additionally, looking up potentially privacy-invasive API calls in the app's code can, on the one hand, provide supplementary information about higher-risk actions the app may perform and, on the other, reveal whether the identified calls coincide with the requested permissions. For this purpose, as already pointed out, the Androtomist tool [20] has been employed. Specifically, for the needs of this study, the tool collected the permissions from the app's manifest file and the API calls from the smali files. Table 2 gathers the potentially privacy-invasive, "dangerous" according to the API [56], permissions requested by each examined app. Bear in mind that, in contrast to a "normal" permission, every higher-risk permission requires prompting the user. In Table 2, the following 12 permissions are identified.
• P1: READ_CALENDAR allows an app to read the user's calendar data.
• P2: WRITE_CALENDAR permits an app to write the user's calendar data.
• P3: CAMERA grants access to the camera.
• P4: READ_CONTACTS allows the app to read the user's contacts data.
• P5: WRITE_CONTACTS enables the app to write the user's contacts data.
• P6: GET_ACCOUNTS allows access to the list of accounts in the Accounts Service, namely it offers access to the existing accounts on the user's device.
• P7: ACCESS_FINE_LOCATION. This permission allows the app to access the precise location of the device via the use of GPS, WiFi, and mobile cell data. It is also required for some connectivity tasks, including connecting to nearby devices over Bluetooth Low Energy (BLE).
• P8: ACCESS_COARSE_LOCATION is potentially privacy-invasive as it allows the app to access the approximate location of the device through the use of either or both WiFi and mobile cell data.
• P9: READ_PHONE_STATE. This permission allows read-only access to phone state. This includes the current cellular network information, the status of any ongoing calls, and a list of any PhoneAccounts, i.e., apps which can place or receive a phone call, registered on the device.
• P10: CALL_PHONE allows an app to initiate a phone call without going through the dialer user interface for the user to confirm the call.
• P11: READ_EXTERNAL_STORAGE allows an app to read from external storage, such as an SD card.
• P12: WRITE_EXTERNAL_STORAGE permits an app to write to external storage.
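The permission-extraction step described above can be sketched in a few lines of Python. The sample manifest and the use of `xml.etree` are illustrative assumptions, not Androtomist's actual implementation.

```python
import xml.etree.ElementTree as ET

# Attribute names in the manifest carry the Android XML namespace.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# The 12 "dangerous" permissions (P1-P12) discussed above.
DANGEROUS = {
    "android.permission.READ_CALENDAR", "android.permission.WRITE_CALENDAR",
    "android.permission.CAMERA", "android.permission.READ_CONTACTS",
    "android.permission.WRITE_CONTACTS", "android.permission.GET_ACCOUNTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.ACCESS_COARSE_LOCATION",
    "android.permission.READ_PHONE_STATE", "android.permission.CALL_PHONE",
    "android.permission.READ_EXTERNAL_STORAGE",
    "android.permission.WRITE_EXTERNAL_STORAGE",
}

def dangerous_permissions(manifest_xml: str) -> set:
    """Extract requested permissions and keep only the dangerous ones."""
    root = ET.fromstring(manifest_xml)
    requested = {
        elem.get(ANDROID_NS + "name", "")
        for elem in root.iter("uses-permission")
    }
    return requested & DANGEROUS

# Hypothetical manifest fragment for demonstration.
SAMPLE = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
  <uses-permission android:name="android.permission.INTERNET"/>
</manifest>"""
```

Running `dangerous_permissions(SAMPLE)` keeps only ACCESS_FINE_LOCATION, since INTERNET is a "normal" permission.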
As seen from Table 2, all apps requested at least two dangerous permissions. Specifically, a total of 10, 9, 8, and 7 such permissions were requested by 3, 5, 6, and 5 apps, respectively, while only one app requested the minimum of two permissions of this kind. No less important, all apps requested the ACCESS_FINE_LOCATION permission (P7), and the vast majority of them permission to read and write the external storage (P11 and P12). Moreover, from the same table, it can be inferred that apps of the same automotive group may present an identical or nearly identical distribution of permissions. For instance, this observation holds true for the PSA group, and partially for the Mitsubishi and Tata groups. Furthermore, the same allocation of permissions may be perceived for certain apps within a group, e.g., the two "connected" apps in the BMW group.
The above mentioned privacy-invasive API calls have been grouped in Table 3 into three categories, namely cellular network, location, and camera. It is noteworthy that API calls related to the cellular network may also expose the user's location. For example, the phone number (getLine1Number()) or SIM operator name (getSimOperatorName()) may reveal the user's country, while getCellLocation() returns the current location of the device. As shown in Table 4, the API calls found in each app were cross-checked against the respective categories listed in Table 3. As observed from the former table, 21, 28, and 20 apps were found to include API calls pertaining to the cellular network, location, and camera, respectively. Interestingly, only one app includes zero API calls from any of these three categories. Altogether, similarities are perceived among apps that belong to the same group. This is clear for, say, the Mitsubishi and Tata groups, but only partially applies to others. As displayed in Table 4, a last observation is that one app included camera and cellular network related API calls for which the necessary permissions were not declared in its manifest file. Interestingly, however, both missing permissions are announced for this app in its description in the Play Store. Specifically, this app contains the getLine1Number() method, which requires at least one extra permission not declared in its manifest file. Therefore, according to the Android API, this call cannot be executed. The same app includes the android/hardware/Camera;->open() method, which, according to the API, could not be carried out without also declaring the CAMERA permission.
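The cross-check between API calls and declared permissions can be sketched as follows. The call-to-permission mapping is a simplified assumption covering only the calls discussed above; the real check would cover the full permission tables of the Android API.

```python
# Hypothetical mapping: each sensitive API call with at least one
# permission that can authorise it (a simplification for illustration).
CALL_REQUIRES = {
    "getLine1Number": {"android.permission.READ_PHONE_STATE"},
    "android/hardware/Camera;->open": {"android.permission.CAMERA"},
    "getCellLocation": {"android.permission.ACCESS_FINE_LOCATION",
                        "android.permission.ACCESS_COARSE_LOCATION"},
}

def undeclared_calls(found_calls, declared_permissions):
    """Return API calls whose required permissions are all missing."""
    return sorted(
        call for call in found_calls
        if call in CALL_REQUIRES
        and not (CALL_REQUIRES[call] & set(declared_permissions))
    )

# An app using camera and phone-number APIs but declaring neither permission,
# mirroring the mismatch observed for one of the examined apps.
flagged = undeclared_calls(
    ["getLine1Number", "android/hardware/Camera;->open"],
    ["android.permission.ACCESS_FINE_LOCATION"],
)
```

Any call returned by `undeclared_calls` would fail at runtime with a `SecurityException`, exactly the situation described for the flagged app.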

Low-Level Static Analysis
To dig deeper into each examined app, and for Sections 5.1-5.5, we took advantage of the well-known Mobile Security Framework (MobSF) v3.2.4 [21]. MobSF is capable of performing both static and dynamic analysis and is used as a pen-testing, malware analysis, and security assessment framework. It is also one of the all-in-one tools recommended by the OWASP Mobile Security Testing Guide [23].
As summarised in Table 5, the focus of this part of the study is on signer certificate information, APKiD, network security, code analysis aiming to divulge Common Weakness Enumerations (CWEs), tracker analysis, manifest analysis, and shared library binary analysis. For all these categories, and for the sake of brevity, we mention only high severity (or high value according to the common weakness scoring system) weaknesses, intentionally omitting all low to medium value ones, which were numerous. For extracting these pieces of information, MobSF decompiles the provided Android application package (APK) using the Dex to Java decompiler (Jadx) [57]; code de-obfuscation processes may also be applicable to this step.
On the other hand, the last two subsections of the current section are devoted to the use of outdated third-party software by the apps and to taint analysis. For both of these tasks, the Ostorlab tool was utilised. Ostorlab is a well-known software-as-a-service (SaaS) product for reviewing the security and privacy of mobile apps. Note that, for the sake of the reproducibility of the results, we used the free-to-use "community" edition of the tool. To our knowledge, the same tool has been used in the context of similar research [58,59].

Signer Cert., APKiD, Network Security
Each APK is signed by the developer using a specific cryptographic hash function, say, SHA-1, and an APK signature scheme version, say, v3. As observed from the second column of Table 5, seven apps indicate a different hash algorithm (SHA256) than the one actually used (SHA-1) to sign the app. Precisely, the algorithm in parentheses indicates the hash algorithm declared in the manifest file, while the actual algorithm used is shown on the left. On top of this, five more apps, i.e., 12 in total, used SHA-1. Nevertheless, NIST deprecated SHA-1 and disallowed its use for digital signatures in 2011 and 2013, respectively [60]. If the app has been signed with the use of SHA-1 (or MD5), collisions may be possible. This means that apps signed with the corresponding weak algorithm are prone to attacks, including hijacking the app with phony updates or granting permissions to a malicious app. For example, the attacker may be able to repackage the app after including malicious code in it. Then, given that the signature validates, they could phish users into installing the repackaged app instead of the legitimate one.
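A minimal sketch of how an analysis tool might flag a weak signer hash follows. The algorithm-name normalisation is an assumption for illustration, not MobSF's actual logic.

```python
# Hash algorithms that NIST has deprecated for digital signatures,
# in the spellings commonly seen in certificate output.
WEAK_SIGNATURE_HASHES = {"MD5", "SHA1", "SHA-1"}

def signature_hash_is_weak(algorithm: str) -> bool:
    """True if the hash used to sign the APK is collision-prone."""
    # Normalise composite names such as "SHA1withRSA" or "sha-1".
    algo = algorithm.upper().replace("WITHRSA", "").replace("WITHECDSA", "")
    return algo.strip() in WEAK_SIGNATURE_HASHES
```

Applied to the signer certificates of the examined apps, such a check would flag the 12 apps signed with SHA-1.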
Another vulnerability, rooted in improper signature usage, is known as "Janus" (CVE-2017-13156) [61]. Namely, Janus can be exploited if the v1 signature scheme (JAR signing) is used along with Android v5.0 (API 21) to v7.0 (API 25). Specifically, Janus leverages the possibility of adding extra bytes to APK and DEX files without affecting the signature. As shown in Table 5, all but one (30) of the examined apps are vulnerable to Janus, namely they were signed under scheme v1 and support Android v6.
As presented in the fourth column of Table 5, network security analysis pinpointed three high severity vulnerabilities. The first and most serious, namely "insecure base configuration to permit clear text traffic to all domains", means that the app is configured to possibly allow unencrypted network communications. The second is similar to the previous one; the difference is that the scope of domains is narrower. The last vulnerability, titled "Domain is configured to trust user installed certificates", may allow an assailant to successfully exercise a man-in-the-middle (MitM) attack and decrypt the network traffic. It is to be noted that this triplet of issues refers to the network security configuration xml file an app may designate through a special entry <application android:networkSecurityConfig=""> in its manifest file under the <application> tag [62]. Seven apps were found to be susceptible to the first vulnerability, while the other two each apply to a single (different) app.
The Android app identifier (APKiD) [63] offers information about how an APK was built. Precisely, APKiD is used by a plethora of static analysis tools to identify packers, i.e., agents created by packing engines used to protect the software, protectors, obfuscators, etc. Among them, the most interesting category is packers [64], originally made to safeguard the intellectual property of apps. However, nowadays, Android packers like Baidu, Bangcle, Ijiami, and others are used extensively by malware coders, given that reverse engineering tools are typically unable to unpack and inspect concealed payloads within packed apps. As seen in Table 5, only one app was found to leverage packing mechanisms.

CWEs
This subsection succinctly details all the high severity CWEs that are pertinent to each app. Notably, among them, CWE-89 currently belongs to the list of top 25 most dangerous software weaknesses [65], while CWE-295 and CWE-532 occupy positions 28 and 33 in the extended list, respectively.
• CWE-250: It is known as "Execution with unnecessary privileges". Typically, it means that the app may request root access privileges. Therefore, the app is potentially able to disable any security checks performed by the Android operating system (OS), which resembles the case of having a rooted device. Three apps were found to be susceptible to this weakness.
• CWE-330: The "Use of insufficiently random values" weakness is related to the generation of predictable random values inside the app. This issue occurs if the app uses an insecure random number generator. In the OWASP top 10 mobile risks list (OWASP-10), this weakness is placed in the fifth position, namely "insufficient cryptography". Surprisingly, all the examined apps suffer from this weakness.
• CWE-276: This CWE, namely "Incorrect default permissions", occurs if the app is granted unneeded read/write permissions. Thus, any affected file can potentially be read/written by anyone. With reference to OWASP-10, this weakness is classified under M2, namely "insecure data storage". As seen from Table 5, all apps were vulnerable to this weakness for at least one of the following reasons. The first is related to the creation of a temp file, which may contain sensitive data. This is a major issue, since anyone can access folders that contain temp files, say, "/data/local/tmp/*". The second pertains to the fact that the app requests (read/write) access to the external storage.
• CWE-532: This weakness, namely "Insertion of sensitive information into log file", emerges when a production app has enabled logging information to a file. While this feature may be helpful during the development stage of an app, it must be stripped away before the app becomes publicly available. Put simply, an attacker could read these files and acquire any private information stored in them. All apps but one were vulnerable to this issue.
• CWE-312: It is known as "Cleartext storage of sensitive information", and is classified as M9 in OWASP-10. Naturally, when sensitive information, say, a username and/or password, is stored in cleartext form, anyone can read it. In some cases, this information may be stored inside the code of the app, e.g., in a configuration file. As observed from Table 5, only four apps were immune to this weakness.
• CWE-89: This extremely dangerous weakness, titled "Improper neutralization of special elements used in an SQL command ('SQL Injection')", is classified as M7 in OWASP-10. It occurs when the app does not sanitise, or improperly sanitises, input stemming from an upstream component, say, from a Web form for user authentication. All but four apps were found to be potentially vulnerable to this issue.
• CWE-327: It is referred to as "Use of a broken or risky cryptographic algorithm", and it belongs to M5 ("Insufficient Cryptography") of OWASP-10. This weakness relates to the usage of obsolete or risky encryption or hash algorithms. As seen in Table 5, all but two apps may potentially use at least one obsolete hash algorithm, namely MD5 or SHA-1, and nine of them support AES-ECB.
• CWE-295: This weakness, titled "Improper certificate validation", is classified under M3 ("Insecure Communication") in OWASP-10. It occurs when the app is configured to trust an insecure, self-signed, or any kind of certificate. As already mentioned, this situation may allow assailants to instigate MitM attacks. Two of the examined apps suffer from this weakness due to an insecure implementation of TLS.
• CWE-749: It is known as "Exposed dangerous method or function", and it belongs to M1 ("Improper Platform Usage") of OWASP-10. This weakness can weaponise several serious vulnerabilities, depending each time on the underlying vulnerable function. Specifically, more than half (17) of the apps were found to offer an insecure WebView implementation. The latter is used to display web content as part of an activity layout. In the presence of this weakness, an attacker could possibly mount a MitM attack or even execute a Cross-Site Scripting (XSS) injection. For more details regarding this issue, the interested reader may refer to the "WebView" section of [66].
• CWE-919: This weakness, titled "Weaknesses in Mobile Applications", is directly related to CWE-749. Both of them tackle the same issue, but from a different angle. In our case, we observed that nearly one-quarter (7) of the examined apps have enabled remote WebView debugging. Debug mode must be disabled before deploying a production application; otherwise, anyone who can access an unlocked mobile device can easily obtain the app's data.
• CWE-780: This weakness is known as "Use of RSA Algorithm without OAEP". It means that the software employs RSA without Optimal Asymmetric Encryption Padding (OAEP), which in turn might undermine the encryption. Specifically, OAEP is typically used with RSA (RSA-OAEP) to offer resistance against adaptive chosen ciphertext attacks. As seen in Table 5, only one app is susceptible to this weakness.
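Several of the above CWEs, notably CWE-327 and CWE-330, are typically flagged by pattern matching over decompiled Java source. The following Python sketch illustrates the idea with two assumed patterns; real analysers such as MobSF use far richer rule sets.

```python
import re

# Illustrative patterns of the kind a static analyser might match in
# decompiled Java source to flag weak-crypto candidates.
WEAK_CRYPTO_PATTERNS = {
    "CWE-327 (broken algorithm)": re.compile(
        r'MessageDigest\.getInstance\("(MD5|SHA-1)"\)|Cipher\.getInstance\("AES/ECB'
    ),
    "CWE-330 (insecure random)": re.compile(r"new\s+(java\.util\.)?Random\("),
}

def scan_source(java_source: str) -> list:
    """Return the CWE labels whose pattern matches the given source."""
    return [label for label, pat in WEAK_CRYPTO_PATTERNS.items()
            if pat.search(java_source)]

# Hypothetical decompiled snippet exhibiting both weaknesses.
SNIPPET = '''
    MessageDigest md = MessageDigest.getInstance("MD5");
    Random token = new Random();  // predictable; SecureRandom should be used
'''
```

Note that such matching only identifies candidates; whether a flagged call is actually exploitable requires manual confirmation.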
Overall, the analysis yielded an unexpectedly high number of apps being potentially susceptible to high value common weaknesses. For instance, the vast majority, if not all, i.e., 27 to 31 apps, were found to be prone to six of the considered weaknesses, while 17 of them to CWE-749. One can also easily discern a pretty much identical pattern of weaknesses among apps that belong to the same manufacturer group. Despite the fact that keeping up to date with common and impactful weaknesses aids in preventing security vulnerabilities and mitigating risk, as with many other software and hardware products, this situation bespeaks a rather low priority on security features.

Tracker Analysis
This subsection is devoted to the third-party trackers that may be utilised by each app. Specifically, MobSF uses the open source Exodus-Privacy [67] webapp to analyse any detected tracker in the app's source code. The focus is on six diverse tracker categories. The first one is "crash reporters". Such a tracker concentrates on the crashes that may occur during the normal operation of the app. Upon a crash event, it sends a notification message to the developers, informing them about the respective error. The next category is referred to as "analytics". These gather all possible information regarding the usage of the app, say, the time each user spent in the app, which features they used, and so on. On the other hand, the main purpose of the "profiling" type of trackers is to gather various pieces of information regarding the user. Then, they attempt to profile the user with the aim of achieving and optimising personalised advertising. Another category of trackers is referred to as "identification". These can use the gathered information with the purpose of ultimately matching a digital (user) identity with the real person. The penultimate category of trackers is "ads". These trackers are specialised in serving personalised advertisements to the users. The last category is referred to as "location". By using location services along with the data provided by different sensors of the mobile device, the app, and subsequently the trackers, can obtain the geographical location of the user. After that, the user can be targeted with location-based ads. For details about the third-party tracker issue in the mobile ecosystem, the interested reader can refer to the works in [68][69][70].
App analysis revealed that 30 apps use at least Firebase or another Google analytics service as a method to measure users' engagement with them. Given that Google analytics can be considered the price floor in third-party tracking, all eight apps which embrace only this type of tracker or no tracker at all (Porsche Connect) are reported to have zero trackers in Table 6. Nevertheless, we did not treat third-party crash reporters the same way, because it was observed that they were used along with analytics, which monitor the user's behavior. Details on the 24 unique third-party trackers used by these apps are given in the following. Moreover, Table 6 sets out the distribution of these trackers per app.
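Exodus-style tracker detection essentially matches known code-package prefixes against the classes bundled in the APK. The sketch below assumes illustrative prefixes for three of the trackers discussed in this subsection; the real Exodus signature database is considerably larger.

```python
# Assumed code signatures: tracker name -> package prefixes to look for
# in the APK's class list (illustrative, not the Exodus database).
TRACKER_SIGNATURES = {
    "Branch": ["io.branch."],
    "Leanplum": ["com.leanplum."],
    "HockeyApp": ["net.hockeyapp."],
}

def detect_trackers(class_names):
    """Return the trackers whose package prefix appears in the class list."""
    return sorted(
        name for name, prefixes in TRACKER_SIGNATURES.items()
        if any(cls.startswith(p) for cls in class_names for p in prefixes)
    )

# Hypothetical class list extracted from a decompiled APK.
found = detect_trackers([
    "com.example.car.MainActivity",
    "io.branch.referral.Branch",
])
```

Signature matching of this kind detects the tracker's presence in the code, though not whether it is actually invoked at runtime.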
• T1: AltBeacon [71] is a specification and open-source library for proximity beacon implementations. It is used to notify the app when a BLE beacon appears or disappears. Furthermore, it may allow Android devices to transmit beacons in the background. Exodus did not provide any category for this tracker. Nevertheless, through beaconing and geofencing technologies, an app can possibly acquire the location of users and target them with location-based ads [72]. So, we categorise this tracker as a location one.
• T2: Appdynamics [73] is a platform that enables one to monitor and analyse mobile device data. Exodus categorised this tracker as an analytics and profiling one.
• T3: Branch is a mobile measurement and deep linking platform. According to Exodus, this tracker is categorised as analytics. However, among others, Branch collects the IP address of the device and its fingerprint, including identity ID, hardware ID, brand, model, screen DPI, and height.
• T4: Leanplum is self-described as a multi-channel customer engagement platform. Based on the Exodus output, Leanplum is able to use messaging, mobile marketing automation, app personalisation, A/B testing (also known as split testing), and analytics. Based on this, Exodus classifies Leanplum as an analytics, location, and profiling tracker. Furthermore, the privacy policy on the Leanplum website is not clear on whether it also applies to their mobile tracker. Namely, the provided policy mentions that they can perform data sharing, third-party data collection, and gathering of personal information.
• T5: HockeyApp [76] is a subsidiary of the Microsoft corporation. It is mainly used for building, testing, releasing, and monitoring apps, including the reporting of crash reports in real-time. Exodus categorised this tracker as a crash reporter.
• T6: Demdex [77] is a solution for audience management and part of Adobe's advertising ecosystem. According to [78], Demdex "captures behavioral data on behalf of Websites and advertisers and stores it in a 'behavioral data bank'". Exodus categorised Demdex as an analytics tracker, noting that it can perform "cross-device identification", that is, targeting users across different devices through profiling. Furthermore, it is able to apply "geotargeting and location-based targeting" using different technologies, such as GPS, beacons, etc. Lastly, it can gather "real-time geo- and location-based targeting" data from a running app.
• T7: Microsoft Visual Studio App Center Crashes [79] creates an automatic report, which includes any necessary information related to an app crash. When the user re-opens the app, this report is sent to the App Center. In this respect, this tracker is categorised as a crash reporter.
• T17: This tracker [85] is destined to handle the communication step of marketing campaigns. It can manage audiences based on their mobile contacts and send targeted and personalised push notifications or email messages to the user of the mobile device. When a campaign is location-based, it can geofence and transmit beacon-based proximity marketing messages to its audience. It also tracks user engagement with the app using analytics. We categorised it as an ads and analytics one.
• T18: Adobe Experience Cloud [86] is an all-in-one cloud tool, which provides a collection of solutions regarding analytics and advertising. We categorise this tracker as an analytics and ads one.
Summarising the above and with reference to Table 6, more than two-thirds (23) of the apps were found to use at least one third-party tracker. Furthermore, the number of profiling and location trackers across all apps is significant, reaching a total of six and five different trackers, respectively. On the plus side, 19 apps utilise a limited number of trackers, i.e., one to three. Moreover, apps belonging to the same manufacturer group may present an identical distribution of trackers. For instance, this observation holds true for the PSA and Tata groups, but only partially for the BMW, FCA, and VW ones.

Manifest Analysis
Based on the coarse static analysis given in Section 4, we noted the use of several permissions which are dangerous from a privacy-invasive perspective. To deepen the analysis, we utilised MobSF to scrutinise the manifest file of each app and possibly reveal any latent weaknesses. The focus here is on services, activities, and broadcast receivers. All of them employ intents and intent-filters. Based on the Android developer guide, all the aforementioned components except intents must be declared in the manifest file of an app [94].
Intents are basically message objects, which are used for either intra- or inter-app communication. They have three main usages, namely starting an activity, initiating a service, and delivering a broadcast. There are two types of intents, namely explicit and implicit. The former is used to handle messages within the app, while the latter to transfer messages towards another capable app. On the other hand, an intent filter is an expression in an app's manifest file that determines the kind of intents the component would like to receive. In this respect, intent-filters are responsible for handling any implicit intent, e.g., in broadcast receivers, and for capturing system-oriented broadcast messages, where the specific broadcast message is included inside an intent. This means that for an app to receive such a broadcast message, it must declare a matching intent-filter in its manifest file.
The service app component can perform operations without needing a user interface (UI), e.g., transferring files from one app to another without involving the user. There are three types of services: foreground, which is discernible to the user; background, which is not directly perceivable by the user; and bound, which binds a specific service to an app. Lastly, activities are a key component of any Android app. An activity comprises a single, specific thing the end-user can perform, and it usually involves a UI. For example, when a user opens an app, the main activity is typically executed.
We observed that several of these three types of components did not declare a permission in the respective manifest file. According to the Android Developer security tips [66], when a component is declared in the manifest file, it is by default enabled to communicate with other apps. To curtail this functionality, the developer must, among other measures, declare a permission for this component. Then, any other app must possess the same permission to be able to communicate with this component. Note that this is a minimum security measure, meaning it is not enough to fully secure the component, but only reduces the magnitude of the issue. Therefore, the omission to at least assign the proper permission to such a component is identified as a high severity weakness.
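The omission described above can be checked mechanically: the sketch below lists components declared without an android:permission attribute. It deliberately ignores intent-filters and the android:exported attribute, which a complete check would also consider.

```python
import xml.etree.ElementTree as ET

# Android XML namespace prefix for manifest attributes.
A = "{http://schemas.android.com/apk/res/android}"

def unprotected_components(manifest_xml: str) -> list:
    """List services, activities, and receivers declared without an
    android:permission attribute (a sketch of the check described above)."""
    root = ET.fromstring(manifest_xml)
    flagged = []
    for tag in ("service", "activity", "receiver"):
        for comp in root.iter(tag):
            if comp.get(A + "permission") is None:
                flagged.append(comp.get(A + "name", "?"))
    return flagged

# Hypothetical manifest: the service lacks a permission, the receiver has one.
SAMPLE = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <application>
    <service android:name=".SyncService"/>
    <receiver android:name=".BootReceiver"
              android:permission="com.example.PRIVATE"/>
  </application>
</manifest>"""
```

Here only `.SyncService` would be flagged, since `.BootReceiver` is guarded by a permission.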
Specifically, the results obtained per app are summarised in Table 7. The left part of the table splits the components of interest into two categories, namely intent-filter on and off. The first (on) means that the app can send and also receive intents which target that specific component. When a received intent contains malicious content, it can potentially compromise the app, e.g., bypass authentication [95]. The second (off) means that the app can potentially only send any information that is requested from another app, meaning that it is possible for the app to unwillingly leak sensitive information to an attacker. From Table 7, it is inferred that in either of these two categories and across all three components, a considerable number of apps, i.e., 19, 24, 22 and 18, 17, 15, respectively, neglect to declare the appropriate permissions. While some similarities in the numbers can be observed among apps that pertain to the same automotive group, they do not seem to apply as a rule. Exceptions to this are the Mitsubishi and Tata groups.
Moreover, it was observed that the apps of the PSA and Tata groups have enabled a task affinity for a specific activity in which the intent-filter was disabled. Task affinity [96] indicates which task an activity prefers to belong to; through the proper attribute (allowTaskReparenting), an activity can be re-parented to the task it has an affinity for. Furthermore, the task affinity determines the task into which an activity is launched when the FLAG_ACTIVITY_NEW_TASK flag is used. Note that all activities of an app have the same task affinity by default. This is a major issue because an attacker could potentially capture and read intents that are transferred between activities. For example, CVE-2020-0096, dubbed "StrandHogg 2.0", can potentially exploit this issue in unpatched Android OS v8, 8.1, and 9.
The last three columns of Table 7 recapitulate information on a trinity of other important aspects related to the contents of the manifest file, namely, Content, Launch, and Cleartext. The first is related to the content provider [97], namely, an app component that interacts with a data repository. Content providers are handy for apps that wish to offer data to other apps; however, if no permissions are set for a content provider component in the <provider> manifest element, any other app can access this content provider for both reading and writing. As observed from the table, six apps were potentially vulnerable to this issue, neglecting to set the right permissions in their manifest file.
Launch on the other hand, refers to the Launch mode, i.e., the launchMode attribute in the <activity> manifest element, which determines how an activity should be launched. One app was found to specify a different Launch mode for the main activity, which should be launched with the "default" mode. Precisely, this app has set the launch mode to singleTask/singleInstance [98], meaning that it should be instantiated only once (singleton). Under the singleTask mode, the app can handle and control other activities, which have the standard configuration (singleTop activities). Moreover, an activity with the singleInstance launch mode is always the root activity of a task and no other activities will be created in the same task. As already mentioned, this issue is related to the task affinity as well. For more information on how an attack can take advantage of this matter, the interested reader can refer to [99].
Lastly, cleartext, that is, the android:usesCleartextTraffic flag in the <application> manifest element designates whether the app intends to use cleartext network traffic, including cleartext HTTP. As seen from Table 7, nine apps allow this functionality. Naturally, this may compromise the privacy of the end-user.
As a side note, for all three of the above mentioned aspects, a similar distribution is observed for apps classified under the same manufacturer group.
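The kind of manifest check described in this section can be conveyed with a minimal Python sketch. The sample manifest, component names, and permission string below are hypothetical; a full audit, such as the one performed by the tools we employed, would also consider the android:exported attribute and the per-API-level export defaults.

```python
import xml.etree.ElementTree as ET

A = "{http://schemas.android.com/apk/res/android}"  # Android XML namespace

# Hypothetical manifest fragment for illustration only.
SAMPLE = """
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <application android:usesCleartextTraffic="true">
    <receiver android:name=".BootReceiver">
      <intent-filter>
        <action android:name="android.intent.action.BOOT_COMPLETED"/>
      </intent-filter>
    </receiver>
    <activity android:name=".Main" android:permission="com.example.ACCESS">
      <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
      </intent-filter>
    </activity>
  </application>
</manifest>
"""

def audit_manifest(xml_text):
    """Flag components that declare an intent-filter (and are therefore
    implicitly exported on older API levels) without also declaring an
    android:permission, plus any cleartext-traffic allowance."""
    app = ET.fromstring(xml_text).find("application")
    findings = []
    if app.get(A + "usesCleartextTraffic") == "true":
        findings.append(("application", "cleartext traffic allowed"))
    for tag in ("activity", "service", "receiver"):
        for comp in app.iter(tag):
            if comp.find("intent-filter") is not None \
                    and comp.get(A + "permission") is None:
                findings.append((comp.get(A + "name"),
                                 "intent-filter on, no permission"))
    return findings
```

Running audit_manifest(SAMPLE) flags the application element for allowing cleartext traffic and the receiver for lacking a permission, while the permission-protected activity passes the check.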

Shared Library Analysis
This stage of analysis pertains to the shared libraries (with the extension .so) an app may employ. These libraries are usually written in C and compiled with the Android native development kit (NDK) toolset [100]. Android adopts this approach to achieve better performance and to reuse existing C libraries without translating them to Java. Such libraries are loaded into memory at runtime. Security and privacy issues with the use of shared libraries have already been tracked down and clearly pinpointed in the Android literature [101][102][103].
To investigate if and to what degree this issue applies to the examined apps, we present the number of potentially vulnerable shared libraries per app in Figure 2. Again, we only consider high severity potential vulnerabilities, which pertain to four exploit mitigation techniques, namely, no-execute (NX), position-independent executable (PIE), Stack Canary, and relocation read-only (RELRO). These countermeasures, more accurately referred to as memory corruption mitigation techniques, are specific to the C language, and if neglected may leave room for memory-based exploits, which inexorably migrate to the affected Android app. As seen from Figure 2, 27 out of the 31 analysed apps were found to incorporate shared libraries that overlook at least two of the above mentioned remedies. Actually, two of them have libraries that disregard all four of these techniques, while 12 of them neglect all but one. Moreover, apps belonging to certain manufacturer groups, i.e., PSA and Tata, demonstrate an identical pattern.
An OS that supports the NX bit (a feature of the memory management unit of some CPUs) may tag specific sections of memory as non-executable, meaning that the CPU will refuse to execute any code residing in that region. If the NX bit is not set on the library, an attacker may be able to mount a buffer overflow attack. That is, frequently, such attacks place code in a program's data region or stack, and subsequently jump to it. However, if all writable addresses are non-executable (through the -z noexecstack linker flag), such an attack is blocked. As observed from Figure 2, a couple of apps were found to incorporate libraries that allow for executable writable addresses in memory. PIEs, on the other hand, are executable binaries made from position-independent code (PIC). The latter is used by shared libraries for loading their code into memory at runtime, without overlapping with other shared libraries that already exist there. Actually, this is a common mechanism to harden Executable and Linkable Format (ELF) binaries.
As seen from the same figure, 27 apps incorporate shared libraries which were built without enabling the position-independent code flag (-fPIC). This could be exploited by an attacker, forcing the app to jump to a specific part of the memory that contains malicious code.
The use of Stack Canaries is a well-known defence against memory corruption attacks; a canary is a value placed on the stack so that a stack buffer overflowing towards the return address must first overwrite it, allowing the corruption to be detected. If this protection is disabled, the app is prone to stack buffer overflow attacks, say, those that aim to overwrite certain parts of memory with malicious code. Our results show that 14 apps embrace shared libraries which neglect this defence.
Relocation Read-Only (RELRO) is another mechanism to harden ELF binaries by rendering some binary sections read-only. Precisely, RELRO ensures that the Global Offset Table (GOT), a lookup table used by a dynamically linked ELF binary to resolve functions located in shared libraries, cannot be overwritten. An example of such an attack is given in [104]. With reference to Figure 2, at least one library in each app does not make use of the RELRO defence.
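As an illustration of how one of these properties can be detected, the PIE flag is directly visible in the ELF header: PIE (and shared) objects carry the ET_DYN type, whereas fixed-address binaries carry ET_EXEC. The minimal sketch below inspects only the header; checking NX, RELRO, and the canary, as the tools we employed do, would additionally require parsing the program headers (PT_GNU_STACK, PT_GNU_RELRO) and the symbol table (e.g., for __stack_chk_fail). The fake_header helper is purely for demonstration.

```python
import struct

ET_EXEC, ET_DYN = 2, 3  # ELF e_type values

def is_pie(elf_bytes):
    """Return True if the ELF image was built position-independent
    (e_type == ET_DYN), i.e., it can be loaded at a randomised base."""
    if elf_bytes[:4] != b"\x7fELF":
        raise ValueError("not an ELF image")
    endian = "<" if elf_bytes[5] == 1 else ">"  # EI_DATA: 1 = little-endian
    (e_type,) = struct.unpack_from(endian + "H", elf_bytes, 16)
    return e_type == ET_DYN

def fake_header(e_type):
    """Craft a minimal 18-byte little-endian ELF header for demonstration."""
    return b"\x7fELF\x02\x01\x01\x00" + b"\x00" * 8 + struct.pack("<H", e_type)
```

In practice the same check is applied to every .so file extracted from the APK.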

Outdated Software Components Analysis
For both the desktop and mobile platforms, third-party components, say, libraries, comprise one of the cornerstones of modern software development. However, as already mentioned in Section 5.5, the benefit of reusing third-party code may be largely cancelled out if that code is buggy or outdated. This may augment the attack surface of the app by far and expose end-users to security and privacy risks stemming from those external software components. Indeed, the importance of updatability of such libraries on the Android platform has been repeatedly pinpointed in the literature [105,106]. Simply put, it has been shown that many Android apps do not update their third-party libraries, remaining vulnerable to a range of Common Vulnerabilities and Exposures (CVE).
As mentioned earlier, to look closer into this issue under the perspective of the examined apps, we employed the Ostorlab tool. The outcomes of this type of analysis per app are summarised in Table 8. As observed from the table, nearly two-thirds of the apps (19) make use of at least one outdated library. However, as already stated, such a shortcoming is tightly connected to one or more CVEs, meaning that the respective app is susceptible to publicly disclosed security flaws. Given the plethora of the involved CVEs, in the following, we succinctly describe such issues per shared library by just enumerating the relevant CVEs. The interested reader may in addition consult the respective CVE page in [107]. For brevity, we also omit references to very well-known libraries like the Python one. Specifically, Table 8 reveals that more than half of the apps, 16 and 15, respectively, utilise at least one outdated version of the well-known SQLite [108] and OpenSSL libraries. Finally yet importantly, apps that belong to the same group seem to have an identical (BMW, Mitsubishi, FCA, PSA) or approximately identical (VW) pattern regarding the use of outdated libraries.
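The essence of this check is a version comparison of each bundled library against the first release that fixes its known CVEs. The sketch below is only illustrative: the "fixed in" versions are placeholders, and a real audit would map each library version to its published CVEs, e.g., via the NVD data feeds, as the tool we employed does internally.

```python
# Placeholder "first fixed in" versions, for illustration only.
FIXED_IN = {"sqlite": (3, 32, 0), "openssl": (1, 1, 1)}

def parse_version(v):
    # Assumes plain dotted numeric versions, e.g., "3.28.0".
    return tuple(int(x) for x in v.split(".")[:3])

def is_outdated(lib, version):
    """Flag a bundled library whose version predates the assumed first
    fixed release; unknown libraries are conservatively not flagged."""
    fixed = FIXED_IN.get(lib.lower())
    return fixed is not None and parse_version(version) < fixed
```

For instance, under these assumed thresholds, a bundled SQLite 3.28.0 would be flagged as outdated, while OpenSSL 1.1.1 would not.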

Taint Analysis
Static taint analysis, i.e., a form of information-flow analysis, has been performed with the aid of the Ostorlab tool. Typically, this type of examination can pinpoint data leakage type of problems in the examined code. This normally refers to a variety of user or other kind of input sanitisation glitches, that may facilitate intent injection, SQL injection, password tracking, or even buffer overflows. To do so, taint analysis uses a script, which tags every private data of interest, known as the source. By following each source throughout the code as a flow, the analysis may reveal every code snippet that potentially has a leakage, the so-called sink.
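The source-to-sink idea can be conveyed with a toy flow. The sketch below is a drastically simplified illustration (real engines, such as the one embedded in Ostorlab or FlowDroid, operate on the app's full call graph): values produced by source calls are tagged as tainted, taint propagates through assignments, and any tainted value reaching a sink is reported. The statement encoding and the source/sink names are hypothetical.

```python
# Each statement is an (op, target, args) triple over named variables;
# SOURCES produce tainted values and SINKS must never consume them.
SOURCES = {"read_password_field", "get_intent_extra"}
SINKS = {"exec_sql", "write_external_storage"}

def find_leaks(statements):
    """Propagate taint through assignments and report every sink call
    that consumes a tainted variable (a potential leak)."""
    tainted, leaks = set(), []
    for op, target, args in statements:
        if op in SOURCES:
            tainted.add(target)                      # value born tainted
        elif op == "assign" and any(a in tainted for a in args):
            tainted.add(target)                      # taint propagates
        elif op in SINKS and any(a in tainted for a in args):
            leaks.append((op, tuple(args)))          # tainted data hits a sink
    return leaks

# Demo: a password read from a login form flows into an executed SQL query.
PROGRAM = [
    ("read_password_field", "pwd", ()),
    ("assign", "query", ("pwd",)),
    ("exec_sql", None, ("query",)),
]
```

Running find_leaks(PROGRAM) reports the exec_sql call, matching the password tracking category discussed next.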
After analysing all the apps, we grouped the relevant problems into three categories, namely password tracking, intent leakage, and data leakage. Regarding the first, we observed that the relevant apps used unsanitised code in their WebView login form. This issue can be exploited by an aggressor to steal any available credentials or even mount a SQL injection. On the other hand, as already mentioned in Section 5.4, intent leakage is related to three major components of the Android OS: broadcast receivers, services, and activities. Basically, properly guarding these components in the manifest file means fewer leaked intents. Lastly, data leakage mainly has to do with the usage of external storage or unsanitised data that the app receives from other apps or the system per se, and especially with the WRITE_EXTERNAL_STORAGE (P12) permission, which is employed by all but three of the apps, as shown in Table 2. An important remark here is that taint analysis may produce a considerable number of false positives. Therefore, to reduce the error factor, the results of this subsection have been cross-checked with those of the manifest analysis given in Section 5.4, and we kept only the intersection of the two sets based on the findings of taint analysis. Overall, approximately 55% of the taint analysis results were found to be common between the two sets.
The results of applying taint analysis to each of the available apps are illustrated in Figure 3. The numbers inside the stacked bars designate the quantity of the issues that fall under the same category per app. As observed, two-thirds (20) of the examined apps were found to be prone to at least one of the categories of problems pertaining to this type of analysis. Specifically, all of these apps but three present at least one issue classified as data leakage, nine of them present issues belonging to the intent leakage category, and only two exhibit password tracking issues. Last but not least, a straightforward observation is that apps belonging to the same automotive group present more or less the same issues. For instance, all apps destined for the BMW group demonstrate the same distribution of issues across the same categories. The same situation applies to the PSA and TATA groups, and also partially to the VW one.

App Exploration
This section details the results obtained after dynamically testing the different apps from the perspective of the end-user, typically the car owner. Given that it is practically unworkable for one to possess or control, even in the short term, so many different vehicles, this scrutiny is confined to the owner-to-app and/or owner-to-car registration (or pairing) phases and not to the app functionality in general. Namely, this phase has been carried out manually and involves the creation of a new user account, typically from inside the app or through redirection to a website. This normally requires the user to accept terms and conditions (T&C) along with other policies, if any, and enter basic identification information, including their name and surname, country, city, email address, a username and password, etc. Where applicable, it can also include registration of the vehicle against the owner, i.e., the owner is required to provide some kind of ownership proof.
For this purpose, a Xiaomi Redmi Note 8 Pro smartphone, running on Android 10 has been employed. For the apps that required, typically after the user registration phase, a VIN as input, we tested specific vendor VINs that we either generated using a custom-made Python script or acquired from the internet by searching relevant websites [118]. In their majority, these apps requested a full 17-character VIN, except a limited number of cases where a partial VIN was needed. Where applicable, a maximum number of 25 VINs have been tested against the app.
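The custom script itself is not public; a minimal sketch of how syntactically valid VINs carrying a correct ISO 3779 check digit (position 9) can be produced is given below. The WVW world-manufacturer prefix is merely an example, and real VINs additionally encode the model year and plant in fixed positions.

```python
import random
import string

# ISO 3779 letter transliteration (I, O, Q are not valid VIN characters)
# and the per-position weights used for the check digit in position 9.
TRANSLIT = dict(zip("ABCDEFGH", range(1, 9)))
TRANSLIT.update(zip("JKLMN", range(1, 6)))
TRANSLIT.update({"P": 7, "R": 9})
TRANSLIT.update(zip("STUVWXYZ", range(2, 10)))
TRANSLIT.update({d: int(d) for d in string.digits})
WEIGHTS = (8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2)
ALPHABET = "".join(sorted(TRANSLIT))  # all valid VIN characters

def check_digit(vin):
    """Weighted sum mod 11 over the transliterated characters; a
    remainder of 10 is encoded as 'X'."""
    total = sum(TRANSLIT[c] * w for c, w in zip(vin, WEIGHTS))
    rem = total % 11
    return "X" if rem == 10 else str(rem)

def random_vin(wmi="WVW"):  # example world-manufacturer identifier
    body = wmi + "".join(random.choice(ALPHABET) for _ in range(14))
    # Position 9 has weight 0, so it can be overwritten with the result.
    return body[:8] + check_digit(body) + body[9:]
```

For instance, applying check_digit to the well-known example VIN 1M8GDM9AXKP042788 yields X, matching its ninth character.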
All in all, as shown in Tables 9 and 10, this subsection compares the apps based on 20 criteria split into two categories, namely security issues and VIN authentication methods. The specific results per app are given in the following. Table 9. Dynamic analysis security issues. S1: Acceptance of internet or randomly generated VIN, S2: Weak password policy, S3: User registration through website, S4: Weak PIN policy, S5: Informational errors to the user, S6: No password confirmation field, S7: Non-informative or improper error messages, S8: User session is kept alive after terminating the app, S9: Acceptance of disposable temporary e-mail address, S10: One-time 6-digit code.
In total, 7, 10, 4, 3, 1, 10, 3, 8, 1, and 7 apps were found to exhibit issues S1 to S10, respectively.
Volvo On Call: User registration is done through a website at https://volvoid.eu.volvocars.com/Account/initaccount?market=GB&language=en (accessed on 20 February 2021). The user has to enter an email address, a password along with a confirmation field, their first and last name, country, and language. Regarding the password, the app accepted "111111wW". After registering, we managed to successfully add a vehicle with a VIN obtained from the internet. Then, the app informs the user to wait for a PIN to be sent to the email address provided. No such email was received though. The app provides a T&C and a privacy policy.
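The weak policies repeatedly observed under S2 amount to little more than a length and mixed-case requirement. The sketch below contrasts such a minimal rule with a stricter one of the kind observed in the MyFerrari registration form; the tiny blocklist is, of course, only illustrative.

```python
import re

COMMON = {"12345678", "password", "qwertyui"}  # illustrative blocklist only

def weak_policy(pw):
    """The minimal rule several examined apps appear to enforce:
    at least eight characters mixing upper and lower case."""
    return (len(pw) >= 8
            and bool(re.search(r"[a-z]", pw))
            and bool(re.search(r"[A-Z]", pw)))

def stricter_policy(pw):
    """Length, both cases, a digit, a special character, and not a
    trivially common string."""
    return (weak_policy(pw)
            and bool(re.search(r"\d", pw))
            and bool(re.search(r"[^A-Za-z0-9]", pw))
            and pw.lower() not in COMMON)
```

Trivial strings such as "111111wW" satisfy the first rule but fail the second for lack of a special character.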
BMW Connected: It uses a distinct authentication method to connect the app with the vehicle. Specifically, after user registration, performed from inside the app, it requests the last seven characters of the VIN. Then, the app informs the user that the car dashboard shows a 6-digit PIN, which needs to be entered in the app. It was observed that with every wrong answer, the app directs the user back to entering the VIN; no upper bound on the number of wrong answers seems to apply. User registration requires entering the country, email, password, password confirmation, first and last name, title, and a 4-digit PIN, where it accepted "1234". An 8-character password policy is enforced, but it is rather weak, as the app accepted the "111111QQ" one. After user registration, the app presents the user with a privacy policy. It was noticed that the app keeps the previous session alive, so subsequent user logins are done by means of the 4-digit PIN. Another interesting point is that after entering a VIN that was already paired with another user account, the app asks the current user if they wish to change region; however, upon accepting, the app asks for verification via the vehicle's 6-digit PIN.
My BMW: First, the user is required to accept T&C. Before registering a newly created user account, the app informs the user that an account should have already been created on their behalf by their local car dealer. In the relevant registration form, the app provides a password confirmation field and accepted "1111111a" as a password. Next, an email address validation step is required. After login, the user is prompted to enter a 4-digit PIN; the app accepted the "1234" value. Next, the app requested accepting two different T&C and two different privacy policies, and granting specific permissions, namely calendar, camera/photos, location, and notifications. We tried more than 20 VINs, but none of them was accepted. After reopening the app, it requested either fingerprint access or the 4-digit PIN, meaning that no password was required. This functionality applies to the "MINI Connected" app as well.
MINI Connected: User registration is performed from inside the app. Regarding the password policy, it required at least a combination of eight letters, numbers, and/or symbols, accepting the "123456ab" one. Furthermore, it did not offer any option for language, automatically providing the language pertinent to the user's location. While the registration process was successful, user login failed after displaying the message "An error occurred while authenticating. Please try again." In a subsequent attempt to log in, the app requested accepting T&C and a privacy policy. After that, we were requested to provide a 4-digit PIN and validate it; the "1234" value was accepted. Then, the app asked for several permissions, as given in Table 2. We tried to log in and out of the app many times; each time, we were asked to accept the T&C and provide a PIN. To register a vehicle, the app required the last seven characters of the VIN. Based on the vendor, we tried 25 different combinations, but none of them was accepted.
MINI: It is identical to the "My BMW" one. In fact, we logged in using the BMW account credentials. Moreover, the user is required to enter a 4-digit PIN and accept T&C.
Mercedes me: User registration is performed from inside the app. First, the user needs to enter an email address, which is verified by means of a 6-digit code. Next, they have to choose a country, and provide their title, first and last name, and date of birth, but not a password. We managed to pre-register two VINs with this app. For one of them, as a next step, the app required the owner to get authorised at the premises of a local Mercedes car dealer. For the other, probably because the respective VIN corresponded to an unsupported model, the app did not request car dealer authentication, but it did not let the user proceed with any other operation. The app provides a T&C and a privacy policy. Given that user registration is password-less, for the user to reconnect to the app, they need to provide their email, get a 6-digit code, and enter it into the app; this means that the app enforces a kind of one-time password policy.
My Alfa Connect: It requires a paid user subscription for the user to further interact with it. That is, the owner needs to first perform an activation step at an Alfa Romeo dealership. Then, the user receives an email which contains a hyperlink, enabling them to finish registering the account and activating the relevant services on both the app and the https://www.alfaromeoconnect.eu/ (accessed on 20 February 2021) website. Given that the registration procedure requires personal interaction, it was infeasible to analyse this app further. Currently, the app supports only two 2020 models. Neither a T&C nor a privacy policy statement is provided by the app at startup.
FIAT: It is very similar to the "My Alfa Connect" one. In the app's description in Google Play, it is mentioned that "FIAT mobile app and Uconnect Services are available for FIAT vehicles equipped with the new Uconnect Box". So, as with the My Alfa Connect case, it was unattainable to proceed any further with this app.
My Uconnect: After starting the app, the user is informed that it currently supports specific models, which are equipped with the "Uconnect infotainment system". Lastly, similar to the "Uconnect LIVE" one, this app required a paid subscription.
Uconnect LIVE: This app supports FCA vehicles, including those from Abarth and Lancia. The app does not have a strong password policy; simple passwords like "12345678" are accepted. On the plus side, the app needed physical access to the vehicle for being fully enabled. Physical access can be achieved by means of USB or Bluetooth, with the preferred one being over Bluetooth. On top of that, this app requires a paid subscription.
MyFerrari: It requires a validation directly from Ferrari for authenticating the owner and connecting the app to the vehicle. Specifically, for registering, the app redirects the user to a Ferrari webpage at https://www.ferrari.com/en-US/auto/owner-reg (accessed on 20 February 2021). There, we did provide all the necessary (fake) personal information, but as a further step, the owner is required to get in contact with a local Ferrari dealer. In the user registration form, the password policy required at least eight characters, one uppercase letter, one lowercase letter, one digit, and one special character. Another observation is that upon starting, the app asks the user to grant a location permission. The app provides the user with a T&C and a privacy policy.
FordPass: At start, the app asks for country information and whether the user permits the use of cookies. However, even if the latter option is denied, the app proceeds and allows them to create an account. In this step, the user has to accept the T&C and a privacy policy. It was observed that the app accepted "12345678" as a password. Next, the user has to provide a 4-digit PIN, and as with other similar apps, the "1234" value was accepted. Note that the app does not require the user to validate their email address. After that, the user is presented with various marketing options, but the app let them refuse all. As a safety measure, the app also required the user to confirm that they are not driving at that moment. After completing the above mentioned steps, we were able to provide a VIN. From a total of 10 VINs, seven were accepted. For vehicle models prior to 2012, the app only displayed service maintenance data. However, for newer vehicle models, an additional authorisation step popped up. That is, the user has to self-authorise the specific vehicle (to be controlled by them), unless the vehicle is already authorised by another user. In the latter case, the user can either send a request to the already authorised user to grant them an additional authorisation, or deauthorise all users who have previously gained authorisation to the same vehicle; we, however, did not try the latter option to avoid causing any problem to legitimate users. Lastly, we noticed that a paid subscription to specific services, including traffic avoidance information, can be performed from inside the app.
MITSUBISHI RC: At start, the app requires accepting T&C and a privacy policy. It also asks to be granted a couple of permissions, namely Location and Bluetooth. Then, the user needs to connect the app to the vehicle, i.e., turn on the engine and select to register from the dashboard via Bluetooth. Given the requirement of having physical access to the vehicle, we could not test the app any further. Currently, the app supports only two vehicle models.
OUTLANDER PHEV RC: It is similar to the "MITSUBISHI RC" one. The only differences are that this app is compatible with another Mitsubishi model, and the pairing process with the vehicle is done over WiFi.
NissanConnect: User registration is done from inside the app. The relevant form requires an 8-character password and includes a password confirmation field. The password "eE#11111" has been accepted. Next, the app required the user to accept T&C and validate their email address. After login, the user can register a VIN. The app itself does not inform the user about the supported models. However, according to information in Google Play, it is currently compatible with only three models manufactured from 2019 onward. Although we entered 25 VINs, none was accepted by the app.
My Citroën: User registration is performed from inside the app. The displayed form contains fields for entering email, password, confirm password, title, first and last name. A password of at least eight characters is required, and the app accepted the "1111111q" one. After creating a user account, the app asks for a VIN. The app did accept a VIN, and we were able to learn the periodic car maintenance data for that vehicle.
MyDS: It is very similar to the "My Citroën" one. We were able to register using the password "1234567a". The registration process also requires a VIN, but after 25 attempts, no VIN was accepted.
myOpel: It has the same graphical user interface (GUI) as that of the "My Citroën" app, and user registration is performed the same way too. After some tries, the app accepted two VINs, and we were able to see the corresponding vehicle's maintenance data, both for already completed services and future ones. The app provides the user with a T&C and a privacy policy.
MYPEUGEOT: It offers a similar GUI as with the My Citroën app. First, the app requested to accept the T&C and a privacy policy. Then, user registration required providing personal details, namely email, first and last name, password, and title. Password policy is identical to My Citroën. After email verification, one can log in to the app. We successfully registered one VIN, enabling us to see that vehicle's service maintenance data.
Jaguar InControl: Connecting the app to the vehicle requires a paid user subscription along with a pre-installed hardware component. Regarding its GUI, it is almost identical to the "Land Rover InControl" one. After opening, the app requested to be granted a location permission. During user registration, which is done externally in an official Jaguar webpage at https://incontrol.jaguar.com/jaguar-portal-owner-web/select (accessed on 20 February 2021), the user needs to provide the country, first and last name, email, and password. The app accepted "qwertyQ1" as a password. After email verification, the user is transferred again to the webpage, where they can log in. After login, they are presented with another, more detailed registration form, which also asks for a 4-digit PIN, accepting the "1234" value. After that, from inside the webpage, the user needs to add a vehicle, namely a VIN. The privacy policy requires double acceptance, meaning ticking the corresponding boxes and also opening the documents and pressing "accept".
Land Rover InControl: It also requires a paid user subscription. During the user registration phase, the app redirects the user to the Land Rover official webpage at https://incontrol.landrover.com/jlr-portal-owner-web/request-account (accessed on 20 February 2021). The second phase is similar to the "BMW Connected" app, namely the app requested only the last eight characters of the VIN. After 25 attempts, no randomly generated partial VIN was accepted. The user password policy is identical to that of the "Jaguar InControl".
Tesla: This app offers only a login option, meaning that the user should have already been registered elsewhere. Tesla provides a registration service at https://auth.tesla.com/login (accessed on 20 February 2021), so we attempted to register through this website. The relevant form accepted the password "1234567q". After registering, we were able to log in to the app, but no other option, such as registering a VIN, was enabled. It seems that Tesla pre-authenticates the owner, that is, provides each car owner with a specific account upon purchasing the vehicle.
MyT: User registration is done from inside the app. It requires providing personal information, including email address, first and last name, and a password along with its verification field. The app accepted the string "111111a@" as a password. Next, it displayed a T&C and a privacy policy. After email address validation, we were able to log in to the app and successfully enter a VIN. Nevertheless, no information has been made available for the corresponding vehicle. Instead, the app informs the user that such information will be accessible after they verify ownership against a local dealer: "This vehicle needs to be verified and only after this functionality is available". Note that when purchasing a Toyota car, typically, the car dealer will register the buyer as the owner of the car, automatically granting them access to the so-called connected services.
myAudi: It has a similar GUI to that of the MyŠKODA app. As a first step, the app asks for user registration in terms of an email address (acting as username) and password. This is done from inside the app. In this stage, the app mandated an 8-character password and accepted "11111112"; no password confirmation field was present in the form. Afterwards, the user can log in, enter their personal details, i.e., first and last name, and accept the displayed marketing policy, which however is not a requirement. Next, the app asks for a VIN. After some tries, we succeeded in pre-registering a couple of VINs with the app. However, to complete registration, the app requires the user to bring all the necessary documents to a local Audi dealership, which is responsible for confirming the ownership of the specific vehicle and providing them with access to the app.
My Bentley: First, the app informs the user regarding the related vehicle models it supports. Then, the user registration phase initiates from inside the app. After providing all the personal data, i.e., user location, car vendor, first and last name, email, phone number, and address, the app requested a VIN. None of the 25 inputted VINs was accepted, probably because the models supported by the app are new. The app presents the user with a T&C and a privacy policy.
Lamborghini Unica: A user invitation code is required for them to be able to create an account and log in against the app. This code can be obtained upon user request through the app to Lamborghini. The request mandates filling in several fields, namely local dealer, VIN, and ownership documents. The app presents the user with a T&C and a privacy policy.
Porsche Connect: At start, it requires an online registration and verification phase, which is done against a website at https://login.porsche.com/auth/gb/en_GB/registration (accessed on 20 February 2021). In the relevant web form, the owner must upload all the vehicle ownership documents along with personal information, including an email address along with the corresponding verification field, first and last name, and address. The string "11111wW@" was accepted as a password. After that, the app notifies the user that they will be informed by email about the outcome. User login is not done via the app, but through the same website. After trying to log in, we were redirected back to the app, getting the error "There is no vehicle stored for your Porsche ID. Please register your Connect-enabled vehicle at My Porsche. (error: 9_MP_EL).". The app provides the user with a T&C and a privacy policy.
SEAT CONNECT: The app first asks for an email address. After validation, user registration is required. The app accepted "12345671" or "11111112" as a user password, but not "12345678". Then, a 4-digit user PIN was required; as with all other similar apps, the value "1234" was accepted. We were able to pre-register one VIN with this app. Nevertheless, the app requires physical access to the vehicle to complete the user registration phase. That is, the user must possess a device named Dataplug [119], which is connected to the OBD port of the vehicle. The app displayed a T&C and a privacy policy.
MyŠKODA: To initiate the user registration phase, the app requested an email address. Then, from inside the app, the user needs to enter a password; no confirmation field was present, and the app accepted "11111112". Next, the user is presented with a T&C and a privacy policy, and also needs to validate their email address. After login, the user has to accept or deny a marketing policy; however, its acceptance turned out to be mandatory for the app to proceed. Afterwards, the app displayed the error message "Your connection is not secure! Personal information may be leaked!", with error code "ERROR_CONNECTION_NOT_SECURED". After manually inspecting the code of this app, we realised that it creates several plain-HTTP connections, i.e., new HttpURLConnection object instances are obtained by invoking the openConnection() method on a URL object. The app accepted two VINs and informed us that one of these vehicles "is equipped with connectivity digital service technology". After that, the app requested once more that the user accept the marketing policy and, in a next step, provide a country, language, and first and last name. Once again, the user has to accept the T&C and privacy policy, and consent to the marketing policy, which, however, this time can be denied. While some VINs were registered successfully, the app did not provide access to any piece of data related to these vehicles.
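As a side note, cleartext HTTP of the kind identified above can be declaratively forbidden via Android's Network Security Configuration; on Android 9 and later, cleartext traffic is already disabled by default for apps targeting API level 28 or above. The following snippet is an illustrative sketch of such a configuration, not code taken from the audited app:

```
<!-- res/xml/network_security_config.xml: forbid cleartext HTTP app-wide -->
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>

<!-- AndroidManifest.xml: reference the configuration -->
<application android:networkSecurityConfig="@xml/network_security_config">
    <!-- ... -->
</application>
```

With this configuration in place, any HttpURLConnection opened against an http:// URL fails at runtime instead of silently transmitting data in the clear.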
ŠKODA Connect LITE: It presents the same GUI as MyŠKODA. For using the app, the user must possess a dataplug. This app showed a strange behaviour. After we attempted to connect our smartphone over Bluetooth to another vehicle, not of the same group, this app started asking for authorisation in the background. Precisely, the app popped up the message "Authorized workshops are being updated", but at that time, the smartphone had no internet connection of any kind, so this action can be regarded as an attempt to gain unauthorised access. After we manually terminated this service once the vehicle's Bluetooth was disabled, it reappeared, showing the same message, after approximately 5 min.
We Connect: It is similar to SEAT CONNECT, also requiring a 4-digit PIN. After login, we unsuccessfully tried to register a VIN, sometimes getting the error "Something went wrong. VIN backend unknown". Furthermore, as with SEAT CONNECT, it seems that this app requires a dataplug connected to the vehicle to complete the owner's registration phase.

Discussion
Based on the review done in the previous subsection, the following key observations can be drawn. First off, with reference to Table 9, nearly one-third of the apps accepted a randomly generated or internet-acquired VIN. Even worse, for those of them that do not require any kind of ownership proof, it was possible to see information regarding vehicle maintenance, which, although not critical, is certainly privacy-invasive. Second, for most apps, user registration does not mandate a strong password creation policy [120]; this holds true for almost two-thirds of the apps. In addition, nine apps do not provide a password confirmation field, which means that a typo during registration may lock the user out of their account and subsequently force them to execute a password recovery process. However, if, say, the latter is exercised over email, then its security directly depends on the security of the email service. Interestingly, one app supports password-less user authentication via a one-time code sent over email. This option may seem more secure, but it again relies on the security of the email service, which naturally cannot be guaranteed [121]. Furthermore, the user will be unable to log in to the app in the absence of an internet connection or if their email service is down.
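For illustration, a minimal, NIST SP 800-63B-style password check, as sketched below, would already have rejected the trivial passwords accepted during our tests. The blocklist shown here is a tiny, hypothetical stand-in for a real breached-password corpus, which an actual deployment would use instead:

```python
# Minimal password-policy sketch, loosely following NIST SP 800-63B:
# enforce a minimum length and reject known-weak or breached values,
# rather than relying on composition rules alone.
COMMON_PASSWORDS = {  # illustrative stand-in for a breached-password list
    "12345678", "11111112", "111111a@", "password", "qwerty123",
}

def is_acceptable_password(pw: str, min_len: int = 8) -> bool:
    if len(pw) < min_len:
        return False
    if pw.lower() in COMMON_PASSWORDS:
        return False
    # reject near-constant strings made of at most two distinct characters,
    # e.g. "aaaaaaaa" or "12121212"
    if len(set(pw)) <= 2:
        return False
    return True
```

Under this sketch, "11111112", "111111a@", and "12345678" are all rejected, while a long, varied passphrase passes.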
Third, a limited, but not insignificant, number of apps perform user registration externally to the app. This may seem harmless, but it makes security additionally dependent on the security level of the website per se. Where a 4-digit PIN is required as a supplementary user authentication method, feeble PINs are accepted. This is of major importance because anyone who is able to snatch an unlocked phone can potentially access the app by simply entering "1234". More than two-thirds of the apps keep the active session alive even if the app is terminated by the user without logging out. Obviously, this issue is directly associated with the use of weak PINs. No less important, during user registration, all apps accepted a disposable temporary email address, like the ones provided by Temp Mail [122]. Clearly, such a practice should not be accepted because, among others, it is used as a method to create phony accounts or bypass areas that require a registered account. Put simply, the use of a disposable email address in the context of such an app is a clear sign of a user who may be inclined to engage in shady behaviour.
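Both of the above registration-hygiene gaps, disposable email addresses and feeble PINs, are cheap to screen for at sign-up time. The sketch below uses a tiny, illustrative domain blocklist and PIN deny-list; a real deployment would rely on a maintained disposable-domain list (or MX/reputation lookups) and a larger deny-list of common PINs:

```python
# Illustrative registration-hygiene checks. The two small sets below are
# hypothetical samples, not exhaustive lists.
DISPOSABLE_DOMAINS = {"temp-mail.org", "mailinator.com", "guerrillamail.com"}
WEAK_PINS = {"0000", "1111", "1234", "4321", "2580"}

def is_disposable_email(address: str) -> bool:
    # compare only the domain part, case-insensitively
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS

def is_weak_pin(pin: str) -> bool:
    # reject non-4-digit input, deny-listed PINs, and single-digit repeats
    return (len(pin) != 4 or not pin.isdigit()
            or pin in WEAK_PINS or len(set(pin)) == 1)
```

Such checks would, for instance, have flagged both the "1234" PIN and the Temp Mail addresses that all examined apps accepted.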
On the other hand, regarding Table 10, which focuses on VIN-to-owner registration issues, one can infer the following. Some apps apply stricter methods to verify proof of ownership. Precisely, four of them demand in-person contact, while others additionally support online registration. A paid subscription is also mandated by seven apps. Secondly, three apps ask for a partial VIN, but this is not a limitation because the same apps demand additional authentication tokens, which in turn require physical access to the vehicle. Nearly one-third of the apps accept registering VINs that belong to unsupported models. Surely, this functionality serves no purpose and may yield unforeseen problems. Furthermore, a number of apps seem to accept the same VIN being registered with different user (email) accounts without any verification. Naturally, this issue is directly linked to S1 of Table 9.
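It is worth noting that VINs built per the North American flavour of ISO 3779 embed a check digit at position 9, which a registration form could validate before ever contacting the backend; European VINs are not required to carry it, so this is at best a client-side sanity check against typos and casually fabricated VINs, not proof of ownership. A sketch of the standard check-digit computation:

```python
# ISO 3779 / FMVSS 115 check-digit validation for 17-character VINs.
# Letters map to numeric values (I, O, Q never appear in a VIN), each
# position carries a fixed weight, and the weighted sum mod 11 must
# equal the character at position 9 (a remainder of 10 is written 'X').
TRANSLIT = {**{str(d): d for d in range(10)},
            "A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8,
            "J": 1, "K": 2, "L": 3, "M": 4, "N": 5, "P": 7, "R": 9,
            "S": 2, "T": 3, "U": 4, "V": 5, "W": 6, "X": 7, "Y": 8, "Z": 9}
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit_ok(vin: str) -> bool:
    vin = vin.upper()
    if len(vin) != 17 or any(c not in TRANSLIT for c in vin):
        return False
    total = sum(TRANSLIT[c] * w for c, w in zip(vin, WEIGHTS))
    expected = "X" if total % 11 == 10 else str(total % 11)
    return vin[8] == expected
```

A backend would still need to confirm that the VIN exists and belongs to the registering user, but even this cheap check narrows the space of random strings an app will accept.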
Lastly, similar to the other types of analysis, apps that belong to the same manufacturer group present the same or nearly the same characteristics. This is evident in both Tables 9 and 10.

Conclusions
Connected-vehicle app development is a relatively new field, and the demand for such apps is expected to mushroom in the next few years. Furthermore, as the connected car landscape is still at a nascent stage, developers may be tempted to adapt already existing apps to connected vehicles, instead of designing and engineering new ones. On top of that, experience has shown that security and privacy aspects are often not properly prioritised by software developers and vendors, who rather concentrate on the functional ones. However, as a guidepost, security must no longer be left as an afterthought. Instead, one must proactively secure software, thinking, for instance, about security and privacy by design, while building it. Focusing on the Android ecosystem in particular, there are already several noteworthy standards and best practices for developing secure software, including the Android developer website, the OWASP Mobile Top 10 project, the CERT Android secure coding standard, and the JSSEC Android application secure design/secure coding guidebook.
In this context, the results of this work, which stem from the whole population of official single-vehicle management apps offered currently in Europe, can be used as a basis and reference for not only alerting and motivating the involved parties to proactively improve the security robustness of their products and conduct frequent vulnerability assessments, but also for raising the security awareness of end-users. From a security viewpoint, it was demonstrated that a significant number of apps remain susceptible to several high or even critical severity weaknesses and vulnerabilities, which are due to sundry reasons, including misconfigurations, secure coding negligence, and latent or overt security flaws that relate to the use of third-party software. Overall, at least 87% of the apps were found to be potentially exposed to six out of the total 11 CWEs, with about 80% of them utilising a couple of obsolete cryptographic hash algorithms. Moreover, roughly 93% and 87% of the apps exhibited more than one issue in their manifest file and shared libraries, respectively. At the same time, outdated software component exploration and taint analysis revealed that almost two-thirds of the apps have at least one issue. Furthermore, from a privacy standpoint, there exist several hiccups that emanate from the over-claim of sensitive permissions and API calls, the inclusion of trackers, and frail user- or vehicle-to-app registration practices. While apparently all these aspects aim at increasing the app's functionality and improving the user experience as a whole, they potentially render the app more privacy-intrusive at the same time; approximately 42% of the apps incorporate at least two third-party trackers.
On the positive side, a critical mass of apps demonstrated well-architected security practices, especially in relation to the VIN-to-owner registration process, which is a sensitive and essential matter; namely, only 18% of the apps indicated a potential issue in regard to the VIN authentication method.
Future work can concentrate on other types of either official or third-party automotive smartphone apps, say, infotainment ones, and possibly embrace more criteria and angles of examination, such as usability, a full-scale dynamic analysis, the general data protection regulation (GDPR) level of compliance, and so forth.
Funding: This research received no external funding.
Data Availability Statement: Not applicable; the study does not report any data.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: