Article

Regulating Robo-Advisors in Insurance Distribution: Lessons from the Insurance Distribution Directive and the AI Act

Pierpaolo Marano and Shu Li *
1 Department of Insurance and Risk Management, University of Malta, MSD 2080 Msida, Malta
2 Department of Finance and Accounting, Latvijas Universitāte, LV-1586 Rīga, Latvia
3 Department of Legal Studies, Catholic University of the Sacred Heart, 20122 Milan, Italy
4 Legal Tech Lab, University of Helsinki, Yliopistonkatu 4, 00100 Helsinki, Finland
* Author to whom correspondence should be addressed.
Risks 2023, 11(1), 12; https://doi.org/10.3390/risks11010012
Submission received: 30 October 2022 / Revised: 22 December 2022 / Accepted: 23 December 2022 / Published: 4 January 2023

Abstract

Insurance distributors are increasingly using robo-advisors for a variety of tasks, ranging from facilitating communication with customers to providing substantive advice. Like many other AI-empowered applications, robo-advisors can pose substantial risks that should be regulated and corrected by legal instruments. In this article, we discuss the regulation of robo-advisors from the perspective of the Insurance Distribution Directive and the draft AI Act. For each instrument, we ask two questions. (1) From a positive legal perspective, what obligations does the legislation impose on insurance distributors when they deploy robo-advisors in their business? (2) From a normative perspective, are the incumbent provisions within that legislation effective at ensuring the ethical and responsible use of robo-advisors? Our results show that neither the Insurance Distribution Directive nor the AI Act adequately addresses the emerging risks associated with robo-advisors: the rules they imply for the use of robo-advisors in insurance distribution are inconsistent, disproportionate, and often merely implicit. Legislators should address these issues, and authorities such as EIOPA and national competent authorities must also participate by providing concrete guidelines.

1. Introduction

Insurance contracts between consumers and insurers are mainly concluded via insurance intermediaries (Marano 2019). Consumers and insurers rely on insurance intermediaries to be matched with each other, and the quality of the advice given by intermediaries has a direct impact on whether an insurance contract is concluded. In the Digital Age, insurance technologies (InsurTech) are becoming an enabler for insurance undertakings and intermediaries to distribute insurance products. Currently, this revolution is being led by robo-advisors, intelligent interfaces that have already proved successful in the investment domain.
Robo-advisors are typically developed as automated interfaces that mimic customer interaction with human advisors (Maume 2021). The process by which customers reach out to insurance intermediaries and receive advice is fully digitised. The technologies behind robo-advisors, however, can be quite different (Ostrowska and Balcerowski 2020). On the one hand, many robo-advisors in insurance distribution are powered by rule-based algorithms, which are designed to deliver standardised answers and generate predefined actions. Rule-based robo-advisors can free insurance intermediaries from repetitive, mundane work. The typical applications are virtual assistants and chatbots, which are used to facilitate communication between insurance distributors and customers (EIOPA 2021). Customers can expect a quicker response to their cases, even outside regular working hours. On the other hand, machine learning-based robo-advisors are increasingly employed by insurance intermediaries. Compared with rule-based systems, such robo-advisors are truly disruptive enablers: they can produce inferential advice and improve that advice over time. By capturing customers' behavioural data, learning-based systems give insurance intermediaries a better understanding of the risk profile of their customers and the ability to further tailor insurance products in line with models trained on those data (EIOPA 2021). Customers can also anticipate a fast and affordable route to personalised advice (EBA et al. 2015) without endless communication with insurance distributors. Considering the potential of these improvements, many traditional offline insurers and distributors are keen on investing in tech companies that develop robo-advisors. For instance, Allianz has invested in Moneyfarm to explore ways of distributing insurance digitally (RapidValue Solutions 2017).
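To make the contrast concrete, the following minimal Python sketch juxtaposes the two architectures. Everything in it (the rules, products, behavioural features, and labels) is invented for illustration; production robo-advisors rely on far richer data and models.

```python
# Illustrative contrast: rule-based vs. learning-based robo-advice.
# All rules, products, and data below are invented for this sketch.
from sklearn.linear_model import LogisticRegression

# --- Rule-based advisor: predefined triggers, standardised answers ---
RULES = {
    "travel": "Our standard travel policy covers trip cancellation and medical costs.",
    "claim": "To file a claim, please upload the claim form in your customer portal.",
}

def rule_based_reply(message: str) -> str:
    """Return the canned answer for the first matching keyword."""
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    return "Let me forward your question to a human advisor."

# --- Learning-based advisor: infers a risk profile from behavioural data ---
# Toy features per customer: [age, claims_last_5y, km_driven_per_year / 1000]
X = [[25, 2, 30], [42, 0, 8], [35, 1, 15], [58, 0, 5], [22, 3, 40]]
y = [1, 0, 0, 0, 1]  # 1 = high-risk profile (invented labels)
model = LogisticRegression().fit(X, y)

def learned_risk_profile(customer: list) -> str:
    """Inferential output that adapts as the training data grow."""
    return "high-risk" if model.predict([customer])[0] == 1 else "standard"

print(rule_based_reply("How do I submit a claim?"))
print(learned_risk_profile([30, 1, 25]))
```

The rule-based branch can only ever return its predefined strings, whereas the learned branch produces an inference that changes as the data change, which is precisely why the latter raises the governance questions discussed below.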
Despite the benefits offered by robo-advisors, the risks they pose should not be overlooked (Baker and Dellaert 2018). Inaccurate advice provided by these intelligent systems may result in financial losses or, in the worst-case scenario, unfairly bar a specific group of customers from accessing insurance or cause them to pay significantly higher premiums. Furthermore, the potentially harmful consequences of robo-advice for customers are amplified by the scale made possible by automation (Baker and Dellaert 2018). Experimental research has revealed that people distrust algorithmic decisions, a phenomenon known as algorithm aversion (Dietvorst et al. 2015). Recent survey results show that most people still favour professional advisors (44.3%) or self-searching (41.9%) over robo-advisors (9.9%) when purchasing life insurance products (SCOR 2022). Therefore, regulatory measures are needed to ensure that robo-advisors are trustworthy (EIOPA 2021).
From a legal perspective, insurance intermediaries may take on multiple roles in the Digital Age, serving not only as the insurance distributors that customers recognise but also as the deployers of AI systems, a role that can affect the performance of those intelligent systems. This dual role implies that their activities are governed by two important legal instruments: the Insurance Distribution Directive (IDD) (European Commission 2016) and the upcoming Artificial Intelligence Act (AIA) (European Commission 2021). However, the extent to which insurance intermediaries are subject to these instruments is still debatable. If this issue remains unresolved, insurance intermediaries could be discouraged from adopting innovations in their operations; international companies, in particular, may avoid the European market. Against this background, in this article, we aim to provide the basis for a discussion of how insurance distributors that deploy robo-advisors in their practice will be regulated by the IDD and the AIA. On that basis, we conduct a critical analysis of whether the incumbent rules are adequate to address the risks posed by robo-advisors and, more importantly, what measures can be proposed to correct the problems.

2. Regulating Robo-Advisors from an Insurance Distribution Perspective

In this section, we discuss the regulation of robo-advisors in terms of the IDD. Although digital transformation is not central to the IDD rules, such rules are principle-based and therefore potentially applicable to the regulation of most issues arising from that transformation.
The IDD covers "the activities of advising on, proposing, or carrying out other work preparatory to the conclusion of contracts of insurance, of concluding such contracts", and applies to the natural and legal persons carrying out such activities, i.e., the insurance distributors. In addition, the IDD does not prescribe the manner in which distributors approach their customers, whether via an automated system, other tools, or the traditional face-to-face relationship (Köhne and Brömmelmeyer 2018). Hence, distributing insurance through robo-advisors falls within the scope of the IDD (Marano 2019), and the legal issues are twofold.
Firstly, from a positive legal analysis, how will robo-advisors be regulated under the IDD? Secondly, from a normative perspective, are these regulations provided by the IDD sufficient and reasonable? If the answer to the second question is negative, then there is a third question on policymaking: how can we reform the IDD to make it responsive to the risks posed by robo-advisors?

2.1. How Does the IDD Regulate Robo-Advisors?

The IDD sets out standards on distributors (insurance undertakings and intermediaries) and on advice, and those providing robo-advice are within its scope. However, the IDD does not apply to ancillary insurance intermediaries who meet certain conditions; therefore, robo-advice may fall under this exemption.
Indeed, the IDD is silent both on whether the insurance distributor uses automated processes and on the tool(s) used to approach potential customers. Thus, a legal or natural person providing robo-advice through algorithms must be authorised to carry out the distribution activity if this person meets the conditions of the definition of insurance distribution (Marano 2019). As a result, this person must comply with the organisational and business conduct rules established for distributing insurance products.
Regarding the organisational rules, the IDD requires the relevant persons within the management structure of insurance intermediaries and insurance undertakings who are responsible for distribution to demonstrate the knowledge and ability necessary for the performance of their duties. Because this requirement relates to the distributor's organisation, it applies regardless of whether distribution is carried out through robo-advice or conventional methods (Marano 2019).
Regarding the business conduct rules, they include advice, which is defined as "the provision of a personal recommendation to a customer, either upon their request or at the initiative of the insurance distributor, in respect of one or more insurance contracts" (Article 2(1)(15)). Such advice can be "basic" (Article 20(1)) or "advanced" (Article 20(3)) or, in the case of insurance-based investment products (IBIPs), provided on an ongoing basis (Article 29(1)(a)). The standards on advice do not specify how the advice must be provided and thus also cover robo-advice. If advice is not provided, however, any contract proposed must be consistent with the customer's insurance demands and needs (Article 30(1)). Therefore, a wholly or partially automated sales process must satisfy the customer's demands and needs even if the distributor does not provide the advice described above.
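By way of illustration, a fully automated sales flow could implement the demands-and-needs test of Article 30(1) as a hard gate before any contract is proposed. The sketch below is a hypothetical simplification: the field names, products, and matching rule are our own assumptions, since the IDD prescribes the outcome, not the implementation.

```python
# Hypothetical demands-and-needs gate in an automated sales flow.
# Field names and products are invented; the IDD does not prescribe a format.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    covers: set  # cover items offered by the product

@dataclass
class Customer:
    demands_and_needs: set  # collected, e.g., via an online questionnaire

def consistent_with_demands_and_needs(product: Product, customer: Customer) -> bool:
    """Allow a proposal only if the cover meets every stated need."""
    return customer.demands_and_needs <= product.covers

policy = Product("TravelPlus", covers={"trip_cancellation", "medical_abroad"})
applicant = Customer(demands_and_needs={"medical_abroad", "lost_luggage"})

if not consistent_with_demands_and_needs(policy, applicant):
    # The automated process must not propose this contract as it stands.
    print("Proposal blocked: 'lost_luggage' is not covered.")
```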
In addition, distributors must comply with the general principle set forth in the IDD requiring distributors to act honestly, fairly, and professionally, in accordance with the best interests of their customer. This principle applies to the relationship between distributors and customers but also affects how insurers manufacture their products and select distributors to bring such products to the market. Therefore, the business conduct rules become organisational rules and vice versa (Marano 2021a).
Indeed, the IDD sets out standards on the manufacturing of insurance products, requiring manufacturers, among other things, to (i) ensure that the products embed value for the selected target market, (ii) select appropriate distribution channels, including assessing the appropriateness of the manufacturer's and distributor's distribution strategy, and (iii) monitor distribution to the target market.
Finally, the fact that advice on IBIPs is given in whole or in part via an automated or semiautomated system does not absolve the insurance intermediary or insurance undertaking from its responsibilities (Marano 2019). Such systems provide personal investment recommendations, which should be based on a suitability assessment. This standard is consistent with the above general principle: it seeks to ensure that the products recommended to the customer or potential customer are suitable for that person and in line with that person's risk tolerance and ability to bear losses.
On the other hand, the scope of the IDD includes not only insurance undertakings or intermediaries but also other market participants who sell insurance products on an ancillary basis, such as travel agents and car rental companies, unless they meet the conditions for exemption (Recital 8), i.e., where the premium for each contract does not exceed a certain amount and the risks covered are limited (Recital 15). Distribution platforms and ecosystems are becoming increasingly popular, allowing people who qualify as ancillary insurance intermediaries in these environments to generate an overall significant premium income. The exemption, however, takes no account of the total amount of premiums collected, of whether the sales process included advice, or of whether a robo-advisor provided that advice.
Ultimately, a positive legal analysis reveals that the IDD includes robo-advisors in its scope and provides rules regarding the distributor’s organisation, the quality of the advice, and the distributor’s diligence and responsibility. However, this set of rules does not apply if distributors meet the requirements for exemption from the IDD, including those providing robo-advice. It is now necessary to check whether these regulations in the IDD are sufficient and reasonable from a regulatory perspective.

2.2. Regulatory Gaps in the IDD on Robo-Advice

The IDD rests on the principle of technological neutrality, which European policymakers have developed around several meanings over time. Indeed, regulations cannot prevent the use of new technologies. Therefore, regulators must adapt rules designed for a "physical" relational and documentary environment to the new "digital" environment. Nonetheless, if the new technologies support the same activity (distribution), regulating this activity must ensure a level playing field and avoid regulatory arbitrage. Thus, the use of new technologies for insurance distribution cannot be a pretext to benefit from looser rules if the risks arising from the activity are the same (EIOPA 2019).
On the other hand, technological neutrality cannot prevent setting rules to deal with threats arising from these technologies (Greenberg 2016). An assessment of the IDD regulatory framework must consider the threats peculiar to such technologies. If the new technologies can reduce or neutralise some risks, they can increase the relevance of others (OECD 2018), and the IDD may not have taken this into account adequately.
The IDD mainly sets out principle-based organisational and business conduct rules. EIOPA should be more explicit regarding robo-advisors to ensure that national competent authorities interpret such standards in a more convergent manner. The need to supplement the IDD rules is limited if these rules are interpreted in line with technological neutrality, even if some changes still seem necessary to deal with the issues raised by robo-advice.
As mentioned, the IDD rules allow some issues to be resolved if the rules are properly interpreted.
Robo-advice to customers is the outcome of the activity performed by algorithms, and it serves as the first stage in the process of concluding an insurance contract. Because the quality of the advice depends solely on the architecture of the algorithm, this architecture is part of the organisation of the distributors providing robo-advice (Marano 2019).
The IDD requires relevant persons within the management structure of insurance distributors who are responsible for distribution to demonstrate the knowledge and ability necessary for the performance of their duties (Article 10(2)(5)). Thus, Home Member States shall ensure that distributors and their employees possess appropriate knowledge and ability to complete their tasks and perform their duties adequately, and that they comply with continuing professional training and development requirements to maintain an adequate level of performance corresponding to the role they perform and the relevant market (Article 10(2)(1) and (2)). Moreover, insurance (and reinsurance) intermediaries shall demonstrate compliance with the relevant professional knowledge and competence requirements set out in Annex I of the IDD (Article 10(2)(6)).
The Australian Securities and Investments Commission offers a helpful list of what supervisors should expect in terms of adequate human resources and technological resources of entities licensed to provide robo-advice. Regarding distribution via robo-advice, relevant persons must demonstrate that they have the following skills: (i) a thorough knowledge of the technology and algorithms used to provide robo-advice on the insurance products; and (ii) the capacity to review the automated advice generated by algorithms (Australian Securities & Investments Commission 2016).
Although similar requirements are not explicitly mentioned in the IDD or in EIOPA's guidance, these skills and knowledge are essential to demonstrate that relevant persons can complete their tasks and perform their duties adequately when carrying out insurance distribution through robo-advice. Moreover, these skills and knowledge must be included in the continuing professional training and development requirements to ensure that persons maintain an adequate level of performance if the supply of robo-advice is included in the activities for which they are responsible.
If distributors do not understand the technology and algorithms used to provide robo-advice on insurance products, they are likely to act contrary to the general principle that they must always act honestly, fairly, and professionally in the best interests of their customers. Indeed, they may not be aware of the criteria based on which robo-advice is provided, nor may they be able to intervene in the algorithm's outcome. Distributors remain responsible for violations even if they outsourced the creation of the algorithm generating the robo-advice. However, product oversight and governance (POG) rules, as well as the product intervention powers of the supervisory authorities, aim to anticipate customer protection. Sanctions are a last-resort remedy, as customer protection requires manufacturers, distributors, and supervisors to identify harmful behaviour before insurance products are marketed.
The general principle above applies to the manufacture of insurance products. It could be adversely affected if manufacturers do not assess whether the robo-advice provided by their distributors is consistent with the suggested distribution strategy. Indeed, in the case of robo-advice, manufacturers must monitor how algorithms process their products when those products are distributed by the intermediaries that manufacturers have selected as adequate for distributing through this tool. Intermediaries must likewise be aware of how the algorithms process the products if they want to fulfil their obligations to distribute the product in the best interests of their clients and to track and inform the manufacturer of the proper distribution of the products (Marano 2019).
At the same time, supervisors cannot ignore how the algorithm's results are generated; otherwise, they could not adequately supervise compliance with the POG rules. Supervisory authorities, therefore, must check the reasons leading to the selection of intermediaries who intend to carry out distribution through robo-advice, or the reasons leading to the use of robo-advice in the case of direct distribution by insurance undertakings (Marano 2019). Supervisors must also make sure that, on an ongoing basis, the product matches the interests and demands of the target market built into its design, and that an adequate information flow exists between manufacturers and distributors regarding the distribution of products through robo-advice.
While greater awareness of these aspects may lead EIOPA to issue guidelines, and national authorities to act in the same direction, other problems deriving from robo-advice can only be solved by amending the IDD. Such issues concern the liability regime and the exemption for ancillary insurance intermediaries.
Although advice provided in whole or in part through an automated or semiautomated system does not diminish distributors' responsibility, this rule applies to IBIPs only. This regulatory choice probably derives from the fact that robo-advice was first provided for these and similar financial products. However, the reduction in brokerage commissions and the increased competition on the quality of the service provided to the customer are increasingly extending advice, including robo-advice, to other insurance products, both life and non-life.
The EU legislator intended to reaffirm the above principle of full responsibility in Regulation 2019/1238 of 20 June 2019 on a pan-European Personal Pension Product (PEPP) (Article 34(5), European Commission 2019). Thus, the failure to extend this principle to insurance products other than IBIPs does not appear justified (European Commission 2017). The principle of technological neutrality cannot tolerate the use of technology as a pretext for reducing the liability deriving from misbehaviour supported or generated by that technology. National legislation or courts may fill this gap. Nonetheless, the need for harmonisation in customer protection calls for an amendment of the IDD, considering that distributors can carry out robo-advice and, in general, online distribution on a cross-border basis far more easily than traditional distribution.
Finally, where the premium/risk size threshold is met, persons practising insurance distribution as an ancillary activity are exempt from the IDD. In this case, an insurance undertaking or insurance intermediary carrying out the distribution activity through an ancillary insurance intermediary exempted from the IDD requirements should make sure that certain requirements are nonetheless met.
This regulatory choice re-designs the supervisory chain in relation to distribution. The supervisory authority must rely on insurers' supervision of the exempted distributors, so customers lose the protection provided by the authority's direct supervision of intermediaries. Nonetheless, EIOPA expects supervisors to pay particular attention to ancillary intermediaries during POG assessments to understand how manufacturers monitor these intermediaries, while considering their ancillary nature and potential risks (EIOPA 2020).
The premium paid (size) and the risk covered (nature) by each insurance contract are the criteria used for exemption from the IDD rules. The overall number (scale) of contracts that each ancillary insurance intermediary can distribute is not taken into account by these criteria. Treating size and nature, rather than scale, as the adequate criteria is coherent with the IDD's regulatory framework, which did not put digital transformation at the core of its rules. However, once the exemption covers online distribution and, thus, robo-advice, the irrelevance of scale raises concerns about customer protection and distribution risk management (Marano 2021b).
Such distribution enables insurers to reach an indefinite number of people and facilitates their cross-border activities. If corrective action is not taken, standardising and automating the relationship between distributors and customers makes it possible for the same mistake to be repeated indefinitely. Moreover, the collaboration between insurers and these distributors does not occur within uniform rules under which both entities are subject to supervision by the same authority. Compliance with the IDD rules depends on insurers' capacity to reach an understanding with these distributors on how they must work together to manage these risks. That capacity depends on the bargaining power of the parties involved rather than deriving from a legal obligation with which these intermediaries must comply. If the balance of power favours the intermediary, the latter could prefer insurers that are less inclined to manage distribution risks by imposing burdens on intermediaries (Marano 2021b).
The exemption from the IDD requirements is ultimately likely to increase distribution risks for insurers and to be more harmful to customers in online activities than in the face-to-face activities that the EU legislature primarily took into account. Thus, a proposal has already been advanced to exclude ancillary insurance intermediaries from the exemption if they distribute online (Marano 2021b). The rationale behind this proposal applies to robo-advice, as it is a kind of online activity. The principle of proportionality requires not exceeding the limits of what is appropriate and necessary to achieve the objectives pursued by the legislation (Marano 2021b), and this principle underlies the above exemption. Even if the principle of proportionality may justify diminished customer protection, the scale attainable by robo-advisors (and online distributors) could be incompatible with an exemption from the obligations the IDD sets for insurance intermediaries, as the latter often carry out distribution activities on a much smaller scale.
At the same time, thresholds can be introduced to balance the administrative burden of compliance with the IDD rules against the size of the online activity, thus keeping exempt only those online distributors whose risk is comparable to that of distributors operating in the traditional way. EU law is already addressing concerns related to the size of online platforms. The Digital Services Act (DSA) and the Digital Markets Act (DMA) identify thresholds by which obligations apply to the online platforms and providers of platform services within their scope. The criteria used to determine these thresholds consist of the number of average monthly active recipients of the service in the Union (Art. 25 DSA), and of the yearly active business users and the annual EEA turnover or market capitalisation (Art. 3 DMA). These thresholds are therefore based on dimensional criteria that assume the existence of a risk to be neutralised once the threshold is exceeded.
On the other hand, the upcoming Artificial Intelligence Act (the draft AIA) classifies AI systems according to a risk scale that determines the obligations of those who use them. Thus, this criterion could be taken into consideration, at least theoretically, to replace or supplement the merely dimensional criteria that determine the exemption threshold, provided that it proves adequate for all insurance intermediaries/distributors in dealing with (and reducing) the risks arising from robo-advisors.
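The combination of the two criteria can be sketched as follows. The scale threshold and risk tiers below are placeholders of our own, not figures from the DSA, the DMA, or the draft AIA; the point is only the shape of a combined dimensional-plus-risk test for the exemption.

```python
# Illustrative only: a combined dimensional (DSA/DMA-style) and risk-based
# (draft AIA-style) test for keeping the ancillary-intermediary exemption.
# SCALE_THRESHOLD and the tiers are placeholders, not legislative figures.
from enum import Enum

SCALE_THRESHOLD = 50_000  # hypothetical: contracts distributed per year

class RiskTier(Enum):
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2

def keeps_exemption(contracts_per_year: int, tier: RiskTier) -> bool:
    """Exempt only small-scale distributors whose robo-advice is low-risk."""
    below_scale = contracts_per_year < SCALE_THRESHOLD
    low_risk = tier in (RiskTier.MINIMAL, RiskTier.LIMITED)
    return below_scale and low_risk

print(keeps_exemption(10_000, RiskTier.LIMITED))   # True: small scale, low risk
print(keeps_exemption(200_000, RiskTier.LIMITED))  # False: scale too large
```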

3. Regulating Robo-Advisors from an AI Regulation Perspective

The increasing use of AI and the substantial risks it can pose to consumers have prompted EU legislators to put forward a specific AI regulation (the draft AIA). This section discusses how the draft AIA will affect insurance distributors. Firstly, we introduce the regulatory requirements provided by the draft AIA for users (deployers) of AI systems (Section 3.1). Then (in Section 3.2), we focus on the extent to which insurance intermediaries, as deployers (or "users" as defined by the draft AIA) of robo-advisors, will be subject to specific obligations under the draft AIA. We conclude this section (Section 3.3) with a critical analysis of whether the proposed obligations for insurance intermediaries are proportionate to the risk posed by robo-advisors.

3.1. Regulatory Requirements for Users of AI Systems

The risks posed by AI applications have received considerable attention at the EU level. In 2019, the Ethics Guidelines for Trustworthy AI were published by the Commission's High-Level Expert Group (HLEG 2019). According to the Guidelines, several requirements are essential to ensure that an AI system is trustworthy. Thus, in 2021, the Commission presented the draft AIA, the main aim of which was to turn the ethical principles framed by the Guidelines into binding rules and to ensure that consumers across the EU are protected at the same level. In December 2022, the Council adopted its general approach on the AIA, allowing further discussion in the European Parliament towards a final agreement.
The definition of AI provided by the draft AIA is extremely broad (Veale and Borgesius 2021). A system will be considered an AI system if it makes use of one of the following three technologies: machine learning approaches, logic- and knowledge-based approaches, or statistical approaches (Annex I). The draft AIA covers a variety of parties, including not only the providers responsible for developing and manufacturing an AI system but also downstream actors who introduce an AI system into the EU territory (i.e., importers) or deploy the AI system in a concrete scenario (i.e., users). The obligations of these parties, however, differ considerably. Pursuant to the draft AIA, the obligations allocated to providers are onerous; deployers and other parties bear a much lower regulatory burden.
A party's obligations are determined not only by its role but also by the level of risk posed by the AI system it provides or operates. In this regard, the draft AIA adopts a risk-based approach, applying different regulatory requirements to AI systems with different levels of risk. The risks produced by different applications of AI systems are categorised into four groups: unacceptable risk, high risk, limited risk, and minimal risk. The regulatory requirements for a particular AI system therefore depend on the category into which it falls. The four tiers, and the obligations they place on users, are summarised schematically at the end of this subsection.
Certain AI systems are considered to pose unacceptable risks when they are applied in practices that threaten fundamental rights on a large scale. Such applications relate to manipulation, social scoring, and some uses of real-time biometric systems in public spaces (Article 5). The draft AIA prohibits these AI systems from being placed on the Single Market. As a result, there should be no users of such AI systems, so it is unnecessary to provide regulatory requirements for them.
AI systems that pose high risks are considered to have a significantly adverse impact on the health and safety of human beings and may violate fundamental rights to some extent. A variety of applications fall within the scope of high-risk AI systems, including not only traditional products under the already-existing New Legislative Framework (NLF) that are upgraded with AI components but also new standalone systems listed in Annex III. Since the whole of society, including consumers, can benefit from such applications, AI systems in this category are not totally prohibited. Instead, they can be placed on the market on the condition that they comply with a comprehensive set of regulatory requirements. The providers of high-risk AI systems are subject to a series of obligations to ensure that the system is trustworthy when placed on the market (Article 16). To prove that all requirements are satisfied, high-risk AI systems have to undergo a conformity assessment (Article 19). Users of high-risk AI systems are also expected to fulfil certain duties that ensure they can properly oversee the operation of the systems (Article 29). These include using high-risk AI systems in accordance with the instructions, making sure that the input data are relevant for the intended purpose, informing providers when a risk is identified, and ensuring that logs continue to be automatically generated. It is noteworthy that users may be regarded as providers, and thereby have to comply with the requirements set for the latter, if they make substantial modifications to the function or purpose of an AI system (Article 28). In this case, since the users' activity is no longer merely deploying the AI system, it is reasonable to impose on them the same requirements as those imposed on providers.
AI systems with limited risks are applications intended to interact with natural persons; chatbots are a typical example. The draft AIA requires providers and users to meet certain transparency obligations, depending on the application. Specifically, providers are obliged to design such AI systems in a way that makes natural persons aware that they are interacting with AI rather than humans (Article 52(1)). Users, in comparison, are responsible for disclosing the use of emotion recognition, biometric categorisation, or deepfakes to the persons who might be influenced by such applications. Beyond these specific applications, however, the draft AIA does not subject users to any transparency measures.
If an AI system is designed or used for none of the abovementioned purposes, it is regarded as a system with minimal risk, for which the draft AIA imposes no regulatory obligations on providers or users.
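The tier logic above can be condensed into a small lookup table. The Python sketch below is a schematic simplification for orientation only: under the draft AIA the real classification turns on legal tests (Article 5, the NLF and Annex III, Article 52), not keywords, and the obligation lists merely paraphrase the articles cited above.

```python
# Schematic summary of the draft AIA's four risk tiers and the duties of
# users (deployers). A simplification for orientation; the actual
# classification is a legal assessment, not a programmatic one.
from enum import Enum, auto

class Risk(Enum):
    UNACCEPTABLE = auto()  # Art. 5: prohibited practices (e.g., social scoring)
    HIGH = auto()          # NLF safety components or Annex III systems
    LIMITED = auto()       # Art. 52: systems interacting with natural persons
    MINIMAL = auto()       # everything else

USER_OBLIGATIONS = {
    Risk.UNACCEPTABLE: ["none: the system may not be placed on the market"],
    Risk.HIGH: [
        "use the system in accordance with the instructions (Art. 29)",
        "ensure input data are relevant to the intended purpose",
        "inform the provider when a risk is identified",
        "keep the automatically generated logs",
    ],
    Risk.LIMITED: [
        "disclose emotion recognition, biometric categorisation, or deepfakes",
    ],
    Risk.MINIMAL: ["no obligations under the draft AIA"],
}

for duty in USER_OBLIGATIONS[Risk.HIGH]:
    print(duty)
```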

3.2. Regulatory Requirements for Insurance Intermediaries under the Draft AIA

As already mentioned, the definition of an AI system under the draft AIA is rather broad. Robo-advisors, whether powered by rule-based or advanced machine learning algorithms, will thereby be recognised as AI systems without distinction and will be subject to the same regulatory requirements when adopting the same practice.
The next core issue is what level of risk robo-advisors pose. The answer depends on the function that a robo-advisor performs in a concrete scenario. In the following paragraphs, we examine more closely the risks posed by robo-advisors and the obligations of operators when robo-advisors are used for different purposes in the process of insurance distribution.

3.2.1. Do Robo-Advisors Pose Unacceptable Risk?

According to the draft AIA, subliminal techniques that can distort a person's behaviour without their awareness, in a manner that causes or is likely to cause physical or psychological harm, will be banned in the Single Market (Article 5(1)(a)). This Article rests on the belief that AI should not be developed or used to distort human autonomy. The scope of the prohibition, however, can be blurry in practice. Notably, there is a consequence requirement (i.e., causing or being likely to cause physical or psychological harm) for banning AI systems that employ subliminal techniques. In this sense, if the loss is purely economic, an AI system will not be banned even if it employs subliminal techniques. Robo-advisors are such applications: the harm resulting from them is neither physical nor psychological but purely financial. Specifically, robo-advisors may utilise subliminal techniques to disadvantage consumers by inducing them to purchase an unsuitable insurance product or pay higher premiums (Strzelczyk 2017). Such detriment represents a purely economic loss rather than an infringement of substantive protected rights.
Therefore, even if a robo-advisor can unconsciously manipulate customers or exploit their specific vulnerabilities to induce them to enter a contract, the harm requirement will exempt providers and insurance intermediaries from bearing regulatory obligations under the draft AIA. In this regard, manipulation resulting from the use of robo-advisors will not be banned outright by the draft AIA. Instead, such behaviour may constitute an aggressive commercial practice, which is regulated by the Unfair Commercial Practices Directive (European Commission 2005).

3.2.2. Do Robo-Advisors Pose High Risk?

As indicated in Section 3.1, AI systems can be categorised as high-risk if they are used as safety components of products that are already regulated by the NLF or constitute standalone systems for certain purposes. Robo-advisors can be regarded as high-risk AI systems when credit institutions employ them for creditworthiness evaluation (Annex III). This classification, however, is controversial.
Access to certain essential private and public services directly determines the standard of living of natural persons, so AI systems should not be utilised to discriminate against particular groups of people and unfairly exclude them from enjoying these essential services (Zuiderveen Borgesius 2020). What is not yet clear under the draft AIA, however, is whether access to insurance is part of these "essential services" and whether insurance intermediaries constitute credit institutions. Insurance intermediaries can use an online questionnaire to collect information from customers for further profiling (Bianchi and Briere 2021). The draft AIA indicates that the definition of "credit institution" is consistent with Directive 2013/36/EU (European Commission 2013a), which in turn refers to Regulation (EU) No 575/2013 (European Commission 2013b). In this sense, insurance undertakings and intermediaries will not be defined as credit institutions. However, the draft AIA also explains "essential services" in a broad manner, including services related to housing, electricity, and telecommunications (Recital 37). Against this backdrop, it is not clear whether specific activities carried out by insurance intermediaries (e.g., risk assessment) can be read as a kind of creditworthiness evaluation (BETTER FINANCE 2021).
Therefore, absent further interpretation confirming whether insurance undertakings and intermediaries are covered by Annex III of the draft AIA, we may have to infer that providing and using robo-advisors with a creditworthiness-evaluation function for distributing insurance is not within the scope of high-risk AI applications. If so, neither providers nor insurance intermediaries (as users) will be subject to the obligations proposed for high-risk AI systems.
From a risk regulation perspective, however, it is not reasonable to treat risk assessment in insurance distribution differently from creditworthiness evaluation by credit institutions (e.g., banks) (Wendehorst 2021). Discrimination also happens when robo-advisors are used to determine access to insurance or the premiums charged to people from different groups (Zuiderveen Borgesius 2020). Even though personalised insurance pricing based on protected characteristics (e.g., gender or race) is prohibited by non-discrimination laws (Xenidis and Senden 2019), indirectly discriminatory results can persist with the adoption of learning-based systems (Zuiderveen Borgesius 2018). As the literature shows, due to correlations between various types of data, a person's vulnerability can still be exploited in ways that developers did not intend (Prince and Schwarcz 2019). To avoid this side effect, it is necessary to place robo-advisors with a risk-assessment function on the list of high-risk AI applications. By doing so, providers of robo-advisors would become subject to comprehensive regulatory obligations, especially regarding human oversight. Insurance intermediaries would also be subject to specific requirements; for example, they would not be permitted to use input data that are irrelevant to the intended purpose of the robo-advisor (Article 29(3)).
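The proxy effect described by Prince and Schwarcz (2019) is easy to reproduce on synthetic data: even when the protected attribute is excluded from training, a correlated "neutral" feature carries it back in. The sketch below fabricates such data purely for demonstration; no real insurance data, features, or model is implied.

```python
# Demonstration on fabricated data: removing the protected attribute does
# not remove discrimination when a correlated proxy feature remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
postcode = group + rng.normal(0, 0.3, n)   # "neutral" feature correlated with group
income = rng.normal(50, 10, n)             # genuinely neutral feature
# Historical labels are biased: group 1 was charged high premiums more often.
high_premium = (group + rng.normal(0, 0.5, n)) > 0.5

# Train WITHOUT the protected attribute.
X = np.column_stack([postcode, income])
model = LogisticRegression().fit(X, high_premium)
pred = model.predict(X)

print("predicted high-premium rate, group 0:", pred[group == 0].mean())
print("predicted high-premium rate, group 1:", pred[group == 1].mean())
# The two rates differ sharply: 'postcode' acts as a proxy for the group.
```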

3.2.3. Do Robo-Advisors Pose Limited Risk?

Apart from the uncertain regulatory landscape described above, there is relatively little doubt that robo-advisors deployed for insurance distribution can be categorised as AI systems with limited risk, given their essential function of interacting with customers.
In this circumstance, providers must comply with the transparency obligations required by the draft AIA, whereas users of robo-advisors are not subject to any obligations unless the robo-advisors hold certain attributes (e.g., emotion recognition, biometric categorisation, or deepfakes) that can deceive customers who do not realise they are interacting with AI systems (Article 52). In most cases, therefore, insurance intermediaries that use robo-advisors will not be subject to any obligations at all. This remark also applies to insurance undertakings when they provide advice to customers directly through robo-advisors. Insurance distribution should not be left behind when it comes to the obligation of transparency. Even if the draft AIA fails to impose such an obligation on insurers, EU authorities (such as EIOPA) and national competent authorities should fill this gap by providing guidance to make sure robo-advisors are trustworthy (EIOPA 2021).
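Operationally, the provider-side transparency duty of Article 52(1) can be as light as a mandatory disclosure before the first exchange. The sketch below is hypothetical (the names and wording are our own) and only illustrates where such a disclosure would sit in a chatbot flow.

```python
# Hypothetical illustration of the Art. 52(1) transparency duty: the
# interface discloses upfront that the customer is talking to an AI system.
def start_session(advisor_name: str) -> None:
    """Open every conversation with a machine-disclosure banner."""
    print(f"You are chatting with {advisor_name}, an automated advisor, "
          "not a human. You may ask for a human advisor at any time.")

def answer(question: str) -> str:
    """Placeholder for the actual advice engine."""
    return "Based on your answers, a term life policy may fit your needs."

start_session("InsureBot")
print(answer("Which life product suits me?"))
```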

3.3. Discussion

We can now summarise our key findings on how insurance intermediaries that adopt robo-advisors to distribute insurance will be regulated under the draft AIA. In general, such intermediaries are expected to meet few obligations. Most will be regarded as users of AI systems with limited risk and therefore need not meet any obligations under the draft AIA. Even if their activities involve manipulation through subliminal techniques, which can lead to large economic losses, or risk assessment, which may discriminate against specific vulnerable groups by barring them from accessing services, their practice would not be categorised as posing an unacceptable or high risk. As a result, they are likely to escape the relevant obligations set by the draft AIA.
It is therefore surprising that replacing human advisors with robo-advisors seems to add no obligations for insurance intermediaries beyond the incumbent regulations, even though such applications can create substantial risks. In other words, the regulation of AI is not proportionate to the risk that AI can pose. Robo-advisors, especially those with a learning capacity, can no longer be equated with traditional chatbots, so it is sensible to subject them to different sets of regulatory requirements. Insurance intermediaries, as the party that decides whether to adopt robo-advisors, determines how the application is used, and maintains the overall performance of the system, should be obliged to take achievable measures to ensure the quality of the robo-advisor. In other words, the draft AIA, as a horizontal approach to all AI systems, may fail to identify the emerging risks and problems in the process of insurance distribution. In a letter to the co-legislators on the draft AIA, Petra Hielkema, Chairperson of EIOPA, correctly pointed out that "the AI Act should identify the relevance of the use of AI in the financial sector, and in particular in the insurance sector, but leave further specification of the AI framework to sectorial legislation" (EIOPA 2022). The regulation of AI in a specific sector should be further developed on a sectoral basis. In the sphere of insurance distribution, EIOPA and national competent authorities should take on the role of monitoring AI systems used for insurance distribution. They should further establish requirements of transparency and non-discrimination for insurance distributors to make sure that AI systems are used ethically. Finally, they can take the lead in ensuring that the relevant rules do not overlap or become too complex (EIOPA 2021).

4. Conclusions

The increasing use of robo-advisors can disadvantage consumers and requires insurance distributors, as deployers of such intelligent systems, to comply with additional regulatory requirements. In this article, we discussed two areas of law on which we may rely to reduce the risks that robo-advisors pose.
On the one hand, we explored the role of the IDD in regulating robo-advisors. The IDD includes robo-advisors in its scope and provides rules regarding a distributor's organisation, the quality of the advice, and the distributor's diligence and responsibility. Furthermore, the IDD allows some issues to be resolved if its rules are properly interpreted. However, the IDD does not apply if distributors meet the requirements for exemption, and this exemption can cover (ancillary) insurance intermediaries providing robo-advice. Thus, a revision of the IDD should reconsider this exemption for online activities, including robo-advice, because of the scale these activities can achieve. Although purely dimensional thresholds may be "rigid" with respect to the actual recurrence of the customer risk to be neutralised, thresholds can be used to align the above proposal with the principle of proportionality.
On the other hand, we discussed how the coming AIA could reduce the risk in insurance distribution when robo-advisors are deployed. Our findings show that the duties conferred on relevant stakeholders (not only providers but also deployers such as insurance undertakings and intermediaries) are surprisingly disproportionate to the incremental risk generated by robo-advisors. To fill this regulatory gap, we suggest that robo-advisors with a risk-assessment function used in insurance distribution be regarded as high-risk AI systems, no different from the creditworthiness evaluations of credit institutions. By doing so, providers who develop robo-advisors would be required to undergo a comprehensive conformity assessment. Insurance distributors would also be required to comply with certain regulatory requirements (regarding input data, risk reports, and logs), but such requirements are proportionate to their role as deployers and not beyond their capacities. These requirements should be further clarified by the AIA or the IDD in a consistent manner, and authorities at the EU and national levels should contribute by providing guidelines and monitoring the participants.

Author Contributions

Conceptualisation, P.M. and S.L.; writing—original draft preparation, P.M. and S.L.; writing—review and editing, P.M. and S.L.; visualisation, S.L.; supervision, P.M. and S.L.; project administration, S.L.; funding acquisition, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Academy of Finland, grant number 330884.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to express their appreciation for the valuable comments from three anonymous reviewers and for the assistance provided by the editorial team.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Australian Securities & Investments Commission. 2016. Providing Digital Financial Product Advice to Retail Clients. Available online: https://asic.gov.au/regulatory-resources/find-a-document/regulatory-guides/rg-255-providing-digital-financial-product-advice-to-retail-clients (accessed on 10 September 2022).
  2. Baker, Tom, and Benedict Dellaert. 2018. Regulating Robo Advice Across the Financial Services Industry. Iowa Law Review 103: 713.
  3. BETTER FINANCE. 2021. Feedback on the EU Commission Proposal on Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Available online: https://betterfinance.eu/publication/artificial-intelligence/ (accessed on 10 September 2022).
  4. Bianchi, Milo, and Marie Briere. 2021. Robo-Advising: Less AI and More XAI? Available online: https://ssrn.com/abstract=3825110 (accessed on 10 September 2022).
  5. Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144: 114.
  6. EBA, EIOPA, and ESMA. 2015. Joint Committee Discussion Paper on Automation in Financial Advice. Available online: https://www.esma.europa.eu/sites/default/files/library/jc_2015_080_discussion_paper_on_automation_in_financial_advice.pdf (accessed on 10 September 2022).
  7. EIOPA. 2019. Report on Best Practices on Licensing Requirements, Peer-to-Peer Insurance and the Principle of Proportionality in an Insurtech Context. Available online: https://register.eiopa.europa.eu/Publications/EIOPA%20Best%20practices%20on%20licencing%20March%202019.pdf (accessed on 10 September 2022).
  8. EIOPA. 2020. EIOPA's Approach to the Supervision of Product Oversight and Governance. Available online: https://www.eiopa.europa.eu/content/eiopa-approach-supervision-product-oversight-and-governance_en (accessed on 10 September 2022).
  9. EIOPA. 2021. Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector: A Report from EIOPA's Consultative Expert Group. Available online: https://www.eiopa.europa.eu/document-library/report/artificial-intelligence-governance-principles-towards-ethical-and_en (accessed on 10 September 2022).
  10. EIOPA. 2022. Letter to Co-Legislators on the Artificial Intelligence Act. Available online: https://www.eiopa.europa.eu/document-library/letter/letter-co-legislators-artificial-intelligence-act (accessed on 10 September 2022).
  11. European Commission. 2005. Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council ('Unfair Commercial Practices Directive'). OJ L 149: 22–39.
  12. European Commission. 2013a. Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC. OJ L 176: 338–436.
  13. European Commission. 2013b. Regulation (EU) No 575/2013 of the European Parliament and of the Council of 26 June 2013 on prudential requirements for credit institutions and investment firms and amending Regulation (EU) No 648/2012. OJ L 176: 1–337.
  14. European Commission. 2016. Directive (EU) 2016/97 of the European Parliament and of the Council of 20 January 2016 on insurance distribution. OJ L 26: 19–59.
  15. European Commission. 2017. Commission Delegated Regulation (EU) 2017/2359 of 21 September 2017 supplementing the IDD with regards to information requirements and conduct of business rules applicable to the distribution of IBIPs. OJ L 341: 8–18.
  16. European Commission. 2019. Regulation (EU) 2019/1238 of the European Parliament and of the Council of 20 June 2019 on a pan-European Personal Pension Product (PEPP). OJ L 198: 1–63.
  17. European Commission. 2021. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM/2021/206 Final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206 (accessed on 10 September 2022).
  18. Greenberg, Brad A. 2016. Rethinking Technology Neutrality. Minnesota Law Review 100: 1495–562.
  19. HLEG (High-Level Expert Group). 2019. Ethics Guidelines for Trustworthy AI. Available online: https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html (accessed on 10 September 2022).
  20. Köhne, Thomas, and Christoph Brömmelmeyer. 2018. The new insurance distribution regulation in the EU—A critical assessment from a legal and economic perspective. The Geneva Papers on Risk and Insurance—Issues and Practice 43: 704–39.
  21. Marano, Pierpaolo. 2019. Navigating InsurTech: The digital intermediaries of insurance products and customer protection in the EU. Maastricht Journal of European and Comparative Law 26: 294–315.
  22. Marano, Pierpaolo. 2021a. The Contribution of Product Oversight and Governance (POG) to the Single Market: A Set of Organisational Rules for Business Conduct. In Insurance Distribution Directive: A Legal Analysis. Edited by Pierpaolo Marano and Kyriaki Noussia. Berlin/Heidelberg: Springer, p. 62ff.
  23. Marano, Pierpaolo. 2021b. Management of Distribution Risks and Digital Transformation of Insurance Distribution—A Regulatory Gap in the IDD. Risks 9: 143.
  24. Maume, Philipp. 2021. Robo-Advisors: How Do They Fit in the Existing EU Regulatory Framework, in Particular with Regard to Investor Protection? Publication for the Committee on Economic and Monetary Affairs, Policy Department for Economic, Scientific and Quality of Life Policies. Luxembourg: European Parliament.
  25. OECD. 2018. Financial Consumer Protection Approaches in the Digital Age. Available online: https://www.oecd.org/finance/G20-OECD-Policy-Guidance-Financial-Consumer-Protection-Digital-Age-2018.pdf (accessed on 10 September 2022).
  26. Ostrowska, Marta, and Maciej Balcerowski. 2020. The Idea of Robotic Insurance Mediation in the Light of the European Union Law. In InsurTech: A Legal and Regulatory View. Cham: Springer.
  27. Prince, Anya E. R., and Daniel Schwarcz. 2019. Proxy discrimination in the age of artificial intelligence and big data. Iowa Law Review 105: 1257.
  28. RapidValue Solutions. 2017. The Rise of Robo-Advice in Insurance Companies. Available online: https://rapidvalue.medium.com/the-rise-of-robo-advice-in-insurance-companies-ef21d2ed8a0 (accessed on 10 September 2022).
  29. SCOR. 2022. Consumer Survey on Robo-Advice for Life Insurance. Available online: https://www.scor.com/en/expert-views/consumer-survey-robo-advice-life-insurance (accessed on 10 September 2022).
  30. Strzelczyk, Bret E. 2017. Rise of the machines: The legal implications for investor protection with the rise of Robo-Advisors. DePaul Business and Commercial Law Journal 16: 54.
  31. Veale, Michael, and Frederik Zuiderveen Borgesius. 2021. Demystifying the Draft EU Artificial Intelligence Act—Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International 22: 97–112.
  32. Wendehorst, Christiane. 2021. The Proposal for an Artificial Intelligence Act COM (2021) 206 from a Consumer Policy Perspective. Available online: https://www.sozialministerium.at/dam/sozialministeriumat/Anlagen/Themen/Konsumentenschutz/Konsumentenpolitik/The-Proposal-for-an-Artificial-Intelligence-Act-COM2021-206-from-a-Consumer-Policy-Perspective_dec2021__pdfUA.pdf (accessed on 10 September 2022).
  33. Xenidis, Raphaële, and Linda Senden. 2019. EU non-discrimination law in the era of artificial intelligence: Mapping the challenges of algorithmic discrimination. In General Principles of EU Law and the EU Digital Order. Edited by Ulf Bernitz, Xavier Groussot, Jaan Paju and Sybe de Vries. Alphen aan den Rijn: Kluwer Law International, pp. 151–82.
  34. Zuiderveen Borgesius, Frederik. 2018. Discrimination, Artificial Intelligence, and Algorithmic Decision-Making. Council of Europe, Directorate General of Democracy. Available online: https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decisionmaking/1680925d73 (accessed on 10 September 2022).
  35. Zuiderveen Borgesius, Frederik. 2020. Strengthening legal protection against discrimination by algorithms and artificial intelligence. The International Journal of Human Rights 24: 1572–93.


