Article

Tacit Collusion on Steroids: The Potential Risks for Competition Resulting from the Use of Algorithm Technology by Companies

by Christophe Samuel Hutchinson 1,*, Gulnara Fliurovna Ruchkina 2 and Sergei Guerasimovich Pavlikov 1
1 Department of Legal Regulation of Economic Activity, Financial University under the Government of the Russian Federation, 125167 Moscow, Russia
2 Law Faculty, Financial University under the Government of the Russian Federation, 125167 Moscow, Russia
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(2), 951; https://doi.org/10.3390/su13020951
Submission received: 18 December 2020 / Revised: 8 January 2021 / Accepted: 14 January 2021 / Published: 19 January 2021

Abstract

Digitalization has a growing impact on everyone’s life. It influences the way consumers purchase products, read online news, access multimedia content, and even meet or interact socially. At the core of digital products lies algorithm technology: decision-making software capable of fulfilling multiple tasks, such as data mining, result ranking, user matching, dynamic pricing, product recommendations, and ad targeting. Notwithstanding the perceived benefits of algorithms for the economy, the question has been raised of whether their use by businesses might have countervailing effects on competition. Although any anti-competitive behavior typically observed in traditional markets can be implemented by this technology, a particular concern highlighted in discussions between researchers and practitioners is that algorithms might foster collusion. Because of their capacity to increase market transparency and the frequency of interactions between competing firms, they can be used to facilitate parallel collusive behavior while relieving competing firms of the need for explicit communication. Consequently, it cannot be excluded that algorithms will be used in the years to come to obtain the effects of a cartel without the need to enter into restrictive agreements or to engage in concerted practices. We evaluate the collusion risks associated with the use of algorithms and discuss whether the concept of an “agreement for antitrust purposes” needs revisiting. The more firms use types of algorithms that enable direct or indirect communication between competitors, the more likely those companies are to be held liable.

1. Introduction

Digitalization has a growing impact on everyone’s life. It influences the way consumers purchase products, read online news, access multimedia content, and even meet or interact socially. At the core of digital products lies algorithm technology, decision-making software capable of fulfilling multiple tasks: search result ranking, user matching, dynamic pricing, data mining, product recommendations, and ads targeting, among others.
Algorithms bring economic benefits. On the demand side, they allow a better organization of the information and quicker and more effective access to it. They help consumers to see and act on rapidly changing prices of online services, such as the sales of sports tickets, taxi fares, or hotel bookings. They enable end-users to be informed on dimensions of competition other than prices, such as clients’ preferences and quality. On the supply side, algorithms can facilitate innovations by unleashing new business models, reducing search and production costs, allowing for the personalization of products and services, and the fast adjustment of stocks and prices to changes in market conditions.
Notwithstanding the perceived benefits of algorithms for the economy, it has been debated whether the use of algorithms by businesses might have adverse effects on competition. Although any anti-competitive behavior typically observed in traditional markets can be implemented by this technology [1], a particular issue highlighted in discussions between researchers and practitioners is the concern that algorithms might foster collusion [2,3,4,5,6,7,8]. Because of their capacity to increase market transparency and the frequency of interactions between competing firms, they can be used to facilitate parallel collusive behavior while relieving competing firms of the need for explicit communication. Consequently, it cannot be excluded that in the years to come algorithms will be used to obtain the effects of a cartel without the need for firms to enter into restrictive agreements or to engage in concerted practices. Against this backdrop, the question has been raised as to whether the concept of “agreement for antitrust purposes”, as defined in US and EU competition rules, needs to be reconsidered [9].
While acknowledging the benefits of algorithms for the economy, this contribution addresses the potential risks for competition resulting from the use of this technology by companies. It first elaborates on the notion of algorithms as well as on the different types and fields of application of algorithms (Section 2). It subsequently focuses on the collusion risks associated with the use of decision-making software (Section 3) and concludes on the question of the liability of companies for the anti-competitive conduct of the algorithms they use (Section 4).

2. Algorithms: Notion, Types, and Fields of Application

2.1. Definition of an Algorithm

Although the term “algorithm” has been around for some time [10], there is no consensus on its precise meaning. It can be defined as “a sequence of simple and/or well-defined operations that should be performed in an exact order to carry out a certain task or class of tasks or to solve a certain problem or class of problems” [10] (p. 1). Within this broad definition, algorithms are sometimes compared to cooking recipes; the inputs being the ingredients, the algorithm’s operations the elementary cooking operations, and the end result the meal [11].
However, some algorithms are able to solve not only one but several distinct problems, achieving at the same time a certain degree of abstraction [12]. Subsequently, the term “algorithm” could designate on the one hand a standardized method of solving a certain type of problem and on the other hand the fact of coding this method in a computer language.
Restricted to computer science, an algorithm can be defined, as Cormen et al. put it, “as any well-defined calculation procedure which takes a value, or a set of values, as input and produces some value, or a set of values, as output”, i.e., “a sequence of calculation steps which transform the input into output” [13] (p. 5).
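To illustrate this input–output view, Euclid’s algorithm, a textbook example of a well-defined procedure, can be written as a short Python sketch (our own illustration, not drawn from the cited works):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite sequence of well-defined steps that takes
    two integers as input and produces their greatest common divisor as output."""
    while b != 0:          # repeat until the remainder is zero
        a, b = b, a % b    # replace (a, b) by (b, a mod b)
    return a               # the last non-zero value is the gcd

print(gcd(1071, 462))      # -> 21
```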
Notwithstanding this broad definition, this article focuses on digital algorithms likely to have economic consequences and, more particularly, anti-competitive effects.
In order to evaluate the respective potential anti-competitive impact of each type of algorithm, it seems useful to classify them according to the type of task they perform.

2.2. Typology of Algorithms According to the Tasks They Perform

Algorithms can help businesses create and improve products and services in a number of ways. For instance, they enable companies to identify the most relevant results for a particular search, make personalized shopping recommendations to users based on past purchase information and browsing history, recommend articles to a reader based on their online media browsing history, or match customers’ requests and offers via matching algorithms, as is now often the case in the sharing economy [14]. Companies can also use pricing algorithms, as shown in Table 1, to react quickly to changes in competitors’ prices and to adjust supply conditions such as stock availability and production capacity to fluctuations in market demand.
Taking into account the great diversity of tasks which can be performed by algorithms, this paper focuses on those which, in our view, pose the greatest potential risks for competition.

2.2.1. Monitoring and Data Collection Algorithms

Algorithms can help businesses to collect data related to buyer preferences [18] or to competitors through the use of scraping [19]. The e-commerce sector inquiry conducted by the European Commission between June 2015 and March 2016 found that “53% of the respondent retailers track the online prices of competitors, out of which 67% use automatic software programs for that purpose” [20] (paragraph 149).
As shown in Figure 1, competition concerns can arise when the data collected by this kind of algorithm is combined with the use of a pricing program that allows for the automatic setting of an agreed price between competitors, but which triggers a price war when a company deviates from it.
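As a purely illustrative sketch of the monitoring step, the routine below polls hypothetical competitor product pages and extracts the posted prices; the URLs and the CSS selector are assumptions, and a real deployment would add scheduling, error handling, and attention to each site’s terms of use.

```python
import requests
from bs4 import BeautifulSoup

COMPETITOR_PAGES = {                       # placeholder URLs, for illustration only
    "rival_a": "https://example.com/rival-a/product/123",
    "rival_b": "https://example.com/rival-b/product/123",
}

def fetch_price(url: str) -> float:
    """Download a product page and extract the displayed price."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.select_one("span.price")    # assumed selector for the price element
    return float(tag.get_text(strip=True).lstrip("$"))

def monitor() -> dict:
    """Collect the current price posted by each tracked competitor."""
    return {name: fetch_price(url) for name, url in COMPETITOR_PAGES.items()}

if __name__ == "__main__":
    print(monitor())                       # e.g. {'rival_a': 19.99, 'rival_b': 20.49}
```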

2.2.2. Parallel Algorithms

Companies that use these algorithms unilaterally design and implement them in order not only to monitor prices set by competitors but also to automatize their decision-making process so that their prices react immediately to any change in market conditions.
Parallel algorithms have been used in various business areas, such as online air tickets, hotel reservations, or taxi fares, among others, to efficiently adjust prices to changes in supply and demand. According to the European Commission’s e-commerce sector inquiry mentioned above, a “majority of retailers who use software to track prices then adjust their own prices to those of their competitors (78%)” [20] (paragraph 149).
As shown in Figure 2, a collusive outcome could be obtained if the majority of companies present on the relevant market use algorithms to react in real time to the prices set by the market leader above the competitive level (“follow-the-leader strategy”).
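A stylized sketch of such a follow-the-leader repricing rule is given below; the price levels are illustrative assumptions and the sketch is not taken from any cited source. Each firm’s algorithm simply matches the leader’s posted price whenever it sits above the firm’s own competitive benchmark.

```python
def reprice(leader_price: float, competitive_price: float) -> float:
    """Next price under a simple follow-the-leader rule."""
    if leader_price > competitive_price:
        return leader_price        # align on the leader's supra-competitive price
    return competitive_price       # otherwise revert to the competitive benchmark

# Example: the leader posts 12.0 while the competitive benchmark is 10.0
print(reprice(leader_price=12.0, competitive_price=10.0))   # -> 12.0
```

When most firms on the market apply such a rule in real time, their prices converge on the leader’s supra-competitive price without any communication taking place.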

2.2.3. Signaling Algorithms

Signaling algorithms enable firms to send signals indicating an intention to collude. Although signaling can be observed in any market, it has some drawbacks. For example, whenever a company raises its price with collusive intent, if most rivals either do not receive the signal or intentionally decide not to respond to it, the signaling company is exposed to a drop in sales and profits.
Awareness of such a risk could cause companies to wait for competitors to signal first, potentially leading to delays or even coordination failure [21]. The cost of signaling may be reduced or eliminated by the use of algorithms thanks to their ability to perform “instantaneous price changes in the middle of the night” [9] (p. 30), which competing firms’ powerful analytical tools can interpret as signals but which usually go unnoticed by consumers, who rarely buy at that time of day.
Figure 3 shows how a signaling algorithm works. First, each company constantly sends new price increase offers and at the same time monitors the signals sent by competitors. At the end of the negotiation, the economic operators send the same signal and fix the price agreed between themselves, which remains in place until a new successful negotiation takes place.
The very rapid interactive actions that eventually converge towards a common price enabled by the use of signaling algorithms can be compared to a negotiation process between business owners implementing a collusive agreement. This type of practice has given rise to concern among a number of competition authorities [22].
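To make the mechanism concrete, the minimal sketch below shows how a rival’s monitoring tool might classify an overnight price increase as a likely signal rather than a genuine offer; the time window and price values are our own assumptions, used for illustration only.

```python
from datetime import datetime

SIGNAL_WINDOW = range(2, 5)        # 02:00-04:59, assumed low-traffic hours

def is_signal(observed_at: datetime, old_price: float, new_price: float) -> bool:
    """Classify a competitor's price change as a likely collusive signal:
    an increase posted during hours when almost no consumers are buying."""
    overnight = observed_at.hour in SIGNAL_WINDOW
    increase = new_price > old_price
    return overnight and increase

print(is_signal(datetime(2021, 1, 10, 3, 15), old_price=10.0, new_price=12.0))  # -> True
print(is_signal(datetime(2021, 1, 10, 14, 0), old_price=10.0, new_price=12.0))  # -> False
```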

3. Collusion Risks Potentially Associated with the Use of Pricing Algorithms

Although, as previously seen, the use of modern algorithms has many advantages for economic operators, it nevertheless involves risks, in particular for competition, because it can facilitate a collusive outcome.
This section will first outline the different possible definitions of the notion of collusion and assess the potential impact of algorithms on market conditions likely to lead to a collusive outcome. It will then focus on the ways algorithms may be used by companies to help them solve the coordination problem without communicating and emphasize the impact of this technology on the factors which may affect the stability of the collusion, such as market transparency and frequency of interactions. In this context, the question has been raised of whether the notion of an “antitrust agreement” as defined in US and European competition law should be reconsidered.

3.1. Collusion—Concepts and Definitions

Collusion is generally defined as a market outcome in which companies, through forms of coordination on quantities and prices, achieve higher profits than they would under normal competition. As Harrington and Harker point out, “collusion is when firms use strategies that embody a reward–punishment scheme which rewards a firm for abiding by the supra-competitive outcome and punishes it for departing from it” [23] (p. 331).
Two forms of collusion are distinguished:
  • Explicit collusion, which relates to anti-competitive behaviors made possible by explicit agreements, whether written or oral. An agreement on the optimal level of price or production is generally for competitors the most direct way to obtain a collusive result.
  • Tacit collusion, which concerns forms of anti-competitive coordination that can be achieved without an explicit agreement but that competitors can maintain by recognizing their reciprocal interdependence. Against such a background, the anti-competitive result is obtained by each participant determining its own profit-maximization strategy independently of its competitors.
Between explicit collusion, which is always prohibited by competition rules, and tacit collusion or conscious parallelism, which is legal as long as it does not consist of coordination between competitors, there is a grey area of behavior of firms which goes beyond conscious parallelism but at the same time does not imply an express agreement between rivals.
Such a situation occurs frequently in oligopolistic markets where competitors are able to coordinate on prices and increase the likelihood of a tacitly collusive outcome and to punish participants who deviate from the oligopoly’s policy [24].
EU law distinguishes explicit collusion from tacit collusion. The European Court of Justice (ECJ) considers that an “agreement”, or explicit collusion, within the meaning of Article 101 of the Treaty on the Functioning of the European Union (TFEU) implies some form of communication and a sense of mutual engagement, so that the parties realize that they have reached a “meeting of minds” or a “concurrence of wills”.
In its Bayer ruling, the ECJ defined the notion of agreement as “the existence of a concurrence of wills between at least two parties, the form of which doesn’t matter as long as it constitutes the faithful expression of the intention of the parties” [25].
With regard to the notion of “concerted practice”, the ECJ defined it in its Suiker Unie judgment as “a form of coordination between companies which, without having been brought to the stage where an actual agreement has been concluded, knowingly substitutes practical cooperation between them for the risks of competition”, covering “in particular any direct or indirect contact between these operators by which a company can influence the behavior on the market of its actual or potential competitors or disclose to them its decisions or intentions concerning its own behavior on the market” [26] (paragraph 26).
The prohibition of such a form of coordination “does not deprive economic operators of the right to intelligently adapt to the existing and anticipated behavior of their competitors” [26] (paragraph 174) added the ECJ in the same ruling. “Each producer is free to modify its prices, taking into account, in doing so, the current and foreseeable behavior of its competitors” [27] (paragraph 118). This “intelligent adaptation” is known as “tacit collusion”.
From an economic standpoint, both explicit and tacit collusion can increase prices. From a legal one, what sets them apart is the element of communication between rivals, since Article 101 TFEU does not cover unilateral behavior. Whether the communication leads to an agreement or a concerted practice is unimportant, as those two notions “encompass forms of collusion having the same nature but which are distinguished only by their intensity and the forms in which they manifest themselves”, concluded the ECJ in its T-Mobile judgment [28].
Applied to the field of algorithms, the condition of “intelligent adaptation” to the market conditions can be considered as fulfilled when such a type of software simply observes, analyzes, and reacts unilaterally to the publicly observable behavior of competitors’ algorithms. For example, the algorithms of two companies could, through repeated interactions, be able to “decode” each other’s pricing policy, thus allowing them to better anticipate the reaction of the other.
However, if the use of algorithms leads to some form of coordination which could influence the behavior on the market of a competitor or reveal their intentions on their future conduct, such behavior is likely to fall under Article 101 TFEU.
Such a situation can arise, for example, when companies outsource the design of a certain type of algorithm to the same IT companies and programmers. This can be the case in the “hub and spoke” [29] (p. 1787 et seq.) scenario, according to which a transaction platform between two or more user groups determines the prices of the transactions. Uber is based on the use of this type of platform, which enables it to offer passengers driving services performed by private individuals with their own vehicles [30]. One of the advantages of such a business model is that Uber avoids entering into an employment relationship, the drivers acting as independent contract partners. The fare is determined by Uber’s price algorithm which is used by all drivers. In addition to the distance to be covered and the vehicle class, the algorithm used by Uber takes into account demand fluctuations in real time and dynamically adjusts prices. Since the price is set by the same software used by all drivers, price competition between them is virtually impossible.
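A simplified sketch of such platform-side pricing is shown below; the base fare, per-kilometre rate, and surge rule are illustrative assumptions and do not describe Uber’s actual formula. Because every driver quotes the fare produced by the same function, price competition among the “spokes” is effectively removed.

```python
def platform_fare(distance_km: float, vehicle_class_multiplier: float,
                  open_requests: int, available_drivers: int) -> float:
    """Fare set centrally by the platform (the hub) and applied to every driver."""
    base_fare = 2.50                           # assumed flag-fall
    per_km = 1.20                              # assumed distance rate
    demand_ratio = open_requests / max(available_drivers, 1)
    surge = max(1.0, min(demand_ratio, 3.0))   # cap the surge multiplier at 3x
    return round((base_fare + per_km * distance_km) * vehicle_class_multiplier * surge, 2)

# Every driver on the platform quotes the same fare for the same trip:
print(platform_fare(distance_km=8.0, vehicle_class_multiplier=1.0,
                    open_requests=120, available_drivers=60))   # -> 24.2
```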
In another scenario known as “signaling”, described in Section 2.2.3, firms use algorithms to send signals on their intentions to collude on prices or quantities. One of the most widely used techniques in “signaling” is the price announcement made sufficiently in advance by a company to allow its competitors to adjust their prices accordingly. Such a situation occurred, for example, in the Container Shipping case [31] in which the European Commission considered that the sending of press releases to announce price increases could be prohibited under Article 101 TFEU. Signaling can also be used through the programming of “instantaneous price changes in the middle of the night” [9] (p.30), which allows a company to give a glimpse of its future prices to competitors equipped with sophisticated algorithms capable of decoding these stealthy price announcements without consumers even knowing about it.
In “parallel algorithms”, a third scenario, each firm independently implements a pricing algorithm that constantly monitors and adjusts prices according to changes in market conditions. This scenario can be combined with a “follow-the-leader” strategy, according to which companies make a parallel use of pricing algorithms to follow the price set by the market leader at supra-market level, as shown in Figure 2 above. Such a practice, which enables companies to coordinate their actions without communication, may also raise some competition concerns under Article 101 TFEU.

3.2. Factors Likely to Increase the Stability of Collusion in Algorithm-Driven Markets

In its report on the “Economics of Tacit Collusion” for the Directorate-General for Competition of the European Commission, the team of economists led by Professor Jean Tirole identified three factors most likely to have an impact on the stability of collusion [32]: market transparency, the number of competitors present on the relevant market, and the frequency of interactions between rivals. A simple formalization of how these factors interact is sketched after the list below.
  • Number of firms
The greater the number of companies present on the relevant market, the more difficult it is for them to reach an agreement. Hence the fact that a relatively concentrated market is more likely to lead to a collusive outcome [33]. This does not seem to be the case in digital markets, where the ability of algorithms to quickly analyze a large quantity of data and to monitor the behavior of a large number of companies facilitates coordination. In other words, the use of algorithms makes collusion possible in less highly concentrated markets.
  • Market transparency
The more transparent a market, the more the probability of collusion increases [32] as the deviations from the common line of action of cartel members are less attractive on transparent markets. The transparency of a market can be further increased by the use of algorithms because of their ability to collect and process much larger amounts of data than those obtained by conventional methods.
  • Frequency of interactions
The more interactions between competitors on a market, the easier it is to maintain collusion as deviations from the common line of the cartel participants can be detected and sanctioned more quickly [32]. In traditional markets, price adjustments take time and are often costly. In digital markets, on the contrary, the use of algorithms makes it possible to adjust prices and quantities very quickly to changes in market conditions.
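The interplay of these factors can be illustrated with a textbook repeated-game sketch, assuming n symmetric Bertrand competitors who share a collusive profit π_m, discount the future with factor δ, punish deviations with a permanent reversion to competitive pricing, and detect a deviation only after k pricing periods (the notation is ours, not taken from the cited report). Collusion is sustainable when abiding by the collusive price is worth more than undercutting it:

\[
\frac{1}{1-\delta}\cdot\frac{\pi_m}{n} \;\ge\; \frac{1-\delta^{k}}{1-\delta}\cdot\pi_m
\quad\Longleftrightarrow\quad
\delta \;\ge\; \left(1-\frac{1}{n}\right)^{1/k}.
\]

Under these assumptions, fewer firms (smaller n) and faster detection of deviations (smaller k, i.e., more frequent interaction on a more transparent market) relax the condition, which captures why algorithm-driven markets may make collusion easier to sustain.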
Algorithms, as we have just shown in this section, can facilitate collusive behavior due to their ability to increase market transparency and the frequency of interactions between rivals. At the same time, their parallel use can relieve competitors of the need to communicate directly with each other. It cannot be ruled out that, under these conditions, algorithms may enable companies to obtain the effect of a cartel without restrictive agreements or concerted practices. In this context, the question arises as to whether the concept of an “agreement for antitrust purposes”, as defined by American and European competition laws, should be reconsidered.

3.3. Does the Notion of “Agreement for Antitrust Purposes” Need Revisiting?

Most countries with antitrust legislation make the application of provisions aimed at combating collusive outcomes conditional on the identification of an “agreement” between competitors. The interpretation given to the concept of agreement varies from one jurisdiction to another.
In the United States, an agreement is referred to by Section 1 of the Sherman Act under multiple terms, including “contract”, “combination in the form of a trust”, and “conspiracy”. According to the United States Supreme Court, an agreement does not necessarily have to be explicit or formal for it to be considered anti-competitive. It is enough for it to involve “a unity of purpose or a common conception and understanding, or a meeting of minds” [34] as well as “a conscious commitment to a common program” [35]. Parallel behavior could possibly be covered by such a broad definition. In this case, the courts require proof of coordination between the parties, and not just oligopolistic interdependence, in the form of so-called “plus factors”. One of the “plus factors” required by courts is that the parties have communicated their intention to act in a certain way.
As previously seen, US and EU competition rules can only be applied to algorithms if they have the capacity to reach and enforce a common policy through some form of a “meeting of minds”. The question arises whether such a condition could be met when algorithms engage in explicit collusion although they were not programmed to do so.
Self-learning algorithms [36] may enable such an outcome because of their ability to amend their own decision rules on the basis of past experience and their powerful predictive capacity. Self-learning algorithms are, for example, used in self-driving cars.
The concept of a “black box”, as shown in Figure 4, simplistically illustrates how a self-learning algorithm operates. It can be compared to a human brain processing raw data in a complex, fast, and accurate way, and delivering an optimal output without revealing the relevant characteristics which were behind the decision process.
Thanks to self-learning algorithms, managers can shift business decisions from humans to computers, thus avoiding explicit communication during the initiation and implementation stages of the collusion as well as the burden of any structure that might be considered by authorities as facilitating collusion practices.
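To give a flavour of what such a “black box” might look like internally, the sketch below implements a tabular Q-learning pricing rule in the spirit of the simulation literature; the price grid, the stylized profit function, and the learning parameters are our own illustrative assumptions, not a description of any deployed system.

```python
import random
from collections import defaultdict

PRICES = [1.0, 1.5, 2.0]                  # assumed discrete price grid
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount factor, exploration rate

Q = defaultdict(float)                    # Q[(rival_last_price, my_price)] -> estimated value

def profit(my_price: float, rival_price: float) -> float:
    """Stylized demand: the cheaper firm serves the whole market, split if tied."""
    if my_price < rival_price:
        return my_price * 10
    if my_price == rival_price:
        return my_price * 5
    return 0.0

def choose_price(rival_last_price: float) -> float:
    """Epsilon-greedy choice: explore occasionally, otherwise pick the best-known price."""
    if random.random() < EPSILON:
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: Q[(rival_last_price, p)])

def update(rival_last_price: float, my_price: float, reward: float, rival_new_price: float) -> None:
    """Standard tabular Q-learning update from the realized profit."""
    best_next = max(Q[(rival_new_price, p)] for p in PRICES)
    old = Q[(rival_last_price, my_price)]
    Q[(rival_last_price, my_price)] = old + ALPHA * (reward + GAMMA * best_next - old)

# In a repeated interaction, two such agents would call choose_price(), observe
# profit(), and update() each period, with no human setting the prices directly.
```

The operator only chooses the price grid and the learning parameters; the mapping from market conditions to prices emerges from the updates themselves, which is what makes the decision process opaque.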
Taking into account the fact that there has not been any case law so far on communication between rivals made possible by the use of self-learning algorithms, it seems premature to draw conclusions on whether the concept of “meeting of minds” as it was elaborated in US and EU competition laws can be applied to algorithmic interactions.
However, a clearer and broader definition of the notion of “agreement” encompassing the possibility of explicit collusion due to communication between self-learning algorithms could reduce uncertainty by helping companies understand which practices are illegal and which are not.
Since it may be challenging for some competition authorities to prove under the legal standards that parallel behavior of algorithms constitutes an agreement restricting competition, they still have the option of relying on the notion of “unfair competition”, as shown in Table 2, to address some of the concerns related to algorithmic collusion.

4. Discussion: Liability for Anti-Competitive Behavior Caused by Algorithms

As shown in Table 3, liability for collusion depends on the type of use of algorithms by companies. The more the conduct involved is likely to lead to direct or indirect communication between competitors, the higher the probability it may qualify as an “agreement for antitrust purposes” in the meaning of EU and US competition laws.
In the event of parallel use of algorithms, already described in Section 3.1, such a probability seems low. According to this scenario, each economic operator unilaterally designs the algorithm that will allow it to receive predictable results and react in a given way to changing market conditions, without agreeing on anything with its competitors. Each company pursues its own interest in the development and use of its algorithm and may be aware of the existence of other algorithms used by rivals, but no agreement is put in place. Nevertheless, such a parallel use of algorithms could have, through interdependent actions, adverse effects on competition. However, in its judgment in Zuchner v Bayerische Vereinsbank [41], the ECJ ruled that, as long as the parallel use of algorithms by competitors takes place without prior direct or indirect contact between them, it may constitute an intelligent adaptation to the existing or anticipated behavior of competitors and therefore falls outside the scope of Article 101 TFEU.
The use of signaling algorithms could be of greater concern from a competition law standpoint. According to this scenario, firms send signals on their intentions to collude on prices and quantities. Unilateral announcements which are “genuinely public” [42] (paragraph 63) do not in themselves constitute a concerted practice. In its decision in the Container Shipping case, the European Commission considered, however, that a concerted practice cannot be excluded in situations “where such an announcement is followed by public announcements by competitors” [31] (paragraph 45). Such a pattern of behavior may be observed in the case of “instantaneous price changes in the middle of the night” [9] (p. 30), to which Article 101 TFEU may apply if they “give competitors insight into each other’s future prices while not being useful for customers since they are not booking yet” [31] (paragraph 80).
An even greater risk of communication between competitors leading to prohibited concerted practices exists in the “hub and spoke” [29] (p. 1787 et seq.) scenario, in which competitors (the spokes) use the same developer (the hub) to determine market prices and react to changes in the market. As the use of the same algorithm by competing companies can lead to price fixing, those companies and the developers who provide the algorithms they use may face cartel charges. The ECJ admitted in its Eturas [43] ruling that the use of an online platform can indeed facilitate a collusive hub-and-spoke structure. The alleged collusion in that case was enabled by the Eturas online booking system, which was commonly used by 30 travel agents. A message sent through this system asked them to cap their discounts on travel bookings. The Court of Justice ruled that companies can be held liable for their participation in a concerted practice when they independently subscribe to the use of a multiplayer third-party platform algorithm that seeks to achieve anti-competitive results. Nevertheless, had the travel agents publicly distanced themselves from this message or informed the competition authorities, their responsibility could probably have been waived. In any event, the Eturas case illustrates how online platforms can facilitate collusion amongst competitors, even without any direct contact between them.
Concerted practices become even harder to prevent if companies implement autonomously acting black box algorithms, as shown in Figure 4 above, which, although provided with only abstract and/or limited instructions by their operators, rely on self-learning to set prices and other decision variables automatically.
For some authors [44,45], the introduction and use by companies of black box algorithms authorized to make decisions leading to collusive results should engage the responsibility of these companies in the same way as if these behaviors had been committed by their employees. According to this approach, regardless of whether firms delegate decision-making to employees or to algorithms, they are subject to the same rules. Such an approach has the advantage of fostering legal consistency.
For other authors, the liability of companies for the anti-competitive behavior of their self-learning algorithms should only be engaged if these companies have breached a reasonable standard of prudence and predictability. This point of view is shared by Janka and Uhsler [46] and Salaschek and Serafimova [47], who, referring to the decisions of the ECJ in AC-Treuhand [48] and VM Remont [49], compare the responsibility of a company using algorithms to liability for acts performed by an independent third party. As for Ezrachi and Stucke, they contemplate limiting the liability of a company for the behavior of its algorithm: in their view, a company would violate the prohibition provided for in Article 101 TFEU only if it failed to intervene after becoming aware of the coordination. It is therefore between these two approaches that the standards for assessing a company’s responsibility for collusive algorithmic behavior may vary.
However, during a conference held in Paris on 6 November 2019, Isabelle de Silva, president of the French Competition Authority, stated that companies should be held responsible for the actions of the algorithms they use, including when these are provided by third parties, thus suggesting that the French competition authority may lean towards a strict approach regarding black box algorithms [50]. This supports the position adopted by EU Commissioner Vestager, who emphasized that “companies can’t escape responsibility for collusion by hiding behind a computer program.… And businesses also need to know that when they decide to use an automated system, they will be held responsible for what it does. So they had better know how that system works” [51].

5. Conclusions

As previously seen, algorithms bring economic benefits to market players. On the demand side, they help consumers to act on the rapidly changing prices of online services, such as sales of sports tickets, taxi fares, or hotel bookings. On the supply side, they reduce search and production costs and allow for the personalization of products and services, the optimization of inventory, and the setting of optimal prices that respond effectively to market changes. Despite the general consensus on the perceived benefits of algorithms for the economy, the question of whether and to what extent their use by companies could have adverse effects on competition remains highly debated.

Because of their capacity to increase market transparency and the frequency of interactions, companies using algorithms can solve the coordination problem without directly communicating with each other. This can be done in different ways. First, through “tacit algorithmic collusion”, whereby companies make parallel use of algorithms that constantly monitor and adjust prices according to changes in market conditions, including the evolution of the price set by the market leader at a supra-competitive level, as shown in Figure 2. Second, through “signaling”, which enables firms to send signals on their intentions to collude on prices or quantities, whether through press releases or, more insidiously, through “instantaneous price changes in the middle of the night” that give competitors a glimpse of each other’s future prices. Finally, through the “hub and spoke” scenario, in which competing firms follow the same policy because they rely on algorithms provided by the same IT companies or programmers. Since algorithms can thus be used to facilitate parallel collusive behavior, they may at the same time relieve competing firms of the need for agreements to achieve the effect of a cartel.

Against this backdrop, some authors have raised the question of whether the concept of “agreement for antitrust purposes”, as defined in US and EU competition laws, needs to be reconsidered. Since it may be challenging for some competition authorities to prove under the current legal standards that parallel behavior constitutes an agreement restricting competition, they may rely, as the FTC does, on the notion of “unfair competition”, as shown in Table 2, according to which defendants can be charged for collusive behavior if the enforcement agency proves they were aware of their actions’ natural and probable anti-competitive consequences. Such an approach, though, might be of limited reach in the case of self-learning algorithms, which, because of their ability to amend their own decision rules on the basis of past experience, enable companies’ managers to shift business decisions from humans to computers, thus avoiding charges of explicit communication.

The fact that there has been no case so far of communication between rivals made possible by the use of self-learning algorithms does not mean that such a possibility will not arise in the future. Hence the necessity for legislators and antitrust agencies to find ways to hold companies accountable for the anti-competitive effects of the self-learning algorithms they use. A first approach would be to treat algorithmic behavior in the same way as an employee’s actions. A second approach would engage the liability of a company for the anti-competitive behavior of its algorithm only if the firm has breached a reasonable standard of prudence and predictability. It is therefore between these two approaches that the standards for assessing a company’s responsibility for collusive algorithmic behavior may vary. However, enforcement agencies such as the French Competition Authority and the EU Commission consider that firms should be held responsible for the actions of the algorithms they use, including those provided by third parties. It is therefore high time for companies to start thinking about how to ensure antitrust compliance when using algorithms.

Author Contributions

Conceptualization, C.S.H.; methodology, C.S.H.; validation, G.F.R.; formal analysis, S.G.P.; investigation, C.S.H.; resources, C.S.H.; data curation, C.S.H.; writing—original draft preparation, C.S.H.; writing—review and editing, C.S.H.; visualization, C.S.H.; supervision, S.G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable; the study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Whish, R.; Bailey, D. Not only collusion but also, for instance, price discrimination, e.g. selling or purchasing “different units of good or service at prices not directly corresponding to differences in the cost of supplying them”. In Competition Law, 9th ed.; Oxford University Press: Oxford, UK, 2018; p. 292. [Google Scholar]
  2. Ezrachi, A.; Stucke, M.E. Algorithmic Collusion: Problems and Counter-Measures; OECD: Paris, France, 2017. [Google Scholar]
  3. Haucap, J. Die heimlichen Kartelle im Netz. WiWo 2018, 1, 74. [Google Scholar]
  4. Mehra, S.K. Antitrust and the robot-seller: Competition in the time of algorithms. Minn. L. Rev. 2016, 100, 1323–1375. [Google Scholar]
  5. Käseberg, T.; von Kalben, J. Herausforderungen der Künstlichen Intelligenz für die Wettbewerbspolitik. WuW 2018, 68, 2–8. [Google Scholar]
  6. Oxera. When Algorithms Set Prices: Winners and Losers. 2017. Available online: https://www.oxera.com/wp-content/uploads/2018/07/When-algorithms-set-prices.pdf.pdf (accessed on 23 August 2020).
  7. Pereira, V. Algorithm-Driven Collusion: Pouring Old Wine into New Bottles or New Wine into Fresh Wineskins. Eur. Compet. Law Rev. 2018, 39, 212–227. [Google Scholar]
  8. Roman, V.D. Digital Market and pricing algorithms—A dynamic approach towards horizontal competition. Eur. Compet. Law Rev. 2018, 1, 37–45. [Google Scholar]
  9. OECD. Algorithms and Collusion: Competition Policy in the Digital Age; OECD: Paris, France, 2017; p. 30. Available online: http://www.oecd.org/competition/algorithms-collusion-competition-policy-in-the-digital-age.htm (accessed on 6 August 2020).
  10. Knuth, D. The Art of Computer Programming: Volume 1. Fundamental Algorithms, 3rd ed.; Addison-Wesley Professional: Boston, MA, USA, 1997; p. 1. [Google Scholar]
  11. Lindsay, A.; McCarthy, E. Do we need to prevent pricing algorithms cooking up markets? Eur. Compet. Law Rev. 2017, 12, 533. [Google Scholar]
  12. Garey, M.; Johnson, D. Computers and Intractability: A Guide to the Theory of NP-Completeness; W.H Freeman: New York, NY, USA, 1979; p. 4. [Google Scholar]
  13. Cormen, T.; Leiserson, C.; Rivest, R.; Stein, C. Introduction to Algorithms, 3rd ed.; The MIT Press: Cambridge, MA, USA, 2009; p. 5. [Google Scholar]
  14. Schriek, M.; Safeti, H.; Siddiqui, S.; Pflugler, M.; Wiesche, C.; Krcmar, H. A matching algorithm for dynamic ridesharing. In Proceedings of the International Scientific Conference on Mobility and Transport: Transforming Urban Mobility, Munich, Germany, 6–7 June 2016; Elsevier: Amsterdam, The Netherlands, 2016; pp. 272–285. Available online: http://excell-mobility-il17.in.tum.de/wp-content/uploads/2017/01/A-Matching-Algorithm-for-Dynamic-Ridesharing.pdf (accessed on 13 August 2020).
  15. Chen, L.; Mislove, A.; Wilson, C. An Empirical Analysis of Algorithmic Pricing on Amazon Marketplace. In Proceedings of the 25th International Conference on World Wide Web, Montreal, QC, Canada, 11–15 April 2016; International World Wide Web Conferences Steering Committee: Geneva, Switzerland, 2016; pp. 1339–1349. Available online: https://mislove.org/publications/Amazon-WWW.pdf (accessed on 6 August 2020).
  16. Weiss, R.; Mehrotta, A. Online dynamic pricing: Efficiency, equity and the future of e-commerce. Va. J. L. Tech. 2001, 11, 1–10. [Google Scholar]
  17. OECD. Personalized Pricing in the Digital Era; OECD: Paris, France, 2018; p. 9. Available online: https://one.oecd.org/document/DAF/COMP(2018)13/en/pdf (accessed on 23 August 2020).
  18. ADLC, Opinion no 18-A-03 of 06.03.18 on Data Processing in the Online Advertising Sector for a Discussion on the Number and Sophistication of Algorithms Dedicated to Personal Data Gathering for Publicity Purposes. Available online: https://www.autoritedelaconcurrence.fr/sites/default/files/integral_texts/2019-10/avis18a03_en_.pdf (accessed on 6 August 2020).
  19. Scraping Is a Method for Crawling Web Sites and Automatically Extracting Structured Data on It. For instance Scrapy Is a Python Open Source Package to Scrape Data from Websites. Available online: https://scrapy.org/ (accessed on 7 August 2020).
  20. European Commission. Commission Staff Working Document—Final Report on the E-Commerce Sector Inquiry. Available online: http://www.ecommercesectorinquiry.com/files/sector_inquiry_final_report_en.pdf (accessed on 6 August 2020).
  21. Harrington, J.E., Jr.; Zhao, W. Signaling and Tacit Collusion in an Infinitely Repeated Prisoners’ Dilemma. Math. Soc. Sci. 2012, 64, 277–289. [Google Scholar] [CrossRef] [Green Version]
  22. Garyali, K. Is the Competition Regime Ready to Take on the AI Decision Maker? CMS London. Available online: https://cms.law/en/gbr/publication/is-the-competition-regime-ready-to-take-on-the-ai-decision-maker (accessed on 21 August 2020).
  23. Harrington, J.E.; Harker, P.T. Developing Competition Law for Collusion by Autonomous Artificial Agents. J. Compet. Law Econ. 2018, 14, 331. [Google Scholar] [CrossRef]
  24. OECD. Roundtable on Facilitating Practices in Oligopolies. 2007. Available online: www.oecd.org/daf/competition/41472165.pdf (accessed on 8 August 2020).
  25. ECJ. Case T-41/96 Bayer, ECLI: EU: T2000:242, Paragraph 69. Available online: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A61996TJ0041 (accessed on 13 August 2020).
  26. ECJ. Case 40/73 Suiker Unie v. Commission, ECLI: EU: C: 1975:174, Paragraphs 26 and 174. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A61973CJ0040 (accessed on 15 August 2020).
  27. ECJ. Case 48/69 Imperial Chemical Industries, ECLI: C: 1972:70, Paragraph 118. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A61969CJ0048 (accessed on 14 August 2020).
  28. ECJ. Case C-8/08 T-Mobile, ECLI: EU: C: 2009: 343, Paragraph 23. Available online: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=ecli:ECLI:EU:C:2009:343 (accessed on 16 August 2020).
  29. Ezrachi, A.; Stucke, M.E. Artificial Intelligence & Collusion. Univ. Ill. Law Rev. 2017, 1775, 1787, 1804. [Google Scholar]
  30. Monopolkommission. Hauptgutachten, Wettbewerb 2016; Nomos Verlagsgesellschaft: Baden, Germany, 2017; p. 1242. [Google Scholar]
  31. European Commission. Decision of 07.07.16 (Container Shipping), Case AT. 39850, Paragraph 80. Available online: https://ec.europa.eu/competition/antitrust/cases/dec_docs/39850/39850_3377_3.pdf (accessed on 12 August 2020).
  32. Ivaldi, M.; Jullien, B.; Rey, P.; Seabright, P.; Tirole, J. The Economics of Tacit Collusion, Final Report for DG Competition, March 2003. Available online: https://ec.europa.eu/competition/mergers/studies_reports/the_economics_of_tacit_collusion_en.pdf (accessed on 13 August 2020).
  33. Hellwig, M.; Huschelrath, K. Cartel Cases and the Cartel Enforcement Process in the European Union 2001–2015: A Quantitative Assessment. ZEW Discussion Paper No.16-063. Mannheim, Germany, September 2016. Available online: http://ftp.zew.de/pub/zew-docs/dp/dp16063.pdf (accessed on 18 August 2020).
  34. Interstate Circuit, Inc. v. United States, 306 U.S. 208 (1939). Available online: https://supreme.justia.com/cases/federal/us/306/208/ (accessed on 13 August 2020).
  35. Monsanto Co.v. Spray-Rite Serv. Corp., 465 U.S. 752, 768. 1984. Available online: https://www.lexisnexis.com/community/casebrief/p/casebrief-monsanto-co-v-spray-rite-serv-corp (accessed on 13 August 2020).
  36. Mitchell, T. Machine Learning; McGraw-Hill Higher Education: New York, NY, USA, 1997; p. 3. [Google Scholar]
  37. FTC v. Sperry & Hutchinson Co., 405 U.S. 233. 1972. Available online: https://supreme.justia.com/cases/federal/us/405/233/ (accessed on 13 August 2020).
  38. OECD. Roundtable on Competition Enforcement in Oligopolistic Markets. 2015. Available online: https://one.oecd.org/document/DAF/COMP(2015)2/en/pdf (accessed on 13 August 2020).
  39. Ethyl Corp. v. Federal Trade Commission: United States Court of Appeals for the Second Circuit 729 F.2d 128 (1984). Available online: https://www.quimbee.com/cases/ethyl-corp-v-federal-trade-commission (accessed on 13 August 2020).
  40. Ezrachi, A.; Stucke, M.E. Two Artificial Neural Networks Meet in an Online Hub and Change the Future (of Competition, Market Dynamics and Society). CPI, April 2017. Posted by the Social Science Research Network. Available online: https://www.competitionpolicyinternational.com/two-artificial-neural-networks-meet-in-an-online-hub-and-change-the-future-of-competition-market-dynamics-and-society/ (accessed on 17 August 2020).
  41. ECJ. Zuchner v Bayerische Vereinsbank, Judgment of 14.07.1981, Case C-172-80. Available online: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A61980CJ0172 (accessed on 21 August 2020).
  42. Commission, Horizontal Guidelines, Paragraph 63. Available online: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A52011XC0114%2804%29 (accessed on 22 August 2020).
  43. ‘Eturas’ UAB and Others v. Lietuvos Respublikos Konkurencijos Taryba CaseC-74/14, Judgment of the Court of 21 January 2016. Available online: http://curia.europa.eu/juris/liste.jsf?&num=C-74/14 (accessed on 8 August 2020).
  44. Dohrn, D.; Huck, L. Algorithmus als “Kartellgehilfe”?—Kartellrechtliche Compliance im Zeitalter der Digitalisierung. DB 2018, 173–178. [Google Scholar]
  45. Wolf, M. Algorithmengestützte Preissetzung im Online-Einzelhandel als abgestimmte Verhaltensweise. NZKart 2019, 1, 2–6. [Google Scholar]
  46. Janka, S.; Uhsler, F. Antitrust 4.0. Eur. Compet. Law Rev. 2018, 121, 112. [Google Scholar]
  47. Salaschek, U.; Serafimova, M. Preissetzungsalgorithmen im Lichte von Art.101 AEUV. WuW 2018, 1, 8–15. [Google Scholar]
  48. ECJ. AC—Treuhand v Commission, Judgment of 22.10.15, Case C-194/14 P. Available online: http://curia.europa.eu/juris/document/document.jsf?docid=170304&doclang=EN (accessed on 24 August 2020).
  49. ECJ. VM Remont v Konkurences Padome, Judgment of 21.07.16, Case C-542/14, Par.27 et Seq. Available online: http://curia.europa.eu/juris/liste.jsf?language=en&num=C-542/14 (accessed on 26 August 2020).
  50. Paroche, E.; Ritz, C.; Levy, V. Algorithms in the Spotlight of Antitrust Authorities, Hogan Lovells. Antitrust Competition and Economic Regulation Quarterly Newsletter, Autumn 2019. Available online: https://www.hoganlovells.com/~/media/hogan-lovells/global/knowledge/publications/files/acer-newsletters-autumn-2019.pdf?la=en (accessed on 16 August 2020).
  51. Vestager, M. Proceedings of the Bundeskartellamt 18th Conference on Competition, Berlin, Germany, 16 March 2017; Available online: https://wayback.archive-it.org/12090/20191129221651/https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/bundeskartellamt-18th-conference-competition-berlin-16-march-2017_en (accessed on 19 September 2020).
Figure 1. Illustration of monitoring algorithm.
Figure 2. Illustration of parallel algorithms.
Figure 3. Signaling algorithms.
Figure 4. Collusion resulting from self-learning algorithms.
Table 1. Dynamic pricing algorithms.
Dynamic pricing algorithms are able to process a larger quantity of data and to react faster than standard pricing strategies to any change in market conditions. Very often, companies use such algorithms for price setting based on other available offers. A research paper on the algorithmic pricing of third-party sellers on Amazon Marketplace [15] found that algorithmic repricing strategies based on competitors’ prices were used for the online sale of best-selling products [15]. Businesses may also use pricing algorithms to manage stock availability or production assets to allow a more efficient allocation of resources. Taking into account the aforementioned functions and many others, pricing algorithms have been widely credited for improving market efficiency [16]. For instance, the air transport and hotel sectors have been using pricing algorithms for quite some time to quickly adjust prices of online plane tickets and hotel room bookings to changes in supply and demand [17].
Nevertheless, there might be some countervailing effects. In the offline world, many firms monitor their competitors’ prices. This can be done through observation of competitors’ prices by the company’s employees, by purchasing price-tracking data from specialized suppliers, or through interviews with customers who provide feedback on competitors’ best offers. However, these price-monitoring methods can be expensive, are not always effective, and above all do not allow price adjustment in “real time”. This explains why it might be difficult for a brick-and-mortar firm to interpret the behavior of its competitors or of its customers; for example, a company might fail to recognize an invitation to collude made by its competitors. Online prices are, on the contrary, easily accessible and highly transparent. Moreover, the use of an automatic price-monitoring algorithm allows its users to track the evolution of competitors’ prices at lower cost and to adjust their own prices almost in “real time”. Thanks to their ability to collect a greater amount of information on competitors’ prices, to accelerate collusive behavior, and to sanction deviations from collusive market outcomes more quickly, dynamic pricing algorithms may allow companies to sustain supra-competitive prices more effectively than humans.
Table 2. Unfair competition standards and Section 5 US Federal Trade Commission (FTC) Act.
In the United States, Section 5 of the Federal Trade Commission (FTC) Act gives this anti-trust agency the power to prohibit “unfair methods of competition”. Relying on a Supreme Court ruling stating that Section 5 extends beyond the Sherman Act and other US antitrust laws [37], the FTC has applied this provision to combat various anti-competitive behaviors that are difficult to prosecute under cartels or monopolization provisions. An example of such a conduct is the unilateral communication of information to competitors having anti-competitive effects, or the so-called “invitations to collude” [38]. Based on principles rather than rules [39], Section 5 gives the FTC some flexibility on which practices to tackle. For a conduct (such as the use of an algorithm) to fall under this section, the antitrust agency will have to demonstrate that (1) it causes or is likely to cause substantial harm to consumers, (2) it cannot be reasonably avoided by consumers, and (3) is not outweighed by countervailing benefits to consumers or competition. It has been suggested that if the FTC could demonstrate that, when developing their algorithms, defendants were either motivated to achieve an anti-competitive outcome or were aware of the anti-competitive consequences of their actions, it would be entitled to apply Section 5 in order to tackle algorithmic collusion [40].
Table 3. Summary of liability for collusion resulting from the use of algorithms by firms.

Type of Scenario | Parallel Algorithms | Signaling Algorithms | Hub and Spoke | Self-Learning Algorithms
Tacit collusion risk | Yes | Yes | Yes | Yes
Liability for concerted practices prohibited under Article 101 TFEU | No | Yes | Yes | Potential
Liability of the firm using algorithms | No | Yes | Yes | Potential
Liability of an employee of the firm using algorithms | No | No | Yes | Potential
Liability of independent 3rd party algorithm developer | No | No | Yes | Potential
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
