Information 2018, 9(5), 118; doi:10.3390/info9050118

Article
When Robots Get Bored and Invent Team Sports: A More Suitable Test than the Turing Test?
Independent Researcher, V8V 1S9 Victoria, Canada
Received: 9 April 2018 / Accepted: 8 May 2018 / Published: 11 May 2018

Abstract

Increasingly, the Turing test—which is used to show that artificial intelligence has achieved human-level intelligence—is being regarded as an insufficient indicator of human-level intelligence. This essay extends arguments that embodied intelligence is required for human-level intelligence, and proposes a more suitable test for determining human-level intelligence: the invention of team sports by humanoid robots. The test is preferred because team sport activity is easily identified, uniquely human, and is suggested to emerge in basic, controllable conditions. To expect humanoid robots to self-organize, or invent, team sport as a function of human-level artificial intelligence, the following necessary conditions are proposed: humanoid robots must have the capacity to participate in cooperative-competitive interactions, instilled by algorithms for resource acquisition; they must possess or acquire sufficient stores of energetic resources that permit leisure time, thus reducing competition for scarce resources and increasing cooperative tendencies; and they must possess a heterogeneous range of energetic capacities. When present, these factors allow robot collectives to spontaneously invent team sport activities and thereby demonstrate one fundamental indicator of human-level intelligence.
Keywords:
artificial intelligence; Turing test; embodiment; competition; cooperation; self-organization; robots; heterogeneity; team sports

1. Introduction

In his book, “The singularity is near: when humans transcend biology” [1], Ray Kurzweil predicts that human-level artificial intelligence will be achieved roughly in the year 2029. Then, Kurzweil argues, computers will be able to pass the Turing test. The Turing test is met when humans, in conversation with artificial intelligence, will not be able to distinguish the artificially intelligent machine from a human [2]. This, says Kurzweil, will lead to a fundamental shift in mind, society and economics in about 2045—an event which he refers to as the singularity.
One criticism of the Turing test and Kurzweil’s prediction is that they focus too much on the human brain as the source of the measurable computational power of human intelligence, while largely ignoring or minimizing the influence of the human body on human intelligence. In this view, it is a mistake to measure human intelligence as a function of the computational capacity of the brain in the absence of a functional body: the totality of human intelligence can only be properly measured and simulated by locating any artificial intelligence within a functional body.
Similarly, the Turing test is increasingly viewed as insufficient for proving that artificial intelligence has achieved human-level intelligence [3,4]; the test has been criticized for focusing on problems of language and reasoning, while problems of motor skills and perception are absent [5]. Basic physical motor and perceptual skills require substantially greater computational capacity than tasks typically associated with higher-level intelligence such as playing checkers or solving problems on intelligence tests. This describes Moravec’s paradox [6] (p. 15) [7] (p. 29), and implies that the Turing test does not encompass even basic levels of individually expressed human intelligence.
One may conclude, therefore, that embodied humanoid robot intelligence is the appropriate analog of human-level artificial intelligence, and not merely computer computational power activated to solve abstract problems in the absence of a functional body.
In response to this view, Kurzweil says [1] (p. 260):
“… the real issue involved here is strong AI (artificial intelligence that exceeds human intelligence). The standard reason for emphasizing robotics in this formulation is that intelligence needs an embodiment, a physical presence to affect the world. I disagree with the emphasis on physical presence, however, for I believe that the central concern is intelligence. Intelligence will inherently find a way to influence the world, including creating its own means for embodiment and physical manipulation.”
Here Kurzweil rejects a critical conceptual aspect of embodiment: that human intelligence arises largely when humans exploit their physical environments and modify their behavior according to the ecological niche in which they interact collectively and compete for resources [8]. Contrary to Kurzweil’s argument, it is not enough even for humanoid robots to attain sufficient intelligence and the sensorimotor capacity required to engage in human-like physical activities, such as playing a team sport like soccer. Indeed, it would be self-evident that humanoid robots playing soccer at human levels would possess sufficient computational power for many of the high-capacity sensorimotor functions that are indicated by Moravec’s paradox. Further, Kurzweil addresses any shortcomings in computational capacity that may be required for brain processes otherwise unaccounted for by his current projections: he posits that even if the computational capacity of computers is off by a factor of a billion, this would delay the singularity by only 21 years [1] (p. 123).
However, the problem is not strictly one of computational capacity, since massive computational capacity does not seem to guarantee human-level intelligence. Instead, the problem relates to the uniquely human traits of collective intelligence and self-organized behavior, which emerge in appropriate conditions and simultaneously drive the development of human intelligence, and clearly indicate that such intelligence is present.
Thus, for humanoid robots to establish human-level intelligence, it is not enough that they merely possess sufficient capacity at the individual level to engage in uniquely human collective behaviors; they must also demonstrate that they can self-organize human-level behaviors in appropriate ecological conditions. If these, or similar, human-like collective behaviors do not self-organize, then the test for human-level intelligence has not been met. Here we make the case for one such test: when robots self-organize, or invent, team sports. For a comprehensive definition and review of self-organizing systems, see [9].
Necessarily, such an invention will occur only after humanoid robots are programmed with the basic sensorimotor capacity to play team sports like soccer, because the converse cannot be true; i.e., robots must first be physically capable of playing soccer before they can invent a game in which they exploit those capacities. Since robots are projected to acquire the capacity to play soccer by 2050 [10]—obviously well beyond 2029—we may conclude that robots will not invent the game until sometime after 2050, and therefore robots will not pass the “team-sport test” in the near future. This does not mean that robots will never invent team sports; the key point is that the Turing test is inadequate for proving human-level intelligence, and that further benchmarks of robot intelligence are required to establish true human-level intelligence.
In this paper, we discuss what is meant by the “invention” of team sport and identify the necessary conditions for this to occur. The proposed test may be considered a first level or minimum threshold of easily observed and recognized human-level collective intelligence. As a minimum threshold, it does not preclude other tests that demonstrate the emergence of social, economic, and evolutionary collective behaviors at similar or greater levels of complexity. Thus, similar tests may be proposed requiring a variety of self-organized, human-like social and economic activities, or certain biological dynamics such as evolutionary processes [11].

2. Defining Human-Level Intelligence

It is helpful to set a framework for intelligence in the context of embodied and self-organized collective behavior. To this end, we adopt Winfield’s [12] (pp. 2–3) four components of embodied intelligence:
  • Morphological intelligence—“the physical behavior that emerges from the interaction of the body, its control systems and the environment”.
  • Swarm intelligence—collective behavior is distributed and decentralized.
  • Individual intelligence—“the ability to both respond (instinctively) to stimuli and, optionally, learn new—or adapt existing—behaviours through a process of trial and error”.
  • Social intelligence—“the kind of intelligence that allows animals or robots to learn from each other”.
Winfield [12] (p. 6) concludes that when considering the integration of these four components of intelligence, “the intelligence of intelligent robots falls far short of that of most animals”; i.e., no single component of embodied intelligence sufficiently encompasses the breadth of animal, let alone human, intelligence.
Under this broad framework, we consider the swarm and social components of humanoid robot intelligence, and identify a threshold level of complex behavior by which we may confidently declare that robots have achieved human-level swarm and social intelligence: the emergence, or invention, of team sport.
Further, we assume that team sport inherently involves human-level intelligence since, although many animals may have a sense of competition, they are well-understood not to engage in team sport [13] as subsequently defined. Therefore, it is not necessary to consider in detail other definitions and measures of intelligence. For a review of some 70 such definitions, see [14].

3. The Embodiment of Artificial Intelligence

We do not review the concept of embodiment in detail, although a brief overview is useful. For a full treatment of the embodiment argument, see [8].
The embodiment school of artificial intelligence can perhaps be traced to Brooks [15] (p. 3), who argues that four basic properties of robot functionality distinguish robot intelligence from computer intelligence and its architectural constraints:
  • Situatedness—robots are located in the world.
  • Embodiment—robots have bodies in which they directly experience the world.
  • Intelligence—the source of intelligence derives largely from the physical coupling between the robot and the world.
  • Emergence—robot intelligence emerges from interactions among its system components, and with the world.
As Pfeifer and Bongard [8] argue, of these four properties, embodied intelligence emerges from the integrated brain and body; when we evaluate intelligence, the contribution of the functioning body to overall intelligence cannot be ignored. Similar proponents of this school include Clarke [16], whose “extended mind” hypothesis expands intelligence to be fundamentally connected with the tools that humans use to solve problems. Moreover, embodied intelligence implies the dynamical evolutionary and ecological processes that have led to the existing state of the human body [8] (pp. 178–213).
Thus, even if computationally equivalent to the capacity of the human brain, computer intelligence falls short of human-level intelligence in the absence of a wider perceptual and material integration with its physical environment. This has been argued in the context of a wide range of uniquely human characteristics that cannot be expected to arise in machine intelligence, even if machines are computationally equivalent to or greater than humans [17]. Cariani [18] makes a similar argument that without embodiment, computers have no mechanism by which to experience the external world and therefore cannot adapt and learn, evolve accordingly, and develop true human-level intelligence.
Similarly, Moravec [6] observes that a billion years of sensorimotor evolution is encoded in the human brain and implicitly also encoded in the brains of many animal species: these highly evolved processes require brain power far greater than that for abstract reasoning, which Moravec describes as “the thinnest veneer of human thought” (p. 15). By extension, embodied human-level intelligence must be located in humanoid robots in order to simulate human sensorimotor skills and to be capable of solving ordinary human problems. This includes those problems inherent in sports activities that are naturally solved through functional human physiology, such as by use of hands, or through the use of tools, as argued by Clarke [16].
Thus, if we accept the premise that the constrained and inanimate hardware of a computer—though it may contain a vast computational capacity—is insufficient to achieve the intelligence of embodied human intelligence, then any search for artificial human-level intelligence shifts in focus to humanoid robotic intelligence. The question is then whether a robot, embodying an artificial brain, can achieve human-level intelligence by integrating massive computational capacity together with a functional and dynamic body that experiences, reacts, adapts to, and learns from its interactions with other robots and their environments. Further, how will we recognize this integration when it occurs? We suggest that self-organized team sport is an easily-observed and identifiable collective human behavior that clearly indicates human-level intelligence.

4. The Transition from Team-Like Behavior to Bounded Competition and the Invention of Team Sport

Here team sport is defined as a contest between groups of participants, performed within specified spatial boundaries with specific rules of play that apply equally to both or all the participating teams (the “game”). The game objective is for one team to win the contest by accumulating some agreed-upon advantage that is greater than that accumulated by the opposing side, typically achieved by scoring goals. Game rules may vary in their constraints on the physical movements of the players, but in general the rules are minimally restrictive to permit high degrees of physical freedom by which participants rely upon their collective skill, strength, and cooperative strategy to best their opponents.
We refer to this kind of game as bounded competition. This excludes dyadic competitions (one-on-one) and competitions involving multiple competitors who do not cooperate with any team-mates, such as a 100 m sprint in which all the competitors compete within their own designated lanes and there is no expectation of cooperation. These types of competition are excluded in order to isolate for our analysis those games in which there is a clear cooperative element among team-members, in addition to the clearly competitive component of a contest between teams.
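The definitional criteria above can be restated as a simple predicate. The sketch below is purely illustrative of the definition used in this essay; the function name and parameters are our own, not an established formalism.

```python
def is_bounded_competition(num_sides, intra_team_cooperation,
                           shared_rules, spatial_boundary):
    """True when an activity meets the definitional criteria for team
    sport used here: at least two contesting sides, cooperation among
    team-mates, rules applying equally to all sides, and a bounded
    field of play."""
    return (num_sides >= 2 and intra_team_cooperation
            and shared_rules and spatial_boundary)

# Soccer: two teams, cooperating team-mates, common rules, a bounded pitch.
print(is_bounded_competition(2, True, True, True))   # True
# A 100 m sprint: many competitors, but no cooperation among team-mates,
# so it is excluded, as are dyadic (one-on-one) contests.
print(is_bounded_competition(8, False, True, True))  # False
```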
Bounded competition, when it exists, should be obvious to average adult human observers, although in some cases this may not always be so. For example, competitive mass-start bicycle racing involves, at one level, obvious team distinctions because team-members are clothed in identical racing attire, which is different between teams. However, at another level, less obvious cyclist team behavior occurs when alliances form temporarily among competitors from different teams or during events when not all participants are part of specified teams. These alliances may continually re-constitute among different cyclist combinations, and observers may not easily recognize when riders are working together or in opposition. This is due to cyclists’ continuous positional change within a peloton (group of cyclists), and the tactics that derive from the energy-saving advantages of cycling in positions behind competitors where power requirements are reduced. This kind of less obvious team behavior may be characterized as proto-team behavior, or team-like behavior. A further, more subtle form or characteristic of team behavior involves creative and novel pattern formations that emerge among team interactions [19].
Thus, there is an implied threshold between team-like or proto-team activity—which is part of a gradual multi-generational process originating in various social activities and may occur among many animal species—and uniquely human team sport. In this way, the invention of team sport describes a phase transition that occurs when team-like behavior becomes true team sport. In Section 6 we cite studies involving swarm proto-team behavior.
This transition may involve the simultaneous emergence of creative and novel pattern formation, as examined by Hristovski et al. [19]. Such novel pattern formation may pre-exist the invention of team sport and emerge alongside team-like behavior. However, like the amorphous team-like behavior among pelotons, the presence of team sport in these circumstances may not be obvious to the average adult human observer. Thus, one appropriate clear indication of human-level humanoid robot intelligence in this context is the emergence, or invention, of bounded competition in the form of actual team sport.
When team-sport emerges among robots as a product of self-organized processes, it is implied that robots are not asked to play sports by external human sources or programmed specifically for these processes. The self-organized process suggests that robots would develop the games themselves and find ways to agree collectively upon the rules of the game, constructed as a function of environmental constraints and individual robot kinematics. This kind of rule-formation is distinguishable from rules that are pre-programmed for the robots.

5. The Origins of Team Sport

Collective sporting activities are universal among humans [20], and are driven by underlying universal physical and physiological principles [21]. Sport in general involves complex organizational structures and dynamics that are both self-organized [22,23] and planned according to prescribed strategies and tactics.
Studies indicate several possible origins of team sport, including the impulse to play, or the need to practice hunting abilities ([24], and references therein); sport may achieve cultural objectives including the ritualistic, cultic and cathartic ([25], and references therein). There is a connection between sport and military practice and the discharge of aggressive urges, particularly among combative sports ([26], and references therein). Athletic success may also confer selective reproductive advantages [25].
Arguably, objectives such as practicing hunting abilities and military practice are more primitive than other more sophisticated cultural objectives, since hunting and war are rooted more directly in essential resource acquisition. By extracting the most primitive origins of team sport, we may infer that if robots are to invent team sport, the invention process is more likely to originate as an artifact of, or simultaneously with, basic resource or energy acquisition, rather than as an artifact of more developed ritualistic or other cultural complexities. Thus, we suggest that proto-sporting activities tend to emerge in the context of basic competitive and cooperative behaviors that are necessarily connected with the energetic requirements of the individuals involved.

6. The Emergence of Human Collective Behavior

Humans engage in complex collective activities in which novel behavior emerges, and, as posited, this kind of behavior must be observed in robots too before robots may be viewed as having achieved human-level intelligence. As Cariani [18] (p. 48) stated, “If we want our devices to be creative in any meaningful sense of the word, they must be capable of emergent behavior, of implementing functions we have not specified”. Thus, embodied intelligence must interact collectively such that novel patterns of behavior emerge that are uniquely human in nature. Furthermore, if human-level embodied intelligence is to be achieved, it is reasonable to expect intelligence to emerge in typical environments or conditions in which humans operate.
As noted, there are numerous social and economic contexts and environmental conditions in which these behaviors may be observed. Perhaps the most basic and universal of such conditions is resource scarcity, and the competition for those resources. Some progress in swarm robotics has been achieved to model collective behavior among robots [27] which includes solving resource allocation problems. Such advances include: group transport of objects [28], shortest-path finding [29], task allocation [30], energy foraging [31,32], and communication-based navigation [33]. In addition, basic elements of hierarchical cooperation have been demonstrated involving team-work [34], as has coordination between types of robot swarms [35,36]. For a review of progress in robot collective behavior, see Bayandir [37].
Overall, these advances remain at a comparatively primitive level, no more developed than the insect collective behaviors by which they are inspired [38]; indeed, the individual physical abilities of insects currently surpass those of robots of comparable size [37].
Given this, it seems that the invention of team sport by humanoid robots remains a distant prospect. However, it is also conceivable that robots will never invent team sport if appropriate conditions are not present, or if robots are not immersed within an appropriate physical environment that fosters the invention of team sport.

7. Necessary Conditions for the Emergence of Team-Sport

Since sports are played within arbitrary three-dimensional boundaries, an obvious necessary condition for sport is that participants exhibit some minimal level of kinematic agility to interact in three-dimensional space. On the other hand, one could argue that computers, absent of any kinematic functions, may well self-organize cooperative-competitive dynamics akin to that of team sports in a virtual environment within the confines of their hardware and across networks. Perhaps such games could involve clusters of networks (“teams”) and network boundaries (the “field of play”) that are arbitrarily agreed upon by thrill-seeking computers. As a possible precursor to this, advances are currently underway in computer-human collaborative networks [39].
However, even if this is possible, unless architecturally-constrained computers develop a way to reveal their game playing activities, it will be difficult from a human perspective to observe and record computer sports that are played strictly in abstract cyberspace. Also, any evidence of such virtual game playing may be difficult to extract. Further, unless computers engage in some form of resource competition in the first place—which although not inconceivable, would perhaps be difficult to find evidence for—it seems unlikely computers will invent game playing as an artifact of real-life competition for resources.
In addition to a minimal level of robot agility, the following necessary conditions are proposed as requisite for the emergence of robot team sport, although other conditions undoubtedly exist:
  • The intrinsic capacity of humanoid robots to compete and cooperate for resources.
  • Sufficient periods of leisure time during which robots engage in simulated or artificial resource gathering activities that represent a form of proto-team sport, leading to an eventual transition to actual team sport.
  • Heterogeneous robot energetic capacities.
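The interaction of these proposed conditions can be made concrete with a toy agent-based sketch. Everything below is a hypothetical illustration, not an implementation from the robotics literature: the agent parameters, the leisure threshold, and the foraging-gain distribution are all our assumptions.

```python
import random

class Robot:
    def __init__(self, capacity):
        self.capacity = capacity  # heterogeneous energetic capacity (condition 3)
        self.energy = capacity    # current energy store

def step(robots, foraging_cost, leisure_threshold):
    """One time step: each robot forages (competing and cooperating for
    resources, condition 1); robots whose energy stores remain above the
    leisure threshold have 'down-time' available for proto-team activity
    (condition 2)."""
    at_leisure = []
    for r in robots:
        gain = random.uniform(5, 15)  # stochastic foraging return (assumed)
        r.energy = min(r.capacity, r.energy - foraging_cost + gain)
        if r.energy / r.capacity > leisure_threshold:
            at_leisure.append(r)
    return at_leisure

def run(foraging_cost, seed=1):
    random.seed(seed)
    swarm = [Robot(capacity=random.uniform(80, 120)) for _ in range(10)]
    idle = []
    for _ in range(50):
        idle = step(swarm, foraging_cost, leisure_threshold=0.5)
    return len(idle)

# Under abundance (low foraging cost) most robots retain surplus energy
# and leisure time; under scarcity (high cost) none do.
print(run(foraging_cost=8.0), run(foraging_cost=30.0))
```

The point of the sketch is only that the proposed pre-conditions for the invention of team sport, surplus energy and leisure time, appear in the abundant regime and vanish in the scarce one.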

7.1. The Capacity of Humanoid Robots to Compete and Cooperate for Resources

Biological organisms compete and cooperate for various kinds of resources [40], the most basic of which include food, water, and protection from harmful elements. Presently, humans supply these resources to computers in the form of electricity and hardware, and computers do not compete for these resources. Although computers by themselves do not generally possess physical mechanisms to install more hardware or to connect themselves to electrical outlets, robots certainly can be designed with these abilities and the capacity to forage for resources without requiring humans for resource inputs.
Robot science is currently sufficiently advanced for primitive levels of cooperative resource gathering [31], as noted. Foraging robots also engage in limited competition when more than one robot seeks the same energy source, although the competitive response is benign: there appears to be no existing algorithm that causes competing robots to modify their movements toward a resource in order to acquire it first, or to fight for the resource [32].
It does not require much imagination to consider the consequences of more advanced competition among robots to acquire resources, and undoubtedly robots may be programmed to engage in fight and flight responses to acquire resources ahead of competitors. This by itself poses a challenge for robot ethicists and policy-makers, and arguably robots need not be programmed with these competitive capacities.
However, it is suggested that such a capacity is indeed a necessary pre-condition to the emergence of team sport, in addition to some intrinsic robot capacity to cooperate. Thus, without a natural propensity to compete and cooperate for resources, it is unlikely that team sport will emerge among robots as an artifact of that process.

7.2. Leisure Time as a Necessary Condition for the Emergence of Robot Team Sports

Broadly, in times of scarcity, people invest disproportionately more of their time and energy in activities required for basic survival. Similarly, when people experience elevated degrees of scarcity, they must conserve resources and cannot spare the high energetic costs of sporting activity. Thus, in conditions of great scarcity, it is reasonable to conclude that there is insufficient leisure time for organized sports to develop.
This connection between sports participation and relative affluence is revealed in literature from the United States and the United Kingdom [41]. Similarly, research has shown that having children reduces parents’ participation in sport [41,42], presumably because parents’ energy and time resources are exhausted during the process of child-rearing. Eberth and Smith [43] found that having children under two years old negatively affected parents’ participation in sport, while having children between the ages of two and 15 did not. Ruseski et al. [42] concluded that individuals with a higher income were more likely to participate in physical activity in general, but may spend less time engaged in that activity.
Downward and Riordan [41] suggest that in some cases, the form of employment and level of education are more influential than work hours and household income in determining sports participation. For instance, the authors showed that increased education favors increased income, but this results in work-time constraints on sport participation, and that sport participation tends to increase when time spent working is reduced and flexible. By contrast, the authors show that sport participation declines with age and with being the individual responsible for housekeeping; similar declines are shown when people undertake voluntary work or are semi-skilled.
Clearly, sports participation is a complex amalgam of variable social, cultural, and economic factors. Nonetheless, these factors indicate that sport participation is largely traceable to increased leisure time and energetic abundance. Considering a primitive socio-economic setting nearer to the origins of human species in evolutionary development, the key drivers of sports participation appear closely linked with abundant time and energy.
So, if humanoid robots are expected to invent team sport spontaneously as a function of their innate intelligence and without being programmed to do so, their resources cannot be entirely allocated toward resource gathering and other activities, and they must have sufficient energy to engage in the high-energy physical activities inherent in team sport. Put less formally, robots must be permitted “down-time” and be allowed to become sufficiently bored to figure out how else they might want to use their time and energy. Robots with human-level intelligence will not sit idly by.

7.3. The Heterogeneity Requirement and the Energetic Threshold for the Emergence of Team Sport

If team sports are largely traceable to primitive conditions for resource competition and cooperation, and tend to emerge in times of relative energetic abundance, what is the collective energetic threshold at which we might expect human-like cooperative-competitive activities to emerge and give rise to the invention of team sports?
To illustrate the threshold of energetic abundance or degree of leisure time at which team sport might emerge, we apply principles of bicycle peloton behavior. As previously stated, a peloton is a group of cyclists who, by riding in zones behind others, save energy by drafting. Cyclists at the front of the group encounter the highest energetic costs because they directly face the wind. Cyclists’ intrinsic abilities are heterogeneous and tend to span a narrow range [44,45].
In simulated pelotons [44], a threshold between a predominantly collective competitive state and a predominantly collective cooperative state occurs when weaker cyclists can sustain the pace set by the strongest cyclists only by exploiting energy-saving drafting positions. When the pace set by leading cyclists is too high for weaker cyclists to move to the front and to share the costliest front positions, cyclists are incapable of cooperating with other cyclists. In this state, cyclists are predominantly competitive as a matter of physiological necessity, during which time cyclists operate as “every man for himself”.
When the pace set by leading cyclists is sufficiently low, weaker cyclists can advance to the head of the peloton, and thereby cooperate by sharing the costliest front positions. In this state, cyclists are predominantly cooperative.
The threshold between the predominant competitive and cooperative states may be quantified as the difference between the power output of the pace-setting cyclist and the maximum sustainable power of the drafting cyclist, when that difference does not exceed the magnitude of energy saved by drafting [44]. In other words, a weaker cyclist can sustain the pace of a stronger one if she is no weaker than the equivalent reductions in power requirements afforded by drafting.
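This quantification can be sketched as a small function. The threshold condition follows the peloton model described above; the drafting savings are modelled, as an assumption, as a fixed fraction of the leader's power output, and the wattage values in the example are hypothetical.

```python
def can_sustain_pace(p_leader, p_max_follower, drafting_fraction):
    """True when the follower's power deficit relative to the leader does
    not exceed the power saved by drafting (modelled here as a fixed
    fraction of the leader's output)."""
    deficit = p_leader - p_max_follower
    savings = drafting_fraction * p_leader
    return deficit <= savings

# Hypothetical values: a leader holding 400 W, followers with maximum
# sustainable outputs of 320 W and 280 W, and ~25% drafting savings.
print(can_sustain_pace(400, 320, 0.25))  # deficit 80 W <= savings 100 W -> True
print(can_sustain_pace(400, 280, 0.25))  # deficit 120 W > savings 100 W -> False
```

A rider for whom this returns False is forced into the isolated, competitive state described above; a rider who satisfies it can hang on by drafting, and one who is strong enough to match the leader outright can additionally take a turn at the front, which characterizes the cooperative state.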
The presence of this threshold implies that if cyclists were entirely homogeneous in capacity, their collective state would be a constantly cooperative one in which they could always share the costliest positions, with no competitive tension. Conversely, if the difference between cyclists’ individual capacities were too high, cyclists would proceed in a constantly competitive state in isolation from each other. To illustrate the latter, consider the extreme differences in the energetic capacities of an ant and a bird in flight—the two simply cannot cooperate and each must fend for itself in isolation from the other.
By implication, the principle also applies to variability in expended energy: cyclists could be initially homogeneous in energetic or metabolic capacity, but expend their energetic resources asymmetrically. For instance, cyclists who spend a lot of time facing the wind expend far more energy than those who occupy drafting positions, even if they are all equally strong. In effect, this produces a variable range of cyclists’ output capacities, thus increasing the likelihood of the emergence of the competitive phase.
Peloton simulations demonstrate this principle; when peloton speeds are comparatively low, cyclists collectively tend to share the costliest front positions randomly, whereas when peloton speeds are high and cyclists approach their sustainable power thresholds, front positions become dominated by stronger cyclists, as shown in Figure 1 [44].
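The qualitative result can be reproduced with a toy model. Everything below, including the random choice among riders strong enough to lead and the pace expressed as a fraction of the strongest rider's power, is a deliberately crude sketch of the far richer simulation in [44]:

```python
import random

def front_time(max_powers, pace_fraction, steps=1000):
    """Count how often each rider takes the costly front position.

    max_powers: each rider's maximum sustainable power (W).
    pace_fraction: peloton pace as a fraction of the strongest rider's power.
    At each step, one rider able to sustain the pace is chosen at random
    to lead; riders whose capacity is below the pace never reach the front.
    """
    pace = pace_fraction * max(max_powers)
    counts = [0] * len(max_powers)
    for _ in range(steps):
        able = [i for i, p in enumerate(max_powers) if p >= pace]
        counts[random.choice(able)] += 1
    return counts
```

At a high pace (e.g. `pace_fraction = 0.95`), only the strongest rider qualifies to lead, so front time concentrates entirely on that rider; at a low pace (e.g. `0.5`), every rider qualifies and front time is shared roughly evenly, mirroring the contrast between panels (a) and (c) of Figure 1.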
The sharing of costly positions at low levels of energy expenditure leads to a general hypothesis that cooperation among heterogeneous agents tends to emerge in times of relative abundance; i.e., when individuals’ energetic resources are not maximally expended or strained. This is consistent with literature which suggests that participation in sports tends to occur during times of leisure or energetic abundance, as discussed.
We propose that this differential or heterogeneous range in energetic capacities, whether intrinsic to robots or generated over time through asymmetric energy consumption, is a necessary pre-condition to the emergence of the competitive-cooperative tension that is foundational to the invention of team sports.
In terms of collective humanoid robots, without data for the robots’ range of energetic capacities, the types of tasks they will undertake, and the energy required for those tasks, it is not possible to suggest where this collective competitive-cooperative threshold may lie. Nonetheless, we argue in principle that for robots to spontaneously invent team sports, their collective outputs must be below this threshold such that robots enjoy comparatively low energetic expenditure in a predominantly cooperative state.
Studies of heterogeneous robot swarms are a relatively recent area of research, since the field has in the past tended to focus on homogeneous swarms [46,47,48]. Ranjbar-Sahraei et al. [49] distinguish “soft” heterogeneity, the range of individual differences among members of a collective, from “hard” heterogeneity, differences in robot type. As the peloton analogy suggests, such soft differences in energetic capacity among collective members are natural and foundational to the variations in competitive and cooperative drive that emerge as a function of resource scarcity. We may predict, therefore, that team sports are likely to emerge only among robots with heterogeneous energetic capacities.

8. The Status of Robo-Soccer

Soccer is an obvious choice of team sport by which to test the skills of robots due to its accessibility and universality [50], and its wide range of required physical skills, including running at variable speeds, frequent directional changes, jumping, sliding, ball handling with all parts of the body (including the arms, for throw-ins and by goalkeepers), and highly coordinated team play.
The challenge of achieving robot team sport activity is well recognized in the artificial intelligence community; to this end, robot soccer competitions have been held annually at the international level since 1997 [51], the idea having perhaps first been formally proposed by Sahota and Mackworth [52].
The goal for researchers is to produce a robot team that can beat the best human team, which is thought to be achievable by 2050 [10]. As of 2015, many challenges to achieving this goal remained, including robot running, high kicks, jumping, controlled landings, walking and running over uneven surfaces, ball throwing (by goalkeepers, and from the sidelines after the ball has gone out of play), and ball receiving, in addition to integrated team organization [53]. Despite the currently primitive state of robot soccer, there is optimism that even by 2030, robots will be technically capable of playing against an “unprofessional human team” [53]. For a video of a 2017 humanoid robot soccer competition, see [54]; for an extended video that includes wheeled robots, see [55].
For robot soccer, robots are pre-programmed with either autonomous or centralized control. Autonomous robots are mutually independent with their own instructions about how to respond in given situations and based on local information gleaned from within robots’ field of view; centrally controlled robots respond based on a single strategy that applies to all robots according to globally-available information about the positions of all the other players [56].
For humanoid robots to master soccer would clearly be a monumental achievement, one that makes great strides toward proving that embodied humanoid artificial intelligence is equivalent to human intelligence. As suggested, however, to establish equivalence between humanoid robot intelligence and human intelligence, it is not enough that robots are demonstrably capable of playing soccer at human levels. A further critical step must exist alongside the physical mastery of soccer: humanoid robots must spontaneously invent the game of soccer, or similar games or team sports. Robots must decide, independently of human instructions, to make their own games that involve robot competition and cooperation. Showing that robots are capable of playing soccer merely establishes that if robots did invent soccer, they could in fact play it. Since human intelligence includes a predisposition to invent soccer, not just to play it, it is the invention of soccer, or its spontaneous self-organization, that is the critical test for robot intelligence.

9. Conclusions

Although computers’ computational capacity may well soon match and exceed that of the human brain, we have argued that for artificial intelligence to match human intelligence, it must first be embodied. Secondly, it must develop collective behaviors through similar processes and in similar conditions as those by which humans have evolved. The basic necessary conditions include: variable resource scarcity and abundance during which robots develop an inherent propensity to compete and to cooperate; sufficient periods of leisure time in which robots may, without being programmed to do so, spontaneously invent team sports; and a heterogeneous range of robot energetic capacities.
Given these factors, and that the benchmarks for human intelligence have not yet been exhausted, a novel test is proposed for artificial intelligence to prove it has achieved human-level intelligence: the invention-of-team-sports test. In seeking to extend the test for artificial intelligence in this way, one may ask: is it not enough that robots acquire the computational capacity required to engage the myriad sensorimotor skills needed to play soccer at a human level, which is predicted to be achieved by 2050? Why shift the goal-posts one step farther and demand that robots not only demonstrate the ability to play team sports, but must then invent them?
Indeed, Kurzweil [1] (p. 292) has argued that “as long as there are discrepancies between human and machine performance—areas in which humans outperform machines—strong AI skeptics will seize on these differences.” Kurzweil [1] (p. 290) lists many examples of functions once thought to be only the domain of humans as now within the capacity of computers, including: diagnosing electrocardiograms, composing in the style of Bach, recognizing faces, guiding missiles, playing ping-pong and mastering chess, picking stocks, improvising jazz, proving important theorems, and understanding continuous speech.
Further, even if we accept a new test for the emergence of self-organized team sports, is there any reason to believe the test will not be passed soon after Kurzweil’s predicted singularity, if not contemporaneously with it? When we compare the primitive state of humanoid robots and their cooperative dynamics with the projected timeline for high-level robot soccer around the year 2050, plus a subsequent period of unknown duration to determine whether robots can invent team sport, it appears unlikely that robot intelligence will match human intelligence until well after 2050.
Still, it is impossible to deny the rapidity of technological advances, as Kurzweil [57] (pp. 30, 32) asserts, and so we may well expect that robots will indeed pass this test not long after robots acquire the ability to play human-level soccer, projected to occur by 2050. Yet the ethical question remains: is it desirable for humans to create the conditions in which robots have the capacity and the need to compete for scarce resources, given the undesirable implications of such a capacity? In one sense, the question is absurd, for it is easy to imagine that even if human robot makers control optimal robot energy levels without inducing robot competition and cooperation, eventually such control will be lost. On the other hand, at present, it remains possible for humans to control the conditions under which robots may self-organize team sports, simply by tuning the degrees of available resource abundance and robot heterogeneity. Under such present control, it remains possible that the team-sport test will never be achieved.

Acknowledgments

The author acknowledges the comments of three anonymous reviewers which helped to improve this paper, as well as the support of the editors.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Kurzweil, R. The Singularity is Near: When Humans Transcend Biology; Penguin Group: New York, NY, USA, 2005. [Google Scholar]
  2. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
  3. You, J. Beyond the Turing Test. Science 2015, 347, 116. [Google Scholar] [CrossRef] [PubMed]
  4. Grosz, B. What question would Turing pose today? AI Mag. 2012, 33, 73–81. [Google Scholar] [CrossRef]
  5. Ortiz, C. Why we need a physically embodied Turing test and what it might look like. AI Mag. 2016, 37, 55–62. [Google Scholar] [CrossRef]
  6. Moravec, H. Mind Children: The Future of Robot and Human Intelligence; Harvard University Press: Cambridge, MA, USA, 1988. [Google Scholar]
  7. Minsky, M. The Society of Mind; Simon & Schuster: New York, NY, USA, 1986. [Google Scholar]
  8. Pfeifer, R.; Bongard, J. How the Body Shapes the Way We Think—A New View of Intelligence. In A Bradford Book; The MIT Press: Cambridge, MA, USA; London, UK, 2007. [Google Scholar]
  9. Gershenson, C.; Trianni, V.; Werfel, J.; Sayama, H. Self-Organization and Artificial Life: A Review. In Proceedings of the 2018 Conference on Artificial Life, Tokyo, Japan, 23–27 July 2018. (submitted). [Google Scholar]
  10. Kitano, H.; Asada, M. The RoboCup humanoid challenge as the millennium challenge for advanced robotics. Adv. Robot. 1998, 13, 723–736. [Google Scholar] [CrossRef]
  11. Nolfi, S.; Floreano, D. Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  12. Winfield, A. How intelligent is your intelligent robot? arXiv, 2017. [Google Scholar]
  13. Do Animals Have a Sense of Competition? 2018. Available online: https://gizmodo.com/do-animals-have-a-sense-of-competition-1823122780 (accessed on 29 April 2018).
  14. Legg, S.; Hutter, M. A collection of definitions of intelligence. Front. Artif. Intell. Appl. 2007, 157, 17. [Google Scholar]
  15. Brooks, R.A.; Stein, L.A. Building brains for bodies. Auton. Robots 1994, 1, 7–25. [Google Scholar] [CrossRef]
  16. Clark, A. Supersizing the Mind: Embodiment, Action, and Cognitive Extension; Oxford University Press: Oxford, UK, 2008. [Google Scholar]
  17. Braga, A.; Logan, R.K. The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence. Information 2017, 8, 156. [Google Scholar] [CrossRef]
  18. Cariani, P.A. On the Design of Devices with Emergent Semantic Functions. Ph.D. Thesis, State University of New York at Binghamton, New York, NY, USA, 1989. [Google Scholar]
  19. Hristovski, R. Constraints-induced emergence of functional novelty in complex neurobiological systems: A basis for creativity in sport. Nonlinear Dyn. Psychol. Life Sci. 2011, 15, 175–206. [Google Scholar]
  20. Brown, D.E. Human Universals; Temple University Press: Philadelphia, PA, USA, 1991. [Google Scholar]
  21. Glazier, P.S. Towards a grand unified theory of sports performance. Hum. Mov. Sci. 2017, 56, 184–189. [Google Scholar] [CrossRef] [PubMed]
  22. Araújo, D.; Davids, K. Team synergies in sport: Theory and measures. Front. Psychol. 2016, 7, 1449. [Google Scholar] [CrossRef] [PubMed]
  23. Balagué, N.; Torrents, C.; Hristovski, R.; Kelso, J.A. Sport science integration: An evolutionary synthesis. Eur. J. Sport Sci. 2017, 17, 51–62. [Google Scholar] [CrossRef] [PubMed]
  24. Scambler, G. Sport and Society: History, Power and Culture; Open University Press, McGraw-Hill Education: Berkshire, UK, 2005. [Google Scholar]
  25. Lombardo, M.P. On the evolution of sport. Evolut. Psychol. 2012, 10. [Google Scholar] [CrossRef]
  26. Sipes, R.G. War, sports and aggression: An empirical test of two rival theories. Am. Anthropol. 1973, 75, 64–86. [Google Scholar] [CrossRef]
  27. Trianni, V.; Dorigo, M. Self-organisation and communication in groups of simulated and physical robots. Biol. Cybern. 2006, 95, 213–231. [Google Scholar] [CrossRef] [PubMed]
  28. Gross, R.; Dorigo, M. Towards group transport by swarms of robots. Int. J. Bio-Inspired Comput. 2009, 1, 1–3. [Google Scholar] [CrossRef]
  29. Sperati, V.; Trianni, V.; Nolfi, S. Self-organised path formation in a swarm of robots. Swarm Intell. 2011, 5, 97–119. [Google Scholar] [CrossRef]
  30. Krieger, M.J.; Billeter, J.B.; Keller, L. Ant-like task allocation and recruitment in cooperative robots. Nature 2000, 406, 992. [Google Scholar] [CrossRef] [PubMed]
  31. Zedadra, O.; Seridi, H.; Jouandeau, N.; Fortino, G. Energy expenditure in multi-agent foraging: An empirical analysis. In Proceedings of the Federated Conference on Computer Science and Information Systems (FedCSIS) IEEE, Łódź, Poland, 13–16 September 2015; pp. 1773–1778. [Google Scholar]
  32. Liu, W.; Winfield, A.F. Modeling and optimization of adaptive foraging in swarm robotic systems. Int. J. Robot. Res. 2010, 29, 1743–1760. [Google Scholar] [CrossRef]
  33. Ducatelle, F.; Di Caro, G.A.; Pinciroli, C.; Mondada, F.; Gambardella, L. Communication assisted navigation in robotic swarms: Self-organization and cooperation. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 4981–4988. [Google Scholar]
  34. Nouyan, S.; Groß, R.; Bonani, M.; Mondada, F.; Dorigo, M. Teamwork in self-organized robot colonies. IEEE Trans. Evolut. Comput. 2009, 13, 695–711. [Google Scholar] [CrossRef]
  35. Ducatelle, F.; Di Caro, G.A.; Pinciroli, C.; Gambardella, L.M. Self-organized cooperation between robotic swarms. Swarm Intell. 2011, 5, 73. [Google Scholar] [CrossRef]
  36. Dorigo, M.; Floreano, D.; Gambardella, L.M.; Mondada, F.; Nolfi, S.; Baaboura, T.; Birattari, M.; Bonani, M.; Brambilla, M.; Brutschy, A.; et al. Swarmanoid: A novel concept for the study of heterogeneous robotic swarms. IEEE Robot. Autom. Mag. 2013, 20, 60–71. [Google Scholar] [CrossRef]
  37. Bayındır, L. A review of swarm robotics tasks. Neurocomputing 2016, 172, 292–321. [Google Scholar] [CrossRef]
  38. Mohan, Y.; Ponnambalam, S. An Extensive Review of Research in Swarm Robotics. In Proceedings of the World Congress on Nature & Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009. [Google Scholar]
  39. Grosz, B.J. A multi-agent systems Turing challenge. In Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems, St. Paul, MN, USA, 6–10 May 2013. [Google Scholar]
  40. Nowak, M.A. Five rules for the evolution of cooperation. Science 2006, 314, 1560–1563. [Google Scholar] [CrossRef] [PubMed]
  41. Downward, P.; Riordan, J. Social interactions and the demand for sport: An economic analysis. Contemp. Econ. Policy 2007, 25, 518–537. [Google Scholar] [CrossRef]
  42. Ruseski, J.E.; Humphreys, B.R.; Hallmann, K.; Breuer, C. Family structure, time constraints, and sport participation. Eur. Rev. Aging Phys. Act. 2011, 8, 57–66. [Google Scholar] [CrossRef]
  43. Eberth, B.; Smith, M.D. Modelling the participation decision and duration of sporting activity in Scotland. Econ. Model. 2010, 27, 822–834. [Google Scholar] [CrossRef] [PubMed]
  44. Trenchard, H. The peloton superorganism and protocooperative behavior. Appl. Math. Comput. 2015, 270, 179–192. [Google Scholar] [CrossRef]
  45. Trenchard, H.; Ratamero, E.; Richardson, A.; Perc, M. A deceleration model for bicycle peloton dynamics and group sorting. Appl. Math. Comput. 2015, 251, 24–34. [Google Scholar] [CrossRef]
  46. Szwaykowska, K.; Romero, L.M.; Schwartz, I.B. Collective motions of heterogeneous swarms. IEEE Trans. Autom. Sci. Eng. 2015, 12, 810–818. [Google Scholar] [CrossRef]
  47. Gomes, J.; Mariano, P.; Christensen, A.L. Challenges in cooperative coevolution of physically heterogeneous robot teams. Nat. Comput. 2016, 1–18. [Google Scholar] [CrossRef]
  48. Yang, J.; Liu, Y.; Wu, Z.; Yao, M. The evolution of cooperative behaviours in physically heterogeneous multi-robot systems. Int. J. Adv. Robot. Syst. 2012, 9, 253. [Google Scholar] [CrossRef]
  49. Ranjbar-Sahraei, B.; Alers, S.; Stanková, K.; Tuyls, K.; Weiss, G. Toward Soft Heterogeneity in Robotic Swarms. In Proceedings of the 25th Benelux Conference on Artificial Intelligence (BNAIC), Delft, The Netherlands, 7–8 November 2013; pp. 384–385. [Google Scholar]
  50. Orejan, J. Football/Soccer: History and Tactics; McFarland: Glasgow, UK, 2011. [Google Scholar]
  51. Osawa, E.; Kitano, H.; Asada, M.; Kuniyoshi, Y.; Noda, I. RoboCup: The robot world cup initiative. In Proceedings of the Second International Conference on Multi-Agent Systems (ICMAS-1996), Kyoto, Japan, 9–13 December 1996; AAAI Press: Menlo Park, CA, USA, 1996. [Google Scholar]
  52. Sahota, M.K.; Mackworth, A.K. Can situated robots play soccer? In Proceedings of the 10th Biennial Conference of the Canadian Society for Computational Studies of Intelligence, Banff, AB, Canada, 16–20 May 1994; pp. 249–254. [Google Scholar]
  53. Gerndt, R.; Seifert, D.; Baltes, J.H.; Sadeghnejad, S.; Behnke, S. Humanoid robots in soccer: Robots versus humans in RoboCup 2050. IEEE Robot. Autom. Mag. 2015, 22, 147–154. [Google Scholar] [CrossRef]
  54. World Championship 2017 SPL Finals B-Human vs. Nao-Team HTWK. 2017. Available online: https://www.youtube.com/watch?v=4uYN_3gL4_Y (accessed on 5 April 2018).
  55. RoboCup 2017. Available online: https://www.youtube.com/watch?time_continue=24395&v=BUxqFlrvkQk (accessed on 5 April 2018).
  56. Snášel, V.; Svatoň, V.; Martinovič, J.; Abraham, A. Optimization of Rules Selection for Robot Soccer Strategies. Int. J. Adv. Robot. Syst. 2014, 11, 13. [Google Scholar] [CrossRef]
  57. Kurzweil, R. The Age of Spiritual Machines; Viking: New York, NY, USA, 1999. [Google Scholar]
Figure 1. (a) Simulated cyclists travel at high speeds relative to their maximal capacities. The peloton self-organizes so that the strongest simulated cyclists tend to spend the most time in the highest-cost front positions (R = 0.7547); (b) Simulated cyclists travel at medium speeds relative to their maximal capacities. Here there is no discernible preference for cyclists of any strength to dominate the front position. While they do not all share the highest-cost position equally, sharing is more randomly distributed (R = 0.2825); (c) Simulated cyclists travel at low speeds relative to their maximal capacities. As with cyclists traveling at mid-speeds, there is no discernible preference for cyclists of any strength to dominate the front position (R = 0.2546). Image adapted from [44] (Figures 4–6).

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).