The results are divided into four parts. In part 1, we give quantitative data from our first search stage (non-downloaded material) and second stage (downloaded material) on some terms used for disabled people and on the focus of the coverage. In parts 2–4, we present the results of the content analysis of the downloaded material: part 2 focuses on the tone of the AI/ML coverage; part 3 on the role, identity, and stake narratives around AI/ML and disabled people; and part 4 on the presence of the terms “social good” and “for good”.
3.1. Part 1: Classification of Disabled People and Focus of Coverage
How a disabled person is classified often sets the stage for what a discourse focuses on. In the first step, we searched for three terms (patient, disabled people, people with disabilities) in academic literature, newspapers, and Twitter tweets.
The academic literature, newspapers, and Twitter tweets contained roughly 20 to 30 times more content for the term “patient” than for the terms “disabled people” and “people with disabilities” together. With the term “patient”, we obtained 23,990 academic hits and 6154 newspaper hits; with the terms “disabled people” and “people with disabilities” together, we obtained 1258 academic hits and 214 newspaper hits.
As to Twitter tweets, we found 2879 hits for the terms “disabled people” and “people with disabilities” from the beginning of Twitter until 17 August 2018. By contrast, the term “patient” generated 2700 hits from 1–17 August 2018 alone (an all-time count was not obtained for “patient”), whereas the terms “disabled people” and “people with disabilities” generated only 119 hits for that same time frame.
The vastly higher numbers for the term “patient” indicate that health is a major focus for AI/ML discourse covering disabled people in the academic literature, newspapers, and Twitter tweets.
In a second step, we analyzed the downloaded 1540 academic abstracts, 234 full-text newspaper articles, and 2879 tweets obtained by using terms related to disabled people, excluding the term “patient” (Figure 1 and Figure 2), for health-related content (other content is dealt with in another section).
Within the 1540 academic abstracts, the following health-linked terms were mentioned: “health” (254 abstracts), “patient” (167), “therapy” (34), “rehabilitation” (171), “care” (220), “medical” (62), “clinical” (45), “treatment” (33), “disease” (41), “disorder” (44), “healthy” (26), “diagnos*” (36), “mental health” (17), and “healthcare”/“health care” (67).
As to the newspapers, although many of the health-related terms appeared frequently in the 234 newspaper articles (“health” was mentioned 720 times, “care” 790 times, “healthcare”/“health care” 173 times, “patient” 110 times, and “disease” 88 times), many of these hits were false positives: most did not relate to content covering disabled people, and even fewer were linked to disabled people in relation to AI/ML. We found five articles that covered the terms “healthcare”/“health care” in relation to disabled people and AI/ML, three for “rehabilitation”, two for “care”, and one article each for “treatment”, “therapy”, “disease”, and “disorder”.
Within the 2879 Twitter tweets, the following health-linked terms were mentioned: “health” (117 times), “patient” (2), “therapy” (4), “rehabilitation” (2), “care” (56), “medical” (5), “clinical” (0), “treatment” (0), “disease” (1), “disorder” (0), “healthy” (1), “diagnos*” (0), “mental health” (3), and “healthcare”/“health care” (12).
The findings suggest that health is still a major focus even in articles downloaded based on terms related to disabled people excluding the term “patient”.
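The document-frequency counts reported above can be sketched in a few lines of code. The following is an illustrative reconstruction, not the authors' actual pipeline; the term list follows the paper, the sample abstracts are invented, and terms ending in “*” (e.g., “diagnos*”) are assumed to be prefix searches.

```python
import re

# Health-linked search terms as listed in the results above.
TERMS = ["health", "patient", "therapy", "rehabilitation", "care",
         "medical", "clinical", "treatment", "disease", "disorder",
         "healthy", "diagnos*", "mental health", "healthcare", "health care"]

def term_pattern(term: str) -> re.Pattern:
    """Build a case-insensitive whole-word pattern; a trailing '*' matches any suffix."""
    if term.endswith("*"):
        return re.compile(r"\b" + re.escape(term[:-1]) + r"\w*", re.IGNORECASE)
    return re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)

def count_documents_mentioning(documents: list[str]) -> dict[str, int]:
    """For each term, count the number of documents that mention it at least once."""
    patterns = {t: term_pattern(t) for t in TERMS}
    return {t: sum(1 for doc in documents if p.search(doc))
            for t, p in patterns.items()}

# Invented sample abstracts for illustration only.
abstracts = [
    "Machine learning for rehabilitation and patient care.",
    "A diagnostic tool to support clinical treatment decisions.",
]
counts = count_documents_mentioning(abstracts)
```

Counting documents (rather than total occurrences) matches the abstract-level figures above; the newspaper counts, by contrast, appear to be total mentions.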
3.2. Part 2: Tone of Coverage
The tone of the downloaded content from all three sources was predominantly techno-optimistic. We found no content covering the negative effects on disabled people of AI/ML use by society, or of autonomous AI/ML, in the academic literature and newspapers, and little in tweets. Many terms, such as “ethic”, “risk”, “challenge”, “barrier”, “problem”, and “negative”, have the potential to present a differentiated picture of the impact AI/ML advancements could have on disabled people, but they were not used to convey existing or potential problematic societal issues disabled people might face. Other terms that could also cover such issues were hardly present: “justice” (3), “equity” (2), and “equality” (4).
3.2.1. Academic Literature
Most abstracts followed a techno-optimistic narrative, for example: “During the last decades, people with disabilities have gained access to Human-Computer Interfaces (HCI); with a resultant impact on their societal inclusion and participation possibilities, standard HCI must therefore be made with care to avoid a possible reduction in this accessibility” [127] (p. 247). The term “negative” was not used to indicate an impact of AI/ML on disabled people. “Challenge” was linked to the use of products to compensate for a ‘bodily deficiency’ [128], but not to societal changes enabled by AI products and processes that might pose challenges for disabled people. “At risk” was used to indicate medical and social consequences linked to the ‘disability’ [129] or the risk of not having access to a product [130]; it was not used in the context of disabled people being at risk from AI-related products and processes. “Barrier” was used in the sense of not having access to a product [131] or of technology eliminating barriers [132], not of AI/ML generating societal or other negative barriers for disabled people. The term “problem” focused on products helping to solve problems disabled people face due to their ‘disability’ [133], and access to a new product was flagged as a problem; “problem” was not used to indicate that AI/ML generates societal problems for disabled people. “Ethics” was mentioned in only four academic abstracts. In the first, the authors argued that ethical issues are often not covered if the focus is on the consumer angle [134]. The second, from a paper focused on the very issue of how ethics is covered in relation to disabled people and AI/ML, concluded that very few such articles exist [1]. In the third, the authors suggested that ethical problems appear when hearing computer scientists work on the Sign Languages (SL) used by the deaf community [135]. The fourth, which focused on AI applied to robots for children with disabilities, acknowledged that there are ethical considerations around the data needed by AI algorithms [136] without specifying them.
3.2.2. Newspapers
A techno-optimistic tone was present throughout the newspaper coverage. To give one example: “companies like Microsoft and Google try to harness the power of artificial intelligence to make life easier for people with disabilities” [137] (p. B2). Terms such as “risk”, “challenge”, “barrier”, “problem”, and “negative” were not linked to disabled people. “Ethics” was mentioned once, in an article highlighting the ethical issue of whether to use invasive BCI or wait for non-invasive versions, although the article is not clear on whether this concerned non-disabled people [138]. Where the focus was not on disabled people, articles often mentioned negative aspects of AI/ML, such as Stephen Hawking warning about AI [139]. Many articles covered job loss by non-disabled people, for example: “While numbers can vary wildly, one analysis says automation, robots and artificial intelligence (AI) have the potential to wipe out nearly 50 per cent of jobs around the globe over the next decade or two” [140] (p. A6). Not one article covered threats of AI/ML to disabled people, such as in relation to their job situation.
3.2.3. Twitter Tweets
Within the 2879 tweets, the coverage was overwhelmingly techno-optimistic. Common phrases included: “Empower people with disabilities” (n = 439 tweets); “AI to help people with disabilities” (n = 414); “help disabled people” (n = 268); “AI to empower people with disabilities” (n = 248); “Machine Learning Opens Up New Ways to Help Disabled People” (n = 170); “Artificial Intelligence Poised to Improve Lives of People With Disabilities” (n = 136); “AI can improve tech for people with disabilities” (n = 74); and “AI can be a game changer for people with disabilities” (n = 14). A further n = 1739 tweets were linked to the accessibility initiative of Microsoft, using wording such as “AI can do more for people with disabilities” and “Microsoft is launching a $25 million initiative”, finishing the sentence with various versions of “to use Artificial Intelligence (AI) to build better technology for people with disabilities.”
The term “ethics” was mentioned 10 times; seven of these did not mention ethics explicitly in relation to AI and disabled people, one indicated that ethics needs to be tackled [141], and two mentioned actual ethical issues [142]. “Barrier” appeared in 18 tweets; 16 saw AI-enabled technology breaking down barriers, and the term was used once to indicate newly generated problems for disabled people [144]. “Challenge” was present in 49 tweets, all of which were about AI taking on the challenges disabled people face, such as “AI to help people with disabilities deal with challenges” [145]. “Risk” was mentioned eight times, three of which saw risks of more inequity for disabled people. The term “problem” was used in 11 tweets, six of which indicated AI use causing problems for disabled people, such as the problematic use of an algorithm [146] and problems around suicide [149], personality tests [150], and job hiring [151].
3.3. Part 3: Role, Identity, and Stake Narrative
In the content downloaded from all sources, disabled people were engaged with predominantly as therapeutic and non-therapeutic users.
Within the 1540 academic abstracts, the term “user” was employed 1643 times and the term “consumer” 29 times. Linked to the user angle was the presence of terms such as “design” (1141), “access” (1756), “accessibility” (803), and “usability” (195). Of the 1141 uses of the term “design”, all but eight focused on products envisioned specifically for disabled people. Of these eight, one gave a general overview of design for all and the Convention on the Rights of Persons with Disabilities [152]; one covered the advancement of “access for all” in a part of Germany [153]; one was a review of social computing (SC) for social inclusion [154]; one made the case of access issues with the Prosperity4all platform [155]; one was a review of ICT and emergency management research [156]; and one was about urban design education [157].
Within the 234 full-text newspaper articles, the term “user” was mentioned 91 times, but only 36 times in relation to disabled people and three times in relation to disabled people and AI/ML (AI making hearing aids better, once; AI and robotics, twice). The term “consumer” was mentioned 41 times, but only four times in relation to disabled people and not once in relation to disabled people and AI/ML. The term “design*” was mentioned 33 times in relation to disabled people and twice in relation to disabled people and AI/ML, once reporting on an autonomous homecare bot and once mentioning “AI for inclusive design”. “Access*” was mentioned 21 times in relation to disabled people, twice in conjunction with disabled people and AI, covering the Microsoft AI for Accessibility initiative and an accessibility sport hub chatbot that finds accessible sport programs and resources for disabled people.
Within the 2879 Twitter tweets, the term “user” was employed 17 times and the term “consumer” seven times. Linked to the user angle was the presence of terms such as “design” (161), “access” (989), “accessibility” (672), and “usability” (3).
In all sources, we did not find any discussions linked to AI/ML governance involving disabled people, or to disabled people as knowledge producers (outside of the consumer angle and being involved in the development of AI/ML as consumers) (for tweet examples see [158]). Two tweets questioned the helping narrative [161]. We did not find any engagement with the potential negative impacts on disabled people of AI/ML use by members of society or of autonomous AI/ML action.