Review

Coverage of Artificial Intelligence and Machine Learning within Academic Literature, Canadian Newspapers, and Twitter Tweets: The Case of Disabled People

Community Rehabilitation and Disability Studies, Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB T2N4N1, Canada
* Author to whom correspondence should be addressed.
Societies 2020, 10(1), 23; https://doi.org/10.3390/soc10010023
Submission received: 3 February 2020 / Revised: 20 February 2020 / Accepted: 24 February 2020 / Published: 27 February 2020

Abstract

Artificial intelligence (AI) and machine learning (ML) advancements increasingly impact society, and AI/ML ethics and governance discourses have emerged. Various countries have established AI/ML strategies. “AI for good” and “AI for social good” are just two discourses that focus on using AI/ML in a positive way. Disabled people are impacted by AI/ML in many ways: as potential therapeutic and non-therapeutic users of AI/ML advanced products and processes, and through the changing societal parameters enabled by AI/ML advancements. They are also impacted by AI/ML ethics and governance discussions and by discussions around the use of AI/ML for good and social good. Using identity, role, and stakeholder theories as our lenses, the aim of our scoping review is to identify and analyze to what extent, and how, AI/ML focused academic literature, Canadian newspapers, and Twitter tweets engage with disabled people. Performing manifest coding of the presence of the terms “AI”, “artificial intelligence”, or “machine learning” in conjunction with the terms “patient”, “disabled people”, or “people with disabilities”, we found that the term “patient” was used 20 times more often than the terms “disabled people” and “people with disabilities” together to identify disabled people within the AI/ML literature covered. As to the downloaded 1540 academic abstracts, 234 full-text Canadian English language newspaper articles, and 2879 tweets containing at least one of 58 terms used to depict disabled people (excluding the term “patient”) and one of the three AI terms, we found that health was one major focus, that the social good/for good discourse was not mentioned in relation to disabled people, that the tone of AI/ML coverage was mostly techno-optimistic, and that disabled people were mostly engaged with in their role as therapeutic or non-therapeutic users of AI/ML influenced products. Problems with AI/ML were mentioned in relation to the user having a bodily problem, the usability of AI/ML influenced technologies, and the problems disabled people face in accessing such technologies. Problems caused for disabled people by AI/ML advancements, such as changing occupational landscapes, were not mentioned. Disabled people were not covered as knowledge producers or influencers of AI/ML discourses, including AI/ML governance and ethics discourses. Our findings suggest that AI/ML coverage must change if disabled people are to become meaningful contributors to, and beneficiaries of, discussions around AI/ML.

1. Introduction

Artificial intelligence (AI) and machine learning (ML) are applied in many areas [1], such as medical technologies [2,3], big data [4], various neuro-linked technologies [5,6,7,8,9,10,11,12,13,14,15], autonomous cars, drones, robotics, assistive technologies [16,17,18], gaming, urban design, various forms of surveillance, and the military. Many countries have AI strategies [19,20], and numerous societal and other implications of AI/ML have been identified [1,5,21,22,23,24,25,26,27,28]. Discourses under the header of AI for “social good” [21,29,30,31,32,33,34,35,36] and other terms with similar connotations, such as “common good” [37] and “for good” [38], explicitly look at how to make a positive contribution to society. According to the Canadian Institute for Advanced Research (CIFAR), which coordinates the Canadian AI strategy, its “AI & Society program aims to examine the questions AI will pose for society involving diverse topics, including the economy, ethics, policymaking, philosophy and the law” [39]. How to govern AI (if possible, all the way from the anticipatory to the implementation stage) is one focus of the discussions around the societal impact of AI [39,40,41,42,43,44,45,46,47,48] and includes terms such as social governance [19].
Disabled people can be impacted by AI/ML-driven advancements in several ways:
(a) as potential non-therapeutic users (consumer angle);
(b) as potential therapeutic users;
(c) as potential diagnostic targets (diagnostics to prevent ‘disability’, or to judge ‘disability’);
(d) by changing societal parameters caused by humans using AI/ML (military, changes in how humans interact, employers using it in the workplace, etc.);
(e) by AI/ML outperforming humans (e.g., in the workplace);
(f) by the increasing autonomy of AI/ML (AI/ML judging disabled people).
As such, disabled people have a stake in AI/ML advancements and in how they are governed. Furthermore, disabled people have many distinct roles through which to contribute to AI/ML advancement discussions in general, and to AI/ML ethics and governance discussions in particular, such as therapeutic and non-therapeutic user, knowledge producer, knowledge consumer, influencer of the discourses, and victim. At the same time, it is noted that disabled people face particular barriers to participation in, and to the knowledge production and knowledge consumption of, governance discussions [49].
Given the wide-reaching and diverse impacts of AI/ML on disabled people and the potential roles of disabled people in AI/ML discourses, the aim of our scoping review, drawing from academic literature, Canadian newspapers, and Twitter tweets, was to answer the following research questions: How does the AI/ML focused literature covered engage with AI/ML in relation to disabled people? What is said and not said? What is the tone of AI/ML coverage? How are disabled people defined? Are disabled people mentioned in relation to “AI for good” or “AI for social good”? What roles, identities, and stakes are assigned to disabled people? Lastly, which of the above-mentioned potential effects of AI/ML on disabled people (a–f) are present in the literature?

1.1. Portrayal and Role, Identity, and Stake Narrative of Disabled People and AI/ML

Many factors influence how AI/ML is discussed and what is said or not said but could have been said [50,51,52,53]. How one defines disabled people [54,55,56] is one such factor: it can influence how a problem is defined and what solution is sought in relation to disabled people [57,58], and how a disabled person is portrayed impacts discourses [59,60]. As such, how disabled people are defined and portrayed within AI/ML discourses influences how AI/ML is discussed and what is said, or not said but could have been said, in relation to disabled people.
There are many ways one can define and portray disabled people. The terms “disabled people” and “people with disabilities”, for example, are often used to depict the social group of disabled people [61] and the social issues they face, whereas the term “patient” is mostly used to focus on the medical aspects of the disabled person.
According to role theory, how one is portrayed impacts the role one is expected to have [62,63,64,65]. Role expectation of oneself is impacted by the role expectations others have of oneself [66]. According to identity theory, the perception of ‘self’ is influenced by the role one occupies in the social world [67]. In relation to AI/ML, disabled people could have roles such as being therapeutic and non-therapeutic users of AI/ML linked products, being victims of AI/ML product use and processes or being knowledge producers and knowledge consumers of the products and processes. Disabled people could also have the role of influencer of, and knowledge producer for, AI/ML ethics and governance discourses.
One can have many different identities whereby different identities have different weight for oneself [67]. Based on the roles disabled people can have within the AI/ML discourse, disabled people can choose various identities they exhibit in AI/ML discourses ranging from passive user of AI/ML products to active shaper of AI/ML discourses. However, it is well described that there are many barriers for disabled people to live out certain identities and perform certain roles such as being influencers of, and knowledge producers for, science and technology governance and ethics discourses [49].
How individuals perceive themselves influences how they perceive actions, such as disabling actions, towards themselves [68], which in turn influences what role they see themselves occupying in relation to AI/ML discourses. It also influences the intergroup relationships [68,69] between disabled and non-disabled people within the AI/ML discourses and the relationships between different disability groups linked to different identities of self.
Indeed, identities are seen to have five distinct features: identities are social products, are self-meanings, are symbolic in the sense that one’s response is similar to the response of others, are reflexive, and are a “source of motivation for action particularly actions that result in the social confirmation of the identity” [70] (p. 242), whereby all five features play themselves out within AI/ML discourses. Dirth and Branscombe asked: “For instance, to what degree do members of the disability community use medical, rehabilitation, and technological means to distance themselves from the stigmatized identity, and do orientations to treatment vary as a function of social identification strength?” [68] (p. 809). How this question is answered is one factor influencing the roles disabled people occupy in relation to AI/ML discourses.
Stakeholder theory has been applied to organizational management for some time, and it can also be applied to our study. According to the stakeholder theory, a stakeholder’s action expresses their identity [71]. In our case, how disabled people are portrayed and what identity is attached to disabled people by others and disabled people attach to themselves could therefore be one factor that influences stakeholder salience and stakeholder identification [72]. This includes whether disabled people see themselves or are seen by others as stakeholders in the AI/ML discourses and what is seen as the stake.
Regarding the potential role of disabled people, the question is whether only the ability to fulfill the role of the therapeutic or non-therapeutic user of AI/ML products is at stake, or whether other aspects are deemed to be at stake, and who sees what as being at stake. According to Mitchell et al., there are three main factors identifying a stakeholder: “(1) the stakeholder’s power to influence the firm, (2) the legitimacy of the stakeholder’s relationship with the firm, and (3) the urgency of the stakeholder’s claim on the firm” [72] (p. 853). Applying this to disabled people and AI/ML discourses, the questions are whether disabled people have the stakeholder power to influence AI/ML discourses; whether disabled people are seen to be socially impacted by AI/ML discourses, which would give legitimacy to disabled people to be involved in AI/ML ethics and governance discussions; and whether there is a feeling of urgency for disabled people to be engaged in AI/ML discourses. Each of Mitchell et al.’s three points might be answered differently, and different actions might be flagged as needed in relation to AI/ML and disabled people depending on the role assigned to disabled people in relation to AI/ML, which in turn is impacted in part by the identity of the disabled person. Narrow and broad definitions of stakeholder exist [73,74]. Mitchell et al. list Freeman’s broad definition, “A stakeholder in an organization is (by definition) any group or individual who can affect or is affected by the achievement of the organization’s objectives”, and Clarkson’s narrow definition, “Voluntary stakeholders bear some form of risk as a result of having invested some form of capital, human or financial, something of value, in a firm. Involuntary stakeholders are placed at risk as a result of a firm’s activities” [72]. Both definitions suggest that disabled people are stakeholders in AI/ML discourses in all their potential roles already mentioned. Various articles outline ways to identify stakeholder groups [75]. However, depending on identity and role, disabled people have different stakes in AI/ML discourses.

1.2. The Tone of the Discourse

The tone of a discourse is another factor that can influence how AI/ML is discussed and what is said or not said but could have been said [50,51,52,53]. A techno-optimistic or techno-enthusiastic tone [76,77] does not lend itself to covering disabled people as being negatively impacted by AI/ML advancements (whereas a techno-skeptic or techno-pessimistic tone could) but rather facilitates the coverage of disabled people as potential therapeutic or non-therapeutic users. Words such as risk, challenge, and problem can be used to shape any given topic [78], including what is seen as at stake for disabled people and which AI/ML developments are seen as impacting disabled people. How one defines disabled people and the tone of the discourse influence how such words are used. For many targets of AI/ML, such as assistive devices and technologies, it is known that disabled people already face many challenges, such as costs, access, and design issues [79], the imagery of the disabled person [58,80], and the fear of being judged for using such devices [81,82] or for not using them [79]. Such aspects would not be covered under a techno-optimistic tone.

1.3. The Issue of Social Good

“Social good” is, for example, described as achieving “human well-being on a large scale” [83], although no single definition is universally accepted. The concept of “social good” is applied to many areas, such as education [84,85] (whereby education itself is changing the meaning of “social good” [86]), water [87], health [88], food [89], meaningful work [90], healthcare [91], and sustainability [92]. Many conflicts are outlined around “social good”, and some describe “social good” as a “dispensable commodity” [93].
AI is also discussed in relation to “social good” [21,29,30,31,32,33,34,35,36] and other terms with similar connotations, such as “common good” [37] and “for good” [38]. There are subfields, including “data for social good” [94,95], “IT for social good” [96], and “computer for social good” [97,98,99]. The ethical framework for “AI for the Social Good” defines it as “addressing societal challenges, which have not yet received significant attention by the AI community or by the constellation of AI sub-communities, [the use of] AI methods to tackle unsolved societal challenges in a measurable manner” [37]. In a workshop focusing explicitly on “AI for social good”, the term “social good” is described as follows: “[social good] is intended to focus AI research on areas of endeavor that are to benefit a broad population in a way that may not have direct economic impact or return, but which will enhance the quality of life of a population of individuals through education, safety, health, living environment, and so forth” [36]. However, what exactly constitutes “AI for social good” [100,101] and what “makes AI socially good” [31] (p. 1) is still debated. Cowl et al. argued that the following seven factors are essential for AI for social good: “(1) falsifiability and incremental deployment; (2) safeguards against the manipulation of predictors; (3) receiver-contextualised intervention; (4) receiver-contextualised explanation and transparent purposes; (5) privacy protection and data subject consent; (6) situational fairness; and (7) human-friendly semanticisation” [31] (p. 3). AI can be a major force for social good; this depends in part on how we shape this new technology and the questions we use to inspire young researchers [36].
Disabled people are impacted by who defines “social goods”, what is defined as a “social good”, who has access to the “social good”, and whether “social good” discourses focus on just “doing good” or also on “preventing bad”. Many of the problems faced by disabled people highlighted in the United Nations Convention on the Rights of Persons with Disabilities [102] indicate inequitable access to “social goods”. Discourses around “social good” often lead to detrimental consequences for disabled people, such as eugenic practices performed for the “social good” [103], which meanings of work are seen as a “social good” and how this is operationalized [90,104], or how social justice as a public good is conceived [93].

2. Methods

2.1. Study Design

Given our research questions, we chose a modified scoping review, drawing from another study [105], as the most appropriate approach to identify the current understanding of a given topic [106]; in our case, how the AI/ML literature covered engages with disabled people. Our study followed a modified version of the stages outlined by Arksey and O’Malley [105], namely: identifying the review’s research question, identifying databases to search, generating inclusion/exclusion criteria, recording the descriptive quantitative results, selecting literature based on the descriptive quantitative results for directed content analysis of qualitative data, and reporting the findings of the qualitative analysis.

2.2. Identifying and Clarifying the Purpose and Research Questions

The objective of our study was to ascertain whether, to what extent, and how AI/ML focused academic literature, Canadian newspapers, and Twitter tweets engaged with disabled people in general and in relation to “for good” and “social good”. Our study focused on literature directly using the terms “artificial intelligence”, “AI”, or “machine learning”. As such, we did not engage with literature that mentioned only related terms such as “ICT” or “web accessibility” without also mentioning the AI related terms.
Our research questions were: How do academic literature, Canadian newspapers, and Twitter tweets cover AI/ML in relation to disabled people? What is said and not said? How are disabled people defined in the literature covered? Are disabled people mentioned in relation to “for good” or “social good”? What is the tone of AI/ML coverage of disabled people? What roles, identities, and stakes are assigned to disabled people in the AI/ML literature? Are disabled people engaged with as active agents, such as influencers of the development of AI/ML products and processes or of ethics and governance discussions? Lastly, which of the below potential effects of AI/ML on disabled people are present in the literature?
(a) as potential non-therapeutic users (consumer angle);
(b) as potential therapeutic users;
(c) as potential diagnostic targets (diagnostics to prevent disability or to judge disability);
(d) by changing societal parameters caused by humans using AI/ML (military, changes in how humans interact, employers using it in the workplace, etc.);
(e) by AI/ML outperforming humans (e.g., in the workplace);
(f) by the increasing autonomy of AI/ML (AI/ML judging disabled people).

2.3. Data Sources and Data Collection

Canadian newspapers were chosen as a source of data because a) the Government of Canada’s 2017 AI strategy includes the investigation of the impact of AI on society as one focus, which could be discussed in newspapers; b) Canada has a developed AI/ML academic community that could contribute to newspaper coverage; and c) over 75% of Canadians still read newspapers [107,108] and, as such, are influenced by what they read. Tweets from Twitter.com were searched, as Twitter is seen to be highly effective in its message propagation [109,110,111]. Academic literature was chosen because academic discourses are expected to generate evidence that informs policies [112,113,114].
Eligibility criteria and search strategies for articles:

2.3.1. Search Strategy 1: Newspapers

We used Canadian Newsstream, a database consisting of n = 300 English-language Canadian newspapers, covering January 1980 to June 2019. An explicit search strategy was employed to obtain the data [115].
We searched the full text of articles for the presence of AI-related terms (“artificial intelligence” OR “machine learning” OR “AI”) in conjunction with the term “patient” (results not downloaded) and with terms linked to disabled people (results downloaded): “disabled people” OR “people with a disability” OR “deaf people” OR “blind people” OR “people with disabilities” OR “people with a learning disability” OR “people with a physical disability” OR “people with a hearing impairment” OR “people with a visual impairment” OR “people with a mental disability” OR “people with a mental health” OR “learning disability people” OR “physical disability people” OR “physically disabled people” OR “hearing impaired people” OR “visually impaired people” OR “mental disability people” OR “mental health people” OR “autism people” OR “autistic people” OR “people with autism” OR “ADHD people” OR “people with ADHD” OR “people with a mental health” OR “people with a mental disability” OR “people with mental disabilities” OR “mental health people” OR “mental disability people” OR “mentally disabled people” OR “disabled person” OR “person with a disability” OR “deaf person” OR “blind person” OR “person with disabilities” OR “person with a learning disability” OR “person with a physical disability” OR “person with a hearing impairment” OR “person with a visual impairment” OR “person with a mental disability” OR “person with a mental health” OR “learning disability person” OR “physical disability person” OR “physically disabled person” OR “hearing impaired person” OR “visually impaired person” OR “mental disability person” OR “mental health person” OR “autism person” OR “autistic person” OR “person with autism” OR “ADHD person” OR “person with ADHD” OR “person with a mental health” OR “person with a mental disability” OR “person with mental disabilities” OR “mental health person” OR “mental disability person” OR “mentally disabled person”.
We obtained n = 234 non-duplicate newspaper articles for download (Figure 1).
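As a minimal illustration of how such a Boolean query can be assembled programmatically (a sketch on our part, not the tooling used with Canadian Newsstream; the disability term list is truncated for brevity):

```python
# Sketch: assembling the full-text Boolean query from the AI terms and
# (an excerpt of) the disability terms listed above.
AI_TERMS = ['"artificial intelligence"', '"machine learning"', '"AI"']

# Truncated excerpt of the 58 disability terms; the full list is in the text.
DISABILITY_TERMS = [
    '"disabled people"', '"people with a disability"', '"deaf people"',
    '"blind people"', '"people with disabilities"', '"disabled person"',
]

def build_query(group_a, group_b):
    """OR the terms within each group, then AND the two groups together."""
    return f'({" OR ".join(group_a)}) AND ({" OR ".join(group_b)})'

print(build_query(AI_TERMS, DISABILITY_TERMS))
```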

2.3.2. Search Strategy 2: Twitter

For tweets, the search engine of the Twitter.com webpage was searched on 17 August 2018.
Step 2a:
We searched for the presence of “AI” OR “machine learning” OR “artificial intelligence”.
Step 2b:
We searched for the presence of “disabled people” OR “people with disabilities” within the tweets of step 2a.
We obtained n = 2879 unique tweets for download (Figure 2).
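The two-step filtering and the retention of unique tweets can be pictured as follows; this is a hypothetical sketch over invented tweet texts, since the actual retrieval used the Twitter.com search page rather than a programmatic interface:

```python
import re

# Hypothetical tweet texts for illustration only.
tweets = [
    "AI to empower people with disabilities",
    "AI to empower people with disabilities",  # retweets often repeat text verbatim
    "Machine learning conference announced",
]

AI_RE = re.compile(r"\bAI\b|machine learning|artificial intelligence", re.IGNORECASE)
DIS_RE = re.compile(r"disabled people|people with disabilities", re.IGNORECASE)

step_2a = [t for t in tweets if AI_RE.search(t)]    # Step 2a: AI terms present
step_2b = [t for t in step_2a if DIS_RE.search(t)]  # Step 2b: disability terms present
unique = list(dict.fromkeys(step_2b))               # keep unique tweets, preserving order
print(unique)
```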

2.3.3. Search Strategy 3: Academic Literature

Eligible articles were identified using explicit search strategies [115]. On 1 June 2019, we searched the academic databases EBSCO-ALL, an umbrella database that includes over 70 other databases, and Scopus, which incorporates the full Medline database collection, with no time restrictions. These two databases were chosen because together they contain journals covering a wide range of topics from areas relevant to answering the research questions. The two databases contain over 4.8 million articles published by journals that contain the terms “AI” OR “artificial intelligence” OR “machine learning” OR “IEEE” in the journal title and include journals focusing on societal aspects of AI, such as the journal “AI and Society” and the proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. We also searched the ACM Guide to Computing Literature, arXiv, and the IEEE Xplore digital databases.
Strategy 3a:
We searched the abstracts of EBSCO-ALL, Scopus, arXiv, IEEE Xplore, and ACM Guide to Computing Literature using the same search terms used for the newspapers and the same download criteria.
Strategy 3b:
We searched Scopus for the presence of the AI terms used for the newspapers in the academic journal title, and for the presence of the term “patient” and the 58 terms depicting disabled people used for the newspapers in the abstracts of the academic articles; we used the same download criteria as mentioned under newspapers.
Altogether, after elimination of duplicates, we obtained n = 1540 unique academic abstracts for download (Figure 1).
Searching abstracts rather than the full text of academic articles, an approach used in scoping reviews conducted by others [116], was chosen to ensure that AI/ML and disabled people were the primary theme of the articles found. As an additional exclusion criterion, we only searched for scholarly peer-reviewed journals in EBSCO-ALL, while we searched for reviews, peer-reviewed articles, conference papers, and editorials in Scopus. The other databases were searched without exclusion limits. All databases were searched for the full time frame available.
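Duplicate elimination across databases can be illustrated with a title-normalization sketch; this is an assumption for illustration, as the text above does not specify the matching criterion used:

```python
def normalize(title: str) -> str:
    """Lowercase and drop non-alphanumeric characters so formatting
    differences between databases do not hide duplicates."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

# Hypothetical records standing in for hits from the different databases.
records = [
    {"db": "Scopus", "title": "AI and Disabled People: A Review"},
    {"db": "EBSCO-ALL", "title": "AI and disabled people -- a review"},
]

seen, unique_records = set(), []
for rec in records:
    key = normalize(rec["title"])
    if key not in seen:          # keep only the first occurrence of each title
        seen.add(key)
        unique_records.append(rec)

print(len(unique_records))  # 1: both entries describe the same article
```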

2.4. Data Analysis

To answer the research questions, we (both authors) employed a) a descriptive quantitative analysis approach and b) a thematic qualitative content approach [117,118] using a combination of both manifest and latent content coding methods [118,119,120,121,122]. Manifest coding is used to examine “…the visible, surface, or obvious components of communication” [121], most specifically the frequency and location of a certain “recording unit” [120]. Latent coding, on the other hand, is used to assist with “…discovering underlying meanings of the words or the content” [117], including the discourses, settings, and tone reflected in the mentions of AI themes in relation to disabled people [117,121]. We employed the manifest coding approach on the level of the database searches (non-downloaded material) and on the level of the downloaded 1540 academic abstracts, 234 full-text newspaper articles, and 2879 tweets. We employed latent coding and a directed qualitative content analysis approach keeping in mind the research questions [117] on the level of the downloaded material. As to the coding procedure, we familiarized ourselves with the downloaded content [123]. We then independently identified and clustered the themes based on meaning, repetition, and the research questions [117,121].
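Manifest coding of term frequency can be approximated with a simple document-level counter, as in the following sketch (a toy codebook and corpus for illustration; the actual coding was performed with the functions of ATLAS.ti, see Section 2.5):

```python
import re

# Toy corpus standing in for the downloaded abstracts, articles, and tweets.
documents = [
    "Machine learning improves accessibility for people with disabilities.",
    "AI governance and ethics were not discussed in relation to disabled people.",
]

CODEBOOK = ["accessibility", "ethic", "governance", "user"]

def manifest_counts(docs, terms):
    """For each recording unit (term), count the documents containing it."""
    return {
        term: sum(bool(re.search(re.escape(term), d, re.IGNORECASE)) for d in docs)
        for term in terms
    }

print(manifest_counts(documents, CODEBOOK))
# {'accessibility': 1, 'ethic': 1, 'governance': 1, 'user': 0}
```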

2.5. Trustworthiness Measures

Trustworthiness measures include confirmability, credibility, dependability, and transferability [124,125,126]. No difference between the two authors was evident in the hit counts for terms on the level of the databases and the downloaded material. Differences in the codes and theme suggestions for the qualitative data coded using latent coding were few; they were discussed between the authors and revised as needed. Confirmability is evident in the audit trail made possible by using the Memo and coding functions within ATLAS.ti 8™. As for transferability, our methods description gives all the information required for others to decide whether they want to apply our keyword searches to other data sources, such as grey literature or AI and machine learning literature in other languages, or whether they want to perform more in-depth latent coding based on our manifest coding.

2.6. Limitation

Our findings cannot be generalized to the whole academic literature, non-academic literature, or non-English literature. However, our findings allow for conclusions to be made within the parameters of the searches. Our newspaper results cannot be generalized to newspapers in general or newspapers in Canada. Our English language Twitter results cannot be generalized to non-English Twitter results or results one might obtain in other countries or with other social media platforms.

3. Results

The results are divided into four parts: In part 1, we give quantitative data from our first search stage (non-downloaded stage) and second stage (downloaded material) on some terms used for disabled people and the focus of the coverage. In parts 2–4, we present the results of the content analysis of the downloaded material. In part 2 we focus on the tone of the AI/ML coverage. In part 3, we focus on the role, identity, and stake narrative of AI/ML on disabled people and in part 4 we focus on the presence of the terms “social good” and “for good”.

3.1. Part 1: Classification of Disabled People and Focus of Coverage

How a disabled person is classified often sets the stage for what a discourse focuses on. In the first step, we searched for three terms (patient, disabled people, people with disabilities) in academic literature, newspapers, and Twitter tweets.
The academic literature, newspapers, and Twitter tweets contained at least 20 times more content for the term “patient” than for the terms “disabled people” and “people with disabilities” together. With the term “patient”, we obtained 23,990 academic hits and 6154 newspaper hits. With the terms “disabled people” and “people with disabilities” together we obtained 1258 academic hits and 214 newspaper hits.
As to Twitter tweets, we found 2879 hits for the terms “disabled people” and “people with disabilities” from the beginning of Twitter until 17 August 2018. In contrast, the term “patient” generated 2700 hits from 1 to 17 August 2018 alone (the all-time hit count was not obtained for “patient”); the terms “disabled people” and “people with disabilities” generated 119 hits for that same time frame.
The vastly higher numbers for the term “patient” indicate that health is a major focus for AI/ML discourse covering disabled people in the academic literature, newspapers, and Twitter tweets.
In a second step, we analyzed the downloaded 1540 academic abstracts, 234 full-text newspaper articles, and 2879 tweets, obtained by using terms related to disabled people excluding the term “patient” (Figure 1 and Figure 2), for health-related content (other content is dealt with in another section).
Within the 1540 academic abstracts, the following health-linked terms were mentioned: “health” (254 abstracts), “patient” (167), “therapy” (34), “rehabilitation” (171), “care” (220), “medical” (62), “clinical” (45), “treatment” (33), “disease” (41), “disorder” (44), “healthy” (26), “diagnos*” (36), “mental health” (17), and “healthcare”/”health care” (67).
As to the newspapers, although many of the health-related terms appeared many times in the 234 newspaper articles (“health” was mentioned 720 times, “care” 790 times, “healthcare”/“health care” 173 times, “patient” 110 times, and “disease” 88 times), many of these hits were false positives, meaning that most hits did not relate to content that covered disabled people, and even fewer of these hits linked disabled people to AI/ML. We found five articles that covered the terms “healthcare”/“health care” in relation to disabled people and AI/ML, three for “rehabilitation”, two for “care”, and one article each for “treatment”, “therapy”, “disease”, and “disorder”.
Within the 2879 Twitter tweets, the following health-linked terms were mentioned: “health” (117 times), “patient” (2), “therapy” (4), “rehabilitation” (2), “care” (56), “medical” (5), “clinical” (0), “treatment” (0), “disease” (1), “disorder” (0), “healthy” (1), “diagnos*” (0), “mental health” (3), and “healthcare”/”health care” (12).
The findings suggest that health is still a major focus even in articles downloaded based on terms related to disabled people excluding the term “patient”.
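Terms such as “diagnos*” carry a trailing database-style wildcard; the sketch below (an illustrative assumption about tooling, not the method used in the study) shows one way to translate such terms into regular expressions for counting:

```python
import re

def to_pattern(term: str) -> re.Pattern:
    """Translate a database-style term, optionally ending in the
    wildcard '*', into a regular expression."""
    if term.endswith("*"):
        return re.compile(r"\b" + re.escape(term[:-1]) + r"\w*", re.IGNORECASE)
    return re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)

texts = ["Early diagnosis with machine learning", "AI diagnostics for patients"]
pattern = to_pattern("diagnos*")
print(sum(bool(pattern.search(t)) for t in texts))  # 2: matches both word forms
```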

3.2. Part 2: Tone of Coverage

The tone in the downloaded content from all three sources was predominantly techno-optimistic. We found no content covering the negative effects of AI/ML use by society on disabled people or the negative effects of autonomous AI/ML on disabled people in the academic literature and newspapers, and little in tweets. Many terms, such as “ethic”, “risk”, “challenge”, “barrier”, “problem”, and “negative”, have the potential to present a differentiated picture of what impact AI/ML advancements could have for disabled people but were not used to convey existing or potential problematic societal issues disabled people might face. Other terms that could also be used to cover such issues were hardly present, such as “justice” (3), “equity” (2), and “equality” (4).

3.2.1. Academic Literature

Most abstracts followed a techno-optimistic narrative, for example: “During the last decades, people with disabilities have gained access to Human-Computer Interfaces (HCI); with a resultant impact on their societal inclusion and participation possibilities, standard HCI must therefore be made with care to avoid a possible reduction in this accessibility” [127] (p. 247). The term “negative” was not used to indicate an impact of AI/ML on disabled people. “Challenge” was linked to the use of products to compensate for a ‘bodily deficiency’ [128] but not to societal changes enabled by AI products and processes that might pose challenges for disabled people. “At risk” was used to indicate medical and social consequences linked to the ‘disability’ [129] or the risk of not having access to a product [130]; it was not used within the context of disabled people being “at risk” from AI-related products and processes. “Barrier” was used in the sense of not having access to the product [131] or of technology eliminating barriers [132], not in the sense that AI/ML generates societal or other negative barriers for disabled people. The focus of the term “problem” was on products helping to solve problems disabled people face due to their ‘disability’ [133], and access to a new product was flagged as a problem; “problem” was not used to indicate that AI/ML generates societal problems for disabled people. “Ethics” was only mentioned in four academic abstracts. In the first abstract, the authors argued that ethical issues are often not covered if the focus is on the consumer angle [134]. The second abstract, of a paper that focused on the very issue of how ethics is covered in relation to disabled people and AI/ML, concluded that very few such articles exist [1]. In the third abstract, the authors suggested that ethical problems appear when hearing computer scientists work on Sign Languages (SL) used by the deaf community [135]. The fourth abstract, which focused on AI applied to robots for children with disabilities, acknowledged that there are ethical considerations around the data needed by AI algorithms [136] without mentioning them.

3.2.2. Newspapers

A techno-optimistic tone was present throughout the newspaper coverage. To give one example: “companies like Microsoft and Google try to harness the power of artificial intelligence to make life easier for people with disabilities” [137] (p. B2). Terms such as “risk”, “challenge”, “barrier”, “problem”, and “negative” were not linked to disabled people. “Ethics” was mentioned once, in an article highlighting the ethical issue of whether to use invasive BCI or wait for non-invasive versions, although the article is not clear on whether this was about non-disabled people [138]. If the focus was not on disabled people, articles often mentioned negative aspects of AI/ML, such as Stephen Hawking’s warnings about AI [139]. Many articles covered job loss by non-disabled people, for example: “While numbers can vary wildly, one analysis says automation, robots and artificial intelligence (AI) have the potential to wipe out nearly 50 per cent of jobs around the globe over the next decade or two” [140] (p. A6). Not one article covered the threat of AI/ML to disabled people, such as in relation to job situations.

3.2.3. Twitter

Within the 2879 tweets, the coverage was overwhelmingly techno-optimistic. Common phrases included “Empower people with disabilities” (n = 439 tweets); “AI to help people with disabilities” (n = 414); “help disabled people” (n = 268); “AI to empower people with disabilities” (n = 248); “Machine Learning Opens Up New Ways to Help Disabled People” (n = 170); “Artificial Intelligence Poised to Improve Lives of People With Disabilities” (n = 136); “AI can improve tech for people with disabilities” (n = 74); and “AI can be a game changer for people with disabilities” (n = 14). There were n = 1739 tweets linked to the accessibility initiative of Microsoft, using wording such as “AI can do more for people with disabilities” and “Microsoft is launching a $25 million initiative”, finishing the sentence with various versions of “to use Artificial Intelligence (AI) to build better technology for people with disabilities”.
The term “ethics” was mentioned 10 times, seven of which did not mention ethics explicitly in relation to AI and disabled people. One indicated that ethics needs to be tackled [141]. Two tweets mentioned actual ethical issues [142,143]. The term “barrier” appeared in 18 tweets, 16 of which saw AI-enabled technology breaking down barriers; the term was used once to indicate newly generated problems for disabled people [144]. “Challenge” was present in 49 tweets, all of which concerned AI taking on the challenges disabled people face, such as “AI to help people with disabilities deal with challenges” [145]. “Risk” was mentioned eight times, with three tweets seeing risks of more inequity for disabled people. The term “problem” was used in 11 tweets, six of which indicated AI use causing problems for disabled people, such as problematic use of an algorithm [146,147,148], problems around suicide [149], personality tests [150], and job hiring [151].

3.3. Part 3: Role, Identity, and Stake Narrative

The content downloaded from all three sources engaged with disabled people predominantly as therapeutic and non-therapeutic users.
Within the 1540 academic abstracts, the term “user” was employed 1643 times and the term “consumer” 29 times. Linked to the user angle was the presence of terms such as “design” (1141), “access” (1756), “accessibility” (803), and “usability” (195). Of the 1141 uses of the term “design”, all but eight focused on products envisioned specifically for disabled people. Of these eight, one gave a general overview of design for all and the Convention on the Rights of Persons with Disabilities [152]; one covered the advancement of “access for all” for a part of Germany [153]; one was a review of social computing (SC) for social inclusion [154]; one discussed access issues with the Prosperity4all platform [155]; one was a review of ICT and emergency management research [156]; and one was about urban design education [157].
Within the 234 full-text newspaper articles, the term “user” was mentioned 91 times, but only 36 times in relation to disabled people and three times in relation to disabled people and AI/ML (AI making hearing aids better, once; AI and robotics, twice). The term “consumer” was mentioned 41 times, but only four times in relation to disabled people and not once in relation to disabled people and AI/ML. The term “design*” was mentioned 33 times in relation to disabled people and twice in relation to disabled people and AI/ML, one article reporting on an autonomous homecare bot and one mentioning “AI for inclusive design”. “Access*” was mentioned 21 times in relation to disabled people, twice in conjunction with disabled people and AI, covering the Microsoft AI for Accessibility initiative and an accessibility sport hub chatbot that finds accessible sport programs and resources for disabled people.
Within the 2879 Twitter tweets, the term “user” was employed 17 times and the term “consumer” seven times. Linked to the user angle was the presence of terms such as “design” (161), “access” (989), “accessibility” (672), and “usability” (3).
In all sources, we did not find any discussions linked to AI/ML governance involving disabled people or disabled people as knowledge producers (outside of the consumer angle and being involved in development of AI/ML as consumers) (for tweet examples see [158,159,160]).
Two tweets questioned the helping narrative [161,162]. We did not find any engagement with the potential negative impacts of AI/ML use by members of society and autonomous AI/ML action for disabled people.

3.4. Part 4: Mentioning of “Social Good” or “for Good”

The term “social good” was mentioned once in the newspaper data, covering Google’s “AI for Social Good” initiative and mentioning military and job losses as negative consequences; however, disabled people were not mentioned [163]. “For good” was mentioned twice. One article stated, “Microsoft has committed $115 million to an ‘AI for Good’ initiative that provides grants to organizations harnessing AI for humanitarian, accessibility and environmental projects” [163] (p. A47). The second article focused on the “Award for good”, which simply stated that AI has the possibility to improve the life of disabled people [164]. The phrases “social good” and “for good” were not mentioned in the academic abstracts or Twitter tweets downloaded.

4. Discussion

Our scoping review revealed that a) the term “patient” was used 20 times more than the terms “disabled people” or “people with disabilities” together; b) the tone of coverage was mostly techno-optimistic; c) the main role, identity, and stake narratives surrounding disabled people reflected the roles and identities of therapeutic and non-therapeutic users of AI/ML advanced products and processes, and the stakes linked to fulfilling these roles and identities; d) content related to AI/ML causing social problems for disabled people was nearly absent (beyond the need to actually access AI/ML related technologies or processes); e) discussions around disabled people being involved in, or impacted by, AI/ML ethics and governance discourses were absent; and f) content around “AI for good” and “AI for social good” in relation to disabled people was absent. These findings were evident for the academic abstracts, full-text newspaper articles, and Twitter tweets. In the remainder of the discussion section, we discuss our findings in relation to the four parts of the results section.

4.1. Part 1: Classification of Disabled People and Focus of Coverage

Our study showed that the term “patient” was present at least 20 times more than the terms “disabled people” and “people with disabilities” together in all the material used, suggesting that AI/ML is discussed much more around the role and identity of the “patient”, and what is at stake for “patients”, than in relation to the terms “disabled people” or “people with disabilities”. Our study also showed that, in the material covered, health is more of a focus in relation to disabled people than the social issues experienced by disabled people. These findings are problematic for disabled people. Objective information, good decision making, resource allocation, innovation, diffusion of new technologies [165], and the functionality of the health care system [166] are some stakes identified for patients. However, stakes such as objective information, good decision making, resource allocation, innovation, and diffusion of new technologies will look different outside of the “patient” arena, namely in the arena of the social life of disabled people. Furthermore, there are many other issues at stake for disabled people outside the health and healthcare arena, as evidenced by the many issues beyond health and healthcare flagged as problematic for disabled people in the UN Convention on the Rights of Persons with Disabilities [102]. Many of these issues, such as employment, are already or will be impacted by advancements in AI/ML.
Whether AI/ML is discussed with a health or non-health focus, and a patient versus non-patient focus, in relation to disabled people is one factor that influences how AI/ML is discussed and what is said or not said but could have been said [50,51,52,53]. A focus on health and on content linked to the term “patient” fits the role and identity narrative of disabled people as therapeutic users and of ‘disability’/‘impairment’ as a diagnostic target. The possible role and identity of disabled people as knowledge producers and consumers would then be linked to the topics of health and healthcare, and not to the societal issues disabled people face due to AI/ML use by others and the increasing appearance of autonomous AI. Furthermore, if disabled people were involved as influencers of, and knowledge producers for, AI/ML governance and ethics discourses, a role and identity of disabled people focused on patient, health, and healthcare would lead to different interventions and contributions to these discourses, and to the involvement of different groups of disabled people, than a role and identity focused on being negatively impacted as members of society by AI/ML use by others and the increasing appearance of autonomous AI. Indeed, a similar difference in focus and in the involvement of groups of disabled people based on their identity can be observed in discussions around anti-genetic discrimination laws [60].
That the literature covering AI/ML uses the term “patient” more frequently than the terms “disabled people” or “people with disabilities” is similar to many other technology discussions, such as brain computer interfaces [80] or social robotics [58]. However, such a bias in focus has an impact on AI/ML use not present in relation to other technologies. Machine learning (ML) is centered around the goal of making artificial machines learn without supervision; however, what the artificial machine learns depends on the data it obtains. Uneven data mean biased judgments. Amazon recently stopped its experiment of using AI instead of a human in hiring procedures because the AI-driven hiring process was biased towards males [167]. Given the lopsided quantity of clinical/medical/health versus non-medical/clinical/health role, identity, and stake narratives related to disabled people present in the literature, one can predict that AI technology will learn a biased picture of disabled people and will not learn about many of the problems that AI/ML might cause for disabled people.
According to role theory, how one is portrayed impacts the role one is expected to have [62,63,64,65]. As such, one can predict that AI/ML will see certain roles (therapeutic and non-therapeutic user) as applicable to disabled people but not others (influencer of AI/ML ethics and governance discussions). Role expectations of oneself are impacted by the role expectations others have of oneself [66]. So far, this dynamic has involved human beings acting as the other, whom one can, at least in principle, debate if one does not agree. However, if the other is an autonomous AI, disabled people are disempowered to push back against the other, as one cannot argue with an autonomous AI entity.
According to identity theory, the perception of ‘self’ is influenced by the role one occupies in the social world [67]. As such, if the role disabled people occupy within the AI/ML social world continues to be mostly that of patient and of therapeutic and non-therapeutic user, it will bias the perception of self towards such patient and user identities and make the exhibition of other identities difficult and problematic. Indeed, disabled people in an open forum discussion on sustainability stated that the medical role of disabled people predominantly present in many discourses hinders the involvement of disabled people in policy discussions [168,169]. In that consultation, many demands were flagged in relation to academics [168]. Although the focus was on sustainable development, these demands are also applicable to AI/ML discourses:
“[w]ork closely with all other stakeholders in the area and undertake research which can provide an evidence base for addressing relevant policy and practice challenges”;
“should undertake research on relevant topics to increase knowledge and understanding of the CRPD and the human rights-based approach to disability, and to develop tools for development programming and planning”;
“monitor CRPD”;
“provide evidence for effective inclusive practices in development/research”;
“to research, publish and interrogate reliable data on disability and ensure that it is disseminated to inform policy and programs and the appropriate levels”;
“to conduct action research to highlight and develop efficient tools and methods to accelerate disability-inclusive policies and practices”;
“teach universal design”;
“capacity building and awareness-raising throughout society” [168] (pp. 4162–4163).
One can predict that some demands, namely to “include disability as a topic in relevant study courses” and to “develop, organize and monitor specific study courses” [168] (pp. 4162–4163), are not met by the current AI/ML discourse although we did not investigate curricula content in our study.
Our data suggest that many of these roles and actions are not met in the AI/ML discourse. It is not enough to discuss disabled people as therapeutic or non-therapeutic users of AI/ML products. These aspects are certainly important, but disabled people have more at stake than access to AI/ML influenced products and processes, and the role and identity narrative must move beyond understanding disabled people only as therapeutic and non-therapeutic users of AI/ML influenced products and processes.

4.2. Parts 2 and 3: Tone of Coverage and Role, Identity, and Stake Narrative

The tone of coverage is another factor that influences how AI/ML is discussed and what is said or not said but could have been said [50,51,52,53]. We found that academic abstracts, newspaper articles, and Twitter tweets mostly exhibited a techno-optimistic tone. Situations such as where disabled people are not direct users but are impacted by bad design of autonomous AI, as outlined in an example of a sidewalk robot [170], were not present. This fits with the techno-optimistic focus of the literature covered.
Furthermore, all sources engaged in a techno-optimistic way with disabled people in their roles and identities as therapeutic and non-therapeutic users and with the stakes linked to these roles and identities. The mostly techno-optimistic tone and the limited role, identity, and stake narrative come with consequences.

4.2.1. The Issue of Techno-Optimism

The report “Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research” [34] lists the following as essential research topics: economic impact, inclusion, and equality; AI, labor, and the economy; governance and accountability; social and societal influences of AI; AI morality and values; AI and “social good”; and managing AI risk. All these topics should include a focus on the negative impact of AI/ML on disabled people. However, our data suggest that these topics are not researched with disabled people in mind. Indeed, one would not cover these topics within a techno-optimistic tone. The same is true for the many impacts and principles listed in “appendix 2” of the same report [34]. Most of these impacts and principles are relevant to disabled people and, as such, should also be dealt with in relation to disabled people, but they cannot be if the coverage is predominantly techno-optimistic, as we found.
The same report acknowledges that “different publics concern themselves with different problems and have different perspectives on the same issues” [34] (p. 56) and that technology can be a threat or an opportunity [34]. However, different publics also have different issues at stake even if they work on the same problem. For example, there is increasing literature indicating that many disabled people feel left out of, and not taken into account in, climate change discussions; although disabled people also care about the environment and climate change, the discourse often instrumentalizes disabled people (for example, by using a medical identity of ‘disability’ [171]) or does not take into account the impact of demanded climate change actions on disabled people [172]. The AI/ML discourse in all three sources we covered does not come close to what is needed to understand the issues disabled people already face and increasingly will face in relation to AI/ML advancements.
Justice is mentioned in many documents linked to AI governance [21,22,23,24,25] as is solidarity [23,25] and equity or equality [23,25,173] demanding a differentiated engagement with disabled people beyond the techno-optimistic tone we found in our study.
Various countries have AI strategies [19,20]. However, of the 26 strategies listed [19,20], only five mention disabled people, and these mentions rest on a techno-optimistic view of AI/ML and disabled people [174,175,176,177,178,179].
Various AI strategies and reports mention the media [177,179,180,181], although none mention the media in relation to disabled people. Our findings suggest that readers of the Canadian newspapers and AI tweets covered will rarely be prompted to think about inappropriate use or negative consequences of AI/ML, in all their forms, for disabled people. A UK report states that many AI researchers and witnesses connected to AI developments felt that the public had too negative a view of AI and its implications and that more positive coverage was needed [179] (for the same point, see also the New Zealand report [180]). Our findings do not support these views, given the techno-optimistic tone of coverage we found. If the AI researchers and witnesses connected with AI development are correct, then there is a two-tiered system of reporting on AI: one related to non-disabled people and one related to disabled people. Such a hierarchy is not surprising. A recent study looking at the coverage of robotics in academic literature and newspapers made the point that many discussions exist surrounding the negative impact of robotics on the employment situation of non-disabled people, while also revealing that coverage of the impact of robotics on the employment situation of disabled people was highly techno-optimistic [182].

4.2.2. Linking Techno-Optimism to the Role, Identity, and Stake Narrative

A techno-optimistic tone facilitates the role and identity of disabled people as potential therapeutic and non-therapeutic users, both of which can be seen as consumer identities, and a stake narrative that is linked to the user angle. As important as the therapeutic and non-therapeutic user role and identity are, there is more to the relationship between AI/ML and disabled people. Indeed, the research topics, principles, and impacts mentioned in the report by Whittlestone and co-workers, “Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research” [34], cannot be dealt with if the role, identity, and stake narratives linked to disabled people are purely from a user angle. Such a narrative ignores the identity problems that disabled people have flagged for so long. The consumer identity is engaged with both outside of [183] and in relation to disabled people [184]. Within techno-optimistic coverage, the consumer identity diminishes disabled people to being cheerleaders of techno-advancements, with the only problem being the lack of access to a given technology and with technology consumption being the solution to the problems disabled people face. A consumer identity, especially in conjunction with a techno-optimistic narrative, is not enough to prevent the problems AI/ML might pose for disabled people. It might work for products such as webpages and computers and for therapeutics, but it does not work for societal problems disabled people might face due to AI/ML advancements, such as war and conflict, changing occupational landscapes, and the social and societal influences of AI/ML in general.
If oneself and others see disabled people only within a consumer identity, this will limit what other roles oneself and others see disabled people as occupying in relation to AI/ML discourses. It furthermore influences the scope of a given role, such as knowledge producer or influencer of AI/ML ethics and governance discourses. It also entails the danger that others instrumentalize disabled people who embrace the consumer identity in order to question disabled people who look at issues beyond consumerism. As such, AI/ML discourses that focus so exclusively on therapeutic and non-therapeutic consumer identities influence intergroup relationships [68,69] between disabled and non-disabled people within the AI/ML discourses and between disability groups and disabled people exhibiting different identities, roles, and stakes.
If the consumer identity is predominantly used, it will, in accordance with stakeholder theory, impact stakeholder salience and stakeholder identification [72]. It will impact for which aspects of AI/ML advancements disabled people see themselves, or are seen by others in AI/ML discourses, as stakeholders and what is seen as being at stake for disabled people.
Considering the three main factors Mitchell and colleagues use to identify a stakeholder, namely “(1) the stakeholder’s power to influence the firm, (2) the legitimacy of the stakeholder’s relationship with the firm, and (3) the urgency of the stakeholder’s claim on the firm” [72] (p. 853), the techno-optimistic tone and the consumer role and identity narrative raise the question: if the only stakeholder power of disabled people is linked to being consumers, can they only influence AI/ML discussions linked to the consumer angle? Discussions of the social, ethical, and governance aspects of AI/ML might then not be seen as needing to involve disabled people as stakeholders, and disabled people might not be seen as being at risk; yet under Clarkson’s definition of the involuntary stakeholder [74], disabled people are stakeholders precisely because the actions of AI/ML actors put them at risk.

4.2.3. Techno-Optimism, User Narrative, and the Issue of Governance

Public engagement is seen as a necessary component of AI advancements [43,45,46] and AI governance. Many problems have been highlighted that diminish the opportunity for the involvement of disabled people in public policy and governance engagements [49,60], including the very medical imagery of disabled people [168,169]. Our findings suggest that the overall role expectation and identity of being therapeutic or non-therapeutic users and the techno-optimistic tone are two further barriers to the involvement of disabled people in the governance and ethics debates around the societal aspects of AI/ML.
In 2018, the Canadian government started a “national consultation to reinvigorate Canada’s support for science and to position Canada as a global leader in research excellence” [185]. One of the three main consultation areas is “Strengthen equity, diversity and inclusion in research”, and one pillar of that strategy is “Equitable participation: Increase participation of researchers from underrepresented groups in the research enterprise” [185]. Disabled people are an underrepresented group in the research enterprise. To entice disabled people to perform research, they need to be recognized in a broader role and identity narrative than that of user of AI/ML products and processes. Similarly, universities and funders need to be enticed to support research that covers the breadth of the impact of AI/ML on disabled people.
Our data suggest that although the Canadian AI strategy has the goal of being a globally recognized leader in the economic, ethical, policy, and legal implications of AI [186], the strategy does not yet indicate an awareness that disabled people are missing from AI/ML discourse, which might be a reflection of the dominant ‘user’ role and identity linked to disabled people, the capacity in which, our data suggest, disabled people are engaged. Various articles outline ways to identify stakeholder groups [75]; however, given that the main identity and role of disabled people is that of users, our data suggest that AI/ML discourses are not identifying disabled people as stakeholders outside of the user label.

4.3. The Social Good Discourse

Given disabled people’s lack of presence in the literature covered, our data suggest the need for more engagement by the “AI for social good” actors with disabled people. Our data also indicate an opportunity for conceptual work on the meaning of “social good” and on the conflicts between social groups in relation to the “social good”. Furthermore, the AI/ML coverage of disabled people in general falls short of the discussions within the “AI for social good” literature.
Cowls et al. argued that the following seven factors are essential for AI for social good: “(1) falsifiability and incremental deployment; (2) safeguards against the manipulation of predictors; (3) receiver-contextualised intervention; (4) receiver-contextualised explanation and transparent purposes; (5) privacy protection and data subject consent; (6) situational fairness; and (7) human-friendly semanticisation” [31] (p. 3). If these seven factors are to work for disabled people, one needs an intricate understanding of disabled people and their situation, which is not provided within the literature covered. Furthermore, one needs to engage with disabled people outside of the therapeutic and non-therapeutic user role.
For example, Cowls et al., under situational fairness, concluded: “6) AI4SG designers should remove from relevant datasets variables and proxies that are irrelevant to an outcome, except when their inclusion supports inclusivity, safety, or other ethical imperatives” [31] (p. 17). Would people even know about the impact on inclusivity or safety in relation to disabled people if disabled people are not systematically engaged? For example, the focus of the 2018 “AI for Good Summit” was to bring stakeholders together to tackle the 17 sustainable development goals (SDGs). However, did they think about how AI could be used in a detrimental way by enabling problematic SDG targets, such as the one evident in paragraph 26 of the 2030 Agenda: “We are committed to the prevention and treatment of non-communicable diseases, including behavioural, developmental and neurological disorders, which constitute a major challenge for sustainable development” [187]? Viewed through a disability rights lens, this goal is certainly questionable.
A recent Dagstuhl workshop on “AI for social good” produced 10 challenges and many topics one has to think about [35]. Nearly all of the challenges and topics indicate that the AI/ML coverage we found is lacking and needs to improve. However, the workshop itself exhibited the same limited framework, covering disabled people only as therapeutic and non-therapeutic users [35], as we identified in our study. According to an IEEE document, respect for human rights as set out in the UN Convention on the Rights of Persons with Disabilities is an important goal of AI [23], which indicates that the gap we found in our study has to be filled both within and outside of the “AI for social good” focus.

5. Conclusions and Future Research

The findings of our study suggest that the role, identity, and stake narrative of disabled people in the AI/ML literature covered was limited. The patient and user roles and identities were the most frequently used, as were stakes in sync with these roles. The social impact of AI/ML on disabled people was not engaged with, and disabled people were seen as knowledge producers only in relation to the usability of AI/ML products, not in relation to the direct or indirect societal impact on disabled people of AI/ML use by others. Ethical issues were not engaged with in relation to disabled people; AI governance and public participation in AI/ML policy development were not topics linked to disabled people; and the concepts of “social good” and “for good” were not engaged with in relation to AI/ML and disabled people.
Furthermore, our study suggests only minor differences between the academic abstracts, newspaper articles, and Twitter tweets covered in how AI/ML was covered in relation to disabled people, suggesting a broader systemic problem rather than a problem with any one source.
Given that the roles one expects of oneself are impacted by the role expectations others have of oneself [66] and given that the perception of ‘self’ is influenced by the role one occupies in the social world [67], the role narrative we found in our data is disempowering for disabled people. Our findings suggest that the role and identity narrative of disabled people in relation to AI/ML must change in academic literature, newspaper articles, and Twitter tweets.
Our findings of a mostly techno-optimistic coverage and a limited role narrative around disabled people might explain why disabled people are covered so limitedly and one-sidedly, if at all, in the AI strategies of various governments. To rectify the problematic findings of our study, our data suggest that we need a systemic change in how the topic of AI/ML and disabled people is engaged with; it is not just a matter of any one group, such as researchers or journalists, having to broaden its focus of reporting on AI/ML and disabled people. Many academic and newspaper articles engage with the negative impact of AI/ML on non-disabled people; indeed, government reports covering AI/ML state that such coverage is too negative [179,180]. If this is the case, the question is why the situation is so different when disabled people are covered. This discrepancy in the tone of coverage, depending on whether one covers disabled people or non-disabled people, is also evident in the robotics coverage (academic articles, newspapers) [182], and as such, our findings suggest a broader systemic problem. So how does one achieve a more diverse and realistic coverage so that disabled people not only benefit from AI/ML advancements but are also not negatively impacted by them?
It is argued that “understanding how AI will impact society requires interdisciplinary research, especially for the social sciences and humanities to understand its lived impacts and our everyday understandings of new technology” [48] (see also [39]). This argument and our findings suggest that there is a need for scholars, including community-based scholars (community members doing the research) [188] and students, to focus on the impact of AI/ML on the lived situation of disabled people beyond the user and techno-optimistic angles. We must better understand why scholars covering the social aspects of disability have not engaged with the topics we found lacking. Another angle of investigation could be why disabled students are not acting as knowledge producers on these topics. Based on a study that investigated the experience of disabled students in postsecondary education [189], we suggest that the experiences reported (feeling medicalized, being hesitant to self-advocate, trying to fit in with the norm) might hinder disabled students from becoming knowledge producers in relation to governance, public engagement, and ethics regarding AI/ML and disabled people. Studies that directly investigate why the topics we found lacking were not engaged with are warranted. Given the Canadian government’s effort to diversify its research force [185], our findings suggest that interviewing people involved in this diversification effort about how to deal with our problematic findings would be useful. We see the Canadian government initiative as an opportunity to deal with some of the systemic problems [49,189] that contribute to the problems we found in our study.
Given our findings, further studies that interview disabled people, AI/ML policy makers, AI/ML academics, AI/ML funders, people in AI/ML governance and AI/ML ethics discussions, and people involved in the development and execution of AI/ML strategies in numerous countries regarding their views on AI/ML and disabled people are warranted; such questions could focus on the discrepancy in the tone of coverage and the limited understanding of the roles of disabled people.
Social media platforms such as Twitter have become increasingly influential [109,110,111]. As such, it is important to understand our Twitter results better in order to change our problematic findings. For example, why was the coverage so overwhelmingly techno-optimistic and focused on disabled people as users, and why did so few tweets indicate the social impact of AI/ML on disabled people? Studies that interview disabled people about how to become public perception influencers in relation to AI/ML, such as on Twitter, and that seek to better understand why we found so few tweets indicating problems of AI/ML for disabled people, are warranted.
Given the problematic findings with newspapers, which remain a source of information read by many people, studies that interview journalism students on their knowledge of AI/ML and disabled people should also be useful.
We also need studies that investigate the views of the “AI for good” and “AI for social good” communities on disabled people, what these communities think about how AI/ML impacts disabled people, and what disabled people think about the concepts of “for good” and “for social good” in general and in relation to AI/ML.
Finally, we think that further review studies using non-English material and grey literature would be useful. Although we expect that the main findings of our study would also be found in other sources, data on sources beyond those we focused on are needed, such as how social media in China, newspapers in India, or academic literature in German-language journals cover disabled people and AI/ML, to name a few possibilities.

Author Contributions

Conceptualization, A.L. and G.W.; methodology, A.L. and G.W.; formal analysis, A.L. and G.W.; investigation, A.L. and G.W.; data curation, A.L. and G.W.; writing—original draft preparation, A.L. and G.W.; writing—review and editing, A.L. and G.W.; supervision, G.W.; project administration, G.W.; funding acquisition, G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Government of Canada, Canadian Institutes of Health Research, Institute of Neurosciences, Mental Health and Addiction ERN 155204 in cooperation with ERA-NET NEURON JTC 2017.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lillywhite, A.; Wolbring, G. Coverage of ethics within the artificial intelligence and machine learning academic literature: The case of disabled people. Assist. Technol. 2019, 1–7. [Google Scholar] [CrossRef] [PubMed]
  2. Feng, R.; Badgeley, M.; Mocco, J.; Oermann, E.K. Deep learning guided stroke management: A review of clinical applications. J. NeuroInterventional Surg. 2017, 10, 358–362. [Google Scholar] [CrossRef] [PubMed]
  3. Ilyasova, N.; Kupriyanov, A.; Paringer, R.; Kirsh, D. Particular Use of BIG DATA in Medical Diagnostic Tasks. Pattern Recognit. Image Anal. 2018, 28, 114–121. [Google Scholar] [CrossRef]
  4. André, Q.; Carmon, Z.; Wertenbroch, K.; Crum, A.; Frank, D.; Goldstein, W.; Huber, J.; Van Boven, L.; Weber, B.; Yang, H. Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data. Cust. Needs Solutions 2017, 5, 28–37. [Google Scholar] [CrossRef] [Green Version]
  5. Deloria, R.; Lillywhite, A.; Villamil, V.; Wolbring, G. How research literature and media cover the role and image of disabled people in relation to artificial intelligence and neuro-research. Eubios J. Asian Int. Bioeth. 2019, 29, 169–182. [Google Scholar]
  6. Hassabis, D.; Kumaran, D.; Summerfield, C.; Botvinick, M. Neuroscience-Inspired Artificial Intelligence. Neuron 2017, 95, 245–258. [Google Scholar] [CrossRef] [Green Version]
  7. Bell, A.J. Levels and loops: The future of artificial intelligence and neuroscience. Philos. Trans. R. Soc. B Biol. Sci. 1999, 354, 2013–2020. [Google Scholar] [CrossRef] [Green Version]
  8. Lee, J. Brain–computer interfaces and dualism: A problem of brain, mind, and body. AI Soc. 2014, 31, 29–40. [Google Scholar] [CrossRef]
  9. Cavazza, M.; Aranyi, G.; Charles, F. BCI Control of Heuristic Search Algorithms. Front. Aging Neurosci. 2017, 11, 225. [Google Scholar] [CrossRef] [Green Version]
  10. Buttazzo, G. Artificial consciousness: Utopia or real possibility? Computer 2001, 34, 24–30. [Google Scholar] [CrossRef]
  11. De Garis, H. Artificial Brains. Inf. Process. Med. Imaging 2007, 8, 159–174. [Google Scholar]
  12. Catherwood, P.; Finlay, D.; McLaughlin, J. Intelligent Subcutaneous Body Area Networks: Anticipating Implantable Devices. IEEE Technol. Soc. Mag. 2016, 35, 73–80. [Google Scholar] [CrossRef]
  13. Meeuws, M.; Pascoal, D.; Bermejo, I.; Artaso, M.; De Ceulaer, G.; Govaerts, P. Computer-assisted CI fitting: Is the learning capacity of the intelligent agent FOX beneficial for speech understanding? Cochlear Implants Int. 2017, 18, 198–206. [Google Scholar] [CrossRef] [PubMed]
  14. Wu, Y.-C.; Feng, J.-W. Development and Application of Artificial Neural Network. Wirel. Pers. Commun. 2017, 102, 1645–1656. [Google Scholar] [CrossRef]
  15. Garden, H.; Winickoff, D. Issues in Neurotechnology Governance. Available online: https://doi.org/10.1787/18151965 (accessed on 26 January 2020).
  16. Crowson, M.G.; Lin, V.; Chen, J.M.; Chan, T.C.Y. Machine Learning and Cochlear Implantation—A Structured Review of Opportunities and Challenges. Otol. Neurotol. 2020, 41, e36–e45. [Google Scholar] [CrossRef]
  17. Wangmo, T.; Lipps, M.; Kressig, R.W.; Ienca, M. Ethical concerns with the use of intelligent assistive technology: Findings from a qualitative study with professional stakeholders. BMC Med. Ethics 2019, 20, 1–11. [Google Scholar] [CrossRef] [Green Version]
  18. Neto, J.S.D.O.; Silva, A.L.M.; Nakano, F.; Pérez-Álcazar, J.J.; Kofuji, S.T. When Wearable Computing Meets Smart Cities. In Smart Cities and Smart Spaces; IGI Global: Hershey, PA, USA, 2019; pp. 1356–1376. [Google Scholar]
  19. Ding, J. Deciphering China’s AI Dream. Available online: https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf (accessed on 26 January 2020).
  20. Dutton, T. An Overview of National AI Strategies. Available online: https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd (accessed on 26 January 2020).
  21. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [Green Version]
  22. Asilomar and AI Conference Participants. Asilomar AI Principles: Principles Developed in Conjunction with the 2017 Asilomar Conference. Available online: https://futureoflife.org/ai-principles/?cn-reloaded=1 (accessed on 26 January 2020).
  23. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (A/IS). Available online: http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf (accessed on 26 January 2020).
  24. Participants in the Forum on the Socially Responsible Development of AI. Montreal Declaration for a Responsible Development of Artificial Intelligence. Available online: https://www.montrealdeclaration-responsibleai.com/the-declaration (accessed on 26 January 2020).
  25. European Group on Ethics in Science and New Technologies. J. Med. Ethics 1998, 24, 247. [CrossRef] [Green Version]
  26. University of Southern California USC Center for Artificial Intelligence in Society. USC Center for Artificial Intelligence in Society: Mission Statement. Available online: https://www.cais.usc.edu/wp-content/uploads/2017/05/USC-Center-for-Artificial-Intelligence-in-Society-Mission-Statement.pdf (accessed on 26 January 2020).
  27. Lehman-Wilzig, S.N. Frankenstein unbound: Towards a legal definition of artificial intelligence. Futures 1981, 13, 442–457. [Google Scholar] [CrossRef]
  28. Brundage, M.; Avin, S.; Clark, J.; Toner, H.; Eckersley, P.; Garfinkel, B.; Dafoe, A.; Scharre, P.; Zeitzoff, T.; Filar, B.; et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Available online: https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf (accessed on 26 January 2020).
  29. Smith, K.J. The AI community and the united nations: A missing global conversation and a closer look at social good. In Proceedings of the AAAI Spring Symposium—Technical Report, Palo Alto, CA, USA, 27–29 March 2017; pp. 95–100. [Google Scholar]
  30. Prasad, M. Back to the future: A framework for modelling altruistic intelligence explosions. In Proceedings of the AAAI Spring Symposium—Technical Report, Palo Alto, CA, USA, 27–29 March 2017; pp. 60–63. [Google Scholar]
  31. Cowls, J.; King, T.; Taddeo, M.; Floridi, L. Designing AI for Social Good: Seven Essential Factors. SSRN Electron. J. 2019, 1–21. [Google Scholar] [CrossRef] [Green Version]
  32. Varshney, K.R.; Mojsilovic, A. Open Platforms for Artificial Intelligence for Social Good: Common Patterns as a Pathway to True Impact. Available online: https://aiforsocialgood.github.io/icml2019/accepted/track1/pdfs/39_aisg_icml2019.pdf (accessed on 26 January 2020).
  33. Ortega, A.; Otero, M.; Steinberg, F.; Andrés, F. Technology Can Help to Right Technology’s Social Wrongs: Elements for a New Social Compact for Digitalisation. Available online: https://t20japan.org/policy-brief-technology-help-right-technology-social-wrongs/ (accessed on 26 January 2020).
  34. Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Dihal, K.; Cave, S. Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research. Available online: https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf (accessed on 26 January 2020).
  35. Clopath, C.; De Winne, R.; Emtiyaz Khan, M.; Schaul, T. Report from Dagstuhl Seminar 19082, AI for the Social Good. Available online: http://drops.dagstuhl.de/opus/volltexte/2019/10862/ (accessed on 26 January 2020).
  36. Hager, G.D.; Drobnis, A.; Fang, F.; Ghani, R.; Greenwald, A.; Lyons, T.; Parkes, D.C.; Schultz, J.; Saria, S.; Smith, S.F.; et al. Artificial Intelligence for Social Good. Available online: https://cra.org/ccc/wp-content/uploads/sites/2/2016/04/AI-for-Social-Good-Workshop-Report.pdf (accessed on 26 January 2020).
  37. Berendt, B. AI for the Common Good?! Pitfalls, challenges, and ethics pen-testing. Paladyn J. Behav. Robot. 2019, 10, 44–65. [Google Scholar] [CrossRef] [Green Version]
  38. Efremova, N.; West, D.; Zausaev, D. AI-Based Evaluation of the SDGs: The Case of Crop Detection With Earth Observation Data. SSRN Electron. J. 2019, 1–4. [Google Scholar] [CrossRef] [Green Version]
  39. Canadian Institute for Advanced Research (CIFAR). AI & Society. Available online: https://www.cifar.ca/ai/ai-society (accessed on 26 January 2020).
  40. Gasser, U.; Almeida, V. A Layered Model for AI Governance. IEEE Internet Comput. 2017, 21, 58–62. [Google Scholar] [CrossRef] [Green Version]
  41. Lauterbach, B.; Bonim, A. Artificial Intelligence: A Strategic Business and Governance Imperative. Available online: https://gecrisk.com/wp-content/uploads/2016/09/ALauterbach-ABonimeBlanc-Artificial-Intelligence-Governance-NACD-Sept-2016.pdf (accessed on 26 January 2020).
  42. Rahwan, I. Society-in-the-loop: Programming the algorithmic social contract. Ethics Inf. Technol. 2017, 20, 5–14. [Google Scholar] [CrossRef] [Green Version]
  43. Boyd, M.; Wilson, N. Rapid developments in Artificial Intelligence: How might the New Zealand government respond? Policy Q. 2017, 13, 36–43. [Google Scholar] [CrossRef]
  44. Wang, W.; Siau, K. Artificial Intelligence: A Study on Governance, Policies, and Regulations. Available online: https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1039&context=mwais2018 (accessed on 26 January 2020).
  45. Wilkinson, C.; Bultitude, K.; Dawson, E. “Oh Yes, Robots! People Like Robots; the Robot People Should do Something” Perspectives and Prospects in Public Engagement With Robotics. Sci. Commun. 2010, 33, 367–397. [Google Scholar] [CrossRef] [Green Version]
  46. Stahl, B.C.; Wright, D. Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation. IEEE Secur. Priv. Mag. 2018, 16, 26–33. [Google Scholar] [CrossRef]
  47. European Commission. Report from the High-Level Hearing ‘A European Union Strategy for Artificial Intelligence’. Available online: https://ec.europa.eu/epsc/sites/epsc/files/epsc_-_report_-_hearing_-_a_european_union_strategy_for_artificial_intelligence.pdf (accessed on 26 January 2020).
  48. McKelvey, F. Next Steps for Canadian AI Governance: Reflections on Student Symposium on AI and Human Rights. Available online: http://www.amo-oma.ca/en/2018/05/10/next-steps-for-canadian-ai-governance-reflections-on-student-symposium-on-ai-and-human-rights/ (accessed on 26 January 2020).
  49. Diep, L. Anticipatory Governance, Anticipatory Advocacy, Knowledge Brokering, and the State of Disabled People’s Rights Advocacy in Canada: Perspectives of Two Canadian Cross-Disability Rights Organizations. Master’s Thesis, University of Calgary, Calgary, AB, Canada, September 2017. [Google Scholar]
  50. Fairclough, N. Analysing Discourse: Textual Analysis for Social Research. Available online: https://pdfs.semanticscholar.org/a3cd/84f4fd0d89eda5a15b9f9c7fa01394aca9d9.pdf (accessed on 26 January 2020).
  51. Gill, R. Discourse Analysis. In Qualitative Researching with Text, Image and Sound; Bauer, M.W., Gaskell, G., Eds.; A Practical Handbook; Sage Publications: London, UK, 2000; pp. 172–190. [Google Scholar]
  52. Schröter, M.; Taylor, C. Exploring silence and absence in discourse: Empirical approaches; Springer: New York, NY, USA, 2017; p. 395. [Google Scholar]
  53. Van Dijk, T.A. Discourse, knowledge, power and politics. In Critical Discourse Studies in Context and Cognition; John Benjamins Publishing Company: Amsterdam, The Netherlands, 2011; pp. 27–63. [Google Scholar]
  54. Longmore, P.K. A Note on Language and the Social Identity of Disabled People. Am. Behav. Sci. 1985, 28, 419–423. [Google Scholar] [CrossRef]
  55. Hutchinson, K.; Roberts, C.; Daly, M. Identity, impairment and disablement: Exploring the social processes impacting identity change in adults living with acquired neurological impairments. Disabil. Soc. 2017, 33, 175–196. [Google Scholar] [CrossRef]
  56. Fujimoto, Y.; Rentschler, R.; Le, H.; Edwards, D.; Härtel, C. Lessons Learned from Community Organizations: Inclusion of People with Disabilities and Others. Br. J. Manag. 2013, 25, 518–537. [Google Scholar] [CrossRef]
  57. Wolbring, G. Solutions follow perceptions: NBIC and the concept of health, medicine, disability and disease. Health Law Rev. 2004, 12, 41–47. [Google Scholar]
  58. Yumakulov, S.; Yergens, D.; Wolbring, G. Imagery of Disabled People within Social Robotics Research. Lect. Notes Comput. Sci. 2012, 7621, 168–177. [Google Scholar]
  59. Zhang, L.; Haller, B. Consuming Image: How Mass Media Impact the Identity of People with Disabilities. Commun. Q. 2013, 61, 319–334. [Google Scholar] [CrossRef]
  60. Wolbring, G.; Diep, L. The Discussions around Precision Genetic Engineering: Role of and Impact on Disabled People. Laws 2016, 5, 37. [Google Scholar] [CrossRef] [Green Version]
  61. Barnes, C. Disability Studies: New or not so new directions? Disabil. Soc. 1999, 14, 577–580. [Google Scholar] [CrossRef]
  62. Titsworth, B.S. An Ideological Basis for Definition in Public Argument: A Case Study of the Individuals with Disabilities in Education Act. Argum. Advocacy 1999, 35, 171–184. [Google Scholar] [CrossRef]
  63. Maturo, A. The medicalization of education: ADHD, human enhancement and academic performance. Ital. J. Sociol. Educ. 2013, 5, 175–188. [Google Scholar]
  64. Varul, M.Z. Talcott Parsons, the Sick Role and Chronic Illness. Body Soc. 2010, 16, 72–94. [Google Scholar] [CrossRef] [Green Version]
  65. Wilson, R. The Discursive Construction of Elderly’s Needs—A Critical Discourse Analysis of Political Discussions in Sweden. Available online: http://www.diva-portal.org/smash/get/diva2:1339948/FULLTEXT01.pdf (accessed on 26 January 2020).
  66. Schulz, H.M. Reference group influence in consumer role rehearsal narratives. Qual. Mark. Res. Int. J. 2015, 18, 210–229. [Google Scholar] [CrossRef]
  67. Hogg, M.A.; Terry, D.J.; White, K.M. A Tale of Two Theories: A Critical Comparison of Identity Theory with Social Identity Theory. Soc. Psychol. Q. 1995, 58, 255. [Google Scholar] [CrossRef]
  68. Dirth, T.P.; Branscombe, N.R. Recognizing Ableism: A Social Identity Analysis of Disabled People Perceiving Discrimination as Illegitimate. J. Soc. Issues 2019, 75, 786–813. [Google Scholar] [CrossRef] [Green Version]
  69. Jiang, C.; Vitiello, C.; Axt, J.R.; Campbell, J.T.; Ratliff, K.A. An examination of ingroup preferences among people with multiple socially stigmatized identities. Self Identity 2019, 1–18. [Google Scholar] [CrossRef]
  70. Burke, P.J.; Reitzes, N.C. An Identity Theory Approach to Commitment. Soc. Psychol. Q. 1991, 54, 239. [Google Scholar] [CrossRef]
  71. Crane, A.; Ruebottom, T. Stakeholder Theory and Social Identity: Rethinking Stakeholder Identification. J. Bus. Ethics 2011, 102, 77–87. [Google Scholar] [CrossRef]
  72. Mitchell, R.K.; Agle, B.R.; Wood, D.J. Toward a theory of stakeholder identification and salience: Defining the principle of who and what really counts. Acad. Manag. Rev. 1997, 22, 853–886. [Google Scholar] [CrossRef]
  73. Friedman, A.L.; Miles, S. Stakeholders: Theory and practice; Oxford University Press on Demand: Oxford, UK, 2006; p. 362. [Google Scholar]
  74. Clarkson, M. A risk based model of stakeholder theory. In Proceedings of the Second Toronto Conference on Stakeholder Theory; University of Toronto: Toronto, ON, Canada, 1994; pp. 18–19. [Google Scholar]
  75. Schiller, C.; Winters, M.; Hanson, H.M.; Ashe, M.C. A framework for stakeholder identification in concept mapping and health research: A novel process and its application to older adult mobility and the built environment. BMC Public Health 2013, 13, 428. [Google Scholar] [CrossRef] [Green Version]
  76. Inclezan, D.; Pradanos, L.I. Viewpoint: A Critical View on Smart Cities and AI. J. Artif. Intell. Res. 2017, 60, 681–686. [Google Scholar] [CrossRef]
  77. Einsiedel, E.F. Framing science and technology in the Canadian press. Public Underst. Sci. 1992, 1, 89–101. [Google Scholar] [CrossRef]
  78. Yudkowsky, E. Artificial Intelligence as a Positive and Negative Factor in Global Risk. Available online: https://www.researchgate.net/profile/James_Peters/post/Can_artificial_Intelligent_systems_replace_Human_brain/attachment/59d62a00c49f478072e9cbc4/AS:272471561834509@1441973690551/download/AIPosNegFactor.pdf (accessed on 26 January 2020).
  79. Nierling, L.; João-Maia, M.; Hennen, L.; Bratan, T.; Kuuk, P.; Cas, J.; Capari, L.; Krieger-Lamina, J.; Mordini, E.; Wolbring, G. Assistive technologies for people with disabilities Part III: Perspectives on assistive technologies. Available online: http://www.europarl.europa.eu/RegData/etudes/IDAN/2018/603218/EPRS_IDA(2018)603218(ANN3)_EN.pdf (accessed on 26 January 2020).
  80. Wolbring, G.; Diep, L.; Jotterand, F.; Dubljevic, V. Cognitive/Neuroenhancement Through an Ability Studies Lens. In Cognitive Enhancement; Oxford University Press (OUP): Oxford, UK, 2016; pp. 57–75. [Google Scholar]
  81. Diep, L.; Wolbring, G. Who Needs to Fit in? Who Gets to Stand out? Communication Technologies Including Brain-Machine Interfaces Revealed from the Perspectives of Special Education School Teachers Through an Ableism Lens. Educ. Sci. 2013, 3, 30–49. [Google Scholar] [CrossRef] [Green Version]
  82. Diep, L.; Wolbring, G. Perceptions of Brain-Machine Interface Technology among Mothers of Disabled Children. Disabil. Stud. Q. 2015, 35, 35. [Google Scholar] [CrossRef]
  83. Garlington, S.B.; Collins, M.E.; Bossaller, M.R.D. An Ethical Foundation for Social Good: Virtue Theory and Solidarity. Res. Soc. Work. Pr. 2019, 30, 196–204. [Google Scholar] [CrossRef]
  84. Singell, L.; Engell, J.; Dangerfield, A. Saving Higher Education in the Age of Money. Academe 2006, 92, 67. [Google Scholar] [CrossRef]
  85. Gerrard, H. Skills as Trope, Skills as Target: Universities and the Uncertain Future. N. Z. J. Educ. Stud. 2017, 52, 363–370. [Google Scholar] [CrossRef]
  86. Blanco, P.T. Volver a donde nunca se estuvo. Pacto social, felicidad pública y educación en Chile (c.1810-c.2010). Araucaria 2017, 19, 323–344. [Google Scholar] [CrossRef]
  87. Wahid, N.A.; Alias, N.H.; Takara, K.; Ariffin, S.K. Water as a business: Should water tariff remain? Descriptive analyses on Malaysian households’ socio-economic background. Int. J. Econ. Res. 2017, 14, 367–375. [Google Scholar]
  88. Walker, G. Health as an Intermediate End and Primary Social Good. Public Health Ethics 2017, 11, 6–19. [Google Scholar] [CrossRef]
  89. Riches, G. First World Hunger: Food Security and Welfare Politics; Springer: New York, NY, USA, 1997; p. 200. [Google Scholar]
  90. Morrison, A. Contributive justice: Social class and graduate employment in the UK. J. Educ. Work. 2019, 32, 335–346. [Google Scholar] [CrossRef]
  91. Daniels, N. Equity of Access to Health Care: Some Conceptual and Ethical Issues. Milbank Mem. Fund Q. Health Soc. 1982, 60, 51. [Google Scholar] [CrossRef]
  92. Castiglioni, C.; Lozza, E.; Bonanomi, A. The Common Good Provision Scale (CGP): A Tool for Assessing People’s Orientation towards Economic and Social Sustainability. Sustainability 2019, 11, 370. [Google Scholar] [CrossRef] [Green Version]
  93. Rioux, M.; Zubrow, E. Social disability and the public good. In The Market or The Public Domain; Routledge: Abingdon-on-Thames, UK, 2005; pp. 162–186. [Google Scholar]
  94. Bogomolov, A.; Lepri, B.; Staiano, J.; Letouzé, E.; Oliver, N.; Pianesi, F.; Pentland, A. Moves on the Street: Classifying Crime Hotspots Using Aggregated Anonymized Data on People Dynamics. Big Data 2015, 3, 148–158. [Google Scholar] [CrossRef]
  95. Bryant, C.; Pham, A.T.; Remash, H.; Remash, M.; Schoenle, N.; Zimmerman, J.; Albright, S.D.; Rebelsky, S.A.; Chen, Y.; Chen, Z.; et al. A Middle-School Camp Emphasizing Data Science and Computing for Social Good. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education—SIGCSE ’19; Association for Computing Machinery (ACM), Minneapolis, MN, USA, February 2019; pp. 358–364. [Google Scholar]
  96. Iyer, L.S.; Dissanayake, I.; Bedeley, R.T. “RISE IT for Social Good”—An experimental investigation of context to improve programming skills. In Proceedings of the 2017 ACM SIGMIS Conference on Computers and People Research, Bangalore, India, 21–23 June 2017; pp. 49–52. [Google Scholar]
  97. Chen, Y.; Rebelsky, S.A.; Chen, Z.; Gumidyala, S.; Koures, A.; Lee, S.; Msekela, J.; Remash, H.; Schoenle, N.; Albright, S.D. A Middle-School Code Camp Emphasizing Digital Humanities. In Proceedings of the SIGCSE ’19: The 50th ACM Technical Symposium on Computer Science Education, Minneapolis, MN, USA, 27 February–2 March 2019. [Google Scholar]
  98. Fisher, D.H.; Cameron, J.; Clegg, T.; August, S. Integrating Social Good into CS Education. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education, SIGCSE 2018, Baltimore, MD, USA, 21–24 February 2018. [Google Scholar]
  99. Goldweber, M. Strategies for Adopting CSG-Ed In CS 1. In Proceedings of the 2018 Research on Equity and Sustained Participation in Engineering, Computing, and Technology (RESPECT), Baltimore, MD, USA, 21–21 February 2018; pp. 1–2. [Google Scholar]
  100. Shi, Z.R.; Wang, C.; Fang, F. Artificial Intelligence for Social Good: A Survey. Available online: https://arxiv.org/pdf/2001.01818.pdf (accessed on 26 January 2020).
  101. Musikanski, L.; Rakova, B.; Bradbury, J.; Phillips, R.; Manson, M. Artificial Intelligence and Community Well-being: A Proposal for an Emerging Area of Research. Int. J. Community Well-Being 2020, 1–17. [Google Scholar] [CrossRef] [Green Version]
  102. United Nations. Convention on the Rights of Persons with Disabilities (CRPD). Available online: https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities.html (accessed on 26 January 2020).
  103. Thompson, W.S. Eugenics and the Social Good. Soc. Forces 1925, 3, 414–419. [Google Scholar] [CrossRef]
  104. Graby, S. Access to work or liberation from work? Disabled people, autonomy, and post-work politics. Can. J. Disabil. Stud. 2015, 4, 132. [Google Scholar] [CrossRef] [Green Version]
  105. Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Methodol. 2005, 8, 19–32. [Google Scholar] [CrossRef] [Green Version]
  106. Anderson, S.; Allen, P.; Peckham, S.; Goodwin, N. Asking the right questions: Scoping studies in the commissioning of research on the organisation and delivery of health services. Health Res. Policy Syst. 2008, 6, 7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  107. News Media Canada. FAQ. Available online: https://nmc-mic.ca/about-newspapers/faq/ (accessed on 26 January 2020).
  108. News Media Canada. Snapshot 2016 Daily Newspapers. Available online: https://nmc-mic.ca/wp-content/uploads/2015/02/Snapshot-Fact-Sheet-2016-for-Daily-Newspapers-3.pdf (accessed on 26 January 2020).
  109. Ye, S.; Wu, S.F. Measuring Message Propagation and Social Influence on Twitter.com. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6430, pp. 216–231. [Google Scholar]
  110. Kim, Y.; Chandler, J.D. How social community and social publishing influence new product launch: The case of Twitter during the PlayStation 4 and Xbox One launches. J. Mark. Theory Pract. 2018, 26, 144–157. [Google Scholar] [CrossRef]
  111. Zannettou, S.; Caulfield, T.; De Cristofaro, E.; Sirivianos, M.; Stringhini, G.; Blackburn, J. Disinformation Warfare: Understanding State-Sponsored Trolls on Twitter and Their Influence on the Web. Available online: https://arxiv.org/abs/1801.09288 (accessed on 26 February 2020).
  112. Young, K.; Ashby, D.; Boaz, A.; Grayson, L. Social Science and the Evidence-based Policy Movement. Soc. Policy Soc. 2002, 1, 215–224. [Google Scholar] [CrossRef] [Green Version]
  113. Bowen, S.; Zwi, A. Pathways to “Evidence-Informed” Policy and Practice: A Framework for Action. PLoS Med. 2005, 2, e166. [Google Scholar] [CrossRef] [Green Version]
  114. Head, B. Three Lenses of Evidence-Based Policy. Aust. J. Public Adm. 2008, 67, 1–11. [Google Scholar] [CrossRef]
  115. Davis, K.; Drey, N.; Gould, D. What are scoping studies? A review of the nursing literature. Int. J. Nurs. Stud. 2009, 46, 1386–1400. [Google Scholar] [CrossRef]
  116. Burwell, S.; Sample, M.; Racine, E. Ethical aspects of brain computer interfaces: A scoping review. BMC Med. Ethics 2017, 18, 60. [Google Scholar] [CrossRef]
  117. Hsieh, H.-F.; Shannon, S.E. Three Approaches to Qualitative Content Analysis. Qual. Health Res. 2005, 15, 1277–1288. [Google Scholar] [CrossRef]
  118. Edling, S.; Simmie, G.M. Democracy and emancipation in teacher education: A summative content analysis of teacher educators’ democratic assignment expressed in policies for Teacher Education in Sweden and Ireland between 2000-2010. Citizenship Soc. Econ. Educ. 2017, 17, 20–34. [Google Scholar] [CrossRef] [Green Version]
  119. Ahuvia, A. Traditional, Interpretive, and Reception Based Content Analyses: Improving the Ability of Content Analysis to Address Issues of Pragmatic and Theoretical Concern. Soc. Indic. Res. 2001, 54, 139–172. [Google Scholar] [CrossRef]
  120. Cullinane, K.; Toy, N. Identifying influential attributes in freight route/mode choice decisions: A content analysis. Transp. Res. Part E: Logist. Transp. Rev. 2000, 36, 41–53. [Google Scholar] [CrossRef]
  121. Downe-Wamboldt, B. Content analysis: Method, applications, and issues. Health Care Women Int. 1992, 13, 313–321. [Google Scholar] [CrossRef] [PubMed]
  122. Woodrum, E. “Mainstreaming” Content Analysis in Social Science: Methodological Advantages, Obstacles, and Solutions. Soc. Sci. Res. 1984, 13, 1. [Google Scholar] [CrossRef]
  123. Clarke, V.; Braun, V. Thematic Analysis. In Encyclopedia of Critical Psychology; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2014; pp. 1947–1952. [Google Scholar]
  124. Baxter, P.; Jack, S. Qualitative case study methodology: Study design and implementation for novice researchers. Qual. Rep 2008, 13, 544–559. [Google Scholar]
  125. Lincoln, Y.S.; Guba, E.G.; Pilotta, J.J. Naturalistic inquiry. Int. J. Intercult. Relations 1985, 9, 438–439. [Google Scholar] [CrossRef]
  126. Shenton, A. Strategies for ensuring trustworthiness in qualitative research projects. Educ. Inf. 2004, 22, 63–75. [Google Scholar] [CrossRef] [Green Version]
  127. Miesenberger, K.; Ossmann, R.; Archambault, D.; Searle, G.; Holzinger, A. More Than Just a Game: Accessibility in Computer Games. Available online: https://www.researchgate.net/publication/221217630_More_Than_Just_a_Game_Accessibility_in_Computer_Games (accessed on 26 February 2020).
  128. Dengler, S.; Awad, A.; Dressler, F. Sensor/Actuator Networks in Smart Homes for Supporting Elderly and Handicapped People. In Proceedings of the 21st International Conference on Advanced Information Networking and Applications Workshops (AINAW’07), Niagara Falls, ON, Canada, 21–23 May 2007. [Google Scholar]
  129. Mintz, J.; Gyori, M.; Aagaard, M. Touching the Future Technology for Autism?: Lessons from the HANDS Project; IOS Press: Amsterdam, The Netherlands, 2012; p. 135. [Google Scholar]
  130. Agangiba, M.A.; Nketiah, E.B.; Agangiba, W.A. Web Accessibility for the Visually Impaired: A Case of Higher Education Institutions’ Websites in Ghana. Lect. Notes Comput. Sci. 2017, 10473, 147–153. [Google Scholar]
  131. Brewer, J. Exploring Paths to a More Accessible Digital Future. In Proceedings of the ASSETS ’18: 20th International ACM SIGACCESS Conference on Computers and Accessibility, Galway, Ireland, 22–24 October 2018. [Google Scholar]
  132. Venter, A.; Renzelberg, G.; Homann, J.; Bruhn, L. Which Technology Do We Want? Ethical Considerations about Technical Aids and Assisting Technology. Lect. Notes Comput. Sci. 2008, 5105, 1325–1331. [Google Scholar]
  133. Lasecki, W.S. Crowdsourcing for deployable intelligent systems. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, Bellevue, WA, USA, 14–18 July 2013. [Google Scholar]
  134. Bruhn, L.; Homann, J.; Renzelberg, G. Participation in Development of Computers Helping People. Lect. Notes Comput. Sci. 2006, 4061, 532–535. [Google Scholar]
  135. Braffort, A. Research on Computer Science and Sign Language: Ethical Aspects. Lect. Notes Comput. Sci. 2002, 2298, 1–8. [Google Scholar]
  136. Adams, K.; Encarnação, P.; Rios-Rincón, A.M.; Cook, A. Will artificial intelligence be a blessing or concern in assistive robots for play? J. Hum. Growth Dev. 2018, 28, 213–218. [Google Scholar] [CrossRef] [Green Version]
  137. Kirka, D. Talking Gloves, Tactile Windows: AI Helps the Disabled. Available online: https://apnews.com/f67a9be7cc77406ab84baf20f4d739f3/Talking-gloves,-tactile-windows:-new-tech-helps-the-disabled (accessed on 25 February 2020).
  138. Stonehouse, D. The cyborg evolution: Kevin Warwick’s experiments with implanting chips to talk to computers isn’t as far-fetched as one might think. The Ottawa Citizen, 28 March 2002; F2. [Google Scholar]
  139. Dingman, S. Even Stephen Hawking fears the rise of machines. The Globe and Mail, 3 December 2014. [Google Scholar]
  140. Rivers, H. Automation: ’very big threat and necessary evil’. Tillsonburg News, 4 July 2018; A6. [Google Scholar]
  141. Michigan Medicine. “This technology has great potential to help people with disabilities, but it also has potential for misuse and unintended consequences.” Learn more about why ethics are important as brain implants and artificial intelligence merge. Tweet, 2017. Available online: https://twitter.com/umichmedicine/status/932601930899165186 (accessed on 25 February 2020).
  142. grumpybrummie. Totally unethical #uber. People with disabilities next? Plus AI should be used for the greater good. Imagine the outcry if someone patented AI that spotted potential suicides and patented it! The #bbc picked this up and did not note the significance. Tweet, 2018. Available online: https://twitter.com/grumpybrummie/status/1006507671141388288 (accessed on 25 February 2020).
  143. Celestino Güemes @tguemes. This is great. I expect they include some money on ethical aspects of AI, to fight “artificial bias amplification” that impact all of us. “Microsoft commits $25M over 5 years for new ‘AI for Accessibility’ initiative to help people with disabilities”. Tweet, 2018. Available online: https://twitter.com/tguemes/status/993724895405203456 (accessed on 25 February 2020).
  144. MyOneWomanShow. @DavidLepofsky: AI can create barriers for people with disabilities. Says we need to take action in order to mitigate that risk. #AIsocialgood #ipOZaichallenge @lawyersdailyca. Tweet, 2018.
  145. atlaak. Microsoft will use AI to help people with disabilities deal with challenges in three key areas: Employment, human connection and modern life. Tweet, 2018. Available online: https://twitter.com/MyOneWomanShow/status/959523216573304833 (accessed on 25 February 2020).
  146. MrTopple. Why have you done this again to @NicolaCJeffery, @TwitterSupport? We’ve been here before with your AI/algorithms intentionally targeting disabled people. And you’ve done it again. You know this apparent discrimination is unacceptable yes? #DisabilityRights. Tweet, 2018. Available online: https://twitter.com/MrTopple/status/983473326797574145 (accessed on 25 February 2020).
  147. Jenny_L_Davis. “AI performs tasks according to existing formations in the social order, amplifying implicit biases, and ignoring disabled people or leaving them exposed”. Tweet, 2018. Available online: https://twitter.com/Jenny_L_Davis/status/964619430268354560 (accessed on 25 February 2020).
  148. PeaceGeeks. There is increasing evidence that women, ethnic minority, people with disabilities and LGBTQ experience discrimination by biased algorithms. How do we make sure that artificial intelligence doesn’t further marginalize these groups? Tweet. 2017. Available online: https://twitter.com/PeaceGeeks/status/1020023012563746816 (accessed on 25 February 2020).
  149. SFdirewolf. Content warning: Suicide, suicidal ideation MT Facebook’s AI suicide prevention tool raises concerns for people with mental. Tweet, 2017. Available online: https://twitter.com/SFdirewolf/status/940235227682701313 (accessed on 25 February 2020).
  150. jont. More bad news for disabled people, I want to know how @hirevue and personality test tools like @saberruk avoid AI driven discrimination. Tweet, 2017. Available online: https://twitter.com/jont/status/905392476952944641 (accessed on 25 February 2020).
  151. karineb. Real problematic: #AI ranks a job applicant with 25,000 criteria. Do you smile? do you make an eye contact? Good. What if not? what chance does this process give to people who are not behaving like the mainstream? of people with disabilities? Real problematic. Tweet, 2018. Available online: https://twitter.com/karineb/status/991644766071861248 (accessed on 25 February 2020).
  152. Bühler, C. Design for All—From Idea to Practise. Lect. Notes Comput. Sci. 2008, 5105, 106–113. [Google Scholar]
  153. Hubert, M.; Bühler, C.; Schmitz, W. Implementing UNCRPD—Strategies of Accessibility Promotion and Assistive Technology Transfer in North Rhine-Westphalia. Lect. Notes Comput. Sci. 2016, 9758, 89–92. [Google Scholar]
  154. Constantinou, V.; Kosmas, P.; Parmaxi, A.; Ioannou, A.; Klironomos, I.; Antona, M.; Stephanidis, C.; Zaphiris, P. Towards the Use of Social Computing for Social Inclusion: An Overview of the Literature. Lect. Notes Comput. Sci. 2018, 10924, 376–387. [Google Scholar]
  155. Treviranus, J.; Clark, C.; Mitchell, J.; Vanderheiden, G.C. Prosperity4All—Designing a Multi-Stakeholder Network for Economic Inclusion. Univers. Access Hum.-Comput. Interact. Aging Assist. Environ. 2014, 8516, 453–461. [Google Scholar]
  156. Gjøsæter, T.; Radianti, J.; Chen, W. Universal Design of ICT for Emergency Management. Lect. Notes Comput. Sci. 2018, 10907, 63–74. [Google Scholar]
  157. Dalmau, F.V.; Redondo, E.; Fonseca, D. E-Learning and Serious Games. Lect. Notes Comput. Sci. 2015, 9192, 632–643. [Google Scholar]
  158. juttatrevira. Replying to @juttatrevira @melaniejoly @bigideaproj @CQualtro training AI to serve “outliers”, including people with disabilities results in better prediction, planning, risk aversion & design Tweet. 2017. Available online: https://twitter.com/juttatrevira/status/867387763808718852 (accessed on 25 February 2020).
  159. SmartCitiesL. “People with #disabilities must be involved in the design & development of #SmartCities and #AI because we will build-in #accessibility and human-centered innovation from the beginning in a way that businesses and non-disabled techies could never think of otherwise,” says @DLBLLC. Tweet, 2018. Available online: https://twitter.com/juttatrevira/status/867387763808718852 (accessed on 25 February 2020).
  160. SPMazrui. @HonTonyCoelho urges business to include people with disabilities in the development of AI. Applauds companies like Apple for including people with disabilities and making products better for everyone! @AppleNews @USBLN. Tweet, 2018. Available online: https://twitter.com/SPMazrui/status/1016716434674552832 (accessed on 25 February 2020).
  161. newinquiry. “There are Innovations for Disabled People being made in the Field of aCcessible Design and Medical Technologies, Such As AI Detecting Autism (Again). However, in These Narratives, Technologies Come First—As “Helping People with Disabilities”. Tweet, 2018. Available online: https://twitter.com/newinquiry/status/961974216643022848 (accessed on 25 February 2020).
  162. zagbah. AI may help improve the lives of Disabled people who can afford to access the technology. Tech ain’t free & Dis folk are typically poor. Tweet, 25 February 2017. Available online: https://twitter.com/zagbah/status/892180529587642369 (accessed on 25 February 2020).
  163. The Calgary Sun. Google to give $25 million to fund humane AI projects. The Calgary Sun, 30 October 2018; A47. [Google Scholar]
  164. Canada NewsWire. UAE Launches ’AI and Robotics Award for Good’ Competition to Transform Use of Robotics and Artificial Intelligence. Available online: https://www.prnewswire.com/news-releases/uae-launches-ai-and-robotics-award-for-good-competition-to-transform-use-of-robotics-and-artificial-intelligence-291430031.html (accessed on 25 February 2020).
  165. Mulley, A.; Gelijns, A. The patient’s stake in the changing health care economy. In Technology and Health Care in an Era of Limits; National Academy Press: Washington, DC, USA, 1992; pp. 153–163. [Google Scholar]
  166. Eschler, J.; O’Leary, K.; Kendall, L.; Ralston, J.D.; Pratt, W. Systematic Inquiry for Design of Health Care Information Systems: An Example of Elicitation of the Patient Stakeholder Perspective. In Proceedings of the 2015 48th Hawaii International Conference on System Sciences, Kauai, HI, USA, 5–8 January 2015. [Google Scholar]
  167. Reuters. Amazon scraps secret AI recruiting tool that showed bias against women. Available online: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (accessed on 26 January 2020).
  168. Wolbring, G.; Mackay, R.; Rybchinski, T.; Noga, J. Disabled People and the Post-2015 Development Goal Agenda through a Disability Studies Lens. Sustainability 2013, 5, 4152–4182. [Google Scholar] [CrossRef] [Green Version]
  169. Participants of the UN Department of Economic and Social Affairs (UNDESA) and UNICEF Organized Online Consultation, 8 March–5 April. Disability Inclusive Development Agenda towards 2015 & Beyond. Available online: http://www.un.org/en/development/desa/news/social/disability-inclusive-development.html (accessed on 26 January 2020).
  170. Ackerman, E. My Fight With a Sidewalk Robot. Available online: https://www.citylab.com/perspective/2019/11/autonomous-technology-ai-robot-delivery-disability-rights/602209/ (accessed on 26 January 2020).
  171. Wolbring, G. Ecohealth Through an Ability Studies and Disability Studies Lens. In Understanding Emerging Epidemics: Social and Political Approaches; Emerald: Bingley, West Yorkshire, England, 2013; pp. 91–107. [Google Scholar]
  172. Fenney, D. Ableism and Disablism in the UK Environmental Movement. Environ. Values 2017, 26, 503–522. [Google Scholar] [CrossRef]
  173. Yuste, R.; Goering, S.; Arcas, B.A.Y.; Bi, G.-Q.; Carmena, J.M.; Carter, A.; Fins, J.J.; Friesen, P.; Gallant, J.; Huggins, J.; et al. Four ethical priorities for neurotechnologies and AI. Nature News 2017, 551, 159–163. [Google Scholar] [CrossRef] [Green Version]
  174. Government of Italy. White Paper: Artificial Intelligence at the service of the citizen. Available online: https://ai-white-paper.readthedocs.io/en/latest/doc/capitolo_3_sfida_7.html (accessed on 26 January 2020).
  175. Executive Office of the President National Science and Technology Council Committee on Technology. Preparing for the future of artificial intelligence. Available online: https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf (accessed on 26 January 2020).
  176. Government of France. Artificial Intelligence: “Making France a Leader”. Available online: https://www.gouvernement.fr/en/artificial-intelligence-making-france-a-leader (accessed on 26 January 2020).
  177. Villani, C. For a meaningful artificial intelligence towards a French and European strategy. Available online: https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf (accessed on 26 January 2020).
  178. European Commission. Artificial Intelligence for Europe. Available online: http://ec.europa.eu/newsroom/dae/document.cfm?doc_id=51625 (accessed on 26 January 2020).
  179. House of Lords Select Committee on Artificial Intelligence. AI in the UK: Ready, willing and able? Available online: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf (accessed on 26 January 2020).
  180. AI Forum New Zealand. Shaping a Future New Zealand An Analysis of the Potential Impact and Opportunity of Artificial Intelligence on New Zealand’s Society and Economy. Available online: https://aiforum.org.nz/wp-content/uploads/2018/07/AI-Report-2018_web-version.pdf (accessed on 26 January 2020).
  181. Executive Office of the President (United States). Artificial intelligence, automation, and the economy. Available online: https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF (accessed on 26 January 2020).
  182. Wolbring, G. Employment, Disabled People and Robots: What Is the Narrative in the Academic Literature and Canadian Newspapers? Societies 2016, 6, 15. [Google Scholar] [CrossRef] [Green Version]
  183. Christopherson, S. The Fortress City: Privatized Spaces, Consumer Citizenship. Post-Fordism 2008, 409–427. [Google Scholar]
  184. Kelly, C. Wrestling with Group Identity: Disability Activism and Direct Funding. Disabil. Stud. Q. 2010, 30, 30. [Google Scholar] [CrossRef]
  185. Canada Research Coordinating Committee. Canada Research Coordinating Committee Consultation - Key Priorities. Available online: http://www.sshrc-crsh.gc.ca/CRCC-CCRC/priorities-priorites-eng.aspx#edi (accessed on 26 January 2020).
  186. Canadian Institute for Advanced Research (CIFAR). Pan-Canadian Artificial Intelligence Strategy. Available online: https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy (accessed on 26 January 2020).
  187. United Nations. Transforming our World: The 2030 Agenda for Sustainable Development. Available online: https://stg-wedocs.unep.org/bitstream/handle/20.500.11822/11125/unep_swio_sm1_inf7_sdg.pdf?sequence=1 (accessed on 25 February 2020).
  188. Wolbring, G.; Djebrouni, M.; Johnson, M.; Diep, L.; Guzman, G. The Utility of the “Community Scholar” Identity from the Perspective of Students from one Community Rehabilitation and Disability Studies Program. Interdiscip. Perspect. Equal. Divers. 2018, 4, 1–22. [Google Scholar]
  189. Hutcheon, E.J.; Wolbring, G. Voices of “disabled” post secondary students: Examining higher education “disability” policy using an ableism lens. J. Divers. High. Educ. 2012, 5, 39–49. [Google Scholar] [CrossRef] [Green Version]
1. We acknowledge that there is an ongoing discussion about whether one should use people-first language (“people with disabilities” instead of the phrase “disabled people”). We use both types of phrases in our search strategies in order not to miss articles, but we use “disabled people” instead of people-first language in our own writing.
Figure 1. Flow chart of the selection of academic abstracts and full-text newspaper articles for qualitative analysis.
Figure 2. Flow chart of the selection of Twitter tweets for qualitative analysis.
