1. Introduction
Over the past thirty years a number of studies have looked at gender differences with respect to the use of Information and Communications Technology (ICT) [1,2], and significant gender effects in access, motivation, and user behavior have been described [3,4,5]. More recent sources still find “(e)stablished power structures in most societies reinforce masculine advantages” [6] (p. 33). There is also bias against older people in technology design [7,8,9].
Discrimination and societal stereotypes against specific groups still affect a large number of people today. These may be unconscious and situated in practices that act to exclude the “others” based on gender, age, disability, cultural background and more [6]. Generally, technological differences between males and females are still profound in terms of computer literacy, interest, self-competence, self-efficacy, the likelihood of choosing computer science as a career, and attitudes towards programming [3,10]. Persistent arguments that older people tend to only reluctantly use and adopt new technologies have been found to be partially wrong [11,12], yet they can deepen the existing situation through information exclusion. These stereotypes have very complex social roots. It is argued that there is more to the male dominance of technology than power [13,14], which requires a sound understanding of the context within which gender and age constructs appear. In this regard, technology is understood as a masculine culture [15,16]. For these purposes, it is vital to reflect on implicit biases related to social prejudice, be they gender stereotypes [17] or racial and age evaluations [18,19]. By exposing implicit stereotypes, it may be possible to identify ways to counter them and promote equitable treatment.
The digital divide, the gap that exists between different populations with regard to access to and usage of technology, is still a reality today and is thought to further reinforce social inequalities [20,21]. Even within the same country, there may be stratification based on education, gender and age group, among other factors. In view of the global knowledge economy, where concepts such as lifelong learning, active (digital) citizenship, social cohesion and participation are of major relevance, ICT skills have become a necessity rather than a luxury, especially in transforming the social, economic and political aspects of people’s lives [22].
Based on the changing dynamics of the knowledge society, the purpose of this study is to investigate the extent to which gender- and age-related biases still persist among ICT students in this “woke” age with regard to the use of technological devices and how this might also affect the design of systems for these demographic groups in the years to come.
1.2. Theoretical Background
Even today, females are still negatively affected by existing gender stereotypes and prescribed gender identities despite relatively equal access to and use of computer technology [26]. This may be due to the fact that from its beginning the computer was socially constructed as a male domain [27,28] and that the use, liking and competence of computer technology was associated with being male [29]. The bias can be subtle: men are referred to as “computer programmers”, whereas for women the term is often prefaced with “female”, which emphasizes that they are not the norm [30]. Arguably, societal expectations of females as well as gender stereotypes have generally led to different socialization processes in which women are expected to take on idealized gendered identities.
Unsurprisingly, the digital gender divide harms women’s self-image, given that they have to fight societal norms to succeed at information technology [31,32]. Such gender bias has strongly influenced the software design process and has contributed to girls’ lack of computer confidence and to their computer anxiety [33]. Additionally, the lack of encouragement from educators and parents has negatively impacted girls’ enrollment rate in computer science [34].
What is more, there is also a bias against older people, termed “ageism”, i.e., the assumption that older adults tend to be both incapable of learning how to use new media and technological devices and disinterested in learning how to use them [35,36,37]. Popular media have reported that even organizations focused on older adults have been found to use images of technology-limited older people [38]. Some researchers have shown that these stereotypes of older people with respect to technology are not accurate [9,12]. These biases can have an impact in practice, as communities may not welcome older people, fearing that they will be a burden, when in reality older people can make contributions, e.g., through spending and working as volunteers [39]. Certainly, some skills they have gained may no longer be relevant or valued; for example, being able to use a slide rule is less valued now that calculators are included on most smartphones. It is vital to adopt an intersectional approach towards digital ageism. In doing so, different aspects and demographic dimensions are analyzed not along a single axis of oppression but rather with regard to multiple axes and different identities that flow into each other [40,41].
Looking at the history of computers [42], all but the very oldest could have used computers in their work, depending in part on their education and profession. In practice, in western countries the majority of older people have used a computer, e.g., for e-mail [43]. Perhaps one problem is that although studies show people experience bias based on age, many people are not aware of their own age-related prejudices [44]. Another difficulty is that the usage of technologies by older people, including smartphones and the internet, has increased rapidly in the past ten years, though the level of adoption depends on age and educational attainment [45,46]. The goal is not to eliminate bias, but to reflect on these biases and work with diverse end users to understand the impact of the devices being designed [6].
For older people wanting to use computers, there are additional challenges. Some of the challenges they report in using computers relate to cognition, physical ability and perception [47]. At the same time, many older people experience some sort of additional limitations [48,49] that relate to these challenges. Presbyopia (age-related vision problems) [50] already starts in the 40s and makes it hard to focus on things that are close. Furthermore, 17% of people over 60 have hearing problems [51] (p. 30). Physiological changes in the brain, including shrinking and reduced blood flow, can affect the ease of organizing activities, eye-hand coordination and forming new memories [52,53]. Arthritis can limit physical abilities as it makes joints stiffer; in fact, 20% of people between 55 and 64 in Austria report problems with their arms or hands [54]. However, with age there is increasing variability of physical, sensory and mental function between people [48,55], so that some older people may have few or no limitations.
When thinking of older people, the focus is frequently placed on their limitations [56,57] even though older people are capable of using systems effectively [7]. Consequently, designing predominantly for limitations may result in technologies that are stigmatizing [7,58], and might even prevent adoption [23,59].
Regardless of what group a system is intended for, design is essential, both for its usability and the user experience [60,61]. When designing for people of different interests and abilities, there is a tension between a universal design, i.e., one suited for all user groups, and adapting for specific needs. When making adjustments for one specific group, the developers’ hidden stereotypes may have a negative impact on the design [62]. However, in practice, “it isn’t that hard to address accessibility considerations” if these are considered up-front [63] (p. 24).
Computer-based systems are becoming increasingly important. Given that today’s computing students will be involved in designing the next generation of systems, their views will also impact the systems people will have in the future. For these reasons, the authors of this study set out to see what preconceptions the students had about certain groups of people, and how they thought the design should be adapted for them.
It was expected that in the Austrian context the questions taken from Petrie [25] would yield similar results, since both countries can be considered industrialised western European nations [64]. We sought to analyze the results in more detail, for example, to investigate the differences between the devices and to analyze the implicit target group. We expected that the bias towards women would, if anything, be greater, based both on our experience and on the differences between the countries described elsewhere, e.g., Hofstede’s dimension concerning how masculine a culture is [65] or the GLOBE study that determines the level of assertiveness and performance orientation [66]. Lastly, we hoped to investigate the impact of any biases, i.e., how these opinions may affect the designs produced.
2. Materials and Methods
In this study, a combination of quantitative and qualitative methods was used.
Each questionnaire shows one picture of a person reading a book, followed by eleven questions: eight closed and three open-ended (see Appendix A). The questions were to be answered based on the image; the questions were the same for all pictures. The first nine questions, relating to stereotypes, and all of the pictures are the same as in the previous questionnaire [25]. Both of the new questions were related to design. The questions from the original questionnaire were translated into German by a coauthor with professional translation experience. The result was then compared to the original by another person and piloted to check whether the results roughly matched expectations.
The questions covered:
Age in general (3 questions);
Likelihood of use, i.e., whether they were of the opinion that this person uses a desktop, laptop and smartphone regularly (rated from 1 = definitely not to 7 = definitely for each device);
Expertise, i.e., how versed they thought the person was with these devices (rated from 1 = not at all to 7 = very for each device);
Which design aspects would be particularly important for this person with regard to use (open question); and
Whether adapted technology should be developed for this person.
In addition, demographic data concerning age, gender, degree programme and year of study were collected.
Eight different pictures were used: two of younger men, two of older men, two of younger women and two of older women (see Figure 1), all of the same racial profile. The pictures were kept the same as the original study to support the comparability of the results. The people and clothing depicted are not obviously from a different country. The questionnaire was formatted so that the picture was at the top and all questions were on the same side of the paper. This ensured that the picture could be seen while answering all questions.
Since most of the questions were taken from the previous study, a full validation of the questionnaire was not deemed necessary. To test the translations and the new questions, the questionnaire was piloted with a convenience sample of 200 students at the same university at which the study was conducted. The results from the new questions indicated that only the question related to design aspects required a minor modification to be understood correctly.
For the study, the hard-copy questionnaires were distributed to students in first-semester courses of selected bachelor’s degree programmes in ICT at the University of Applied Sciences Upper Austria. The university offers a variety of computing degrees. The sample was chosen to be representative of computing students in the country. The questionnaires were distributed shortly before COVID-19 restrictions started.
A total of 216 questionnaires were returned. Of these, 16 were discarded as they were incomplete or invalid. The remaining 200 questionnaires formed the basis of our analyses. The invalid questionnaires were distributed randomly in the sample and, therefore, representativeness was not affected by removing them. See the description of the sample in Section 3.1 for more details.
The statistical analysis allowed a detailed assessment of response patterns regarding the participants’ perceptions of the likelihood of use, the extent of expertise of potential ICT users and the need for adaptation. Using qualitative analysis for the design aspects, themes emerged that allowed the investigators to assess what these students, i.e., our participants, think about the design of technology with respect to age and gender. In this regard, different layers of information were combined and contextualized [67].
A more detailed description of the methods used for the quantitative and qualitative analysis is given below.
2.1. Quantitative Analysis for Likelihood of Use and Expertise
The goal was to analyse the perceptions of first-year ICT students concerning different categories of potential ICT users and different devices used. Moreover, an in-depth analysis was performed to determine which types of people were thought to be able to use these devices without the need for developing an adapted technology.
Initially, several statistical methods were used to analyse the data. In this context the usual significance level of α = 0.05 was applied. First, a multivariate analysis of variance (ANOVA) was used for the two response variables, the inferred likelihood of use and the inferred expertise, in terms of the entries on the seven-point scale.
Each of the two analyses investigates the effects of the possible explanatory variables:
Device (devices for use): desktop computer (short: PC), laptop (LT), smartphone (SP);
Age (age of potential users): younger, older;
Gender (gender of potential users): men, women;
Adapt (inferred need of potential users for adaptation of technology): yes, no, don’t know;
Partgender (gender of the participant): male, female, not specified;
Partstudies (anonymized study programme of the participant): A, B, C, D, E.
The model uses the possible explanatory variables alone to see, for example, whether the mean ratings for each of the three devices are statistically equal or not. However, it also uses combinations of two or three variables, such as device*gender or device*age*gender. Through this, one can see whether the mean ratings within the 3*2*2 categories (older men with laptop, younger women with smartphone, ...) are statistically equal or not. By comparing the mean ratings within the different categories, the ANOVA also gives qualitative information concerning students’ perceptions.
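Purely as an illustration, the following minimal sketch (in Python with pandas and statsmodels) shows how a factorial ANOVA of this kind could be specified. It is not the analysis script used in the study: the column names (use_rating, device, age_group, gender) and the input file are hypothetical, and the model is restricted to the device, age and gender factors for brevity.

```python
# Minimal sketch of a factorial ANOVA with interaction terms, assuming the
# questionnaire data have been reshaped into a long-format table in which each
# row is one rating. Column names and the file name are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per participant x depicted person x device (hypothetical file).
df = pd.read_csv("ratings_long.csv")

# Main effects plus all two- and three-way interactions of device, age and
# gender, mirroring terms such as device*gender and device*age*gender above.
model = ols("use_rating ~ C(device) * C(age_group) * C(gender)", data=df).fit()

# Type II ANOVA table: tests whether mean ratings differ across factor levels
# and across the 3*2*2 device-by-age-by-gender categories.
print(sm.stats.anova_lm(model, typ=2))

# Cell means for the 3*2*2 categories indicate the direction of the effects.
print(df.groupby(["device", "age_group", "gender"])["use_rating"].mean())
```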
Note that, when considering only the effects of device, age and gender, this analysis repeats the one performed in the previous study [25]. The additional tests, which are described below, give more detailed information concerning differences between devices and the relationship between likelihood of use and expertise, as well as correlations between the different factors.
Since, as expected, the device was found to be a significant explanatory variable, which also appeared significantly in combinations, the ANOVA was redone for each of the devices separately. This allowed the inferred likelihood of use, as well as the inferred expertise with the considered device, to be analysed in more detail.
Since the statistical findings for inferred likelihood of use and inferred expertise were similar, a multivariate regression model was introduced for each of the devices to determine whether the inferred likelihood of use could explain expertise as a response variable. Possible additional explanatory variables included age, gender and the inferred need for adaptation of technology (adapt).
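As an illustration only, a per-device regression of this kind could be sketched as follows; the column names and device codes are again hypothetical, and the sketch does not claim to reproduce the exact model specification used in the study.

```python
# Sketch of a per-device regression with inferred expertise as the response and
# inferred likelihood of use, age, gender and the inferred need for adaptation
# as explanatory variables. Column names and file name are hypothetical.
import pandas as pd
from statsmodels.formula.api import ols

df = pd.read_csv("ratings_long.csv")  # hypothetical file

for device in ["PC", "LT", "SP"]:
    subset = df[df["device"] == device]
    model = ols(
        "expertise ~ use_rating + C(age_group) + C(gender) + C(adapt)",
        data=subset,
    ).fit()
    print(f"--- {device} ---")
    print(model.summary())
```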
In addition, several correlations were calculated.
Finally, further analysis was performed to characterize the group which, in the opinion of the participants, does not need adaptation of technology. This group is referred to as the implicit target group.
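To make the idea concrete, the sketch below shows one possible way of isolating and describing this implicit target group, i.e., the depicted people for whom participants indicated that no adapted technology is needed. The column names (adapt, age_group, gender, use_rating, expertise) and the file are hypothetical and do not reflect the study's actual data layout.

```python
# Sketch: characterize the "implicit target group", i.e., the ratings for which
# participants answered that no adaptation of technology is needed.
import pandas as pd

df = pd.read_csv("ratings_long.csv")  # hypothetical file

no_adapt = df[df["adapt"] == "no"]

# Composition of the implicit target group by age and gender of the depicted person.
print(no_adapt.groupby(["age_group", "gender"]).size())

# Mean inferred likelihood of use and expertise within that group, per device.
print(no_adapt.groupby("device")[["use_rating", "expertise"]].mean())
```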
3. Results
This section is organized as follows. We start off with a description of the sample. We then present detailed results from the ANOVAs, the linear regression model, the correlation analysis and the analysis of the implicit target group. After that, we summarize the key findings concerning age, gender and combinations derived from these results (starting in Section 3.7). Finally, the qualitative results with respect to the design aspects are presented (in Section 3.10). The statistical results are described in more detail in Appendix B.
4. Discussion
In this research, we aimed to explore how both the age and gender of people impact the perceived use of computers and expertise with these, and also how these opinions affect the aspects considered important in the design of technologies for them. Although the present research was inspired by the work of Petrie (2018), it sought to extend that study not only by repeating the investigation in a different country, but also by performing more detailed analysis and by adding questions relating to the design aspects thought to be important for the cohort depicted. The additional analyses included the ANOVAs for the different devices separately and the regression analysis. This allowed a more detailed investigation of whether any double effect of being both older and female exists. The two questions that were added about design investigate the potential impact of any biases, i.e., how systems might be designed differently due to them. Note that these questions were also used in the correlation analysis and in the implicit target group analysis.
The results indicate biases both with respect to older people and women. The design aspects mentioned illustrate these, e.g., the colors suggested for young women and the larger fonts for older people.
Given that the participants were first-year students, these results generally represent the natural bias of people interested in computer science. Since participants did not know anything about the people depicted, the answers reflect their implicit biases. The study included more male than female participants, which is in accordance with the rather low rate of females in STEM (Science, Technology, Engineering, Mathematics) fields.
In the following, we discuss the aspects related to age and gender, the impact these factors could have on the designs these students produce, and finally, what could be done to counteract these.
4.2. Gender
The bias between the genders is smaller. There is, however, still a significant difference between genders for both perceived likelihood of use and expertise. In the previous study [25], only the perceived expertise was significantly different. This indicates that, as expected, the bias towards women in Austria may indeed be greater than in the UK, where the previous study was conducted.
The more detailed analysis performed in this study allows us to see that in terms of likelihood of use, it is only the desktop computer that women are thought to be less likely to use. Women were judged by participants to have less expertise for both the laptop and desktop computer. For the smartphone, the perceived expertise did not differ between genders.
Combining these results, we can see that the desktop computer seems to be perceived as a device that is old fashioned and more associated with expertise. There is the least difference between older and younger people with this device, and the perceived expertise correlates only with the likelihood of use and age. Women, who generally are thought to have less expertise, are thought to be significantly less expert with this device. In fact, the operating system and software for desktops and laptops are often the same, e.g., Windows 10 or macOS.
The laptop is the only device for which the perceived likelihood of use is correlated with gender. Furthermore, for the laptop, even though women are thought to have a similar likelihood of using it, they were thought to have lower expertise with this device compared to men. This matches studies that found that men in computing are considered ordinary, whereas women in computing are considered exceptional [71,72].
The qualitative results mirror the quantitative results (see also Figure 6) and give richer details with regard to what is meant. Aspects relating to working with (proper) computers, such as efficiency, are mentioned more commonly for men, whereas for women more superficial aspects such as color are mentioned.
Petrie (2018) reported that there was no additional effect of being both older and female. Through the more detailed analysis performed here, it is possible to see that in this study the effect of being a woman is exacerbated by being older: for the laptop, older women are thought to have even less expertise.
The perceptions of female students with regard to gender did not differ significantly from those of males. However, computer science students may differ in their perceptions from other students of the same age. Many computer science students in Austria graduate from technical schools with a low proportion of female students. The low number of women in STEM could contribute to a bias based on sex. In addition, the faculty of technical schools is largely composed of males who tend to reproduce and reinforce gender biases. Hence, these students may have already been socialized in traditional male roles where technology is produced, used and generated by men rather than women. Reasons for this disparity are manifold and may be due to a range of factors, including traditional societal expectations, lacking encouragement from family and school environments, and insufficient confidence and doubts on the part of young females.
There was a difference between the perceptions of participants from different degree programmes with regard to gender, but closer analysis revealed no clear trend, e.g., that it was more pronounced in the more technical computing programmes. Thus, this difference was thought to be incidental.
4.4. What Can Be Done
Since these participants are still early in their computing careers, there is still potential to influence their views.
Studies indicate that the use of language is important with respect to gender issues. Certainly, with artificial intelligence systems, “if that data is laden with stereotypical concepts of gender, the resulting application of the technology will perpetuate this bias” and “could quietly undo decades of advances in gender equality” [83] (p. 14). Even studies looking at human-generated content in Wikipedia found clear male biases in both English and German [84] (p. 33).
With respect to older people, “(T)he narrow focus on accessibility concerns harms older adults by excluding them from wider benefits of technologies” [85] (p. 67). It is important that systems are also useful and acceptable for them.
One step towards doing away with implicit bias is to uncover hidden prejudice so people can reflect on it [6]. By recognizing one’s own often unconscious prejudices, such as those regarding gender and age described here, it is hoped that deeply entrenched societal patterns may gradually be influenced, changed and unlearned. Thus, educators need to make students aware of their biases and encourage them to reflect on these in their work. More importantly still, they need to question their own hidden prejudices and mitigate them.
There are methods that have been shown to be successful in combating gender bias in software systems in particular. Working with a diverse group of users can help to understand the individual and societal impact for different groups [6]. There are also specific methods, such as GenderMag [86]. Using this method, it was found that the bias could be removed from the product. Moreover, it was identified that men also benefited: they too had a higher success rate with the resulting software, and in practice many accessibility features can make products better for all users [63].
Hence, educators, such as the authors themselves, have the opportunity to act against this by using gender-neutral formulations, raising awareness about women and older people with regard to their computer literacy, helping students discover their stereotypes and by teaching students methods to counteract implicit biases.
In a similar way, companies producing software can introduce methods that combat bias, but also check for bias in the resulting systems. Some also feel that it is key that more women be involved in software development to help resolve gender bias in technical systems [83].