Article

The Relationship between Religiosity Level and Emotional Responses to Artificial Intelligence in University Students

Department of Catholic Social Thought and Sociology of Morality, Institute of Sociological Sciences, Faculty of Social Sciences, The John Paul II Catholic University of Lublin, al. Racławickie 14, PL-20-950 Lublin, Poland
* Author to whom correspondence should be addressed.
Religions 2024, 15(3), 331; https://doi.org/10.3390/rel15030331
Submission received: 12 February 2024 / Revised: 29 February 2024 / Accepted: 6 March 2024 / Published: 9 March 2024
(This article belongs to the Section Religions and Health/Psychology/Social Sciences)

Abstract
This article aims to analyze the correlations between the level of religiosity and emotional responses to artificial intelligence (AI). At a time when artificial intelligence increasingly permeates everyday life, it is important to identify the sociocultural determinants of attitudes towards this rapidly developing technology. Religiosity is a significant element of society's cultural mosaic and, as such, can be expected to shape emotional responses to AI. This is a reason to investigate how religions and religiosity impact attitudes towards AI. The study used Welch's analysis of variance (ANOVA) to compare university students' emotional reactions to AI across religiosity levels, which allowed for identifying significant differences in the levels of emotions such as fear, sadness, and anger between irreligious, religious, and indifferent groups. Significant differences in emotional responses to AI were found between individuals with different religiosity levels. Compared to irreligious and indifferent students, the religious ones more often showed intense emotions, such as fear and anger. The results indicate the need to include the issues of religion and religiosity in the process of designing and implementing AI. This may contribute to a greater acceptance of this technology in a society of individuals with different levels of religiosity. Understanding the effect of religiosity on the response to AI is also of considerable importance for the development and implementation of ethically responsible AI solutions, which should take account of the diversity of individuals' beliefs and value systems.

1. Introduction

In an era of rapid technological development, artificial intelligence (AI) is becoming increasingly present in everyday life. The omnipresence of AI provokes a variety of emotional reactions in society, and these can be determined by numerous factors. For some people, AI can be a source of fascination and hope for future innovations and improvements to the quality of life. Others perceive it as a threat to human autonomy or as a violation of moral and spiritual boundaries. Important factors that determine individuals’ perceptions of new phenomena are religion and religiosity. This sphere of life can significantly influence the formation of individuals’ worldview and behavior (Wodka et al. 2022), also having a real effect on the way people perceive and react to the development of AI technologies.
Diverse interpretations of and responses to AI may stem from differences among religious doctrines, which offer specific approaches to new technologies and the role of humankind in an increasingly technology-dominated world. While some religions stress the idea of human–technology harmony, others place emphasis on the need to treat new technologies with reserve (Noble 2013). Consequently, the degree of identification with the doctrine, manifesting itself in the level of religiosity, may determine attitudes towards them.
The perception of AI through the lens of religion and religiosity may also influence ethics related to the development and application of this technology. Issues such as artificial consciousness, autonomous machine decisions, and the impact of AI on the job market may be evaluated differently by individuals with different religious sensitivities, and this leads to a complex debate on the present and future role of AI in society.
This study aimed to investigate and explain how the level of religiosity influenced university students' emotional responses to AI in Poland. Students are a unique social group. Due to their age and educational level, they are open to technological innovations, have relatively extensive knowledge about them, and gladly use them (Zdun 2016). In the future, they will form part of society's elite and will perform functions in various spheres of socioeconomic life and in public administration. This article analyzes the relationships between different levels of religiosity and the perception of and responses to AI, focusing on emotions such as fear, sadness, anger, and trust. This study sought to explain how religiosity shapes individuals' attitudes towards AI.
There are many definitions of religiosity, just as there are many definitions of religion, which makes the term “religiosity” vague and ambiguous (Davie 1993). Sociological research takes account of the differences between perspectives on and interpretations of religiosity, usually concerning its characteristics, essence, and unique features (Kozak 2014). Using traditional research instruments, oriented at the substantive institutional understanding of religiosity, it is difficult to assess individualized and subjective forms of religiosity; “Many contemporary people relate religiosity to personal incidents and experiences rather than institutional forms of religion” (Mariański 2004, p. 29). Today, religiosity is often interpreted as a part of cultural identity, with traditional beliefs and practices sometimes giving way to more symbolic forms of expressing affiliation (Wódka et al. 2020). Despite the pluralism of the phenomenon of religiosity itself, the ways of understanding religiosity have certain elements in common; these include a similar way of thinking about social issues shared by members of a religious group (Fel et al. 2020), a similar doctrine and ethics, and shared worship (Schilbrack 2022). What seems to be very important in the analysis of religiosity is the self-reported parameter of global attitude to faith (subjective declaration of the “depth” of one’s religiosity) (Kozak 2015) and its dynamics—namely, the increase, decrease, or stability of the level of religiosity (Zarzycka et al. 2020).
For the purposes of our research project, we used a parameter combining two key variables: global attitude to faith and engagement in religious practices. This enabled us to build a typology dividing respondents into religious, irreligious, and indifferent groups, which is useful for a better explanation of their emotional responses to AI. This is consistent with the approach recognizing the complexity of religious experience and the need for a multidimensional analysis of that experience in the sociology of religion, as indicated in a study stating, “The question about global attitude to faith gives rise to a simple typology of believers (…) while the answer to the question about self-identification is further verified in a number of detailed questions, including ones about religious practices” (Bożewicz 2022, p. 42).
We assumed that determining the relationship between religiosity level and emotional responses associated with AI would allow for a deeper exploration of these issues and for explaining how people with different levels of religiosity feel about technological changes. Such findings may be of great significance, for instance, to AI designers and programmers, who are striving to develop a technology that is positively perceived and accepted by diverse communities.

1.1. Review of the Literature

The existing publications present a fairly wide range of perspectives, covering diverse aspects of relationships between AI and social attitudes. The literature approaches issues such as the influence of religion and culture on the perception of AI (Reed 2021), the effect of AI on communities and human interactions (Yang et al. 2020), attitudes towards AI and transhumanism (Nath and Manna 2023), and AI-related ethical and philosophical issues (Stahl and Eke 2024), both from a theoretical perspective and in light of empirical research results. What seems to be an important object of exploration is the question of how religiosity can influence emotional responses to AI. It is emphasized in the literature that religious beliefs can inspire both positive and negative emotions towards AI, influencing people’s reactions to technological progress and its potential consequences. Researchers stress the importance of including ethical and spiritual aspects in the process of designing AI, which may lead to more balanced and ethically responsible AI development, taking account of the harmony between the world of nature, society, spirituality, and technology (Uttam 2023).
In the literature, there has also been an ongoing debate on the potential threats and fears connected with various ideas regarding the application of AI and with the different areas of its application. These include existential anxiety caused by threats associated with the latest technologies and the anxiety about the dignity of human life (Shibuya and Sikka 2023). Authors also engage in theoretical reflection on the influence of various religious and cultural traditions on people’s attitudes towards robots and AI, focusing on the differences between Eastern and Western cultures (Persson et al. 2021). They analyze how different religions influence the perception and acceptance of technologies and how these are translated into diverse approaches to machines and AI in specific cultures and religious traditions (Ahmed and La 2021). The conclusions of these analyses indicate a significant effect of religious and cultural traditions on the way different societies react to the development and implementation of advanced technologies (Prabhakaran et al. 2022). Authors also underscore the importance of taking these cultural and religious differences into account in designing and implementing new technologies (Jecker and Nakazawa 2022). For example, in cultures shaped by the dominant monotheistic religions (including Christianity, Judaism, and Islam), people may perceive the human being as unique and distinct from machines, which leads to greater caution or skepticism about technologies imitating or replacing human actions (Yam et al. 2023).
Research by Koivisto and Grassini (2023) explores the associations between different religious values, ethics, and the perception and evaluation of AI application and development. They highlight the benefits and limitations of AI, acknowledging its role in facilitating human work without replacing inherent human qualities such as creativity, intuition, and emotions. Similarly, Markauskaite et al. (2022) argue that AI cannot be a substitute for interpersonal cooperation or the sense of belonging within a society. Bakiner (2023) advocates for AI development grounded in humanistic values like human rights and democracy, while Robinson (2020) emphasizes the importance of considering pluralism and religious values in the evolution of AI. These authors collectively argue against viewing AI as a threat to religion or humanity, instead recommending societal participation in AI enhancement (Abadi et al. 2023). This discourse extends to challenges related to transhumanism and posthumanism, with Al-Kassimi (2023) cautioning against threats to human values and dignity from neglecting ethical and spiritual considerations in AI development. The consensus among these scholars is the necessity for deep ethical reflection and the integration of humanistic values in the design and implementation of solutions involving AI.
While theoretical reflection is focused on issues such as the relationship between religious and ethical values and AI perception, empirical studies indicate differences in the perception of AI across different backgrounds. In a study conducted on the Reddit platform, comprising an analysis of 33,912 comments from 388 subreddits, Qi et al. (2023) sought to identify the determinants of the social perception of AI, including systems such as ChatGPT. Significant differences in attitudes towards AI were found between what is referred to as technological communities (focused on technological issues) and non-technological ones (those attaching great importance to the social and axiological dimensions of life). The former concentrated on the technical aspects of AI. The latter, by contrast, placed more emphasis on the social and ethical aspects of artificial intelligence, more often engaging in debates on issues such as privacy, data security, and the effect of AI on the job market.
In a study conducted by Jackson et al. (2023), a sociological experiment aimed to examine the influence of robot preachers on religious commitment. The study was conducted at Kodaiji Temple in Kyoto, where a robot named Mindar regularly preached sermons. The study included 398 participants, to whom a survey was administered after they had listened to sermons delivered by Mindar or by a human preacher. It was found that the participants who evaluated the robot as better than the human preacher showed a lower degree of religious commitment and a lower tendency to make donations, which suggests that robots may not be perceived as reliable religious authorities.
A study conducted in South Korea in 2022 aimed to determine the associations of the Big Five personality traits (extraversion, agreeableness, conscientiousness, neuroticism, openness) with the perception of AI (N = 530 adults). It revealed that extraverts often had negative feelings about AI, agreeable subjects saw it as positive and useful, conscientious subjects evaluated AI as useful but less socially friendly, and neurotic subjects experienced negative emotions but perceived AI as socially friendly. Individuals open to novelties considered AI useful, and enthusiasts of new technologies usually exhibited positive attitudes towards AI (Park and Woo 2022). A different study, conducted by Schepman and Rodway (2023) on a sample of 300 adults from the UK, was focused on determining the relationships between personality traits and attitudes towards AI. Using the General Attitudes towards Artificial Intelligence Scale (GAAIS) and other measures, the authors found that introverts, agreeable subjects, and conscientious individuals approached AI more positively. They also observed that general trust in people had a positive effect on attitudes towards AI, whereas lack of trust in corporations had a negative effect on these attitudes.
Research conducted on a sample of 1015 young people entering the labor market from 48 countries aimed to determine their attitudes towards emotional artificial intelligence (EAI) in the context of human resource management. EAI is an artificial intelligence technology that detects, interprets, responds to, and simulates human emotions using voice tone analysis, facial expression analysis, and the analysis of other behavioral indicators. More positive attitudes towards its use were found in students with a higher income, men, students in business-related fields, those in senior years, and individuals with a better understanding of EAI. Religiosity, especially among Muslims and Buddhists, was associated with greater EAI anxiety. Regional differences also influenced attitudes, with respondents from East Asia being less anxious than those from Europe, which may be related to cultural differences (Mantello et al. 2023).
A study conducted in the UK on a sample of 4200 subjects by the Centre for Data Ethics and Innovation (CDEI) aimed to determine attitudes towards AI, its areas of application, and the use of personal data. Respondents generally accepted the use of personal data by AI, but they were concerned about the security of such data and about fairness in AI use. The vast majority (81%) expressed approval for AI collecting data for the needs of the National Health Service (NHS), and 62% approved AI collecting data for the British government. More than half of the respondents (52%) reported limited knowledge about data use, and 31% were afraid that AI-related benefits would be distributed unequally. The study also revealed expectations for a better supervision of AI and greater transparency of its operations (Clarke et al. 2022).

1.2. Hypotheses

The literature review in this study suggests that various factors, including religious beliefs, may considerably influence the development of emotional responses—both positive and negative ones—to AI. On this basis, we formulated the following general hypothesis:
General hypothesis: The level of religiosity has a significant effect on emotional responses to AI, conditioning both positive and negative attitudes towards this technology.
To investigate in detail how the level of religiosity translated into different types of emotions in the context of AI, we formulated two specific hypotheses:
A hypothesis concerning negative emotions: Religious people generally show higher levels of negative emotions (such as fear and anger) in response to AI compared to irreligious and indifferent individuals.
A hypothesis concerning positive emotions: Religious people generally show lower levels of positive emotions (e.g., joy) and lower levels of some negative emotions (e.g., sadness as the opposite of joy) in response to AI compared to irreligious and indifferent individuals.

2. Material and Method

2.1. Participants and Procedure

In the first step, we designed a survey questionnaire titled “Students’ Attitudes towards AI”, based on the three-component theory of attitudes by Rosenberg and Hovland (1960), in which three basic aspects of attitudes are distinguished: affective, cognitive, and behavioral. The affective component of the survey was focused on respondents’ emotional reactions to AI, collecting data about their feelings, such as enthusiasm about, fear of, or trust in artificial intelligence. The cognitive component measured respondents’ knowledge and beliefs about AI. It included questions about their understanding of the possibilities and limitations of AI and the predicted consequences of its development. Finally, the behavioral component concerned respondents’ AI-related behaviors and actions, such as their willingness to use AI in education and personal life and their active participation in AI-related courses and projects. By combining these three aspects, the questionnaire aimed to provide a comprehensive picture of respondents’ attitudes towards artificial intelligence, reflecting both their inner feelings and beliefs and the external manifestation of these attitudes through actions. In this article, the object of interest was the affective component. We analyzed the relationship between the level of religiosity and emotions towards AI.
In the next step, we conducted a pilot “Students’ Attitudes towards AI” survey to test its effectiveness and comprehensibility. The pilot survey consisted of administering the questionnaire to a test sample of randomly selected university students. The aim of this stage was to test the clarity and intelligibility of the questions; the pilot survey was also meant to check if the structure of the questionnaire was logical and whether or not it caused ambiguities or misinterpretations. The results of the pilot survey allowed for making the necessary modifications and adjustments to the questionnaire before distributing it among a wider population of students. The pilot study was also an opportunity to assess the time needed to complete the questionnaire and to identify the potential technical problems with the LimeSurvey platform, which was used to conduct the survey. The modifications were meant to ensure that the questionnaire collected the data reflecting students’ attitudes towards AI as comprehensively and as effectively as possible, in accordance with the three-component theory of attitudes.
After collecting all the data, we conducted a reliability analysis focused on assessing the internal consistency of the measure, using Cronbach's alpha coefficient; its value for the "Students' Attitudes towards AI" survey questionnaire was 0.925. In our analysis, we used two types of data: answers to questions indicated on a scale (e.g., a Likert scale) and quantitative variables, which together were supposed to assess a specific aspect of sociometric attitudes, making it possible to investigate how these different types of answers were interrelated in the sample. The high value of the alpha coefficient indicates the very good internal consistency of the survey questionnaire, which suggests that specific questions effectively measure consistent constructs of attitudes towards AI. This result attests to the questionnaire being a sound and reliable measure that can be used in further research and analyses in this field.
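The article reports the coefficient value (0.925) but not the computation itself. A minimal sketch of how Cronbach's alpha can be computed for a matrix of item scores, using synthetic Likert-style data rather than the study's actual responses, could look like:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative 1-5 Likert responses driven by a shared latent attitude;
# these are simulated data, not the survey's responses.
rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=(200, 1))
scores = np.clip(np.rint(latent + rng.normal(0, 0.5, size=(200, 6))), 1, 5)
print(round(cronbach_alpha(scores), 3))
```

Because the simulated items share a strong common factor, the resulting alpha is high, mirroring the good internal consistency reported for the questionnaire.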

2.2. Respondent-Driven Sampling (RDS)

In this article, we present the results of quantitative research concerning the attitudes towards AI of university and college students learning in Poland, from whom data were collected using the method known as respondent-driven sampling (RDS) (Salganik 2006; Salganik and Heckathorn 2004; Tyldum 2021; Sadlon 2022).
RDS is a snowball sampling technique1 developed by Douglas Heckathorn. It was designed to be based on the use of social networks and consists of participants recruiting further individuals, thus creating multistage chain-referrals (Heckathorn 2014). This technique presupposes that, through statistical adjustments, it is possible to recruit a representative sample even though the initial subjects—or seeds—are not randomly selected (Heckathorn and Cameron 2017). Chain-referrals based on social networks can provide representative population samples for sociological research in various social backgrounds, including students (Dinamarca-Aravena and Cabezas 2023).
To ensure the representativeness of the sample, we used the student numbers released by Statistics Poland (GUS) in June 2023, with a division into groups of fields of study in accordance with the ISCED-F 2013 classification. The data generated from RDS yielded very similar results to the data from Statistics Poland. The sampling frame included student numbers for groups of fields of study in Poland, which made it possible to reliably investigate students’ attitudes towards AI across various fields of study, providing a solid basis for determining the character and diversity of those attitudes in the Polish academic community. Our research resulted in a sample of 1088 respondents, which was a representative reflection of the community of Polish university and college students (Table 1).
As mentioned above, we used respondent-driven sampling (RDS) to collect data from respondents studying in Poland and took account of their numbers specified by Statistics Poland in accordance with ISCED-F 2013, a classification of groups of fields of study. This ensured a sample representative of the diverse academic community.
The RDS procedure began with the choice of “seeds”—a small, diverse, and well-connected group of students in various fields, who were not randomly selected. The “seed respondents” (Wave 1) were informed about the purpose and manner of sampling. Next, they received an electronic link to the survey questionnaire, which—using an email address—generated a unique token in the LimeSurvey registration system, allowing the respondent to complete the survey questionnaire. Wave 1 respondents were surveyed using an auditorium questionnaire. After completing the questionnaire, they were asked to recruit further participants for the study using a snowball sampling technique by means of an electronic link to the questionnaire or a QR code.
As part of the RDS method, each further recruitment wave (Wave 2, Wave 3, etc.) continued the procedure until the desired sample size was attained. Additionally, to increase the scope and effectiveness of the survey, we involved study ambassadors, such as lecturers and representatives of the student government. Using this methodology, we managed to gather a sample of Polish students that was representative in terms of the diversity of fields of study.
The sample reached the size of 1088 students in two months, and stabilization in its specific segments was achieved after an average of five weeks. In RDS, a decrease in the number of newly recruited participants was observed with each successive recruitment wave. This phenomenon stemmed from the expansion of the respondents' social networks, which, with time, began to overlap, so the number of potential new participants who could be recruited decreased. In other words, the longer the recruitment process lasted, the smaller the group of people available for recruitment was, because part of the network had already taken part in the study and had been included in previous "waves." As a result, the speed of recruitment decreased with every successive wave; five waves in total were needed to achieve the desired sample size and to make the sample representative of the diverse academic community.
To distinguish the waves in the study, we separated them temporally. Respondents from successive waves completed the survey questionnaire in consecutive weeks. For this purpose, we used the meta-data collected through LimeSurvey, such as each respondent's date of completing the questionnaire. Every new respondent joining the sample was automatically assigned to the appropriate wave based on the dates of the survey commencement and completion.
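The wave-assignment rule described above can be sketched as a simple binning of completion timestamps into consecutive weeks after the survey launch. The column names and dates below are assumptions for illustration, not the actual LimeSurvey export:

```python
import pandas as pd

# Hypothetical excerpt of LimeSurvey metadata; field names are assumed.
responses = pd.DataFrame({
    "token": ["a1", "b2", "c3", "d4"],
    "submitdate": pd.to_datetime(
        ["2023-10-02", "2023-10-10", "2023-10-18", "2023-10-30"]),
})

launch = responses["submitdate"].min()
# Each consecutive week after launch corresponds to one recruitment wave.
responses["wave"] = (responses["submitdate"] - launch).dt.days // 7 + 1
print(responses[["token", "wave"]])
```

A respondent completing the questionnaire 8 days after launch would thus fall into Wave 2, one completing it 28 days after launch into Wave 5, and so on.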
To avoid the possible phenomenon known as bottlenecks in RDS, the “seeds” were selected from different types of higher education institutions, from different locations, among students of different ages, and at different stages of education. This diversity-oriented approach to the selection of seeds was intended to ensure the wide representativeness of the sample and the avoidance of isolated subpopulations. Moreover, in the process of recruitment, we consciously took steps to encourage participants to involve individuals from a variety of student circles, which promoted the creation of heterogeneous social networks.
Differences in recruitment speed can also affect research results (Wojtynska 2011). This problem shows that, in RDS studies, the recruitment process should be carefully monitored and regulated to avoid disproportions in the composition of the sample. For this reason, the seeds were consciously selected from different higher education institutions, different locations in Poland, different years of study, and different age groups in order to ensure diversity and minimize the risk of unequal recruitment speed. This approach was meant to ensure a sample as balanced and representative as possible.

2.3. Measures

In this study, we used a measure of religiosity based on a combination of two variables: religious practices and self-reported religious belief. Variable D08 was the frequency of religious practices, such as visits to a church (e.g., an Orthodox or Protestant church) or other places of worship. Respondents rated the frequency of their participation in these practices on a scale from 1 = never to 5 = very often (several times a week). Variable D09 concerned self-rated religiosity. Respondents used a scale from 1 (non-believer) to 5 (strong believer) to rate their religiosity level. The two variables were summed, forming a scale from 2 to 10 that was intended to reflect the overall level of respondents' religiosity. Next, we converted it into a categorical variable, RELIGIOSITY, which comprised three categories (irreligious: 2–4; indifferent: 5–7; religious: 8–10). This structure enabled a more nuanced analysis of the relationship between religiosity level and emotional responses to artificial intelligence, covering both practices and personal religious identification.
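The construction of the RELIGIOSITY variable from D08 and D09 can be illustrated with a short sketch; the response values below are invented for demonstration:

```python
import pandas as pd

# Illustrative answers: D08 = practice frequency, D09 = self-rated belief,
# both on 1-5 scales (not the study's actual data).
df = pd.DataFrame({"D08": [1, 3, 5, 2, 4], "D09": [1, 4, 5, 2, 3]})

df["score"] = df["D08"] + df["D09"]          # combined scale, range 2-10
df["RELIGIOSITY"] = pd.cut(
    df["score"],
    bins=[1, 4, 7, 10],                      # cut points give 2-4, 5-7, 8-10
    labels=["irreligious", "indifferent", "religious"],
)
print(df)
```

A summed score of 4, for instance, falls into the irreligious category, while a score of 8 falls into the religious one, matching the thresholds stated above.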
In our study, we used a set of measures designed specifically to assess EMOTIONAL RESPONSES TO AI, based on participants’ self-reports. Labeled as Z09, this set consisted of a series of questions concerning different emotions that might arise in respondents in the context of AI: curiosity, fear, sadness, anger, trust, disgust, and joy. Respondents rated their emotions on a scale from 1 (strongly disagree) to 5 (strongly agree). A similar measure of emotional reactions to new technologies was successfully applied in a study of attitudes towards new technologies in 2023, conducted on a representative sample of Polish adults (Soler et al. 2023). In our study, this measure made it possible to establish how respondents with different levels of religiosity reacted to the increasing presence of AI in their everyday lives.

2.4. Analytical Method

In the “Students’ Attitudes towards AI” survey, we initially decided to apply a one-way analysis of variance (ANOVA) using the F-test as a standard method for comparing mean scores of emotions between the three groups—irreligious, religious, and indifferent. As the key instrument in ANOVA, the F-test is used to assess if the differences between the mean scores of the groups are statistically significant. Its application, however, requires meeting certain assumptions, including an equality of variance across the groups.
Therefore, we first performed a Levene’s test of homogeneity of variance. For the emotion variables of curiosity and fear, the Levene’s test indicated a homogeneity of variance, which is usually the basis for using the classic ANOVA F-test. However, for the emotion variables of sadness, anger, trust, and disgust, p-values were lower than 0.05, which signaled statistically significant differences in variance across the groups2.
The violation of the equality of variance assumption (in the case of the analyzed emotions) induced us to apply alternative methods of analysis. We therefore used Welch’s ANOVA test (W-test), which is more resistant to inequalities of variance and more effective in controlling type I errors. The W-test allowed for a comparison of mean scores of emotions across the groups, even when the standard assumptions for the F-test were not met. Thanks to this approach, we achieved a greater reliability of the results, as far as the differences in the distributions of data across the groups are concerned (Delacre et al. 2019). In other words, the application of Welch’s ANOVA test allowed us to adjust the analysis to the observed inequalities of variance. This made it possible to reliably compare mean emotion scores between respondents with different levels of religiosity, even when the standard ANOVA assumption was not met.

3. Results

3.1. Religiosity

According to the data, nearly one in three students in Poland (29.3%) reported never visiting places of worship. Additionally, 21.6% of the respondents very rarely attended religious ceremonies, forming a significant portion of non-practicing individuals. Together, these groups represented 50.9% of the sample, indicating a majority that engaged in religious practices infrequently or not at all. Those who attended ceremonies irregularly accounted for 21.5% of the sample. Meanwhile, a smaller proportion of students, 17.6%, reported going to places of worship moderately often. A minority of the respondents, constituting 9.9% of the sample, indicated that they were practicing religious believers who frequently attended ceremonies. These data illustrate diverse religious practice frequencies among students, with a notable inclination towards less frequent participation (see Table 2).
In the case of self-reported belief, about one in five students identified as non-believers, forming 19.9% of the sample. Those who were indifferent to religion accounted for 15.2% of the participants. A significant portion of the sample, 26.3%, were undecided but still attached to religious traditions. Approximately one in four students considered themselves believers, with this group representing 25.4% of the sample. Lastly, strong believers constituted 13.3% of the students, equating to roughly one in seven respondents (see Table 3).
The analysis of consolidated data on religiosity among students allowed for a division of the sample into three categories: irreligious, indifferent, and religious. The results showed that 36.3% of the respondents identified as irreligious, which suggested a lack of religious affiliation or religious practices. The largest group was indifferent individuals, accounting for 39.4% of the sample; indifference may refer to agnostic attitudes or a low commitment to religious issues. Religious respondents—namely, those who indicated a higher level of religiosity—constituted 24.3% of the analyzed population. Overall, these data may attest to a trend towards the deinstitutionalization of religion and to an individual and subjective approach to religious matters among younger generations (see Table 4).

3.2. Emotions

When analyzing emotions associated with the subject of AI among students, one can observe diverse reactions. Curiosity (M = 3.3, SD = 1.2) seemed to be fairly evenly distributed among the respondents, with the largest number (25.5%) choosing 3 on a 5-point scale, which means moderate curiosity. Likewise, reactions of fear (M = 2.78, SD = 1.3) were fairly evenly distributed, though a smaller number of respondents (11.0%) indicated very strong fear (a rating of 5).
More than half of the respondents (52.3%) reported that they felt no sadness (M = 2.3, SD = 1.3) in the context of AI, choosing the lowest level on the scale (a rating of 1). Anger (M = 2.3, SD = 1.3) was also an emotion rarely experienced towards AI. Roughly half of the students most often signaled the absence of anger (53.8%) towards AI technology. The majority of students felt no disgust (M = 2.3, SD = 1.3) towards AI, either; 54.4% of respondents chose option 1, indicating a lack of such feelings about AI.
Trust (M = 2.3, SD = 1.2) in AI among students was limited; 37.7% of the respondents chose the lowest rating (1).
Joy (M = 2.5, SD = 1.2) induced by AI was also experienced to various degrees, with 30.4% of respondents selecting option 1, which means a lack of joy, and a fairly high percentage (27.3%) of those indicating a moderate level of joy (a rating of 3). The variations in these emotional responses are visually represented in Figure 1.
In summary, this study indicates mixed emotions about AI, with a tendency towards moderate trust and curiosity and a clear absence of intense negative emotions such as sadness, anger, and disgust. Most of the respondents seemed to approach the topic of AI with reserve or moderate interest, without strong emotional reactions. The diverse emotional responses across different religiosity levels are further illustrated in Figure 2.
The application of Welch’s ANOVA test (W-test) allowed us to assess statistical differences in the levels of the following emotions: curiosity, fear, sadness, anger, trust, disgust, and joy. No statistical differences were observed in the level of curiosity between groups with different levels of religiosity (W-test: 2.420, p = 0.090), which suggests a similar degree of interest in AI issues across all these groups. However, in the cases of fear (W-test: 7.176, p < 0.001), sadness (W-test: 10.726, p < 0.001), and anger (W-test: 9.958, p < 0.001), we found statistically significant differences between the groups, which shows that these emotions were experienced to different degrees by individuals with different religiosity levels.
In the cases of trust (W-test: 1.219, p = 0.296) and joy (W-test: 0.829, p = 0.437), there were no statistically significant differences across the groups, which suggests that these emotions were experienced with similar intensity by all respondents, regardless of their religiosity level. By contrast, in the case of disgust (W-test: 6.971, p = 0.001), we did observe statistically significant differences across the groups.
Overall, the results indicate significant differences in emotional responses to AI across the groups with different religious attitudes, particularly in the context of negative emotions such as fear, sadness, and anger. Such differences in the experience of emotions may influence attitudes towards AI and its acceptance in different social groups. These results highlight the significance of including the religious aspect in research on responses to new technologies, such as AI.
After performing Welch’s ANOVA test, we applied the Games–Howell test as a post hoc method in order to more accurately compare the groups pairwise3. This test proved to be particularly useful in identifying which pairs of groups differed in terms of emotional responses to AI. The detailed results of these pairwise comparisons are presented in Table 5.
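The Games–Howell procedure can likewise be sketched in code. The implementation below is a hypothetical illustration, not the study’s analysis script: for each pair of groups it computes a Welch-type t statistic with Welch–Satterthwaite degrees of freedom and obtains the p-value from the studentized range distribution, which is the standard formulation of the test; the example data are randomly generated stand-ins for the three religiosity groups.

```python
import itertools
import numpy as np
from scipy import stats

def games_howell(groups):
    """Games-Howell post hoc pairwise comparisons for unequal variances.

    `groups` is a list of 1-D arrays; returns a list of tuples
    (i, j, mean difference, t, df, p), one per pair of groups."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    results = []
    for i, j in itertools.combinations(range(k), 2):
        se2 = v[i] / n[i] + v[j] / n[j]          # squared standard error of the difference
        t = abs(m[i] - m[j]) / np.sqrt(se2)
        # Welch-Satterthwaite degrees of freedom for this pair
        df = se2 ** 2 / ((v[i] / n[i]) ** 2 / (n[i] - 1)
                         + (v[j] / n[j]) ** 2 / (n[j] - 1))
        # p-value from the studentized range distribution with k groups
        p = stats.studentized_range.sf(t * np.sqrt(2), k, df)
        results.append((i, j, m[i] - m[j], t, df, p))
    return results

# Hypothetical 1-5 fear ratings for three religiosity groups (illustrative only)
rng = np.random.default_rng(7)
g_irr = rng.integers(1, 6, size=80).astype(float)
g_ind = rng.integers(1, 6, size=90).astype(float)
g_rel = np.clip(rng.integers(2, 7, size=60), 1, 5).astype(float)

for i, j, diff, t, df, p in games_howell([g_irr, g_ind, g_rel]):
    print(f"group {i} vs {j}: diff = {diff:+.2f}, p = {p:.4f}")
```

Unlike Tukey’s HSD, this test does not pool variances, so it remains valid when Levene’s test has indicated heteroscedasticity, as in the present data.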
The results of the “Students’ Attitudes towards AI” survey indicate significant emotional differences across religious groups in terms of responses to artificial intelligence. Religious individuals showed a higher level of fear compared to both irreligious and indifferent ones. In the case of sadness, we observed a lower level of this emotion in the religious group than in the remaining two groups. The religious group showed a higher level of anger compared to irreligious and indifferent respondents. Religious respondents showed a lower level of disgust compared to irreligious and indifferent ones.
Given the results of our study, people with strong religious beliefs appear to be more susceptible to feelings of fear regarding AI than their non-religious or religiously neutral counterparts. Religious people reported less sadness towards artificial intelligence but showed significantly more anger. This points to a diverse emotional landscape where fear and anger may arise from concerns about AI’s potential to challenge human dignity and the inherent values cherished in many religious traditions. Moreover, religious respondents were less likely to dislike AI than non-religious or neutrally religious respondents, suggesting that emotional responses to AI among religious people are complex and multi-faceted. These findings indicate that levels of religiosity are closely related to more intense and varied emotional reactions to this technology, potentially influencing the acceptance of and attitudes towards artificial intelligence. This finding highlights the need for a more nuanced consideration of emotional responses to this technology, considering individual differences in values and beliefs.

4. Discussion

The results of the “Students’ Attitudes towards AI” survey make it possible to directly address the hypotheses concerning the effect of religiosity level on emotional responses to AI.
The results support the statement that the level of religiosity plays an important role in the development of emotional reactions to AI. It was observed that religious individuals showed diverse emotions in response to AI, which supports the general hypothesis about the significant effect of religiosity on attitudes towards this technology. This would suggest the existence of certain correlations between religiosity level and the perception of and emotional response to AI. Consequently, these results highlight the need to include religious aspects in further research on attitudes towards AI.
The results of this study provide data supporting hypothesis H1 concerning negative emotions. We found that religious individuals exhibited considerably higher levels of fear and anger compared to irreligious and indifferent respondents. This supports the assumption that religiosity can intensify negative emotional reactions to new technologies, such as AI. This means that hypothesis H1, postulating that religiosity level can intensify feelings such as fear and anger towards AI, is supported by the results of our study.
The results of this study do not clearly support the hypothesis concerning positive emotions (H2). Although we found a lower level of sadness among religious respondents, which may indicate a lower intensity of negative emotions, the absence of significant differences across the groups in the level of joy does not provide sufficient evidence to unambiguously support H2. This may mean that the relationship between religiosity and positive emotions towards AI is more complex and requires further research.
The results of the “Students’ Attitudes towards AI” survey indicate the complex effect of the level of religiosity on the emotional response to artificial intelligence. The survey revealed significant emotional differences between religious groups, especially in negative emotions such as fear, sadness, and anger. These results suggest that religiosity is associated with more complex and intense emotions towards AI technology. Our findings support the results of a study concerning cultural and religious differences in the perception of AI (Yam et al. 2023) and a study (Abadi et al. 2023) indicating the importance of religious and ethical values in the perception of AI. Also, the studies by Uttam (2023), Shibuya and Sikka (2023), and Al-Kassimi (2023), which underscore the effect of religion and culture on AI-related emotional responses, are in line with the findings of our study.
Our study reveals significant differences in emotional responses to artificial intelligence across individuals with different levels of religiosity, noting that more religious individuals show higher levels of fear and anger towards AI. This corresponds to the results of the study conducted by Mantello and colleagues (Mantello et al. 2023), who found a negative correlation between religiosity and attitudes towards artificial emotional intelligence (EAI), particularly among the followers of Islam and Buddhism. These results confirm the previous findings that deeply rooted religious beliefs may influence the perception and acceptance of new technologies. Both the results reported by Mantello and our findings point to the significance of the religious and cultural context in shaping emotional responses to AI technology (Mantello et al. 2023). Our results show stronger AI-related fears and uncertainty among religious respondents, which may stem from anxiety about the consistency of technological development with their ethical and spiritual values.
Our findings indicate that religious individuals experience a higher level of fear when confronted with AI than irreligious or indifferent ones. This increase in fear may stem from anxiety about the unknown and potential threat that new technologies may pose to religious values and beliefs. This fear can be interpreted as a defensive reaction to AI, which is perceived as a threat to traditional values and moral principles deeply rooted in religious doctrines. In modern societies, there is a frequently observed tendency for people to treat religious beliefs as subjective opinions, as opposed to beliefs based on empirical evidence, which are perceived as objective knowledge (McGuire 2008). In this context, religious people may be afraid that advanced technologies, such as AI, may lead to the deterioration of interpersonal relationships (Glikson and Woolley 2020), a decrease in the significance of human empathy (Montemayor et al. 2022), or the blurring of boundaries between human and machine intelligence (Galaz et al. 2021).
Additionally, the increase in fear may be related to a sense that AI poses a threat to the unique role of the human being as a creature endowed with free will and moral choice. In many religions, humans are perceived as exceptional beings (Fel 2018), and AI can be seen as a challenge to this uniqueness. In particular, the development of autonomous AI systems, capable of making decisions independently from human supervision, can cause anxiety associated with the possibility of losing control over technology and its unpredictable outcomes (Ho et al. 2023; Park et al. 2023).
Moreover, the fear among religious people may be fueled by public debates, which often present artificial intelligence in the context of negative scenarios, such as the human workforce being supplanted by machines (Modliński et al. 2023), potential threats to privacy and security (Siriwardhana et al. 2021), and even apocalyptic scenarios associated with the excessive autonomy of AI (Federspiel et al. 2023). This kind of narrative may increase existing fears and lead to a greater distrust of new technologies.
The lower level of sadness among religious people found in our study may stem from the deeply rooted belief that religion not only provides a system of values but also networks of social and spiritual support (Isański et al. 2023), which can be helpful in coping with the uncertainty and anxiety caused by rapid technological changes. For many religious people, faith is a source of security and hope (Adamczyk and Jarek 2020), which may effectively prevent the feeling of sadness associated with the potential threat posed by new technologies to the traditional way of life.
Religion often provides its followers with an interpretative framework that helps them see the world and its changes as meaningful. In the face of the growing role of technology, churches and other religious institutions experience a loss of their monopolistic position in the market of worldviews and interpretations of life (Mariański 2023). This implies the need for them to adapt to technological realities without violating the principles of faith. Religious institutions may begin to see technological progress as harmonizing with their activities; this helps followers adjust to and find their place in the new technological reality. In some religions, there is a perceptible belief that technologies such as AI can support religious practices. This kind of approach can mitigate negative emotions, such as sadness, in the face of challenges associated with modern technologies.
The tendency to harmonize religion with technological progress finds expression in the practical activities of religious communities. Examples of such adaptation of religiosity and mediatization of religious practices include the transferring of religious activities to the Internet (Bingaman 2023). Research on the response of churches to the COVID-19 pandemic shows that they quickly adapt to new technologies, enabling the continuation of religious practices online. This attests to the flexibility and adaptability of religious communities in the face of technological changes (Cooper et al. 2021). Additionally, practices such as prayer, meditation, and participation in rituals can play a role as a collection of mechanisms for coping with stress and anxiety, contributing to greater emotional resistance and a weaker feeling of sadness (Fel et al. 2023; Jurek et al. 2023). These activities can give religious believers a sense of control and inner peace, even in the face of dynamic technological changes.
The observation that religious people show a higher level of anger in response to AI may be related to strong anxieties concerning the ethical and moral implications of the developing technologies. Anger may therefore stem from the feeling that AI is being developed without the necessary reflection on its potential impact on traditional values and social norms (Sætra and Fosch-Villaronga 2021).
This anger may be compounded by the fact that there are, as yet, no codes of ethics that harmonize technological innovations with religious and spiritual values, which provokes questions about how specific technological implementations should be grounded in ethics. What remains a problem is the concrete application of categories such as fairness, transparency, and human supervision in AI systems and the definition of what “human-centered artificial intelligence” means in light of religious and ethical values (Hagendorff 2020). A modification is recommended in the approach to ethics, which should cover various aspects—from the ethics of technology, through machine ethics, computer ethics, and information ethics, to data ethics. This kind of change in approach is crucial for ethics to be effective and appropriate to the rapidly developing technologies (Gebru et al. 2021). Moreover, the higher level of anger may also be an expression of fears associated with the possible application of AI in a manner that may violate the privacy, dignity, or freedom of individuals, which is contrary to many religious beliefs about the sacredness of life and personal autonomy. Examples of this kind of application may include surveillance, behavioral control, or even the manipulation of people by means of advanced technologies.
The lower level of disgust observed among religious believers in response to AI may reflect their ability to see these technologies in a wider and often spiritual context. As mentioned before, many religious people may perceive new technologies not as a threat but as tools that can support their religion or even enrich spiritual and social life.
The development of new technologies, especially the Internet, has enabled the emergence of new forms of expressing religiosity and religious practice and has given rise to new religious movements. It has also enabled the adaptation and reinterpretation of traditional religious practices in such a way that they can function and develop in the digital space. In other words, new Internet technologies have not only led to the emergence of new forms of religiosity but have also made it possible to transfer and adjust existing, traditional religious practices to a digital environment. This opens new possibilities for their development and expression (Campbell 2011).
In this perspective, technologies such as AI are not perceived as inherently good or evil; their evaluation depends on the way people use them. They can be used to support spiritual development, education, and communication or even to achieve a deeper understanding of religious texts and practices.
The lower level of disgust observed may also stem from the fact that many religions promote values such as mercy, compassion, and understanding. These values may encourage the faithful to approach new technologies with greater openness and a less critical attitude. Rather than feel disgust, religious people may be more inclined to look for ways in which technology can serve higher purposes such as helping the needy, improving the quality of life, or even spiritual development.
However, the weaker disgust response does not have to mean a lack of criticism or unawareness of potential threats. Religious people may still be aware of and concerned about the moral consequences of using AI technology, but, at the same time, they see it as a potential tool to achieve positive goals.
In summary, the lower level of disgust towards AI among religious people can be interpreted as resulting from their ability to integrate new technologies with their spiritual worldview and values. Balanced with ethical and moral responsibility, this attitude of openness can contribute to a more harmonious coexistence between technological progress and traditional value systems.

5. Conclusions

The “Students’ Attitudes towards AI” survey sheds light on the complex effect of the level of religiosity on the emotional response to artificial intelligence. As shown by the research results presented, religiosity is related to more intense emotions, both negative (fear, anger) and positive (lower sadness), in response to AI. These results highlight the fact that attitudes towards technology are determined not only by technical aspects but also by deeply rooted values and the level of religiosity.
More specifically, the observed increase in fear and anger among religious people can be interpreted as a defensive reaction to the perceived threat to traditional values and beliefs. On the other hand, the lower level of sadness observed may reflect the social and spiritual support that religion offers to its followers in the face of uncertainty and anxiety connected with rapid technological changes. Further development of AI is inevitable. Modern technologies based on AI tools acting as “priests” can play the role of virtual spiritual mentors, providing information and spiritual support (Jackson et al. 2023).
When developing new technologies, it is essential to ensure that they are designed in such a way that the user, regardless of his or her religiosity level, does not experience fear when using them. It is important that this technology should evoke a sense of security and trust rather than fear or anxiety. AI ethics should therefore take account of users’ emotional responses, offering solutions that minimize fear and uncertainty and, at the same time, increase a sense of control and understanding of technology. It should be assumed that AI can be used in the service of humankind in an ethical and responsible way, with due respect accorded to diverse cultural and religious beliefs. The inclusion of these aspects in the design of AI will contribute to the development of more acceptable solutions that will support and facilitate users’ everyday lives without evoking negative emotions. In this context, the need for further research into the effect of religiosity on the emotions evoked by AI becomes apparent.

Author Contributions

Conceptualization, J.K. and S.F.; methodology, J.K. and S.F.; software, J.K.; validation, J.K. and S.F.; formal analysis, J.K.; investigation, J.K. and S.F.; resources, J.K. and S.F.; data curation, J.K.; writing—original draft preparation, J.K.; writing—review and editing, S.F.; visualization, J.K.; supervision, S.F.; project administration, S.F.; funding acquisition, J.K. and S.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Polish Catholic Mission in England and Wales & M. B. Grabowski Fund (London), grant number 001/[2023]. The APC was funded by The John Paul II Catholic University of Lublin (KUL).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Humanities & Social Sciences Research Ethics Committee (HSSREC); protocol code 12/2023, date of approval: 2 November 2023.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1. RDS is a type of snowball sampling technique used for analyzing the characteristics of hidden or hard-to-reach populations.
2. Violations of ANOVA assumptions may result in type I and type II errors. A type I error consists of wrongly rejecting a true null hypothesis, whereas a type II error consists of failing to reject a false one. A violation of the normality assumption has little effect in the form of type I errors, but it may affect statistical power. A violation of the equality of variance assumption leads to type I errors, and in the case of unbalanced sample sizes, it may lead to type I or II errors, depending on the pairing of sample size and variance.
3. The Games–Howell test is used when ANOVA assumptions, such as the homogeneity of variance assumption, are not met, which was the case in our study.

References

  1. Abadi, Mansurni Adi, Rajih Arraki, Anna Podara, Tatevik Ghazaryan, and Ahmad Mufid. 2023. The Turning Point of Civilization: Sociological Perspective Toward Artificial Intelligence on Modern Humanity. Simulacra 6: 293–310. [Google Scholar] [CrossRef]
  2. Adamczyk, Grzegorz, and Marcin Jarek. 2020. Autonomy or Co-occurence? Religiousness and Moral Types of Young People from the Tarnowska Diocese. Zeszyty Naukowe Wyższej Szkoły Nauk Społecznych z siedzibą w Lublinie 9: 45–58. [Google Scholar]
  3. Ahmed, Habib, and Hung Manh La. 2021. Evaluating the Co-dependence and Co-existence between Religion and Robots: Past, Present and Insights on the Future. International Journal of Social Robotics 13: 219–35. [Google Scholar] [CrossRef]
  4. Al-Kassimi, Khaled. 2023. A Postmodern (Singularity) Future with a Post-Human Godless Algorithm: Trans-Humanism, Artificial Intelligence, and Dataism. Religions 14: 1049. [Google Scholar] [CrossRef]
  5. Bakiner, Onur. 2023. The Promises and Challenges of Addressing Artificial Intelligence with Human Rights. Big Data & Society 10: 20539517231205476. [Google Scholar] [CrossRef]
  6. Bingaman, Kirk A. 2023. Religion in the Digital Age: An Irreversible Process. Religions 14: 108. [Google Scholar] [CrossRef]
  7. Bożewicz, Marta. 2022. Nowy język badań sondażowych nad religijnością. Analizy i próby [New Language of Survey Research on Religiosity: Analyses and Attempts]. Warszawa: Wydawnictwo Naukowe Scholar. [Google Scholar]
  8. Campbell, Heidi. 2011. Internet and Religion. In The Handbook of Internet Studies. Edited by Mia Consalvo and Charles Ess. Oxford and Chichester: John Wiley & Sons, Ltd., pp. 232–50. [Google Scholar]
  9. Clarke, Holly, Hannah Spiro, and Ben Lyons. 2022. Attitudes to Data and AI Tracker Survey. London: Centre for Data Ethics and Innovation Public. [Google Scholar]
  10. Cooper, Anthony-Paul, Ilkka Jormanainen, Annastasia Shipepe, and Erkki Sutinen. 2021. Faith Communities Online: Christian Churches’ Reactions to the COVID-19 Outbreak. International Journal of Web Based Communities 17: 99–119. [Google Scholar] [CrossRef]
  11. Davie, Grace. 1993. Believing without Belonging. A Liverpool Case Study. Archives de Sciences Sociales des Religions 38: 79–89. Available online: http://www.jstor.org/stable/30114732 (accessed on 18 February 2024).
  12. Delacre, Marie, Christophe Leys, Youri L Mora, and Daniël Lakens. 2019. Taking Parametric Assumptions Seriously: Arguments for the Use of Welch’s F-test Instead of the Classical F-test in One-way ANOVA. International Review of Social Psychology 32: 1–13. [Google Scholar] [CrossRef]
  13. Dinamarca-Aravena, Katherine Andrea, and José M. Cabezas. 2023. The Interprofessional Practice of Health Professionals in Inclusive Schools: A Study Using Respondent-Driven Sampling. Journal of Interprofessional Education & Practice 32: 100650. [Google Scholar] [CrossRef]
  14. Federspiel, Frederik, Ruth Mitchell, Asha Asokan, Carlos Umana, and David McCoy. 2023. Threats by Artificial Intelligence to Human Health and Human Existence. BMJ Global Health 8: e010435. [Google Scholar] [CrossRef]
  15. Fel, Stanisław. 2018. Idee katolickiej nauki społecznej w myśli socjologicznej Jana Turowskiego [Catholic Social Teaching Ideas in the Sociological Thought of Jan Turowski]. Roczniki Nauk Społecznych 46: 5–15. [Google Scholar] [CrossRef]
  16. Fel, Stanisław, Jarosław Kozak, and Marek Wódka. 2020. Zostać czy wracać? Metaforycznie o reemigracji Polaków po Brexicie [Remain or Return? On the Post-Brexit Re-Emigration of Poles Metaphorically]. Polityka i Społeczeństwo 18: 80–96. [Google Scholar] [CrossRef]
  17. Fel, Stanisław, Krzysztof Jurek, and Katarzyna Lenart-Kłoś. 2023. The Relationship Between Socio-Demographic Factors and Depression: A Cross Sectional Study Among Civilian Participants in Hostilities in Ukraine. Scientific Reports 13: 21897. [Google Scholar] [CrossRef] [PubMed]
  18. Galaz, Victor, Miguel A. Centeno, Peter W. Callahan, Amar Causevic, Thayer Patterson, Irina Brass, Seth Baum, Darryl Farber, Joern Fischer, David Garcia, and et al. 2021. Artificial Intelligence, Systemic Risks, and Sustainability. Technology in Society 67: 101741. [Google Scholar] [CrossRef]
  19. Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. Datasheets for Datasets. Communications of the ACM 64: 86–92. [Google Scholar] [CrossRef]
  20. Glikson, Ella, and Anita Williams Woolley. 2020. Human Trust in Artificial Intelligence: Review of Empirical Research. Academy of Management Annals 14: 627–60. [Google Scholar] [CrossRef]
  21. Hagendorff, Thilo. 2020. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines 30: 99–120. [Google Scholar] [CrossRef]
  22. Heckathorn, Douglas D. 2014. Respondent-Driven Sampling: A New Approach to the Study of Hidden Populations. Social Problems 44: 174–99. [Google Scholar] [CrossRef]
  23. Heckathorn, Douglas D., and Christopher J. Cameron. 2017. Network Sampling: From Snowball and Multiplicity to Respondent-Driven Sampling. Annual Review of Sociology 43: 101–19. [Google Scholar] [CrossRef]
  24. Ho, Manh-Tung, Ngoc-Thang B. Le, Peter Mantello, Manh-Toan Ho, and Nader Ghotbi. 2023. Understanding the Acceptance of Emotional Artificial Intelligence in Japanese Healthcare System: A Cross-Sectional Survey of Clinic Visitors’ Attitude. Technology in Society 72: 102166. [Google Scholar] [CrossRef]
  25. Isański, Jakub, Marek Nowak, Jarosław Kozak, and John Eade. 2023. The New Parochialism? Polish Migrant Catholic Parishes on the Path of Change. Review of Religious Research 1: 1–21. [Google Scholar] [CrossRef]
  26. Jackson, Joshua Conrad, Kai Chi Yam, Pok Man Tang, Ting Liu, and Azim Shariff. 2023. Exposure to Robot Preachers Undermines Religious Commitment. Journal of Experimental Psychology: General 152: 3344–58. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Means (M) of emotional responses to Artificial Intelligence, where ratings range from 0 (no emotional response) to 4 (strong emotional response). Source: authors’ calculation.
Figure 2. Comparison of emotional responses to AI across different religiosity levels, illustrating the means (M) of responses. Source: authors’ calculation.
Table 1. Comparison of student numbers by field of study based on reports released by Statistics Poland and authors’ data.
Field of study | Statistics Poland N | Statistics Poland % | Authors' Data N | Authors' Data %
Humanities and the arts and individual interdisciplinary studies | 168,720 | 13.8 | 143 | 13.1
Design and technology, industry, construction, and information and communications technology | 231,912 | 19.0 | 212 | 19.5
Health and social welfare | 162,643 | 13.3 | 144 | 13.2
Agriculture | 20,978 | 1.7 | 23 | 2.1
Business, administration and law, social sciences, journalism, and information and education | 509,818 | 41.7 | 456 | 41.9
Natural sciences, mathematics, and statistics | 40,772 | 3.3 | 35 | 3.2
Other (including services) | 87,352 | 7.1 | 75 | 6.9
Total | 1,222,195 | 100.0 | 1088 | 100.0
Source: authors’ calculation.
Table 2. Frequency of religious practices.
Frequency of religious practices | N | %
(1) Never | 319 | 29.3
(2) Very rarely | 235 | 21.6
(3) Irregularly | 234 | 21.5
(4) Moderately often | 192 | 17.6
(5) Very often | 108 | 9.9
Total | 1088 | 100.0
Source: authors’ calculation.
Table 3. Self-reported religious belief.
Self-reported religious belief | N | %
(1) Non-believer | 216 | 19.9
(2) Indifferent | 165 | 15.2
(3) Undecided but attached to religious tradition | 286 | 26.3
(4) Believer | 276 | 25.4
(5) Strong believer | 145 | 13.3
Total | 1088 | 100.0
Source: authors’ calculation.
Table 4. Categories of religiosity.
Category of religiosity | N | %
(1) Irreligious | 395 | 36.3
(2) Indifferent | 429 | 39.4
(3) Religious | 264 | 24.3
Total | 1088 | 100.0
Source: authors’ calculation.
Table 5. Post hoc comparisons of emotional responses to AI across different levels of religiosity.
Emotion | Religiosity (I) | Religiosity (J) | MD | SE | p | 95% CI Lower | 95% CI Upper
Fear | Irreligious | Indifferent | −0.048 | 0.064 | 0.734 | −0.200 | 0.100
Fear | Irreligious | Religious | −0.346 | 0.075 | 0.000 | −0.520 | −0.170
Fear | Indifferent | Irreligious | 0.048 | 0.064 | 0.734 | −0.100 | 0.200
Fear | Indifferent | Religious | −0.298 | 0.077 | 0.000 | −0.480 | −0.120
Fear | Religious | Irreligious | 0.346 | 0.075 | 0.000 | 0.170 | 0.520
Fear | Religious | Indifferent | 0.298 | 0.077 | 0.000 | 0.120 | 0.480
Sadness | Irreligious | Indifferent | −0.029 | 0.062 | 0.888 | −0.170 | 0.120
Sadness | Irreligious | Religious | −0.437 | 0.076 | 0.000 | −0.610 | −0.260
Sadness | Indifferent | Irreligious | 0.029 | 0.062 | 0.888 | −0.120 | 0.170
Sadness | Indifferent | Religious | −0.408 | 0.078 | 0.000 | −0.590 | −0.230
Sadness | Religious | Irreligious | 0.437 | 0.076 | 0.000 | 0.260 | 0.610
Sadness | Religious | Indifferent | 0.408 | 0.078 | 0.000 | 0.230 | 0.590
Anger | Irreligious | Indifferent | −0.075 | 0.063 | 0.458 | −0.220 | 0.070
Anger | Irreligious | Religious | −0.405 | 0.076 | 0.000 | −0.580 | −0.230
Anger | Indifferent | Irreligious | 0.075 | 0.063 | 0.458 | −0.070 | 0.220
Anger | Indifferent | Religious | −0.330 | 0.079 | 0.000 | −0.520 | −0.150
Anger | Religious | Irreligious | 0.405 | 0.076 | 0.000 | 0.230 | 0.580
Anger | Religious | Indifferent | 0.330 | 0.079 | 0.000 | 0.150 | 0.520
Disgust | Irreligious | Indifferent | −0.051 | 0.063 | 0.701 | −0.200 | 0.100
Disgust | Irreligious | Religious | −0.355 | 0.078 | 0.000 | −0.540 | −0.170
Disgust | Indifferent | Irreligious | 0.051 | 0.063 | 0.701 | −0.100 | 0.200
Disgust | Indifferent | Religious | −0.304 | 0.080 | 0.000 | −0.490 | −0.120
Disgust | Religious | Irreligious | 0.355 | 0.078 | 0.000 | 0.170 | 0.540
Disgust | Religious | Indifferent | 0.304 | 0.080 | 0.000 | 0.120 | 0.490
Source: authors’ calculation.
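The post hoc contrasts in Table 5 follow a Welch-type analysis of variance, which does not assume equal group variances. As a minimal illustrative sketch only (this is not the authors' code, and the data below are hypothetical toy values, not the study's data), Welch's ANOVA statistic can be computed directly from its standard definition:

```python
import statistics

def welch_anova(groups):
    """Welch's heteroscedasticity-robust one-way ANOVA.

    groups: list of lists of numeric observations, one list per group.
    Returns (F, df1, df2), where df2 is the Welch-adjusted denominator df.
    """
    k = len(groups)
    n = [len(g) for g in groups]
    m = [statistics.fmean(g) for g in groups]
    v = [statistics.variance(g) for g in groups]       # sample variances
    w = [ni / vi for ni, vi in zip(n, v)]              # precision weights n_i / s_i^2
    W = sum(w)
    grand = sum(wi * mi for wi, mi in zip(w, m)) / W   # weighted grand mean
    between = sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, m)) / (k - 1)
    lam = sum((1 - wi / W) ** 2 / (ni - 1) for wi, ni in zip(w, n))
    F = between / (1 + 2 * (k - 2) * lam / (k * k - 1))
    df1 = k - 1
    df2 = (k * k - 1) / (3 * lam)
    return F, df1, df2

# Hypothetical toy data for three religiosity groups (not the study's data):
F, df1, df2 = welch_anova([[1, 2, 3], [4, 5, 6], [9, 10, 11]])
```

In practice, a significant Welch F would then be followed by variance-robust pairwise comparisons of the kind reported in Table 5.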

Kozak, J.; Fel, S. The Relationship between Religiosity Level and Emotional Responses to Artificial Intelligence in University Students. Religions 2024, 15, 331. https://doi.org/10.3390/rel15030331