Article

People’s Attitudes towards Technologies in Courts

Institute of Psychology, Vilnius University, 01513 Vilnius, Lithuania
Laws 2022, 11(5), 71; https://doi.org/10.3390/laws11050071
Submission received: 5 July 2022 / Revised: 4 September 2022 / Accepted: 5 September 2022 / Published: 14 September 2022

Abstract

Courts are high-stakes environments; thus, the impact of implementing legal technologies is not limited to the people directly using the technologies. However, the existing empirical data are insufficient to navigate and anticipate the acceptance of legal technologies in courts. This study aims to provide evidence for a technology acceptance model in order to understand people’s attitudes towards legal technologies in courts and to specify the potential differences in the attitudes of people with court experience vs. those without it, in the legal profession vs. other professions, male vs. female, and younger vs. older. A questionnaire was developed, and the results were analyzed using partial least squares structural equation modeling (PLS-SEM). Multigroup analyses have confirmed the usefulness of the technology acceptance model (TAM) across age, gender, profession (legal vs. other), and court experience (yes vs. no) groups. Therefore, as in other areas, technology acceptance in courts is primarily related to perceptions of usefulness. Trust emerged as an essential construct, which, in turn, was affected by the perceived risk and knowledge. In addition, the study’s findings prompt us to give more thought to who decides about technologies in courts, as the legal profession, court experience, age, and gender modify different aspects of legal technology acceptance.

1. Introduction

Courts face many issues, and technologies promise greater accessibility to justice and less complexity (Xu 2017; Alarie et al. 2018). For example, Brazil has a staggering backlog of 78 million lawsuits (Brehm et al. 2020). COVID-19, a great disruptor of life for the past two years, has also pushed courts to use technologies (Legg 2021), revealing practical issues and the need for simple tools (Fabri 2021). Ideally, decisions on whether to implement technologies in courts would rest mostly on hard jurimetrics data, such as a scrupulous analysis of judicial decisions made both with and without the technological tool, using a set of clearly defined criteria. However, such data might not be available for the initial decisions to build or test the tools.
Moreover, most of the matters that are discussed in the field, such as algorithmic justice or the regulation of artificial intelligence, are navigated by people. People differ regarding their role within the legal system and their potential influence on court decisions. For example, a regulatory body member might decide to implement specific court tools due to their progressive attitudes. At the same time, a litigant might file a case regarding an unfair process because they think that the algorithms violate their right to due process (Freeman 2016; Liu et al. 2019). Moreover, a judge might over-rely on a decision aid (Engel and Grgić-Hlača 2021). In addition, a citizen might join a protest against a seemingly unjust use of technology in courts (Vetzo 2022). Furthermore, judges are a part of society and have a role in shaping it (Zalnieriute and Bell 2019). Despite these factors, there is a shortage of empirical investigations into how different people feel about legal technologies in courts. Thus, this paper explores whether the technology acceptance model can be applied in order to investigate the attitudes towards legal technologies in courts among people with different characteristics, such as court experience, the legal profession, age, and gender.
Although the most sensational tools—such as a robot judge—might not be a possible or desirable option for the foreseeable future (Ulenaers 2020), the legal technologies for courts are already quite progressive. Some countries were using such technologies in courts even before the pandemic. For example, some provinces of Canada were already settling small claims by using algorithms. Estonia claimed to be creating an algorithm for small claims in order to help with the court backlog. The USA used programs that provide recommendations for risk assessments (R. Wang 2020). Lithuania already had an e-filing system (European Judicial Network 2019). In addition, China employed a database in order to warn a judge if a sentence significantly differed from the sentences of similar cases and had launched an e-court (R. Wang 2020; N. Wang 2020).
It is essential to explore people’s attitudes towards legal technologies before their implementation. Notably, both court clients and lawyers might be unsatisfied with the most state-of-the-art technology (Hongdao et al. 2019; Sandefur 2019). For example, in 2019, France made it illegal to engage in judicial analytics for predicting individual judicial behavior, i.e., “the identity data of judges and members of the judicial registry cannot be used to aid in evaluating, analyzing, comparing or predicting their professional practices” (McGill and Salyzyn 2021). In the Netherlands, civil rights organizations challenged, before the District Court of The Hague, the fraud detection system that had been used in Dutch courts since 2014; the court ruled that it violates Article 8 of the European Convention on Human Rights (ECHR) (the right to respect for private and family life) (Vetzo 2022). The infamous Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) faced substantial public, scholarly, and legal criticism regarding its racial bias (Malek 2022; Zhang and Han 2022). However, there are only a handful of studies regarding in-advance legal technology perception: one study was conducted in order to capture the ethical concerns for technologies in law (Muhlenbach and Sayn 2019), and there were a couple of qualitative studies on robot lawyer acceptance in private legal practice (Xu et al. 2022; Xu and Wang 2019).
Technology development, as well as use, depends on humans, and technologies might challenge humans. For example, legal technologies require new knowledge and skills (Dubois 2021; Suarez 2020), but only a few educational institutions offer legal technology courses or modules (Ryan 2021). While some lawyers are more likely to adapt to the growing competition of legal services that are aided by technologies, others are skeptical about the change (Abdul Jalil and Mohd Sheriff 2020; Brooks et al. 2020; Muhlenbach and Sayn 2019). In addition, human cognitive and behavioral peculiarities might distort the tool’s intended use (Engel and Grgić-Hlača 2021). Understanding the human perceptions and attitudes towards related technologies might be crucial to the successful development and implementation of legal technologies in courts.
A critical question regarding legal technologies in courts is what people expect from the technologies concerning fairness. From the perspective of perceived fairness, people need distributive and procedural fairness in order to be satisfied with courts (Blader and Tyler 2003). On one hand, algorithmic fairness discusses the substance of the algorithm (Hellman 2020; Wachter et al. 2017, 2021; Xiang 2021)—in a sense, distributive justice. There is already some exciting research on moral judgment, suggesting that people think that it is more likely that an algorithm would make a conviction than a human judge (English et al. 2021). On the other hand, procedural fairness provides one of the primary sources of trust and legitimacy in courts (Burke 2020; Burke and Leben 2020). The principal procedural fairness concerns, such as voice, neutrality, respect, and trust, might become even more critical with the automation of courts (Binns et al. 2018). The emerging studies of automated decisions find that human involvement in decision-making makes a qualitative difference in the sense of fairness (Newman et al. 2020). Thus, it is crucial to investigate the fairness expectations for the court processes involving technology.
This study contributes to the expanding field of legal technologies by exploring the technology acceptance in courts among various groups of people. Firstly, this research provides a theoretical input by examining the validity of the technology acceptance model (TAM) (Davis 1989, 2014; Venkatesh and Davis 2000; Davis et al. 1989) in the court’s context for different groups of people. Furthermore, the TAM is extended with relevant constructs, such as the perceived risk and trust in technologies. Adding fairness expectations for technologies in courts to the research model contributes to the theorizing about fairness perceptions in courts. Finally, exploring personal innovativeness, age, court experience, and profession (lawyers vs. others) helps us to understand the potential differences in different groups of people.

2. Theoretical Framework and Research Hypotheses

The technology acceptance concept is not new (Taherdoost 2018). One of the most parsimonious and favored models used to investigate the intention to use technology across domains (Wu et al. 2011) is the technology acceptance model (TAM) (Davis 1989; Venkatesh and Davis 2000; Davis et al. 1989; Venkatesh and Bala 2008). There are already about 20 meta-analyses of the TAM (Feng et al. 2021) and many studies exploring technology acceptance in various fields. The TAM has been used within the legal field (Xu et al. 2022; Xu and Wang 2019), however, only in qualitative studies focused on the specific question of a robot lawyer. In accordance with the aim of the study, the main TAM variables and the TAM extensions are specified below, and the corresponding hypotheses are stated.

2.1. Main TAM Variables

The original TAM posits that a few core variables affect the behavioral intention to use technology (BI)—such as the attitude toward using technologies (ATT), the perceived ease of use (PEOU), and the perceived usefulness (PU) (Davis 1989). Later versions (Venkatesh and Bala 2008), as well as most of the extensions of the TAM, omit the attitude toward using technology (Feng et al. 2021; King and He 2006; Liu and Tao 2022; Yoon 2018) and replace it with various external variables, i.e., variables that were not originally included in the TAM, such as self-efficacy and subjective norms (Feng et al. 2021). Following the TAM, the perceived usefulness and ease of use (construct definitions are presented in Table 1) are expected to positively influence the attitudes toward using technologies (Davis et al. 1989; Feng et al. 2021). Additionally, the perceived ease of use positively affects the perceived usefulness. Therefore, the following three main hypotheses have been tested:
Hypothesis 1.
The perceived usefulness significantly positively affects the behavioral intention to use legal technologies in courts.
Hypothesis 2.
The perceived ease of use significantly positively affects the behavioral intention to use legal technologies in courts.
Hypothesis 3.
The perceived ease of use significantly positively affects the perceived usefulness of legal technologies in courts.

2.2. TAM Extensions

The first version of the TAM already indicated the possible influences of other variables (Davis et al. 1989); the subsequent revisions, TAM2 (Venkatesh and Davis 2000) and TAM3 (Venkatesh and Bala 2008), systematically added more external variables and relationships within the TAM. Various additions to the TAM are quite frequent in the literature (Yoon 2018; Moon and Kim 2001; Ortega Egea and Román González 2011; Tsai et al. 2020; Tung et al. 2008). However, the external variables differ significantly across study fields (Feng et al. 2021). The current study has explored the impacts of trust, perceived risk, fairness expectations, knowledge about legal technologies, and personal innovativeness (the proposed model is presented in Figure 1).

2.2.1. Trust in Legal Technologies

Trust is essential, especially in the early stages of technology adoption (Ostrom et al. 2018). Furthermore, trust in technologies may be an even more critical issue in the legal domain (Silva et al. 2019). On the one hand, the legal sector, in general, might be characterized as being risk-averse, conservative, and slow to change (Brooks et al. 2020). On the other hand, the careful evaluation and consideration of the advantages and disadvantages of any innovation are crucial to legal services. Trust is often linked to the TAM in various fields of study (Choi and Ji 2015; Dhagarra et al. 2020; Dirsehan and Can 2020; Ejdys 2018; Gefen et al. 2003; Liu and Tao 2022; Ortega Egea and Román González 2011; Prakosa and Sumantika 2021; Wang et al. 2021; Wu et al. 2011; Zhang et al. 2020).
A meta-analysis was carried out in order to better understand the role of trust in information systems (Wu et al. 2011). The effect sizes for the relationships between trust and the perceived ease of use, the perceived usefulness, the attitude, and the behavioral intention to use technology were analyzed using data from 128 articles. The results revealed that trust is significantly related to all of the primary TAM constructs, as follows: trust–behavioral intention (r = 0.483), trust–perceived usefulness (r = 0.511), and trust–perceived ease of use (r = 0.472). The current research has explored the relationships between trust and the behavioral intention to use technologies in courts, the perceived ease of use, and the usefulness of legal technologies in courts with the following hypotheses:
Hypothesis 4.
Trust in legal technologies significantly positively affects the behavioral intention to use legal technologies in courts.
Hypothesis 5.
Trust in legal technologies significantly positively affects the perceived ease of use of legal technologies in courts.
Hypothesis 6.
Trust in legal technologies significantly positively affects the perceived usefulness of legal technologies in courts.

2.2.2. Knowledge about Legal Technologies

In addition, this study has attempted to capture the cognitive component of beliefs about legal technologies. In this study, the cognitive component was defined as the knowledge that an individual holds about the state of legal technologies (Taherdoost 2018). According to the diffusion of innovations theory (Rogers 1983; Rogers et al. 2009), technology adoption begins with the knowledge, or awareness, stage, when an individual is exposed to the innovation but lacks complete information. Therefore, although legal technologies are progressing rapidly in many corners of the world and often appear in the media, the general population should not be expected to be very knowledgeable about the topic, as reflected in the shortage of research. More importantly, trust in legal technologies may be based on knowledge about them. Therefore, the following two hypotheses were formulated:
Hypothesis 7.
Knowledge about existing technologies for courts has a significant positive influence on the trust in legal technologies.
Hypothesis 8.
Knowledge about existing technologies for courts has a significant positive influence on the perceived usefulness of legal technologies in courts.

2.2.3. Perceived Risk of Legal Technologies

New technologies, in general, are associated with many risks, such as confidentiality, uncertainty, and unpredictability (Ballell 2019; Osoba and Welser 2017). These concerns are critical in the legal domain (Salmerón-Manzano 2021; Guo 2021; Brooks et al. 2020), especially when considering the magnitude of damage in the case of a bias or a mistake.
Risk is an often-used addition to the TAM in various domains (Clothier et al. 2015; Tiwari and Tiwari 2020; Featherman and Pavlou 2003; Ortega Egea and Román González 2011). The perceived risk may predict the perceived usefulness (Featherman and Pavlou 2003) and the intention to use the technology directly (Galib et al. 2018; Ikhsan 2020). In the variant of TAM extension with both trust and risk, the risk directly influences the trust (Ortega Egea and Román González 2011). According to the hypothesizing of the TAM, the external variables would affect the behavioral intention to use technology through the perceived ease of use or the perceived usefulness of the technology. Therefore, the following two hypotheses were formulated:
Hypothesis 9.
The perceived risk of legal technologies has a significant negative effect on the perceived usefulness of legal technologies in courts.
Hypothesis 10.
The perceived risk of legal technologies has a significant negative effect on the trust in legal technologies.

2.2.4. Fairness Expectations for Legal Technologies in Courts

There is a heated debate about algorithmic fairness issues (Shin and Park 2019; Hellman 2020; Wieringa 2020; Wachter et al. 2021, 2017; Xiang 2021). Within the debate, distributive fairness receives most of the attention. Indeed, the fair distribution of resources, e.g., regarding discrimination issues (Xiang 2021), is critical. At the same time, procedural fairness is making its way into the discussion (Lee et al. 2019; Lee 2018; Woodruff et al. 2018). It is important to stress that procedural fairness—which is the fairness of the decision-making process—is not less important than the fairness of the outcomes, especially in the legal domain (Burke 2020). Moreover, fairness expectations themselves play an essential role in shaping fairness perceptions when the justice event occurs, e.g., when the litigant goes to trial (Petkevičiūtė-Barysienė and Valickas 2016). Arguably, the procedural fairness expectations are just as crucial as the distributive fairness expectations, and the latter are difficult for a layperson to adequately assess (Petkevičiūtė-Barysienė 2016; Hauenstein et al. 2001). Fairness is the key feature of any court service, and it is strongly related to trust in courts (Burke and Leben 2020). Thus, it could be expected that fairness expectations have an impact primarily on the perceived usefulness of legal technologies in courts, as follows:
Hypothesis 11.
Fairness expectations for legal technologies in courts have a significant positive effect on the perceived usefulness of legal technologies.

2.2.5. Personal Innovativeness in Information Technology

Individual characteristics, such as personal innovativeness in information technology (see definition in Table 1), might predict the core TAM variables, such as the behavioral intention to use technology (Schmidthuber et al. 2020; Ciftci et al. 2021). In the case of legal technologies, however, the degree to which an individual is relatively early in adopting new ideas might be more related to their experience of using various technologies in their daily life and, in turn, may have a direct influence on the perceived ease of use instead of the attitude toward using technology, as follows:
Hypothesis 12.
The personal innovativeness in information technology has a significant positive effect on the perceived ease of use of legal technologies.

2.3. TAM Moderators

This section explores moderators, i.e., variables that affect the strength of the relationship between the dependent and the independent variables, such as trust and the behavioral intention to use legal technologies in courts.

2.3.1. Profession

Previous literature has indicated the importance of contextual factors in predicting technology acceptance (Feng et al. 2021; King and He 2006); particular to the legal domain are the court experience and the legal profession factors. However, the attitudes of the representatives of the legal profession are rarely studied within these contexts. As lawyers are generally skeptical of any innovations (Bernal and Hagan 2020), the legal profession might play a role in legal technology acceptance. Thus, all hypotheses have been tested and compared between legal professionals and others.

2.3.2. Court Experience

Court experience is a known factor impacting the perceived trust, legitimacy, and fairness of the court and legal system (Alda et al. 2020; Benesh 2006; Burke and Leben 2020). Most people do not understand how courts operate, nor what to expect in a court hearing. Only a handful of citizens have been to court, and direct experience provides a much stronger basis for attitudes than the media and other sources of information. Hence, court experience might also be critical for the TAM relations. Thus, all hypotheses have been tested and compared between people with and without court experience.

2.3.3. Age

Although chronological age might be problematic in understanding the attitudes toward technologies (Yang and Shih 2020), age might influence the attitude toward implementing and using legal technologies in courts. For example, age is related to court perceptions (Petkevičiūtė-Barysienė and Valickas 2016; Woolard et al. 2008) and fairness expectations (Bell et al. 2006). Older people are expected to trust courts less, at least in Lithuania (Valickas et al. 2016). Therefore, it might be expected that older people may believe that it is more difficult to use legal technologies in courts, due to them being digital immigrants (i.e., people who were born or brought up before the widespread use of digital technology) and being more used to courts with little to no technological aid. Thus, all hypotheses have been tested and compared between younger and older age groups.

2.3.4. Gender

Another commonly researched demographic characteristic is gender. Gender has also been studied within the context of the TAM (Faqih and Jaradat 2015; Sindermann et al. 2020; Hanham et al. 2021). However, the results are split in the following way: some studies show no gender effects (Faqih and Jaradat 2015; Sindermann et al. 2020; Hanham et al. 2021; Dutta et al. 2018), while others detect some (Subawa et al. 2021; Dutta et al. 2018). Due to the lack of data in the legal technology acceptance field, gender differences have been explored throughout the paths of the proposed model in this study.

3. Methods

A quantitative research strategy, in particular, surveying, was chosen for this study. Surveying suffers from several shortcomings, such as self-report biases, discrepancies in how the participants understand the survey items, and others, which may undermine the quality and the potential implications of the obtained results. Nevertheless, there are several reasons why it is crucial to analyze people’s perceptions empirically and quantitatively. First, empirical data on different people’s attitudes towards technologies in courts are needed to better address people’s concerns. Moreover, quantitative analysis allows testing whether a technology acceptance model, which is helpful in understanding and predicting people’s behavioral intentions to use technologies in other high-stakes environments, such as healthcare (Ammenwerth 2019), is also applicable to a court context. Testing the TAM statistically and verifying its components across different groups of participants in the study has helped us to achieve an initial understanding of the limits of the model’s applicability. In addition, a structural analysis of the model has enabled the assessment of the relationships among the constructs, thus helping us to better understand the significant points regarding how people think about technologies in courts. While quantitative analysis is not the most effective at capturing the individuals’ personal views of courts and technologies, it can provide evidence of people’s perceptions and a sound basis for further, more in-depth research.

3.1. Study Sample

The proposed research model was tested using survey data collected online. The study had a single exclusion criterion: being under 18 years of age. Several characteristics guided recruitment: the legal profession, court experience, and age. It was preferable to survey both younger people (18–39 years of age) and relatively older people (40 years of age and older). The age threshold was relative, based on previous research showing differences in TAM variables between 30 and 40 years of age (Yang and Shih 2020). Next, it was preferable for a substantial part of the sample to have a legal background, i.e., to be in the legal profession. Along the same lines, participants with court experience were sought. The snowballing method was used to survey people from the relevant groups.

3.2. Questionnaire Development

The questionnaire included several sections—the first section evaluated participants’ knowledge of legal technologies related to courts. Participants were presented with six statements about various legal technologies. Therein, six types of relatively more complex technologies were listed, such as document submission and initial classification regarding the presence of a legal basis, a decision support system that suggested appropriate penalties for the case, and algorithms for solving small claim disputes. For example, participants of the study read the following statement: “In some countries, judges have access to a program that provides the judge with a detailed analysis of the case, evaluates arguments, and identifies possible outcomes of the case”. Then, the participants were asked to indicate their level of knowledge on a Likert scale from 1 to 5. The following five values reflect the meaningful differences in the knowledge levels of people with various backgrounds: “I know absolutely nothing about this”, “I have heard something about this”, “I have taken a closer look into these technologies”, “I am quite knowledgeable in these technologies”, and “I have tried this or a similar technology”.
The second section of the questionnaire measured participants’ technology acceptance constructs on a scale ranging from 1 (completely disagree) to 7 (completely agree). Seven-point scales are prevalent in TAM research and usually perform better statistically, e.g., in terms of distribution normality. These, and all other items, were revised or adopted from previous research, except for the knowledge about legal technologies construct and one item in the ATT scale (see Table 2).
The third and fourth sections concerned the additional constructs, such as perceived risk, fairness expectations, trust in technology, and personal innovativeness, and were also measured using a Likert scale ranging from 1 (completely disagree) to 7 (completely agree).
Lastly, participants were asked for their demographic information, such as age, sex, court experience, and the legal profession.
A modest pilot study (N = 37) was conducted to test the comprehensibility and reliability of the items. The pilot participants filled in the survey and commented on each questionnaire block’s comprehensibility, wording, and content. The pilot sample included lawyers, people with court experience, and people from all three age groups. The reliability of the questionnaire items was satisfactory, as the internal consistency coefficient, Cronbach’s alpha, was above 0.7 for all scales.
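The internal consistency check described above can be reproduced directly from the raw item responses. The following is an illustrative sketch, not the study’s analysis script; the respondent data are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item scale answered by 8 respondents on a 7-point Likert scale;
# items share a common factor, so alpha should come out high.
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(8, 1)).astype(float)
items = np.clip(base + rng.normal(0, 0.5, size=(8, 5)), 1, 7)
print(round(cronbach_alpha(items), 3))
```

Values above 0.7, as reported for the pilot scales, are conventionally taken as satisfactory.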

3.3. Data Analysis

The PLS-SEM approach was chosen for this study as it is more suitable for exploratory analyses and theoretical extensions than the CB-SEM (Hair et al. 2019; Sarstedt et al. 2021). In addition, the PLS-SEM is known for greater statistical power to detect truly significant relationships (Sarstedt et al. 2016). Following recommendations for PLS-SEM analysis (Sarstedt et al. 2021; Hair et al. 2019; Sarstedt et al. 2016), firstly, the measurement model was evaluated and then the structural model was evaluated. Lastly, multigroup analyses were performed to assess potential structural differences between the groups. SmartPLS v. 3.3.3 (Ringle et al. 2015) software was used.

4. Results

4.1. Sample Characteristics

The study was conducted in Lithuania. A total of 408 people participated in the study and Table 3 contains the characteristics of the study sample. The range of participants’ age was from 19 to 81 years, with an average of 37.44 years (SD = 13.28). There were 145 lawyers and law students in the sample (35.8%; 79 lawyers and 66 law students). More than half of the sample had court experience—245 people (60.2%), including 111 lawyers and law students who had been to court (45.7% out of the 245 who had court experience), and 132 others (54.3% out of the 245 who had court experience).

4.2. Measurement Model Assessment

The evaluation of the measurement model consisted of an assessment of the internal consistency, the convergent validity, and the discriminant validity (Hair 2014; Hair et al. 2019; Sarstedt et al. 2021). The results revealed a satisfactory internal consistency and convergent validity (see Table 4) as follows: the Cronbach’s α values were mainly in the range of 0.7–0.95, the composite reliability values were between 0.7 and 0.95, the average variance extracted was above the threshold (AVE > 0.5), and the factor loadings were all larger than 0.708 (Hair et al. 2019). The discriminant validity indicator, the Heterotrait-Monotrait ratio (HTMT), did not exceed 0.9 (Hair et al. 2019) (see Table 5).
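The composite reliability (CR) and average variance extracted (AVE) thresholds above can be computed from standardized factor loadings. A minimal sketch follows; the loadings shown are hypothetical (the study’s actual loadings are reported in Table 4).

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
    For standardized items, each error variance is 1 - loading^2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for a 4-item construct, all > 0.708.
loadings = [0.82, 0.79, 0.88, 0.75]
print(round(composite_reliability(loadings), 3))
print(round(average_variance_extracted(loadings), 3))
```

A loading of 0.708 is the conventional cutoff because 0.708² ≈ 0.5, i.e., the construct then explains at least half of each item’s variance.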

4.3. Structural Model Assessment

The structural model was assessed adhering to the following steps (Sarstedt et al. 2021): checking for collinearity issues, evaluating the model’s overall quality, and testing the hypotheses. Notably, the analysis revealed that there were no critical levels (see Table 6) of collinearity among any of the predictor variables (VIF < 3) (Hair 2014; Sarstedt et al. 2021; Hair et al. 2019).
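The collinearity check above relies on the variance inflation factor, VIF_j = 1 / (1 − R²_j), where R²_j comes from regressing predictor j on the remaining predictors. A self-contained sketch with simulated (hypothetical) predictor data:

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of the predictor matrix X."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out[j] = 1 / (1 - r2)
    return out

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))   # nearly independent predictors -> VIFs close to 1
print(vif(X))
```

Values below 3, as in Table 6, indicate that collinearity is not a concern (Hair et al. 2019).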
The overall quality of the research model was reflected in the coefficient of determination (R2) (Hair 2014; Sarstedt et al. 2021; Hair et al. 2019). The research model was able to explain the changes in the behavioral intention (R2 = 0.648), the perceived usefulness (R2 = 0.536), and the trust in legal technologies moderately well (R2 = 0.412) (see Table 7). The model was weaker in explaining the perceived ease of use of legal technologies in courts (R2 = 0.296).
Another critical measure of the quality of the research model was the Stone–Geisser Q2 value (Hair et al. 2019; Sarstedt et al. 2021). Blindfolding was performed in SmartPLS (Ringle et al. 2015). The research model had considerable predictive relevance for the behavioral intention (Q2 = 0.573), moderate relevance for the trust in legal technologies (Q2 = 0.324), and moderate relevance for the perceived usefulness of legal technologies in courts (Q2 = 0.421). Again, it was weaker for the perceived ease of use of legal technologies in courts (Q2 = 0.211).
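The blindfolding procedure itself is involved, but the Q2 statistic it yields has a simple form, Q2 = 1 − SSE/SSO. A sketch assuming held-out predictions are already available; the observed values and predictions below are hypothetical.

```python
import numpy as np

def q_squared(y_true, y_pred):
    """Stone-Geisser Q^2 = 1 - SSE / SSO, where SSE is the sum of squared
    prediction errors and SSO the sum of squares about the observed mean.
    Predictions must come from held-out (blindfolded) data."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    sse = np.sum((y_true - y_pred) ** 2)
    sso = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - sse / sso

y = np.array([5.0, 3.0, 6.0, 4.0, 2.0, 7.0])
y_hat = np.array([4.8, 3.4, 5.5, 4.2, 2.6, 6.4])   # hypothetical held-out predictions
print(round(q_squared(y, y_hat), 3))
```

A Q2 above zero means the model predicts the construct better than simply using its mean, which is the sense in which the values reported above indicate predictive relevance.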
The last step in evaluating the quality of the research model was to evaluate its predictive performance. Thus, following (Sarstedt et al. 2021), a 10-fold cross-validation was conducted with the PLSpredict function (Ringle et al. 2015). The root mean squared error (RMSE) was compared between the partial least squares (PLS) and the linear regression model (LM) for each indicator of those variables (Sarstedt et al. 2021). The results revealed that the research model had a medium predictive power. The PLS model yielded lower prediction errors in terms of RMSE than LM for most of the indicators (see Appendix A Table A1).
This study tested the relationships between the dependent and the independent variables using the path coefficient (β) and t-statistics (Hair 2014). Bootstrapping with 5000 samples was performed in order to obtain the significance of the path coefficients. Table 7 and Figure 2 represent the hypothesis testing results.
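The bootstrapping logic behind the significance tests can be illustrated outside SmartPLS. The sketch below resamples cases with replacement and refits a single standardized path; it uses a plain OLS slope as a stand-in for the PLS-SEM path estimate, and the trust/intention data are simulated, not the study’s data.

```python
import numpy as np

def bootstrap_path(x, y, n_boot=5000, seed=42):
    """Percentile bootstrap for one standardized path coefficient:
    resample cases with replacement, refit the slope, collect the betas."""
    rng = np.random.default_rng(seed)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    betas = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                         # one bootstrap sample
        xb, yb = x[idx], y[idx]
        betas[b] = np.cov(xb, yb)[0, 1] / xb.var(ddof=1)    # OLS slope
    lo, hi = np.percentile(betas, [2.5, 97.5])
    return betas.mean(), (lo, hi)

rng = np.random.default_rng(7)
trust = rng.normal(size=300)                                 # hypothetical data
intention = 0.4 * trust + rng.normal(scale=0.9, size=300)
beta, (lo, hi) = bootstrap_path(trust, intention)
print(f"beta = {beta:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

A path is deemed significant when its bootstrap confidence interval excludes zero, which is equivalent in spirit to the t-statistics reported in Table 7.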
The primary hypotheses regarding the TAM relationships were supported (H1: β = 0.437, p < 0.001, H2: β = 0.148, p < 0.001, H3: β = 0.127, p < 0.001). Next, all three of the hypotheses regarding the trust in legal technologies were supported, suggesting that trust significantly positively affects all of the main TAM constructs (H4: β = 0.345, p < 0.001, H5: β = 0.407, p < 0.001, H6: β = 0.470, p < 0.001). Although the knowledge about legal technologies affects the trust in legal technologies (H7: β = 0.179, p < 0.001), this is not true for their perceived usefulness (H8: β = 0.024, p = 0.441). Moreover, the perceived risk of legal technologies in courts significantly negatively affects their perceived usefulness (H9: β = −0.180, p < 0.001) as well as the trust in legal technologies (H10: β = −0.587, p < 0.001). In addition, as expected, the fairness expectation for legal technologies significantly positively affects the perceived usefulness of legal technologies in courts (H11: β = 0.204, p < 0.001). Lastly, the personal innovativeness in information technology has a significant positive effect on the perceived ease of use of legal technologies in courts (H12: β = 0.240, p < 0.001).

4.4. Multigroup Analyses

The multigroup analyses were performed in order to explore the role of age, gender, the legal profession, and court experience. The missing cases were excluded from the analyses, and each subsample fulfilled the sample size requirements (Hair 2014). The path coefficients (see Appendix B Table A2, Table A3, Table A4 and Table A5) and the coefficients of determination (R2) were compared between the groups (see Appendix B Table A6). In addition, cross-tabulation and a comparison of means between the groups were performed (see Appendix B Table A7 for the cross-tabulation and Table A8 for the mean comparisons).
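One simple way to illustrate how such between-group path comparisons can be tested is a permutation test on the difference in a standardized path coefficient (PLS-SEM software offers bootstrap- and permutation-based multigroup analysis; the sketch below uses synthetic data and reduces a hypothetical single-predictor path to a correlation):

```python
import numpy as np

rng = np.random.default_rng(0)

def path(x, y):
    # standardized single-predictor path coefficient = correlation
    return float(np.corrcoef(x, y)[0, 1])

# synthetic groups (e.g., "legal" vs. "other") sharing the same true path
n_a = n_b = 150
x_a, x_b = rng.normal(size=n_a), rng.normal(size=n_b)
y_a = 0.4 * x_a + rng.normal(scale=0.9, size=n_a)
y_b = 0.4 * x_b + rng.normal(scale=0.9, size=n_b)

observed = path(x_a, y_a) - path(x_b, y_b)        # group difference in the path
x_all = np.concatenate([x_a, x_b])
y_all = np.concatenate([y_a, y_b])

n_perm = 2000
diffs = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(n_a + n_b)             # reshuffle the group labels
    a, b = perm[:n_a], perm[n_a:]
    diffs[i] = path(x_all[a], y_all[a]) - path(x_all[b], y_all[b])

p_value = float(np.mean(np.abs(diffs) >= abs(observed)))
print(f"observed difference = {observed:.3f}, p = {p_value:.3f}")
```

A small p-value would indicate that the path differs between the groups beyond what label shuffling alone produces.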

4.4.1. Legal Profession

The multigroup analysis did not reveal significant differences among the research model paths (see Appendix B Table A2). Moreover, the proposed model was able to explain the behavioral intention to use and the perceived usefulness of technologies in courts similarly for both the legal (R2 = 0.665) and the other professions (R2 = 0.647) (see Appendix B Table A6). However, it was less suited to explain the perceived ease of use for the legal professions (R2 = 0.198) than the other professions (R2 = 0.360). Interestingly, the model explains the trust in legal technologies better for the legal professions (R2 = 0.528) than the other professions (R2 = 0.374).

4.4.2. Court Experience

The trust in legal technologies (TRST) had a significantly stronger effect on the behavioral intention to use legal technologies (BI) in courts for people with no court experience (β = 0.487, p < 0.001) than for people with court experience (β = 0.254, p < 0.001) (see Appendix B Table A3). Accordingly, the proposed model explained the perceived usefulness of legal technologies significantly better for people without court experience (R2 = 0.643) than for those with court experience (R2 = 0.474) (see Appendix B Table A6). More interestingly, in the group without court experience, the perceived ease of use (PEOU) did not affect the behavioral intention to use technologies in courts (BI) (β = −0.031, p = 0.607), whereas in the group with court experience, the effect was present (β = 0.263, p < 0.001) (see Appendix B Table A3).

4.4.3. Age

The multigroup analysis by age yielded further notable results. An effect that was non-significant in all of the analyzed models emerged in the group of 18–39-year-olds: the knowledge about legal technologies had a slight but significant positive effect on the perceived usefulness of legal technologies in courts (β = 0.087, p = 0.042), while it remained non-significant for the older people (>40 years old) (β = −0.053, p = 0.278) (see Appendix B Table A4). Moreover, the model better explained the beliefs about the perceived ease of use (R2 = 0.451) and the usefulness of legal technologies (R2 = 0.666) for the older people compared to the younger people (PEOU: R2 = 0.225, PU: R2 = 0.485) (see Appendix B Table A6).

4.4.4. Gender

Lastly, the gender effects were tested. The path analysis revealed no significant differences between females and males (see Appendix B Table A5). However, the proposed model explained the intention to use legal technologies better for males (R2 = 0.743) than for females (R2 = 0.580) (see Appendix B Table A6).

4.5. Interaction between Profession and Court Experience

The analysis of the mean differences in Appendix B Table A7 prompted testing the interaction effects. A two-way analysis of variance (two-way ANOVA) was performed in order to test the interaction effects of the profession and court experience (see Table 8). The interaction was significant for the trust in legal technologies and the behavioral intention to use legal technologies in courts. Although the profession and court experience both affected the knowledge about legal technologies, there was no interaction effect.
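For illustration, a balanced two-way ANOVA with an interaction term can be computed directly from the sums of squares. The sketch below uses hypothetical cell means loosely patterned on the reported pattern (lawyers without court experience lowest), not the study’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical 2x2 design: profession (0 = other, 1 = legal) x court experience
# (0 = no, 1 = yes), 50 cases per cell; the cell means build in an interaction
cell_means = {(1, 0): 3.8, (1, 1): 4.6, (0, 0): 4.5, (0, 1): 4.6}
prof, exp_, y = [], [], []
for (p_lvl, e_lvl), mu in cell_means.items():
    prof += [p_lvl] * 50
    exp_ += [e_lvl] * 50
    y += list(mu + rng.normal(scale=1.0, size=50))
prof, exp_, y = np.array(prof), np.array(exp_), np.array(y)

grand = y.mean()
ss_a = sum(100 * (y[prof == i].mean() - grand) ** 2 for i in (0, 1))   # profession
ss_b = sum(100 * (y[exp_ == j].mean() - grand) ** 2 for j in (0, 1))   # experience
ss_cells = sum(50 * (y[(prof == i) & (exp_ == j)].mean() - grand) ** 2
               for i in (0, 1) for j in (0, 1))
ss_ab = ss_cells - ss_a - ss_b                                         # interaction
ss_err = sum(((y[(prof == i) & (exp_ == j)]
               - y[(prof == i) & (exp_ == j)].mean()) ** 2).sum()
             for i in (0, 1) for j in (0, 1))
f_ab = (ss_ab / 1) / (ss_err / (200 - 4))        # df_interaction = 1, df_error = 196
p_ab = float(stats.f.sf(f_ab, 1, 196))
print(f"interaction: F(1, 196) = {f_ab:.2f}, p = {p_ab:.4f}")
```

A significant interaction F means that the effect of court experience on the outcome depends on the profession, which is the pattern described for trust and behavioral intention below.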
The results in Table 9 show that people with court experience, both lawyers and others, trusted legal technologies to a similar degree. However, for people without court experience, the profession became important: the lawyers without court experience trusted legal technologies less than the other people did. The results in Table 10 show a similar pattern—the combination of a legal profession and no court experience resulted in the lowest intention to use legal technologies in courts, whereas the lawyers with court experience were the most inclined to use them. Notably, the 34 participants with a legal profession and without court experience were mostly law students (31 students).

5. Discussion

The research that is presented in this paper offers a small window into people’s perceptions of legal technologies in courts. While the study cannot encompass the depth and variety of concerns about technology in courts, and in society as a whole, the results revolve around the issue of how much people would be willing to support technologies in courts. The technology acceptance approach allows us to answer some questions about how various people think of technologies in courts. Importantly, the research does not imply that courts should implement more technologies; instead, the attitudes of various people are examined.
Technology acceptance models predate 2000 (Davis 1989) and have been applied to many different fields (Gunasinghe et al. 2020; Li et al. 2018; Rahi and Abd. Ghani 2018; Chang 2012; Wang et al. 2021; Kayali and Alaaraj 2020). Although some legal technologies have been in use for more than 20 years, the public perceptions of those technologies have not been researched. Further complications come from the legal technologies themselves: some are designed only for lawyers, while others automate document submission and other processes and thus involve clients rather than lawyers. Moreover, there is no definite answer to the question of the extent of the public influence on courts and artificial intelligence, given that the public may not have the relevant experience or expertise (Deeks 2019). Technology acceptance seems to be the most appropriate concept, as it interests scientists, developers, and stakeholders—the factors of technology acceptance may guide the design of a technology and predict the response that it receives (Taherdoost 2019).
At the beginning of the current research, only one paper presented an empirical study of legal technology acceptance (Xu and Wang 2019); now, there are two (Xu et al. 2022). Taken together, the results of both the aforementioned studies (Xu and Wang 2019; Xu et al. 2022) and the current study support the idea that the widely used technology acceptance model (Hasani et al. 2017; Ammenwerth 2019; Venkatesh and Bala 2008) is also applicable to the legal technology field. The hypotheses on the primary TAM constructs—the behavioral intention to use technologies, the perceived usefulness, and the ease of use—were confirmed in the overall sample. Notably, in this study, the behavioral intention to use technologies was operationalized as the strength of one’s intention to support legal technologies in courts. Thus, the results imply that in order to be willing to support legal technologies in courts, people have to perceive the technologies as both useful and easy to use.

5.1. Trust in Technologies

Every field has its peculiarities and contextual factors that are usually taken into account in the TAM. In this study, the TAM was extended with several legal technology-relevant constructs. Firstly, the trust in legal technologies was chosen because trust in technologies is a crucial and extensively researched construct in the context of technology acceptance (Wu et al. 2011). Notably, the trust in legal technologies scale does not specify the type or other characteristics of the legal technologies; however, the participants of the study were introduced to several kinds of more complex legal technologies, such as a decision support tool for judicial decision making, or a document automation tool that also classified documents according to whether or not they had a legal basis. It was found that the trust in legal technologies affects how much people would support legal technologies in courts, as well as the perceived usefulness and the perceived ease of use of legal technologies. Interestingly, trust is more important for the perceived ease of use of legal technologies in courts—possibly because courts might be seen as very complicated systems. In turn, trust is affected by the knowledge about legal technologies and the perceived risk.

5.2. Perceived Risk

People will likely have little in-depth knowledge about legal technologies. Therefore, the general risk that was associated with legal technology use in courts was measured in this study. The perceived risk negatively affected the trust and the perceived usefulness of legal technologies. These results reflect the findings of technology acceptance research in other fields (Wang et al. 2021; Zhang et al. 2019). Given the effect size of risk on trust, the perceived risk of legal technologies seems to be quite crucial for people. Notably, other studies measure the direct influence of the perceived risk on the behavioral intention to use technologies (Tiwari and Tiwari 2020; Jeon et al. 2020). Clarifying the role of the perceived risk in people’s intentions to support technology use in courts could be valuable for a better understanding and navigation of people’s attitudes towards legal technologies. In addition, this study did not focus on the particular facets of risk, e.g., financial or privacy risks (only sensitive information was mentioned) (Featherman and Pavlou 2003). These facets could be explored in a further study in which participants are given more detailed information about the legal technologies in courts.

5.3. Knowledge about Legal Technologies

Another contextual variable that was added to the model is the knowledge about legal technologies. The cognitive component of knowledge is missing in some technology acceptance studies (Taherdoost 2018). The TAM does not explicitly anticipate the role of knowledge about a particular technology. Therefore, a construct for entry-level knowledge was borrowed from the diffusion of innovation theory (Rogers 1983; Rogers et al. 2009; Tariq et al. 2017; Dearing and Cox 2018). Arguably, the knowledge factor is vital in the legal field, especially in the court context, as people usually lack the legal and practical knowledge about how courts work and how to measure their performance. Thus, it is expected that people would not be very knowledgeable in the legal technology field either. Indeed, this was the case in this study.
Moreover, knowledge about legal technologies affects the trust in those technologies. In particular, knowledge about the existing legal technologies boosts the trust in them. However, knowledge did not affect the perceived usefulness of legal technologies in courts in this study, except for a subsample of 18–39-year-old people (the effect was relatively weak). The lack of a strong relationship between the knowledge and the perceived usefulness could suggest that the perceived usefulness of legal technologies in courts may have come from sources other than the knowledge about what technologies exist. For example, understanding how courts work in general, the trust in courts, and the fairness expectations for court processes with legal technologies might have a more prominent effect on the perceived usefulness of legal technologies than just the knowledge about existing legal technologies.

5.4. Fairness Expectations

The fairness expectations for legal technologies are a new potential component of attitudes toward technologies in courts. Given that the main focus of court work is justice, the fairness expectations directly relate to the perceived usefulness of the legal technologies that are used in courts. In this study, the fairness expectations were based on the procedural fairness paradigm (de Cremer and Tyler 2007; Legg 2021; Tyler and Lind 1992; Blader and Tyler 2003). Therefore, the participants were asked to evaluate how they would see the ethicality, the voice, and other features of court processes in which legal technologies would be incorporated. People have high fairness expectations for court processes involving legal technologies. The fairness expectations directly influence the perceived usefulness of legal technologies in courts. That is, the more fairness that is expected of the processes involving legal technologies, the more useful they seem. The high fairness expectations do not mean that people are not concerned about the fairness of the automated processes. The results might indicate that people have high hopes for the automated processes and that they anticipate that courts would solve any arising issues. The relationship between the fairness expectations for court processes with incorporated legal technologies and the perceived usefulness of those technologies emphasizes the need to explore the components of the perceived usefulness regarding legal technologies.

5.5. Personal Innovativeness

Given the relatively conservative nature of the legal field, personal innovativeness in information technology was added to the model of this study. Interestingly, it was found that the people with legal professions were not less innovative than the people with other professions. The personal innovativeness in information technology positively affects the perceived ease of use of legal technologies in courts. Thus, the more innovative a person is, the easier it seems for them to use legal technologies in courts. These results mirror those from other fields (Şahin et al. 2022; Yang et al. 2022).

5.6. The Legal Profession, Court Experience, Age, and Gender

The profession, the court experience, the age, and the gender moderate some of the relationships of the analyzed model. Most importantly, the legal profession and court experience affect the trust and the behavioral intention to support legal technology use in courts. Surprisingly, lawyers without court experience trust legal technologies less than others and are the least supportive of legal technology use in courts. Conversely, lawyers with court experience are the most supportive of legal technologies in courts. One possible explanation for these discrepancies in the perceptions of legal technologies is that the lawyers are more knowledgeable about legal technologies than the other people in this study1. However, it would be interesting to explore why the combination of legal knowledge and no court experience affects the legal technology acceptance. In this study, most of the lawyers without court experience are law students; however, this does not mean that they should be the least trusting of legal technologies in courts simply because they are young. It is plausible that academics influenced these students at university. Not many universities teach about legal technologies yet (Janoski-Haehlen 2019). Possibly, academics, and sometimes even researchers of legal technologies, might be more or less accepting of legal technologies.
Additionally, the people with legal professions had lower fairness expectations than the other people. The fairness expectations might be lower due to the general skepticism towards legal technologies. An alternative explanation is that lawyers might feel that court processes are less fair, and, together with some skepticism, they might not think that legal technologies would add much fairness to them. Lawyers are rarely included in studies of court fairness perceptions; therefore, studying both the public and lawyers’ perceptions could enrich our understanding of fairness in courts.
Moreover, for people without court experience (both lawyers and others), the perceived ease of use does not affect the behavioral intention to support legal technologies in courts. The lack of relationship between the perceived ease of use and the behavioral intention suggests that it would be essential to investigate the perceived ease of use of legal technologies further. It could be hypothesized that the people without court experience do not care about the ease of use of legal technologies in courts. Undoubtedly, the ease of use predicts the behavioral intention to support legal technologies for people with court experience. Therefore, it is crucial to take into account the court experience factor.
Furthermore, both age and gender might be other factors to consider in legal technology acceptance. For the younger people, knowledge about legal technologies slightly affected the perceived usefulness of the legal technologies, but there was no such effect for the older people. In essence, these results suggest that knowledge about legal technologies might change the opinions of younger people more than the opinions of older people. More research should be conducted in order to address the age differences in the perceptions of legal technologies, as human perceptions of AI might vary with age (Lee and Rich 2021).
Interestingly, the analyzed model is better suited to explaining the male intentions than the female intentions to support legal technologies in courts. Although more participants were female, gender was distributed relatively evenly through all of the subsamples (profession, court experience, and age). Therefore, gender should not be related to the other sample characteristics. In this study, the females perceived the technologies as more useful than the males did. However, there were no other differences in the perceptions between the gender groups.

5.7. Limitations and Future Research

Although this study is important in building the knowledge base on legal technology acceptance, it has some limitations. One of the limitations relates to the sample. In particular, this study used a mixed subsample of lawyers that included law students. It could be argued that law students might quickly become technology users and even decision makers. However, additional investigations with more focused samples would greatly benefit future work in this area.
The results of the current study highlight the need for more in-depth research. Some of the variation in the attitudes toward legal technologies in courts could be found across different cultures. For example, the support for legal technologies might depend on the trust in courts, not only on the trust in the legal technologies themselves. Lithuania is a post-Soviet country with a tradition of general distrust of the legal system. It would be helpful to compare Eastern European countries with others. However, distrust of courts might also arise from well-known cases and other events, such as the controversy over a fraud detection system in the Netherlands (Vetzo 2022).
However, would some distrust in courts ensure the support for automation? Research into other settings suggests that people might want to exchange the consistency that is provided by automation for the ability to influence decisions due to human factors (Langer et al. 2020; Schlicker et al. 2021). These fascinating assumptions could be tested in experimental studies within court contexts.
Personal innovativeness might also vary according to the cultural values (Klein et al. 2021). In particular, Lithuania has many online public services in health care, migration, taxes, and other areas. Thus, the state of the technological advancement in the country could also play a role. For example, being used to technical solutions in daily affairs might lead to more positive attitudes towards legal technologies.
Notably, technological progress has not been even among different countries; different levels of automation that have already been reached in courts may impact the behavioral intention to use legal technologies.
The presented model might strongly benefit from data on actual judicial decision-making, both before and after the implementation of a certain tool. The combination of jurimetrics, along with people’s perceptions of the different characteristics of the tool, might provide the best insights into the actual usefulness of the tool. Additionally, people’s opinions and perceptions might change given the hard data. Therefore, more studies are needed in order to address these issues.
Finally, the technology acceptance model and the quantitative strategy of this study cannot fully address the underlying influences on people’s attitudes and their awareness of them. This study risks construing cognitive processes that were never there (Sanz and Lado 2008), e.g., some people might never think about technologies in courts. At the same time, following the innovation diffusion theory, technology awareness is the first stage of the innovation diffusion process, and the study participants were, first and foremost, made aware of several types of technologies that are used in courts. Moreover, people form expectations toward courts, even if they are unaware of the legal processes and if their expectations do not match reality. Given the complexity of the awareness concept, this research cannot be used in order to analyze the need for technologies in courts critically.

6. Conclusions

What are people’s attitudes towards legal technologies in courts? What is the most critical factor in predicting people’s willingness to adopt such technologies, given their potentially limited knowledge? How do individual differences factor into people’s opinions about AI and other technologies in courts? This paper adds to the growing research on people’s perceptions and attitudes towards legal technologies by providing data on people’s intentions to support legal technologies in courts.
The perceived usefulness of legal technologies is the most crucial factor in predicting the intentions to support legal technologies in courts. The perceived usefulness, in turn, may form through the trust in legal technologies, the perceived risk, and the knowledge about legal technologies. The results suggest that people are concerned with the usefulness, the ease of use, and other issues, similarly to other technologies in other settings, such as health or education. In addition, the fairness expectations play a role in the acceptance; for example, higher expectations may strengthen the perceptions of usefulness. Notably, having more in-depth knowledge and data on the performance of technologies could alter the perceptions. Nevertheless, people with different levels of knowledge may still hold a variety of opinions, depending on other factors. Additionally, the multigroup analyses that were conducted in this study have allowed us to assume that the technology acceptance model could be used in order to investigate both lawyers’ and the general population’s technology acceptance. This provides guidance for the implementation and the design of technologies.
To the author’s knowledge, this study is one of the first steps toward having theory-driven empirical data on legal technology acceptance in courts, given the field’s rapid progress. A qualitative exploration of people’s perceptions regarding technologies in courts could reveal some more specific concerns. A study into whether judges and other court staff need technology is clearly overdue. In general, more studies are needed in order to better grasp the differences in opinion that various groups of society might have towards legal technologies in courts. The current research shows that personal innovativeness, the legal profession, the court experience, the age, and even gender might shape people’s opinions. These results might have various implications for the development and the implementation of technology. For now, it could be advised to carefully choose the members of AI committees and other regulatory bodies.

Funding

This project has received funding from the European Social Fund (project No. 09.3.3-LMT-K-712-19-0116) under a grant agreement with the Research Council of Lithuania (LMTLT).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Psychological Research Ethics Committee of Vilnius University (protocol code no. 26/(1.3) 250000-KP-25 and date of approval: 13 April 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study will be openly available in the MIDAS repository at 10.18279/MIDAS.AttitudestowardslegaltechnologiesincourtsusingTAM.csv.189143s.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Table A1. Comparison of the root mean squared error (RMSE) for the partial least squares (PLS) and linear regression model (LM).
Indicator | RMSE (PLS) | RMSE (LM) | RMSE (PLS) − RMSE (LM)
BI1 | 1.323 | 1.267 | 0.056
BI2 | 1.240 | 1.144 | 0.096
PEOU1 | 1.419 | 1.456 | −0.037
PEOU2 | 1.323 | 1.330 | −0.007
PEOU3 | 1.469 | 1.500 | −0.031
PU1 | 1.298 | 1.282 | 0.016
PU2 | 1.290 | 1.299 | −0.009
PU3 | 1.187 | 1.159 | 0.028
TRST1 | 1.296 | 1.309 | −0.013
TRST2 | 1.324 | 1.335 | −0.011
TRST3 | 1.267 | 1.273 | −0.006
Note. BI = behavioral intention to use legal technologies in courts, PEOU = perceived ease of use of legal technologies in courts, PU = perceived usefulness of legal technologies in courts, and TRST = trust in legal technologies.

Appendix B

Table A2. Multigroup analysis by legal profession (legal vs. other).
Hypothesis | β (Legal) (n = 145) | β (Other) (n = 260) | β (Legal) − β (Other) | p-Value
H1 PU → BI | 0.402 | 0.454 | −0.053 | 0.577
H2 PEOU → BI | 0.189 | 0.118 | 0.071 | 0.395
H3 PEOU → PU | 0.141 | 0.121 | 0.020 | 0.826
H4 TRST → BI | 0.377 | 0.343 | 0.033 | 0.758
H5 TRST → PEOU | 0.342 | 0.438 | −0.096 | 0.324
H6 TRST → PU | 0.505 | 0.456 | 0.049 | 0.637
H7 KNW → TRST | 0.220 | 0.186 | 0.034 | 0.617
H8 KNW → PU | 0.042 | 0.003 | 0.039 | 0.562
H9 PR → PU | −0.112 | −0.208 | 0.097 | 0.350
H10 PR → TRST | −0.654 | −0.550 | −0.104 | 0.125
H11 FE → PU | 0.155 | 0.228 | −0.073 | 0.357
H12 PIIT → PEOU | 0.190 | 0.270 | −0.081 | 0.422
Note. BI = behavioral intention to use legal technologies in courts, PU = perceived usefulness of legal technologies in courts, PEOU = perceived ease of use of legal technologies in courts, TRST = trust in legal technologies, KNW = knowledge about legal technologies, PR = perceived risk of legal technologies in courts, FE = fairness expectations for legal technologies in courts, and PIIT = personal innovativeness in information technology.
Table A3. Multigroup analysis by court experience (yes vs. no).
Hypothesis | β (No) (n = 162) | β (Yes) (n = 245) | β (Yes) − β (No) | p-Value
H1 PU → BI | 0.404 | 0.457 | 0.052 | 0.569
H2 PEOU → BI | −0.031 | 0.263 | 0.293 | 0.000
H3 PEOU → PU | 0.134 | 0.131 | −0.003 | 0.973
H4 TRST → BI | 0.487 | 0.254 | −0.234 | 0.026
H5 TRST → PEOU | 0.412 | 0.401 | −0.011 | 0.892
H6 TRST → PU | 0.567 | 0.379 | −0.188 | 0.059
H7 KNW → TRST | 0.119 | 0.240 | 0.121 | 0.074
H8 KNW → PU | 0.003 | 0.061 | 0.058 | 0.334
H9 PR → PU | −0.153 | −0.208 | −0.055 | 0.546
H10 PR → TRST | −0.609 | −0.580 | 0.028 | 0.687
H11 FE → PU | 0.241 | 0.181 | −0.059 | 0.407
H12 PIIT → PEOU | 0.156 | 0.289 | 0.132 | 0.179
Note. BI = behavioral intention to use legal technologies in courts, PU = perceived usefulness of legal technologies in courts, PEOU = perceived ease of use of legal technologies in courts, TRST = trust in legal technologies, KNW = knowledge about legal technologies, PR = perceived risk of legal technologies in courts, FE = fairness expectations for legal technologies in courts, and PIIT = personal innovativeness in information technology; statistically significant path coefficient differences appear in bold.
Table A4. Multigroup analysis by age (18–39 y. vs. > 40 y.).
Hypothesis | β (18–39) (n = 236) | β (>40) (n = 149) | β (18–39) − β (>40) | p-Value
H1 PU → BI | 0.419 | 0.488 | −0.068 | 0.495
H2 PEOU → BI | 0.110 | 0.240 | −0.130 | 0.126
H3 PEOU → PU | 0.088 | 0.189 | −0.100 | 0.251
H4 TRST → BI | 0.405 | 0.192 | 0.213 | 0.064
H5 TRST → PEOU | 0.354 | 0.497 | −0.143 | 0.128
H6 TRST → PU | 0.439 | 0.555 | −0.116 | 0.204
H7 KNW → TRST | 0.143 | 0.219 | −0.076 | 0.274
H8 KNW → PU | 0.087 | −0.053 | 0.140 | 0.030
H9 PR → PU | −0.162 | −0.148 | −0.014 | 0.882
H10 PR → TRST | −0.607 | −0.606 | −0.001 | 0.999
H11 FE → PU | 0.238 | 0.160 | 0.079 | 0.291
H12 PIIT → PEOU | 0.224 | 0.280 | −0.056 | 0.577
Note. BI = behavioral intention to use legal technologies in courts, PU = perceived usefulness of legal technologies in courts, PEOU = perceived ease of use of legal technologies in courts, TRST = trust in legal technologies, KNW = knowledge about legal technologies, PR = perceived risk of legal technologies in courts, FE = fairness expectations for legal technologies in courts, and PIIT = personal innovativeness in information technology; statistically significant path coefficient differences appear in bold.
Table A5. Multigroup analysis by gender (male vs. female).
Hypothesis | β (Female) (n = 245) | β (Male) (n = 153) | β (Female) − β (Male) | p-Value
H1 PU → BI | 0.445 | 0.458 | −0.013 | 0.884
H2 PEOU → BI | 0.173 | 0.088 | 0.085 | 0.293
H3 PEOU → PU | 0.112 | 0.131 | −0.020 | 0.827
H4 TRST → BI | 0.285 | 0.406 | −0.121 | 0.249
H5 TRST → PEOU | 0.370 | 0.485 | −0.115 | 0.230
H6 TRST → PU | 0.405 | 0.522 | −0.117 | 0.213
H7 KNW → TRST | 0.231 | 0.147 | 0.084 | 0.198
H8 KNW → PU | 0.017 | 0.058 | −0.041 | 0.540
H9 PR → PU | −0.228 | −0.154 | −0.074 | 0.421
H10 PR → TRST | −0.553 | −0.649 | 0.095 | 0.168
H11 FE → PU | 0.247 | 0.172 | 0.076 | 0.305
H12 PIIT → PEOU | 0.281 | 0.164 | 0.117 | 0.249
Note. BI = behavioral intention to use legal technologies in courts, PU = perceived usefulness of legal technologies in courts, PEOU = perceived ease of use of legal technologies in courts, TRST = trust in legal technologies, KNW = knowledge about legal technologies, PR = perceived risk of legal technologies in courts, FE = fairness expectations for legal technologies in courts, and PIIT = personal innovativeness in information technology.
Table A6. Comparison of coefficients of determination (R2) in models by groups: legal profession, court experience, gender, and age.
Group | Statistic | BI | PEOU | PU | TRST
The proposed model | R2 (overall) | 0.648 | 0.296 | 0.536 | 0.412
Legal profession | R2 (legal) | 0.665 | 0.198 | 0.510 | 0.528
 | R2 (other) | 0.647 | 0.360 | 0.551 | 0.374
 | R2 (legal) − R2 (other) | 0.018 | −0.162 | −0.042 | 0.154
 | p-value | 0.764 | 0.048 | 0.577 | 0.048
Court experience | R2 (yes) | 0.667 | 0.335 | 0.474 | 0.438
 | R2 (no) | 0.674 | 0.241 | 0.643 | 0.413
 | R2 (yes) − R2 (no) | −0.006 | 0.094 | −0.169 | 0.025
 | p-value | 0.900 | 0.257 | 0.022 | 0.765
Gender | R2 (female) | 0.580 | 0.288 | 0.483 | 0.394
 | R2 (male) | 0.743 | 0.329 | 0.622 | 0.482
 | R2 (female) − R2 (male) | −0.163 | −0.040 | −0.139 | −0.088
 | p-value | 0.005 | 0.634 | 0.063 | 0.279
Age | R2 (18–39 y.) | 0.637 | 0.225 | 0.485 | 0.427
 | R2 (>40 y.) | 0.679 | 0.451 | 0.666 | 0.441
 | R2 (18–39 y.) − R2 (>40 y.) | −0.042 | −0.226 | −0.181 | −0.014
 | p-value | 0.476 | 0.009 | 0.006 | 0.867
Note. BI = behavioral intention to use legal technologies in courts, PU = perceived usefulness of legal technologies in courts, PEOU = perceived ease of use of legal technologies in courts, and TRST = trust in legal technologies; statistically significant R2 coefficients appear in bold.
Table A7. Cross-tabulation of the study sample.
 | Legal | Other | χ2 | No | Yes | χ2 | Male | Female | χ2 | Total
Court experience: No | 23.4% (34) | 49.2% (128) | 25.78 ** | | | | | | |
Court experience: Yes | 76.6% (111) | 50.8% (132) | | | | | | | |
Gender: Male | 41.4% (58) | 36.3% (93) | 0.998 | 35.0% (56) | 40.8% (97) | 1.340 | | | |
Gender: Female | 58.6% (82) | 63.7% (163) | | 65.0% (104) | 59.2% (141) | | | | |
Age: 18–39 y. | 82.7% (115) | 48.8% (119) | 42.97 ** | 67.9% (108) | 56.6% (128) | 5.012 * | 65.0% (93) | 57.9% (135) | 1.869 | 100% (236)
Age: >40 y. | 17.3% (24) | 51.2% (125) | | 32.1% (51) | 43.4% (98) | | 35.0% (50) | 42.1% (98) | | 100% (146)
Total | 100% (145) | 100% (260) | | 100% (162) | 100% (246) | | 100% (153) | 100% (245) | |
** p < 0.001, * p < 0.05.
Table A8. Comparison of means between groups.
Profession (Legal vs. Other)
Variable | M (Legal) | SD | M (Other) | SD | t | df | p-value
BI | 4.68 | 1.79 | 4.56 | 1.59 | 0.766 | 402 | 0.444
PEOU | 4.25 | 1.65 | 4.31 | 1.53 | −0.227 | 403 | 0.820
PU | 4.98 | 1.57 | 5.24 | 1.52 | −1.566 | 403 | 0.118
TRST | 4.51 | 1.63 | 4.83 | 1.47 | −1.954 | 403 | 0.052
FE | 6.23 | 1.14 | 5.97 | 1.22 | 2.341 | 402 | 0.020
PR | 3.55 | 1.63 | 3.45 | 1.66 | 0.127 | 402 | 0.899
PIIT | 5.34 | 1.45 | 5.14 | 1.53 | 1.156 | 403 | 0.248
KNW | 2.06 | 1.09 | 1.67 | 0.83 | 4.324 | 403 | 0.000
Court experience (Yes vs. No)
Variable | M (Yes) | SD | M (No) | SD | t | df | p-value
BI | 4.93 | 1.49 | 4.66 | 1.57 | 1.736 | 404 | 0.083
PEOU | 3.97 | 1.38 | 3.88 | 1.32 | 0.702 | 405 | 0.483
PU | 5.30 | 1.32 | 5.31 | 1.38 | −0.583 | 405 | 0.560
TRST | 4.55 | 1.38 | 4.60 | 1.43 | −0.349 | 405 | 0.727
FE | 6.14 | 1.08 | 6.16 | 1.07 | −0.150 | 404 | 0.881
PR | 3.95 | 1.43 | 4.04 | 1.35 | −0.655 | 404 | 0.513
PIIT | 4.70 | 1.42 | 4.43 | 1.42 | 1.867 | 405 | 0.063
KNW | 1.78 | 0.81 | 1.50 | 0.61 | 3.747 | 405 | 0.000
Age (18–39 y. vs. >40 y.)
Variable | M (18–39) | SD | M (>40) | SD | t | df | p-value
BI | 4.85 | 1.52 | 4.77 | 1.54 | 0.474 | 382 | 0.636
PEOU | 3.94 | 1.34 | 3.87 | 1.39 | 0.455 | 383 | 0.650
PU | 5.34 | 1.27 | 5.14 | 1.46 | 1.395 | 383 | 0.164
TRST | 4.63 | 1.42 | 4.31 | 1.39 | 0.813 | 383 | 0.416
FE | 6.36 | 0.91 | 5.94 | 1.19 | 3.884 | 382 | 0.000
PR | 4.00 | 1.31 | 4.00 | 1.55 | 0.018 | 382 | 0.986
PIIT | 4.70 | 1.39 | 4.42 | 1.47 | 1.881 | 383 | 0.061
KNW | 1.73 | 0.75 | 1.58 | 0.77 | 1.905 | 383 | 0.058
Gender (Male vs. Female)
Variable | M (Male) | SD | M (Female) | SD | t | df | p-value
BI | 4.78 | 1.60 | 4.82 | 1.47 | −0.209 | 396 | 0.835
PEOU | 3.99 | 1.39 | 3.88 | 1.34 | 0.768 | 396 | 0.443
PU | 5.05 | 1.51 | 5.39 | 1.22 | −2.339 | 396 | 0.014
TRST | 4.51 | 1.54 | 4.59 | 1.28 | −0.555 | 395 | 0.579
FE | 6.18 | 1.04 | 6.12 | 1.11 | 0.517 | 395 | 0.606
PR | 3.83 | 1.43 | 4.03 | 1.37 | −0.682 | 395 | 0.496
PIIT | 4.69 | 1.48 | 4.52 | 1.39 | 1.161 | 396 | 0.246
KNW | 1.72 | 0.80 | 1.61 | 0.69 | 1.425 | 396 | 0.155
Note. BI = behavioral intention to use legal technologies in courts, PU = perceived usefulness of legal technologies in courts, PEOU = perceived ease of use of legal technologies in courts, TRST = trust in legal technologies, FE = fairness expectations, PR = perceived risk, PIIT = personal innovativeness in information technology, KNW = knowledge about legal technologies, M = mean, SD = standard deviation, t = t-statistic, and df = degrees of freedom; statistically significant mean differences (p < 0.05) appear in bold.
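The comparisons in Table A8 are independent-samples t-tests. When only published group means, standard deviations, and sizes are available, a test of this kind can be recomputed from the summary statistics alone. The sketch below (my own, not the authors' code) uses the KNW row of the profession block together with the group sizes n = 145 (legal) and n = 260 (other) from Table A7; because the published statistics are rounded, the result only approximates the t = 4.324 reported in the table:

```python
from scipy.stats import ttest_ind_from_stats

# KNW (knowledge about legal technologies), profession block of Table A8.
# Group sizes assumed from the Table A7 totals (Legal n = 145, Other n = 260);
# note that n1 + n2 - 2 = 403 matches the df reported in the table.
result = ttest_ind_from_stats(
    mean1=2.06, std1=1.09, nobs1=145,   # legal profession
    mean2=1.67, std2=0.83, nobs2=260,   # other professions
    equal_var=True,                     # pooled-variance (Student) t-test
)
print(round(result.statistic, 2))  # ≈ 4.04 from the rounded inputs
```

The two-decimal rounding of the means and SDs explains the gap between this approximation and the published statistic; the substantive conclusion (p < 0.001) is the same.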

References

  1. Abdul Jalil, Juriah, and Shukriah Mohd Sheriff. 2020. Legal Tech in Legal Service: Challenging the Traditional Legal Landscape in Malaysia. IIUM Law Journal 28: 279–301. [Google Scholar] [CrossRef]
  2. Alarie, Benjamin, Anthony Niblett, and Albert H. Yoon. 2018. How Artificial Intelligence Will Affect the Practice of Law. University of Toronto Law Journal 68: 106–24. [Google Scholar] [CrossRef]
  3. Alda, Erik, Richard Bennett, Nancy Marion, Melissa Morabito, and Sandra Baxter. 2020. Antecedents of Perceived Fairness in Criminal Courts: A Comparative Analysis. International Journal of Comparative and Applied Criminal Justice 44: 201–19. [Google Scholar] [CrossRef]
  4. Ammenwerth, Elske. 2019. Technology Acceptance Models in Health Informatics: TAM and UTAUT. Studies in Health Technology and Informatics 263: 64–71. [Google Scholar] [CrossRef]
  5. Ballell, Teresa Rodríguez De Las Heras. 2019. Legal Challenges of Artificial Intelligence: Modelling the Disruptive Features of Emerging Technologies and Assessing Their Possible Legal Impact. Uniform Law Review 24: 302–14. [Google Scholar] [CrossRef]
  6. Bell, Bradford S., Ann Marie Ryan, Darin Wiechmann, and Bristol-Myers Squibb. 2004. Justice Expectations and Applicant Perceptions. International Journal of Selection and Assessment 12: 24–38. [Google Scholar] [CrossRef]
  7. Bell, Bradford S., Darin Wiechmann, and Ann Marie Ryan. 2006. Consequences of Organizational Justice Expectations in a Selection System. Journal of Applied Psychology 91: 455–66. [Google Scholar] [CrossRef] [PubMed]
  8. Benesh, Sara C. 2006. Understanding Public Confidence in American Courts. Journal of Politics 68: 697–707. [Google Scholar] [CrossRef]
  9. Bernal, Daniel W., and Margaret Hagan. 2020. Redesigning Justice Innovation: A Standardized Methodology. Stanford Journal of Civil Rights & Civil Liberties 16: 335–84. [Google Scholar]
  10. Binns, Reuben, Max van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. ‘It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions. Paper presented at the Conference on Human Factors in Computing Systems, Montréal, QC, Canada, April 21–26. [Google Scholar] [CrossRef]
  11. Blader, Steven L., and Tom R. Tyler. 2003. A Four-Component Model of Procedural Justice: Defining the Meaning of a ‘Fair’ Process. Personality and Social Psychology Bulletin 29: 747–58. [Google Scholar] [CrossRef]
  12. Brehm, Katie, Momori Hirabayashi, Clara Langevin, Bernardo Rivera Munozcano, Katsumi Sekizawa, and Jiayi Zhu. 2020. The Future of AI in the Brazilian Judicial System. Available online: https://itsrio.org/wp-content/uploads/2020/06/SIPA-Capstone-The-Future-of-AI-in-the-Brazilian-Judicial-System-1.pdf (accessed on 10 April 2022).
  13. Brooks, Chay, Cristian Gherhes, and Tim Vorley. 2020. Artificial Intelligence in the Legal Sector: Pressures and Challenges of Transformation. Cambridge Journal of Regions, Economy and Society 13: 135–52. [Google Scholar] [CrossRef]
  14. Burke, Kevin S. 2020. Procedural Fairness Can Guide Court Leaders. Court Review 56: 76–78. [Google Scholar]
  15. Burke, Kevin S., and Steven Leben. 2020. Procedural Fairness in a Pandemic: It’s Still Critical to Public Trust. Drake Law Review 68: 685–706. [Google Scholar]
  16. Chang, Andreas. 2012. UTAUT and UTAUT 2: A Review and Agenda for the Future Research. Journal The WINNERS 13: 106–14. [Google Scholar]
  17. Choi, Jong Kyu, and Yong Gu Ji. 2015. Investigating the Importance of Trust on Adopting an Autonomous Vehicle. International Journal of Human-Computer Interaction 31: 692–702. [Google Scholar] [CrossRef]
  18. Ciftci, Olena, Katerina Berezina, and Minsoo Kang. 2021. Effect of Personal Innovativeness on Technology Adoption in Hospitality and Tourism: Meta-Analysis. In Information and Communication Technologies in Tourism 2021. Edited by Jason L. Stienmetz, Wolfgang Wörndl and Chulmo Koo. Cham: Springer International Publishing, pp. 162–74. [Google Scholar]
  19. Clothier, Reece A., Dominique A. Greer, Duncan G. Greer, and Amisha M. Mehta. 2015. Risk Perception and the Public Acceptance of Drones. Risk Analysis 35: 1167–83. [Google Scholar] [CrossRef]
  20. Colquitt, Jason A. 2001. On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology 86: 386–400. [Google Scholar] [CrossRef]
  21. Davis, Fred D. 1989. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly: Management Information Systems 13: 319–39. [Google Scholar] [CrossRef]
  22. Davis, Fred D. 2014. A Technology Acceptance Model for Empirically Testing New End-User Information Systems. Available online: https://www.researchgate.net/publication/35465050 (accessed on 10 April 2022).
  23. Davis, Fred D., Richard P. Bagozzi, and Paul R. Warshaw. 1989. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science 35: 982–1003. [Google Scholar] [CrossRef]
  24. de Cremer, David, and Tom R. Tyler. 2007. The Effects of Trust in Authority and Procedural Fairness on Cooperation. Journal of Applied Psychology 92: 639–49. [Google Scholar] [CrossRef]
  25. Dearing, James W., and Jeffrey G. Cox. 2018. Diffusion of Innovations Theory, Principles, and Practice. Health Affairs 37: 183–90. [Google Scholar] [CrossRef]
  26. Deeks, Ashley. 2019. The Judicial Demand for Explainable Artificial Intelligence. Columbia Law Review 119: 1829–50. Available online: https://www.jstor.org/stable/26810851 (accessed on 10 April 2022).
  27. Dhagarra, Devendra, Mohit Goswami, and Gopal Kumar. 2020. Impact of Trust and Privacy Concerns on Technology Acceptance in Healthcare: An Indian Perspective. International Journal of Medical Informatics 141: 104164. [Google Scholar] [CrossRef]
  28. Dirsehan, Taşkın, and Ceren Can. 2020. Examination of trust and sustainability concerns in autonomous vehicle adoption. Technology in Society 63: 101361. [Google Scholar] [CrossRef]
  29. Dubois, Christophe. 2021. How Do Lawyers Engineer and Develop LegalTech Projects? A Story of Opportunities, Platforms, Creative Rationalities, and Strategies. Law, Technology and Humans 3: 68–81. [Google Scholar] [CrossRef]
  30. Dutta, Bireswar, Mei Hui Peng, and Shu Lung Sun. 2018. Modeling the Adoption of Personal Health Record (PHR) among Individual: The Effect of Health-Care Technology Self-Efficacy and Gender Concern. Libyan Journal of Medicine 13: 1500349. [Google Scholar] [CrossRef]
  31. Ejdys, Joanna. 2018. Building technology trust in ICT application at a university. International Journal of Emerging Markets 13: 980–97. [Google Scholar] [CrossRef]
  32. Engel, Christoph, and Nina Grgić-Hlača. 2021. Machine Advice with a Warning about Machine Limitations: Experimentally Testing the Solution Mandated by the Wisconsin Supreme Court. Journal of Legal Analysis 13: 284–340. [Google Scholar] [CrossRef]
  33. English, Sarah D., Stephanie Denison, and Ori Friedman. 2021. The Computer Judge: Expectations about Algorithmic Decision-Making. Proceedings of the Annual Meeting of the Cognitive Science Society 43: 1991–96. Available online: https://escholarship.org/uc/item/1866q7s7 (accessed on 10 April 2022).
  34. European Judicial Network. 2019. How to Bring a Case to Court. Lithuania. Available online: https://e-justice.europa.eu/home?action=home&plang=en (accessed on 10 April 2022).
  35. Fabri, Marco. 2021. Will COVID-19 Accelerate Implementation of ICT in Courts? International Journal for Court Administration 12: 1–13. [Google Scholar] [CrossRef]
  36. Faqih, Khaled M.S., and Mohammed Issa Riad Mousa Jaradat. 2015. Assessing the Moderating Effect of Gender Differences and Individualism-Collectivism at Individual-Level on the Adoption of Mobile Commerce Technology: TAM3 Perspective. Journal of Retailing and Consumer Services 22: 37–52. [Google Scholar] [CrossRef]
  37. Featherman, Mauricio S., and Paul A. Pavlou. 2003. Predicting E-Services Adoption: A Perceived Risk Facets Perspective. International Journal of Human Computer Studies 59: 451–74. [Google Scholar] [CrossRef]
  38. Feng, Guangchao Charles, Xianglin Su, Zhiliang Lin, Yiru He, Nan Luo, and Yuting Zhang. 2021. Determinants of Technology Acceptance: Two Model-Based Meta-Analytic Reviews. Journalism and Mass Communication Quarterly 98: 83–104. [Google Scholar] [CrossRef]
  39. Freeman, Katherine. 2016. Algorithmic Injustice: How the Wisconsin Supreme Court Failed to Protect Due Process Rights in State v. Loomis. North Carolina Journal of Law & Technology 18: 75–106. [Google Scholar]
  40. Galib, Mohammad Hasan, Khalid Ait Hammou, and Jennifer Steiger. 2018. Predicting Consumer Behavior: An Extension of Technology Acceptance Model. International Journal of Marketing Studies 10: 73–90. [Google Scholar] [CrossRef]
  41. Gefen, David, Elena Karahanna, and Detmar W. Straub. 2003. Trust and TAM in Online Shopping: An Integrated Model. MIS Quarterly 27: 51–90. [Google Scholar] [CrossRef]
  42. Gunasinghe, Asanka, Junainah Abd Hamid, Ali Khatibi, and S. M. Ferdous Azam. 2020. The Adequacy of UTAUT-3 in Interpreting Academician’s Adoption to e-Learning in Higher Education Environments. Interactive Technology and Smart Education 17: 86–106. [Google Scholar] [CrossRef]
  43. Guo, Meirong. 2021. Internet Court’s Challenges and Future in China. Computer Law and Security Review 40: 105522. [Google Scholar] [CrossRef]
  44. Hair, Joseph F. 2014. A Primer on Partial Least Squares Structural Equations Modeling (PLS-SEM). Thousand Oaks: SAGE. [Google Scholar]
  45. Hair, Joseph F., Jeffrey J. Risher, Marko Sarstedt, and Christian M. Ringle. 2019. When to Use and How to Report the Results of PLS-SEM. European Business Review 31: 2–24. [Google Scholar] [CrossRef]
  46. Hanham, José, Chwee Beng Lee, and Timothy Teo. 2021. The Influence of Technology Acceptance, Academic Self-Efficacy, and Gender on Academic Achievement through Online Tutoring. Computers and Education 172: 104252. [Google Scholar] [CrossRef]
  47. Hasani, Imane, Razane Chroqui, Chafik Okar, Mohamed Talea, and Ahmed Ouiddad. 2017. Literature Review: All about IDT and TAM. Available online: https://www.researchgate.net/publication/317106305_Literature_review_All_about_IDT_and_TAM (accessed on 10 April 2022).
  48. Hauenstein, Neil M. A., Tim Mcgonigle, and Sharon W. Flinder. 2001. A Meta-Analysis of the Relationship Between Procedural Justice and Distributive Justice: Implications for Justice Research. Employee Responsibilities and Rights Journal 13: 39–56. [Google Scholar] [CrossRef]
  49. Hellman, Deborah. 2020. Measuring Algorithmic Fairness. Virginia Law Review 106: 811–66. [Google Scholar]
  50. Hongdao, Qian, Sughra Bibi, Asif Khan, Lorenzo Ardito, and Muhammad Bilawal Khaskheli. 2019. Legal Technologies in Action: The Future of the Legal Market in Light of Disruptive Innovations. Sustainability 11: 1015. [Google Scholar] [CrossRef]
  51. Huang, Chi Yo, Hui Ya Wang, Chia Lee Yang, and Steven J.H. Shiau. 2020. A derivation of factors influencing the diffusion and adoption of an open source learning platform. Sustainability 12: 7532. [Google Scholar] [CrossRef]
  52. Ikhsan, Khairul. 2020. Technology Acceptance Model, Social Influence and Perceived Risk in Using Mobile Applications: Empirical Evidence in Online Transportation in Indonesia. Jurnal Dinamika Manajemen 11: 127–38. [Google Scholar] [CrossRef]
  53. Janoski-Haehlen, Emily. 2019. Robots, Blockchain, ESI, Oh My!: Why Law Schools Are (or Should Be) Teaching Legal Technology. Legal Reference Services Quarterly 38: 77–101. [Google Scholar] [CrossRef]
  54. Jeon, Hyeon Mo, Hye Jin Sung, and Hyun Young Kim. 2020. Customers’ Acceptance Intention of Self-Service Technology of Restaurant Industry: Expanding UTAUT with Perceived Risk and Innovativeness. Service Business 14: 533–51. [Google Scholar] [CrossRef]
  55. Kayali, Mohammad, and Saad Alaaraj. 2020. Adoption of Cloud Based E-Learning in Developing Countries: A Combination of DOI, TAM and UTAUT. International Journal of Contemporary Management and Information Technology 1. Available online: www.ijcmit.com (accessed on 10 April 2022).
  56. King, William R., and Jun He. 2006. A Meta-Analysis of the Technology Acceptance Model. Information and Management 43: 740–55. [Google Scholar] [CrossRef]
  57. Klein, Andreas, Sven Horak, Sabine Bacouel, and Xiaomei Li. 2021. Does Culture Frame Technological Innovativeness? A Study of Millennials in Triad Countries. Journal of Retailing and Consumer Services 15: 564–94. [Google Scholar] [CrossRef]
  58. Langer, Markus, Cornelius J. König, Diana Ruth Pelipez Sanchez, and Sören Samadi. 2020. Highly Automated Interviews: Applicant Reactions and the Organizational Context. Journal of Managerial Psychology 35: 301–14. [Google Scholar] [CrossRef]
  59. Lee, Min Kyung. 2018. Understanding Perception of Algorithmic Decisions: Fairness, Trust, and Emotion in Response to Algorithmic Management. Big Data and Society 5: 2053951718756684. [Google Scholar] [CrossRef]
  60. Lee, Min Kyung, and Kate Rich. 2021. Who Is Included in Human Perceptions of AI?: Trust and Perceived Fairness around Healthcare Ai and Cultural Mistrust. Paper presented at the Conference on Human Factors in Computing Systems, Yokohama, Japan, May 8–13. [Google Scholar] [CrossRef]
  61. Lee, Min Kyung, Anuraag Jain, Hae Jin Cha, Shashank Ojha, and Daniel Kusbit. 2019. Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation. Proceedings of the ACM on Human-Computer Interaction 3: 1–26. [Google Scholar] [CrossRef]
  62. Legg, Michael. 2021. The COVID-19 Pandemic, the Courts and Online Hearings: Maintaining Open Justice, Procedural Fairness and Impartiality. Federal Law Review 49: 161–84. [Google Scholar] [CrossRef]
  63. Li, Ya Zheng, Tong Liang He, Yi Ran Song, Zheng Yang, and Rong Ting Zhou. 2018. Factors Impacting Donors’ Intention to Donate to Charitable Crowd-Funding Projects in China: A UTAUT-Based Model. Information Communication and Society 21: 404–15. [Google Scholar] [CrossRef]
  64. Liu, Han Wei, Ching Fu Lin, and Yu Jie Chen. 2019. Beyond State v. Loomis: Artificial Intelligence, Government Algorithmization and Accountability. International Journal of Law and Information Technology 27: 122–41. [Google Scholar] [CrossRef]
  65. Liu, Kaifeng, and Da Tao. 2022. The Roles of Trust, Personalization, Loss of Privacy, and Anthropomorphism in Public Acceptance of Smart Healthcare Services. Computers in Human Behavior 127: 107026. [Google Scholar] [CrossRef]
  66. Lu, June, James E. Yao, and Chun Sheng Yu. 2005. Personal innovativeness, social influences and adoption of wireless Internet services via mobile technology. Journal of Strategic Information Systems 14: 245–68. [Google Scholar] [CrossRef]
  67. Malek, Md. Abdul. 2022. Criminal Courts’ Artificial Intelligence: The Way It Reinforces Bias and Discrimination. AI and Ethics 2: 233–45. [Google Scholar] [CrossRef]
  68. McGill, Jena, and Amy Salyzyn. 2021. Judging by Numbers: Judicial Analytics, the Justice System and Its Stakeholders. Dalhousie Law Journal 44: 249–84. [Google Scholar]
  69. Moon, Ji-Won, and Young-Gul Kim. 2001. Extending the TAM for a World-Wide-Web Context. Information and Management 38: 217–30. Available online: http://www.cc.gatech.edu/gvu/user_surveys/papers/ (accessed on 10 April 2022).
  70. Muhlenbach, Fabrice, and Isabelle Sayn. 2019. Artificial Intelligence and Law: What Do People Really Want?: Example of a French Multidisciplinary Working Croup. Paper presented at the 17th International Conference on Artificial Intelligence and Law (ICAIL ’19), Montréal, QC, Canada, June 17–21. [Google Scholar]
  71. Newman, David T., Nathanael J. Fast, and Derek J. Harmon. 2020. When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes 160: 149–67. [Google Scholar] [CrossRef]
  72. Ortega Egea, José Manuel, and María Victoria Román González. 2011. Explaining Physicians’ Acceptance of EHCR Systems: An Extension of TAM with Trust and Risk Factors. Computers in Human Behavior 27: 319–32. [Google Scholar] [CrossRef]
  73. Osoba, Osonde A., and William Welser IV. 2017. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. Santa Monica: RAND Corporation. [Google Scholar] [CrossRef]
  74. Ostrom, Amy L, Darima Fotheringham, and Mary Jo Bitner. 2018. Customer Acceptance of AI in Service Encounters: Understanding Antecedents and Consequences. In Handbook of Service Science. Edited by Paul P. Maglio, Cheryl A. Kieliszewski, James C. Spohrer, Kelly Lyons, Lia Patrício and Yuriko Sawatani. Cham: Springer, vol. 2, pp. 77–103. [Google Scholar]
  75. Patil, Pushp, Kuttimani Tamilmani, Nripendra P. Rana, and Vishnupriya Raghavan. 2020. Understanding consumer adoption of mobile payment in India: Extending Meta-UTAUT model with personal innovativeness, anxiety, trust, and grievance redressal. International Journal of Information Management 54: 102144. [Google Scholar] [CrossRef]
  76. Petkevičiūtė-Barysienė, Dovilė. 2016. The Factors of the Judgment of Judicial Behavior Fairness in Civil Justice. Ph.D. dissertation, Vilnius University, Vilnius, Lithuania. [Google Scholar]
  77. Petkevičiūtė-Barysienė, Dovilė, and Gintautas Valickas. 2016. The factors of litigants’ perceived judicial behavior fairness. Teisė 99: 25–42. [Google Scholar] [CrossRef]
  78. Prakosa, Adhi, and Ahsan Sumantika. 2021. An Analysis of Online Shoppers’ Acceptance and Trust toward Electronic Marketplace using TAM Model. Journal of Physics: Conference Series 1823: 012008. [Google Scholar] [CrossRef]
  79. Rahi, Samar, and Mazuri Abd. Ghani. 2018. The Role of UTAUT, DOI, Perceived Technology Security and Game Elements in Internet Banking Adoption. World Journal of Science, Technology and Sustainable Development 15: 338–56. [Google Scholar] [CrossRef]
  80. Ringle, Christian, Dirceu da Silva, and Diógenes Bido. 2015. Structural Equation Modeling with the SmartPLS. Brazilian Journal of Marketing 13: 56–73. Available online: https://ssrn.com/abstract=2676422 (accessed on 10 April 2022).
  81. Rogers, Everett M. 1983. Diffusion of Innovations. New York: Free Press. [Google Scholar]
  82. Rogers, Everett M., Arvind Singhal, and Margaret M. Quinlan. 2009. Diffusion of Innovations. In An Integrated Approach to Communication Theory and Research, 3rd ed. Edited by Don W. Stacks and Michael B. Salwen. New York: Taylor and Francis, pp. 418–34. [Google Scholar] [CrossRef]
  83. Rousseau, Denise M., Sim B. Sitkin, Ronald S. Burt, and Colin Camerer. 1998. Not so Different after All: A Cross-Discipline View of Trust. Academy of Management Review 23: 393–404. [Google Scholar] [CrossRef]
  84. Ryan, Francine. 2021. Rage against the Machine? Incorporating Legal Tech into Legal Education. Law Teacher 55: 392–404. [Google Scholar] [CrossRef]
  85. Şahin, Ferhan, Ezgi Doğan, Muhammet Recep Okur, and Yusuf Levent Şahin. 2022. Emotional Outcomes of E-Learning Adoption during Compulsory Online Education. Education and Information Technologies 27: 7827–49. [Google Scholar] [CrossRef]
  86. Salmerón-Manzano, Esther. 2021. Legaltech and Lawtech: Global Perspectives, Challenges, and Opportunities. Laws 10: 24. [Google Scholar] [CrossRef]
  87. Sandefur, Rebecca L. 2019. Legal Tech for Non-Lawyers: Report of the Survey of US Legal Technologies. Available online: https://iaals.du.edu/sites/default/files/documents/publications/abf_us_digital_legal_tech_for_nonlawyers.pdf (accessed on 10 April 2022).
  88. Sanz, Cristina, and Beatriz Lado. 2008. Technology and the Study of Awareness. In Encyclopedia of Language and Education. Edited by Nancy H. Hornberger. New York: Springer, pp. 2050–63. [Google Scholar] [CrossRef]
  89. Sarstedt, Marko, Christian M. Ringle, and Joseph F. Hair. 2021. Partial Least Squares Structural Equation Modeling. In Handbook of Market Research. Cham: Springer International Publishing, pp. 1–47. [Google Scholar] [CrossRef]
  90. Sarstedt, Marko, Joseph F. Hair, Christian M. Ringle, Kai O. Thiele, and Siegfried P. Gudergan. 2016. Estimation Issues with PLS and CBSEM: Where the Bias Lies! Journal of Business Research 69: 3998–4010. [Google Scholar] [CrossRef]
  91. Schlicker, Nadine, Markus Langer, Sonja K. Ötting, Kevin Baum, Cornelius J. König, and Dieter Wallach. 2021. What to Expect from Opening up ‘Black Boxes’? Comparing Perceptions of Justice between Human and Automated Agents. Computers in Human Behavior 122: 106837. [Google Scholar] [CrossRef]
  92. Schmidthuber, Lisa, Daniela Maresch, and Michael Ginner. 2020. Disruptive Technologies and Abundance in the Service Sector—Toward a Refined Technology Acceptance Model. Technological Forecasting and Social Change 155: 119328. [Google Scholar] [CrossRef]
  93. Shin, Donghee, and Yong Jin Park. 2019. Role of Fairness, Accountability, and Transparency in Algorithmic Affordance. Computers in Human Behavior 98: 277–84. [Google Scholar] [CrossRef]
  94. Silva, J., E. Scherf, and M. Silva. 2019. In Tech We Trust?: Some General Remarks of Law in the Technological Era from a Third World Perspective. Journal Juridical Opinion (Revista Opinião Jurídica) 17: 107–23. Available online: https://ssrn.com/abstract=3306288 (accessed on 10 April 2022).
  95. Sindermann, Cornelia, René Riedl, and Christian Montag. 2020. Investigating the Relationship between Personality and Technology Acceptance with a Focus on the Smartphone from a Gender Perspective: Results of an Exploratory Survey Study. Future Internet 12: 110. [Google Scholar] [CrossRef]
  96. Suarez, Christopher A. 2020. Disruptive Legal Technology, COVID-19, and Resilience in the Profession. South Carolina Law Review 72: 393–444. Available online: https://heinonline.org/HOL/License (accessed on 10 April 2022).
  97. Subawa, Nyoman Sri, Ni Komang Arista Dewi, and Adie Wahyudi Oktavia Gama. 2021. Differences of Gender Perception in Adopting Cashless Transaction Using Technology Acceptance Model. Journal of Asian Finance, Economics and Business 8: 617–24. [Google Scholar] [CrossRef]
  98. Taherdoost, Hamed. 2018. A Review of Technology Acceptance and Adoption Models and Theories. Procedia Manufacturing 22: 960–67. [Google Scholar] [CrossRef]
  99. Taherdoost, Hamed. 2019. Importance of Technology Acceptance Assessment for Successful Implementation and Development of New Technologies. Global Journal of Engineering Sciences 1: 3. [Google Scholar] [CrossRef]
  100. Tariq, Muhammad Farooq, Faizuniah Pangil, and Arfan Shahzad. 2017. Diffusion of Innovation Theory: Beyond Decision Stage. International Journal of Advanced and Applied Sciences 4: 12–18. [Google Scholar] [CrossRef]
  101. Tiwari, Prashant, and Shiv Kant Tiwari. 2020. Integration of Technology Acceptance Model with Perceived Risk, Perceived Trust and Perceived Cost: Customer’s Adoption of M-Banking. International Journal on Emerging Technologies 11: 447–52. Available online: www.researchtrend.net (accessed on 10 April 2022).
  102. Tsai, Tsai Hsuan, Wen Yen Lin, Yung Sheng Chang, Po Cheng Chang, and Ming Yih Lee. 2020. Technology Anxiety and Resistance to Change Behavioral Study of a Wearable Cardiac Warming System Using an Extended TAM for Older Adults. PLoS ONE 15: e0227270. [Google Scholar] [CrossRef]
  103. Tung, Feng Cheng, Su Chao Chang, and Chi Min Chou. 2008. An Extension of Trust and TAM Model with IDT in the Adoption of the Electronic Logistics Information System in HIS in the Medical Industry. International Journal of Medical Informatics 77: 324–35. [Google Scholar] [CrossRef]
  104. Turan, Aygül, Ayşegül Özbebek Tunç, and Cemal Zehir. 2015. A Theoretical Model Proposal: Personal Innovativeness and User Involvement as Antecedents of Unified Theory of Acceptance and Use of Technology. Procedia-Social and Behavioral Sciences 210: 43–51. [Google Scholar] [CrossRef]
  105. Tyler, Tom R., and E. Allan Lind. 1992. A Relational Model of Authority in Groups. Advances in Experimental Social Psychology 25: 115–91. [Google Scholar] [CrossRef]
  106. Ulenaers, Jasper. 2020. The Impact of Artificial Intelligence on the Right to a Fair Trial: Towards a Robot Judge? Asian Journal of Law and Economics 11. [Google Scholar] [CrossRef]
  107. Valickas, Gintautas, Dovilė Šeršniovaitė, and Vita Mikuličiūtė. 2016. The External and Internal Images of Judges and Courts. Teisė 97: 38. [Google Scholar] [CrossRef]
  108. Venkatesh, Viswanath, and Fred D. Davis. 2000. A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science 46: 186–204. [Google Scholar] [CrossRef]
  109. Venkatesh, Viswanath, and Hillol Bala. 2008. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decision Sciences 39: 273–315. [Google Scholar] [CrossRef]
  110. Vetzo, Max. 2022. The Netherlands-Algorithmic Fraud Detection System Violates Human Rights-the Case of SyRI. Public Law 3: 650–52. [Google Scholar] [CrossRef]
  111. Vimalkumar, M., Sujeet Kumar Sharma, Jang Bahadur Singh, and Yogesh K. Dwivedi. 2021. ‘Okay google, what about my privacy?’: User’s privacy perceptions and acceptance of voice based digital assistants. Computers in Human Behavior 120: 106763. [Google Scholar] [CrossRef]
  112. Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2021. Why Fairness Cannot Be Automated: Bridging the Gap between EU Non-Discrimination Law and AI. Computer Law & Security Review 41: 105567. Available online: https://ssrn.com/abstract=3547922 (accessed on 10 April 2022).
  113. Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. 2017. Transparent, Explainable, and Accountable AI for Robotics. Science Robotics 2: 6080. [Google Scholar] [CrossRef]
  114. Wang, Junze, Sheng Zhao, Wei Zhang, and Richard Evans. 2021. Why People Adopt Smart Transportation Services: An Integrated Model of TAM, Trust and Perceived Risk. Transportation Planning and Technology 44: 629–46. [Google Scholar] [CrossRef]
  115. Wang, Nu. 2020. ‘Black Box Justice’: Robot Judges and AI-Based Judgment Processes in China’s Court System. Paper presented at the International Symposium on Technology and Society, Virtual, November 12–15. [Google Scholar] [CrossRef]
  116. Wang, Ran. 2020. Legal Technology in Contemporary USA and China. Computer Law and Security Review 39: 105459. [Google Scholar] [CrossRef]
  117. Wieringa, Maranke. 2020. What to Account for When Accounting for Algorithms: A Systematic Literature Review on Algorithmic Accountability. Paper presented at the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20), Barcelona, Spain, January 27–30. [Google Scholar]
  118. Woodruff, Allison, Sarah E. Fox, Steven Rousso-Schindler, and Jeff Warshaw. 2018. A Qualitative Exploration of Perceptions of Algorithmic Fairness. Paper presented at the Conference on Human Factors in Computing Systems, Montréal, QC, Canada, April 21–26. [Google Scholar]
  119. Woolard, Jennifer L., Samantha Harvell, and Sandra Graham. 2008. Anticipatory Injustice among Adolescents: Age and Racial/Ethnic Differences in Perceived Unfairness of the Justice System. Behavioral Sciences and the Law 26: 207–26. [Google Scholar] [CrossRef]
  120. Wu, Kewen, Yuxiang Zhao, Qinghua Zhu, Xiaojie Tan, and Hua Zheng. 2011. A Meta-Analysis of the Impact of Trust on Technology Acceptance Model: Investigation of Moderating Influence of Subject and Context Type. International Journal of Information Management 31: 572–81. [Google Scholar] [CrossRef]
  121. Xiang, Alice. 2021. Reconciling Legal and Technical Approaches to Algorithmic Bias. Tennessee Law Review 88. Available online: https://ssrn.com/abstract=3650635 (accessed on 10 April 2022).
  122. Xu, Alison (Lu). 2017. Chinese Judicial Justice on the Cloud: A Future Call or a Pandora’s Box? An Analysis of the ‘Intelligent Court System’ of China. Information and Communications Technology Law 26: 59–71. [Google Scholar] [CrossRef]
  123. Xu, Ni, and Kung Jeng Wang. 2019. Adopting Robot Lawyer? The Extending Artificial Intelligence Robot Lawyer Technology Acceptance Model for Legal Industry by an Exploratory Study. Journal of Management and Organization, 1–19. [Google Scholar] [CrossRef]
  124. Xu, Ni, Kung-Jeng Wang, and Chen-Yang Lin. 2022. Technology Acceptance Model for Lawyer Robots with AI: A Quantitative Survey. International Journal of Social Robotics 14: 1043–55. [Google Scholar] [CrossRef]
  125. Yang, Huijun, Hanqun Song, Catherine Cheung, and Jieqi Guan. 2022. Are Prior Smart Hotel Visiting Experience and Personal Innovativeness Critical to Future Visit Intention? Journal of China Tourism Research, 1–24. [Google Scholar] [CrossRef]
  126. Yang, Keng Chieh, and Po Hong Shih. 2020. Cognitive Age in Technology Acceptance: At What Age Are People Ready to Adopt and Continuously Use Fashionable Products? Telematics and Informatics 51: 101400. [Google Scholar] [CrossRef]
  127. Yoon, Cheolho. 2018. Extending the TAM for Green IT: A Normative Perspective. Computers in Human Behavior 83: 129–39. [Google Scholar] [CrossRef]
  128. Zalnieriute, Monika, and Felicity Bell. 2019. Technology and the Judicial Role. SSRN Electronic Journal. [Google Scholar] [CrossRef]
  129. Zhang, Junkai, and Yuxuan Han. 2022. Algorithms Have Built Racial Bias in Legal System-Accept or Not? Advances in Social Science, Education and Humanities Research 631: 1217–21. [Google Scholar] [CrossRef]
  130. Zhang, Tingru, Da Tao, Xingda Qu, Xiaoyan Zhang, Rui Lin, and Wei Zhang. 2019. The Roles of Initial Trust and Perceived Risk in Public’s Acceptance of Automated Vehicles. Transportation Research Part C: Emerging Technologies 98: 207–20. [Google Scholar] [CrossRef]
  131. Zhang, Tingru, Da Tao, Xingda Qu, Xiaoyan Zhang, Jihong Zeng, Haoyu Zhu, and Han Zhu. 2020. Automated Vehicle Acceptance in China: Social Influence and Initial Trust Are Key Determinants. Transportation Research Part C: Emerging Technologies 112: 220–33. [Google Scholar] [CrossRef]
1. Differences of means on the knowledge about legal technologies scale are presented in Appendix B, Table A8.
Figure 1. Proposed research model. The solid lines represent original TAM relationships; the dashed lines represent relationships proposed in the current study.
Laws 11 00071 g001
Figure 2. Structural model assessment results. The solid lines represent original TAM relationships; the dotted lines represent relationships proposed in the current study. *** p < 0.001, ** p < 0.01.
Table 1. Definitions of the constructs of the current study.
| Construct | Definition | Adopted Definition for the Current Study |
|---|---|---|
| Behavioral intention to use technology (BI) | The measurement of the strength of one’s intention to perform a specified behavior (Ammenwerth 2019). | The strength of one’s intention to support the use of legal technologies in courts. |
| Perceived usefulness (PU) | The degree to which a person believes that using a particular system would enhance their job performance (Davis 1989). | The degree to which a person believes that using legal technologies in courts would enhance the court’s performance. |
| Perceived ease of use (PEOU) | The degree to which a person believes that using a particular system would be free of effort (Davis 1989). | The degree to which a person believes that using legal technologies in courts would be free of effort. |
| Trust in technologies (TRST) | A psychological state composed of the intention to accept vulnerability, based upon positive expectations of the intentions or behavior of another under conditions of risk and interdependence (Rousseau et al. 1998). | The expected degree of expertise, integrity, and benevolence of legal technologies in courts. |
| Personal innovativeness in information technology (PIIT) | The degree to which an individual or other unit of adoption is relatively early in adopting new ideas compared to other members of a social system (Rogers et al. 2009; Rogers 1983; Turan et al. 2015). | The degree to which an individual is relatively earlier in adopting new ideas compared to other social system members. |
| Fairness expectations (FAIR) | An individual’s belief that he or she will experience fairness in a future event or interaction (Bell et al. 2004). | The expected degree of fairness in the court processes where legal technologies would be used. |
| Perceived risk (RISK) | The possible loss when pursuing a desired outcome (Featherman and Pavlou 2003). | The expected degree of potential losses associated with the implementation and use of legal technologies in courts. |
| Knowledge about legal technologies in courts (KNW) | The information a person holds about an object, issue, or person (Taherdoost 2018). | The knowledge a person has about legal technologies in courts. |
Table 2. Constructs of the study, items, and their sources.
| Construct | Code | Revised/Adopted from | Item |
|---|---|---|---|
| Trust in information technologies (TRST) | TRST1 | (Lee et al. 2019) | I think that legal technologies would help us to make fair decisions in court processes. |
| | TRST2 | (Vimalkumar et al. 2021) | I think that it would be entirely safe to entrust legal technologies with some parts of the litigation (e.g., accepting documents and analyzing the arguments for the ruling). |
| | TRST3 | (Zhang et al. 2020) | Overall, I could trust legal technologies in courts. |
| Perceived usefulness (PU) | PU2 | (Davis 1989) | In my opinion, using legal technologies would improve courts’ performance (e.g., improve the quality of court documents). |
| | PU3 | (Davis 1989) | I think that using legal technologies would increase courts’ productivity. |
| | PU4 | (Davis 1989) | In my opinion, using legal technologies would enhance courts’ effectiveness. |
| Perceived ease of use (PEOU) | PEOU1 | (Davis 1989) | I think that learning how to operate legal technologies would be easy. |
| | PEOU3 | (Davis 1989) | I think that it would be easy to become skillful at using legal technologies. |
| | PEOU4 | (Davis 1989) | I think that legal technologies would be easy to use from the first time. |
| Behavioral intention to use technologies (BI) | BI2 | (Huang et al. 2020) | I look forward to the implementation of legal technologies in courts. |
| | BI5 | (Lu et al. 2005) | If I had an opportunity, I would definitely use legal technologies in courts. |
| Personal innovativeness in information technology (PIIT) | PIIT1 | (Lu et al. 2005; Patil et al. 2020) | If I heard about new relevant information technology, I would look for ways to experiment with it. |
| | PIIT2 | (Lu et al. 2005; Patil et al. 2020) | I am usually the first to explore new information technologies among my peers. |
| | PIIT3 | (Lu et al. 2005; Patil et al. 2020) | I like to experiment with new information technologies. |
| Perceived risk (PR) | PR1 | (Featherman and Pavlou 2003) | I think that legal technologies in courts would not perform well and would create problems. |
| | PR2 | (Featherman and Pavlou 2003) | In my opinion, legal technologies would put the sensitive information that courts manage at risk. |
| | PR3 | (Featherman and Pavlou 2003) | I think that legal technologies in courts could slow down court processes due to technical mistakes. |
| | PR4 | (Vimalkumar et al. 2021) | I am concerned that legal technologies in courts could be misused. |
| Fairness expectations (FE) | FE1 | (Colquitt 2001) | I expect that court processes involving legal technologies would be based on accurate information. |
| | FE2 | (Colquitt 2001) | I expect that court processes involving legal technologies would not violate ethical and moral standards. |
| | FE3 | (Colquitt 2001) | I expect that court processes involving legal technologies would allow the expression of a person’s view and opinion. |
| | FE4 | (Colquitt 2001) | I expect that court processes involving legal technologies would ensure that people are treated with respect. |
| | FE5 | (Colquitt 2001) | I expect a clear explanation of the operation and role of legal technologies in court processes in which they would be used. |
| Knowledge about legal technologies (KNW) | KNW1 | Composed by the author | Some countries use a program that handles complaints, requests, and other documents submitted to the court by identifying those with no legal basis. |
| | KNW2 | Composed by the author | A system has been created to automate various legal processes as follows: analyze the claim, determine what information is missing, and submit a draft document required for the response. |
| | KNW3 | Composed by the author | In some countries, judges have access to a program that provides the judge with a detailed analysis of the case, evaluates arguments, and identifies possible outcomes. |
| | KNW4 | Composed by the author | In some countries, a program is being used that offers the judge the penalties that are appropriate for the accused in the case in question, though the judge makes the final decision. |
| | KNW5 | Composed by the author | In some countries, an algorithm is used that formulates appropriate arguments for the case, which the judge can use to justify the decision. |
| | KNW6 | Composed by the author | In some countries, specific small claims disputes are resolved by algorithms. |
Table 3. Characteristics of the study sample.
| Characteristic | Categories | Frequency/Statistics | Percentage |
|---|---|---|---|
| Age, years | 18–39 | 236 | 57.8 |
| | >40 | 149 | 36.5 |
| | Missing | 23 | 5.6 |
| | Mean | 37.4 | |
| | Standard deviation | 13.2 | |
| | Minimum | 19 | |
| | Maximum | 81 | |
| Gender | Male | 153 | 37.8 |
| | Female | 245 | 60.5 |
| | I did not want to indicate | 7 | 1.7 |
| | Missing | 3 | 0.6 |
| Court experience | Yes | 245 | 60.2 |
| | No | 162 | 39.8 |
| | Missing | 1 | 0.2 |
| Legal profession | Lawyers (and law students) | 145 (66) | 35.8 |
| | Other | 260 | 64.2 |
| | Missing | 3 | 0.6 |
Table 4. Means (M), standard deviations (SD), factor loadings, Cronbach’s Alphas (α), composite reliability (CR), and average variance extracted (AVE) of the constructs.
| Construct | Item | Min | Max | M | SD | Factor Loading | α | CR | AVE |
|---|---|---|---|---|---|---|---|---|---|
| Behavioral intention to use technologies (BI) | BI1 | 1 | 7 | 4.59 | 1.67 | 0.945 | 0.882 | 0.944 | 0.895 |
| | BI2 | 1 | 7 | 5.04 | 1.57 | 0.947 | | | |
| Fairness expectations (FE) | FE1 | 1 | 7 | 6.07 | 1.20 | 0.851 | 0.930 | 0.947 | 0.781 |
| | FE2 | 1 | 7 | 6.08 | 1.29 | 0.883 | | | |
| | FE3 | 1 | 7 | 6.18 | 1.21 | 0.886 | | | |
| | FE4 | 1 | 7 | 6.22 | 1.20 | 0.908 | | | |
| | FE5 | 1 | 7 | 6.22 | 1.19 | 0.888 | | | |
| Perceived ease of use (PEOU) | PEOU1 | 1 | 7 | 4.29 | 1.57 | 0.889 | 0.835 | 0.900 | 0.750 |
| | PEOU2 | 1 | 7 | 4.46 | 1.53 | 0.921 | | | |
| | PEOU3 | 1 | 7 | 3.05 | 1.60 | 0.782 | | | |
| Perceived risk (PR) | PR1 | 1 | 7 | 3.49 | 1.64 | 0.885 | 0.848 | 0.897 | 0.686 |
| | PR2 | 1 | 7 | 4.25 | 1.75 | 0.844 | | | |
| | PR3 | 1 | 7 | 3.74 | 1.62 | 0.848 | | | |
| | PR4 | 1 | 7 | 4.50 | 1.75 | 0.729 | | | |
| Perceived usefulness (PU) | PU1 | 1 | 7 | 5.14 | 1.54 | 0.871 | 0.879 | 0.925 | 0.805 |
| | PU2 | 1 | 7 | 5.37 | 1.48 | 0.899 | | | |
| | PU3 | 1 | 7 | 5.31 | 1.46 | 0.921 | | | |
| Personal innovativeness in information technology (PIIT) | PIIT1 | 1 | 7 | 5.21 | 1.51 | 0.808 | 0.853 | 0.911 | 0.774 |
| | PIIT2 | 1 | 7 | 4.11 | 1.66 | 0.910 | | | |
| | PIIT3 | 1 | 7 | 4.48 | 1.69 | 0.918 | | | |
| Trust in information technologies (TRST) | TRST1 | 1 | 7 | 4.72 | 1.54 | 0.865 | 0.874 | 0.923 | 0.799 |
| | TRST2 | 1 | 7 | 4.57 | 1.60 | 0.889 | | | |
| | TRST3 | 1 | 7 | 4.43 | 1.57 | 0.927 | | | |
| Knowledge about legal technologies (KNW) | KNW1 | 1 | 5 | 1.81 | 0.96 | 0.776 | 0.899 | 0.922 | 0.665 |
| | KNW2 | 1 | 5 | 1.71 | 0.90 | 0.776 | | | |
| | KNW3 | 1 | 5 | 1.69 | 0.92 | 0.843 | | | |
| | KNW4 | 1 | 5 | 1.65 | 0.92 | 0.850 | | | |
| | KNW5 | 1 | 5 | 1.57 | 0.90 | 0.856 | | | |
| | KNW6 | 1 | 5 | 1.66 | 0.96 | 0.787 | | | |
Note. M = mean, SD = standard deviation, α = Cronbach’s α coefficient, CR = composite reliability, and AVE = average variance extracted.
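The reliability columns in Table 4 can be cross-checked from the factor loadings alone: composite reliability is the squared sum of loadings over itself plus the summed error variances, and AVE is the mean squared loading. A minimal plain-Python sketch using the three PU loadings reported above (loadings are rounded to three decimals, so tiny discrepancies against the published values are possible):

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each item's error variance is 1 - loading^2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared factor loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

pu_loadings = [0.871, 0.899, 0.921]  # PU1-PU3 from Table 4
print(round(composite_reliability(pu_loadings), 3))       # 0.925, as in Table 4
print(round(average_variance_extracted(pu_loadings), 3))  # 0.805, as in Table 4
```

Both values reproduce the table's CR = 0.925 and AVE = 0.805 for perceived usefulness.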
Table 5. Discriminant validity of the constructs: Heterotrait-Monotrait ratio (HTMT).
| Construct | BI | FE | KNW | PEOU | PR | PU | PIIT |
|---|---|---|---|---|---|---|---|
| Behavioral intention to use technologies (BI) | | | | | | | |
| Fairness expectations (FE) | 0.251 | | | | | | |
| Knowledge (KNW) | 0.277 | 0.089 | | | | | |
| Perceived ease of use (PEOU) | 0.594 | 0.072 | 0.214 | | | | |
| Perceived risk (PR) | 0.677 | 0.089 | 0.180 | 0.584 | | | |
| Perceived usefulness (PU) | 0.839 | 0.305 | 0.250 | 0.524 | 0.593 | | |
| Personal innovativeness in information technology (PIIT) | 0.596 | 0.143 | 0.250 | 0.459 | 0.455 | 0.399 | |
| Trust in legal technologies (TRST) | 0.815 | 0.154 | 0.312 | 0.568 | 0.701 | 0.773 | 0.440 |
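Each HTMT value in Table 5 is the average correlation between the items of two constructs, divided by the geometric mean of the average within-construct item correlations; values below the usual 0.85 (or 0.90) threshold indicate discriminant validity. A sketch with two items per construct (the item-level correlations below are invented for illustration; the study reports only the resulting ratios):

```python
import math

def htmt(between, within_a, within_b):
    """Heterotrait-monotrait ratio: mean between-construct item correlation
    divided by the geometric mean of the mean within-construct correlations."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(between) / math.sqrt(mean(within_a) * mean(within_b))

# hypothetical correlations for constructs A and B, two items each
between = [0.42, 0.40, 0.45, 0.43]  # each A item with each B item
within_a = [0.78]                   # correlation between the two A items
within_b = [0.74]                   # correlation between the two B items
print(round(htmt(between, within_a, within_b), 3))  # 0.559 -> below 0.85
```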
Table 6. Variance inflation factor (VIF) among variables.
| Construct | BI | PEOU | PU | TRST |
|---|---|---|---|---|
| Fairness expectations (FE) | | | 1.038 | |
| Knowledge (KNW) | | | 1.090 | 1.029 |
| Perceived ease of use (PEOU) | 1.382 | | 1.446 | |
| Perceived risk (PR) | | | 1.778 | 1.029 |
| Perceived usefulness (PU) | 1.932 | | | |
| Personal innovativeness in information technology (PIIT) | | 1.166 | | |
| Trust in legal technologies (TRST) | 2.014 | 1.166 | 1.876 | |
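Each VIF in Table 6 equals 1/(1 − R²), where R² is obtained by regressing that predictor on the other predictors of the same dependent variable. With exactly two predictors the two VIFs coincide, which is why KNW and PR both show 1.029 in the TRST column. A minimal check (the inter-predictor correlation of 0.168 is inferred for illustration; the exact value is not reported):

```python
def vif_two_predictors(r):
    """With exactly two predictors, regressing one on the other gives R^2 = r^2,
    so both predictors share the same VIF = 1 / (1 - r^2)."""
    return 1 / (1 - r ** 2)

# a correlation of about 0.168 between KNW and PR reproduces the 1.029
# reported for both in the TRST column of Table 6
print(round(vif_two_predictors(0.168), 3))  # 1.029
```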
Table 7. Hypothesis testing results: path coefficient estimation and bootstrapping results.
| Hypothesis: Path | Path Coefficient (β) | t-Value | p-Value | Conclusion |
|---|---|---|---|---|
| H1 PU → BI | 0.437 | 9.718 | 0.000 | Supported |
| H2 PEOU → BI | 0.148 | 3.585 | 0.000 | Supported |
| H3 PEOU → PU | 0.127 | 2.956 | 0.003 | Supported |
| H4 TRST → BI | 0.345 | 6.470 | 0.000 | Supported |
| H5 TRST → PEOU | 0.407 | 8.744 | 0.000 | Supported |
| H6 TRST → PU | 0.470 | 10.102 | 0.000 | Supported |
| H7 KNW → TRST | 0.179 | 5.233 | 0.000 | Supported |
| H8 KNW → PU | 0.024 | 0.770 | 0.441 | Unsupported |
| H9 PR → PU | −0.180 | 4.062 | 0.000 | Supported |
| H10 PR → TRST | −0.587 | 16.728 | 0.000 | Supported |
| H11 FE → PU | 0.204 | 5.594 | 0.000 | Supported |
| H12 PIIT → PEOU | 0.240 | 4.829 | 0.000 | Supported |
Note. PU = perceived usefulness of legal technologies in courts, PEOU = perceived ease of use of legal technologies in courts, BI = behavioral intention to use legal technologies in courts, TRST = trust in legal technologies, KNW = knowledge about legal technologies, PR = perceived risk of legal technologies in courts, FE = fairness expectations for legal technologies in courts, and PIIT = personal innovativeness in information technology.
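The t-values in Table 7 come from bootstrapping: the estimate is divided by the standard deviation of its bootstrap resampling distribution. A simplified, dependency-free sketch for a single standardized path, here reduced to a Pearson correlation on synthetic data (actual PLS-SEM software re-estimates the whole model on each resample):

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def bootstrap_t(x, y, n_boot=1000, seed=42):
    """t-value = estimate / SE, where SE is the standard deviation of the
    estimate across bootstrap resamples (sampling cases with replacement)."""
    random.seed(seed)
    n = len(x)
    estimate = pearson(x, y)
    boots = []
    for _ in range(n_boot):
        idx = [random.randrange(n) for _ in range(n)]
        boots.append(pearson([x[i] for i in idx], [y[i] for i in idx]))
    return estimate / statistics.stdev(boots)

# illustrative data with a strong positive relationship
x = [i % 10 for i in range(40)]
y = [xi * 0.5 + (i * 7 % 5) * 0.3 for i, xi in enumerate(x)]
print(bootstrap_t(x, y) > 1.96)  # a 'Supported' path at the 5% level
```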
Table 8. Interaction effects of the legal profession and court experience.
| Construct | Statistic | Profession | Court Experience | Profession × Court Experience |
|---|---|---|---|---|
| FE | F (df) | 6.318 (1) | 0.310 (1) | 0.445 (1) |
| | p | **0.012** | 0.578 | 0.505 |
| TRST | F (df) | 6.896 (1) | 0.988 (1) | 4.430 (1) |
| | p | **0.009** | 0.321 | **0.036** |
| PR | F (df) | 1.945 (1) | 1.964 (1) | 2.186 (1) |
| | p | 0.162 | 0.162 | 0.140 |
| PIIT | F (df) | 0.009 (1) | 3.352 (1) | 0.467 (1) |
| | p | 0.923 | 0.068 | 0.495 |
| KNW | F (df) | 7.909 (1) | 8.367 (1) | 0.880 (1) |
| | p | **0.005** | **0.004** | 0.349 |
| BI | F (df) | 0.220 (1) | 6.149 (1) | 6.276 (1) |
| | p | 0.639 | **0.014** | **0.013** |
| PU | F (df) | 0.284 (1) | 0.000 (1) | 1.578 (1) |
| | p | 0.594 | 0.985 | 0.210 |
| PEOU | F (df) | 0.189 (1) | 0.402 (1) | 0.009 (1) |
| | p | 0.664 | 0.527 | 0.923 |
Note. BI = behavioral intention to use legal technologies in courts, PU = perceived usefulness of legal technologies in courts, PEOU = perceived ease of use of legal technologies in courts, TRST = trust in legal technologies, F = F statistic, df = degrees of freedom, and p = p-value. The statistically significant effects appear in bold.
Table 9. Means of trust in legal technologies according to court experience and profession.
Trust in Legal Technologies
| Court Experience | Profession | M | SD | N |
|---|---|---|---|---|
| Yes | Legal | 4.5015 | 1.42975 | 111 |
| | Other | 4.5859 | 1.34922 | 132 |
| No | Legal | 4.0000 | 1.54615 | 34 |
| | Other | 4.7656 | 1.37045 | 128 |
Note. M = mean, SD = standard deviation, N = number of participants, legal = legal profession, and other = other professions.
Table 10. Means of behavioral intention to use legal technologies in courts according to court experience and profession.
Behavioral Intention to Use Technologies
| Profession | Court Experience | M | SD | N |
|---|---|---|---|---|
| Legal | Yes | 5.1273 | 1.49531 | 110 |
| | No | 4.2500 | 1.84329 | 34 |
| Other | Yes | 4.7689 | 1.47421 | 132 |
| | No | 4.7734 | 1.48199 | 128 |
Note. M = mean, SD = standard deviation, N = number of participants, yes = had court experience, and no = did not have court experience.
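The significant Profession × Court Experience interaction for BI (Table 8) corresponds to a difference-of-differences of the cell means in Table 10: court experience is associated with a sizable gap for lawyers but essentially none for other professions. A quick check using the means reported above:

```python
# cell means of behavioral intention from Table 10
means = {
    ("legal", "yes"): 5.1273, ("legal", "no"): 4.2500,
    ("other", "yes"): 4.7689, ("other", "no"): 4.7734,
}
gap_legal = means[("legal", "yes")] - means[("legal", "no")]  # 0.8773
gap_other = means[("other", "yes")] - means[("other", "no")]  # -0.0045, negligible
interaction = gap_legal - gap_other
print(round(interaction, 4))  # 0.8818 -> the pattern the F test flags
```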
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Barysė, D. People’s Attitudes towards Technologies in Courts. Laws 2022, 11, 71. https://doi.org/10.3390/laws11050071
