Article

Public Perceptions of Algorithmic Bias and Fairness in Cloud-Based Decision Systems

Amal Alhosban, Ritik Gaire and Hassan Al-Ababneh
1 Computer Science and Director Academic Program, College of Innovation and Technology, University of Michigan-Flint, Flint, MI 48502, USA
2 Department of Electronic Marketing and Social Media, Zarqa University, Zarqa 13110, Jordan
* Author to whom correspondence should be addressed.
Standards 2026, 6(1), 2; https://doi.org/10.3390/standards6010002
Submission received: 22 September 2025 / Revised: 11 December 2025 / Accepted: 17 December 2025 / Published: 25 December 2025

Abstract

Cloud-based machine learning systems are increasingly used in sectors such as healthcare, finance, and public services, where they influence decisions with significant social consequences. While these technologies offer scalability and efficiency, they also raise serious concerns regarding security, privacy, and compliance. One of the central issues is algorithmic bias, which can emerge from data, design choices, or system interactions and is often amplified when systems are deployed at scale through cloud infrastructures. This study examines the relationship between algorithmic bias, social equity, and cloud-based innovation. Drawing on a survey of public perceptions, we find strong recognition of the risks posed by biased systems, including diminished trust, harm to vulnerable populations, and erosion of fairness. Participants overwhelmingly supported regulatory oversight, developer accountability, and greater transparency in algorithmic decision-making. Building on these findings, this paper proposes measures to integrate fairness auditing, representative datasets, and bias mitigation techniques into cloud security and compliance frameworks. We argue that addressing bias is not only an ethical responsibility but also an essential requirement for safeguarding public trust and meeting evolving legal and regulatory standards.

1. Introduction

Cloud computing underpins most modern data and AI pipelines, enabling scalable storage, distributed computation, and automated model deployment across industries. Major providers such as AWS, Google Cloud, and Microsoft Azure support full machine learning operations (MLOps) ecosystems for ingestion, feature engineering, training, monitoring, and orchestration [1,2]. These platforms accelerate innovation but introduce heterogeneous workflows in which bias may propagate across services and teams.
Algorithmic bias has become a central concern in AI governance because automated decisions now influence outcomes in healthcare, employment, education, finance, and criminal justice [3,4,5,6]. When deployed at scale in cloud environments, biased systems can amplify inequitable outcomes more broadly and consistently than traditional human decision-making.
We use the term bias neutrally, meaning a deviation from an intended norm or ground truth, where its appropriateness depends on the context [7]. Algorithmic bias refers specifically to systematic errors that disproportionately affect certain groups [8,9,10]. In public administration, expectations of equity and fairness also reflect longstanding principles [11,12].
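To make this definition concrete, the short sketch below computes two commonly used group-level disparity measures for a binary classifier: the demographic parity gap and the equal opportunity (true positive rate) gap. It is an illustrative example only; the column names, toy data, and choice of metrics are our assumptions and do not correspond to any specific system discussed in this paper.

```python
import pandas as pd

def fairness_gaps(df, group_col="group", label_col="y_true", pred_col="y_pred"):
    """Compute demographic parity and equal opportunity gaps across groups.

    Illustrative sketch: these are only two of the many fairness metrics
    surveyed in the literature, and the column names are hypothetical.
    """
    positive_rates, true_positive_rates = [], []
    for _, sub in df.groupby(group_col):
        positive_rates.append(sub[pred_col].mean())  # P(pred = 1 | group)
        true_positive_rates.append(
            sub.loc[sub[label_col] == 1, pred_col].mean()  # P(pred = 1 | y = 1, group)
        )
    return {
        "demographic_parity_gap": max(positive_rates) - min(positive_rates),
        "equal_opportunity_gap": max(true_positive_rates) - min(true_positive_rates),
    }

# Toy usage with synthetic predictions for two groups.
toy = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true": [1, 0, 1, 0, 1, 0, 1, 0],
    "y_pred": [1, 0, 1, 0, 0, 0, 1, 0],
})
print(fairness_gaps(toy))  # larger gaps indicate larger group-level disparities
```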
Kordzadeh and Ghasemaghaei [13] categorize bias into data-driven, design-driven, and emergent forms. These sources interact throughout the AI lifecycle and become more complex in cloud environments, where distributed services and automated processes obscure provenance and make bias harder to detect.
Cloud-native systems also operate under governance expectations such as the NIST AI Risk Management Framework [14], IEEE 7003 [15], the OECD AI Principles, and sector-specific guidelines including the STANDING Together healthcare recommendations [16]. However, these frameworks seldom address the realities of multi-tenant cloud platforms where fairness must coexist with performance, reliability, and operational constraints.
Despite significant research on fairness metrics and bias mitigation, there remains limited guidance on how fairness risks manifest within cloud-native pipelines, including data lineage, data transformations, automated scaling, and model versioning [17,18]. This gap motivates the present study, which analyzes public perceptions of algorithmic bias and connects them to cloud engineering practices. Accordingly, we address the following research question: how do members of the public perceive algorithmic bias and fairness in cloud-based decision systems, and what implications do these perceptions have for fairness auditing and governance in cloud-native machine learning pipelines?

2. Related Work

Recent work on robust fairness has shown that adversarial training and hard example mining can help models perform more consistently across demographic groups. These approaches are increasingly relevant in cloud environments that automate retraining, evaluation, and deployment.

2.1. Algorithmic Bias in Information Systems

Kordzadeh and Ghasemaghaei [13] provide a structured synthesis of algorithmic bias, distinguishing among data-driven, design-driven, and emergent sources. Their review spans technical, social, and ethical dimensions and identifies research gaps that informed the structure and survey design of the present study.

2.2. Bias in AI-Driven Hiring Systems

Dailey [6] examines biased hiring systems and the legal landscape governing algorithmic discrimination. This work demonstrates how historical data, modeling choices, and insufficient oversight can produce disparate impacts, reinforcing the need for transparency and accountability in high-stakes cloud deployments.

2.3. Bias in Healthcare Algorithms

Siddique et al. [5] conducted a systematic review of healthcare algorithms and found recurring evidence that models can exacerbate racial and ethnic disparities. Complementary work includes fairness-drift research [18] and qualitative studies on ethical implications in clinical AI [19]. Sector-specific equity guidance, such as the STANDING Together recommendations [16], also informs the governance considerations discussed in this study.

2.4. Mitigating Bias in Recruitment

Soleimani et al. [20] propose a grounded theoretical framework for mitigating bias in recruitment systems, emphasizing transparency, documentation, and fairness audits—principles directly relevant to cloud-based MLOps governance.

2.5. Fairness Metrics and Bias Mitigation Techniques

Pessach and Shmueli [21] provide a comprehensive survey of fairness definitions and mitigation techniques. Their taxonomy highlights tensions among fairness criteria and underscores the need for clear documentation—reinforced by tools such as Datasheets and Model Cards [22,23]. Taken together, existing work on algorithmic fairness asks three broad types of questions:
  • Where does bias come from, and how can it be characterized? [3,4,13,21]
  • How does bias manifest in specific high-stakes sectors, such as hiring and healthcare, and what are the legal or equity implications? [5,6,16,24]
  • What technical and governance mechanisms can be used to mitigate or manage these risks? [10,15,21,25]
These research questions have produced rich taxonomies of fairness metrics, mitigation strategies, and sector-specific harms, but they are typically investigated from the perspective of designers, regulators, or domain experts rather than everyday users of algorithmic systems.
Table 1 summarizes six representative studies that examine (1) where bias comes from and what effects it has, (2) how it plays out in specific sectors, and (3) the standards or practices proposed to address it. Together, these studies offer important foundations for understanding algorithmic fairness, but they leave open a key question for this paper: how do members of the public think about algorithmic bias and fairness, and what safeguards do they see as necessary for trustworthy cloud-based decision systems? By focusing on public perceptions across different domains, our study shifts the focus from mainly expert perspectives to the expectations of affected users—insights that are essential for shaping cloud governance practices that are both technically robust and socially acceptable.
The research questions in Table 1 mainly treat algorithmic bias as a technical, legal, or organizational problem. Our study builds on this work by examining how non-expert users understand these same issues and how their expectations can guide fairness auditing in cloud-native MLOps pipelines.
Figure 1 shows a five-step flowchart for a literature review on algorithmic bias: identify key papers, extract research questions and methods, analyze findings, synthesize insights into a framework, and integrate them into the literature review and survey design.

2.6. Additional Foundational Literature

A broad body of scholarship provides additional context for algorithmic fairness, governance, and equity across domains. Ethical analyses chart the conceptual landscape of algorithmic accountability [26,27], while legal and policy frameworks highlight tensions around explanation rights and privacy [28]. Technical studies examine dataset governance [22], production readiness in ML pipelines [1,2,17], serverless and cloud compute implications [29], and methodological approaches to detecting and mitigating bias [4,10,30].
Empirical research across sectors—including criminal justice, healthcare, hiring, and speech recognition—provides evidence of disparate model performance and algorithmic harms [31,32,33,34]. Together, these studies complement the core works reviewed above and place our study within a wider interdisciplinary understanding of fairness in automated systems.
Against this backdrop, our study investigates public perceptions of algorithmic bias and fairness in cloud-based decision systems, with the goal of translating these perceptions into concrete recommendations for fairness auditing and governance in cloud environments.

3. Methods

The research followed a structured, stepwise process from data collection through interpretation. First, all survey responses were screened for completeness and attention-check accuracy. Second, demographic information and item-level responses were organized into their respective domains. Third, reliability was assessed using established psychometric procedures (e.g., Cronbach’s α) commonly applied in early-stage perception studies [35,36,37]. Fourth, descriptive statistics were generated for each item and domain, including means, standard deviations, and confidence intervals. Fifth, subgroup summaries by age and gender were calculated to provide contextual information, without conducting inferential comparisons. Finally, the results were reviewed holistically to identify consistent patterns in participants’ perceptions of algorithmic bias and fairness. This sequence ensured systematic processing and transparent analytical procedures.

3.1. Recruitment and Eligibility

Participants (N = 30) were recruited through multiple channels, including university-wide email announcements, department mailing lists, and social media postings targeted at adults with an interest in technology, digital systems, or public policy. The target population consisted of individuals aged 18 or older who use digital or algorithmic systems in everyday contexts (e.g., healthcare, employment, and consumer platforms).
Inclusion criteria were as follows: (1) age 18 or older, (2) English fluency, and (3) ability to provide informed consent. Exclusion criteria included incomplete responses and failure on the attention-check item. Although a sample size of 30 is modest, it aligns with methodological guidance for exploratory descriptive and pilot studies aiming to identify general trends rather than estimate population parameters [35,36,37]. Similar sample sizes are also common in early human–computer interaction and technology perception research. In this context, the consistency of responses and narrow confidence intervals indicate stable descriptive findings appropriate for an exploratory design.

3.2. Ethical Approval and Consent

This study complied with the Declaration of Helsinki. Ethical approval was obtained from the University’s Institutional Review Board (HUM00273162; approval date: 20 May 2025). Informed consent was obtained electronically prior to participation.

3.3. Instrument

The survey consisted of 25 items rated on a 4-point Likert scale (1 = strongly disagree; 4 = strongly agree). A 4-point format was intentionally selected to avoid a neutral midpoint and encourage directional responses—an approach commonly used in exploratory attitude research where ambivalence provides limited interpretive value. All items analyzed in this study used the 4-point format.

3.4. Reliability and Validity

Internal consistency for each domain was assessed using Cronbach’s α, supported by item–total correlations and α-if-item-deleted diagnostics. Although no latent factor models were estimated, the Kaiser–Meyer–Olkin (KMO) statistic was calculated as a general adequacy check. This approach is consistent with early-stage perception studies where the focus is descriptive rather than inferential.
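For completeness, the following minimal sketch shows one standard way to compute Cronbach’s α from a respondents-by-items score matrix. The data here are synthetic, not the study’s responses; the routine simply documents the usual formula, α = k/(k − 1) · (1 − Σ item variances / variance of the total score).

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Synthetic example: six respondents answering the five awareness items (Q6-Q10).
awareness_scores = np.array([
    [4, 3, 3, 3, 3],
    [3, 3, 2, 3, 2],
    [4, 4, 3, 3, 3],
    [2, 2, 2, 2, 2],
    [3, 3, 2, 2, 2],
    [4, 3, 3, 3, 3],
])
print(round(cronbach_alpha(awareness_scores), 2))
```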

3.5. Missing Data and Exclusions

Missing data were minimized through required response fields. Residual missing values were handled using listwise deletion for domain-level scores and pairwise deletion for item-level summaries, consistent with the preregistered plan. Respondents who did not pass the attention-check item (“select ‘agree’ for this item”) were excluded from domain-level analyses but were retained in frequency counts in Table 2 and Table 3.
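As a brief illustration of the two deletion rules (not the study’s actual data), the snippet below applies listwise deletion before computing a domain score and pairwise deletion for item-level means; the column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical item responses containing missing values (NaN).
responses = pd.DataFrame({
    "Q6": [4, 3, np.nan, 2],
    "Q7": [3, 3, 2, 2],
    "Q8": [2, np.nan, 2, 3],
})

# Listwise deletion: drop any respondent with a missing item before scoring the domain.
domain_scores = responses.dropna().mean(axis=1)

# Pairwise deletion: each item mean uses all available responses for that item.
item_means = responses.mean(skipna=True)

print(domain_scores)
print(item_means)
```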

3.6. Sample Characteristics

The sample included 59% aged 18–29 (N = 17), 21% aged 30–40 (N = 6), and 21% over 50 (N = 6). The gender composition was 55% female (N = 16) and 45% male (N = 13). Participants identified as White/Caucasian (66%, N = 19), Middle Eastern (17%, N = 5), Hispanic (10%, N = 3), and Black/African American (7%, N = 2). Educational backgrounds ranged from high school to doctoral degrees: 59% (N = 17) held a college degree, 17% (N = 5) a master’s, 10% (N = 3) a high school diploma, and 7% (N = 2) a Ph.D., with 7% (N = 2) preferring not to disclose. Employment status included 41% unemployed (N = 12), 31% full-time employed (N = 9), 14% part-time employed (N = 4), and 14% self-employed (N = 4).

3.7. Survey Domains

The 25-item instrument measured five domains:
  • Awareness and knowledge of algorithmic bias (Q6–Q10);
  • Perceptions of bias impact and fairness (Q11–Q15);
  • Attitudes toward solutions, responsibility, and regulation (Q16–Q20);
  • Trust in technology and fairness expectations (Q21–Q23);
  • Prioritization of fairness in technology (Q24–Q25).

3.8. Procedure

The survey was administered online and collected anonymously. Participants reviewed an informed consent statement before beginning. The survey included demographic items, technology use questions, and the 25 attitudinal items. Quality controls included required fields, range checks, and a single attention check. Completion time averaged 10–15 min. No identifying information was collected, and IP logging was disabled. Data were exported as CSV and stored on encrypted drives.

3.9. Data Analysis

Descriptive statistics were used to summarize central tendency and variability. Frequencies and proportions of agreement/disagreement were calculated for each item. Means, standard deviations, and 95% confidence intervals were summarized at both the item and domain levels. Subgroup analyses by gender and age provided contextual detail but were not used for inferential comparisons due to the exploratory design.
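The item-level summaries reported below (means, standard deviations, and 95% confidence intervals) can be computed with a routine of the following form. This is a sketch under the assumption of a t-based interval, a common choice for small samples; the example responses are synthetic.

```python
import numpy as np
from scipy import stats

def describe_item(scores):
    """Mean, sample SD, and 95% t-based confidence interval for one Likert item."""
    scores = np.asarray(scores, dtype=float)
    n = scores.size
    mean = scores.mean()
    sd = scores.std(ddof=1)
    half_width = stats.t.ppf(0.975, df=n - 1) * sd / np.sqrt(n)
    return mean, sd, (mean - half_width, mean + half_width)

# Hypothetical responses to one 4-point item.
print(describe_item([4, 3, 3, 4, 2, 3, 4, 3, 3, 2]))
```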

4. Results

4.1. Overview of Result Presentation

The results follow the analytic sequence described in the Methods Section. Reliability statistics are presented first, followed by item-level descriptives and domain-level summaries. Subgroup summaries by age and gender are provided for context. Figures and tables highlight trends across domains, and the narrative focuses on consistent patterns rather than outlier responses.
Table 2 provides descriptive statistics for all 25 items in the survey. Items in the harms, fairness, and accountability domains show consistently high agreement, while awareness/knowledge items are comparatively lower, reflecting a gap between recognition of bias and confidence in understanding its mechanisms.
Table 3 summarizes domain-level means (Likert 1–4) by averaging the corresponding item means from Table 2. The prioritization domain is highest (3.34), indicating strong endorsement that addressing algorithmic bias should be a societal priority. Solutions/regulation (3.24) and trust and expectations (3.17) are also high, reflecting broad support for fairness safeguards, transparency, and accountability as prerequisites for trust. Harms and fairness (3.12) show clear recognition of the negative impacts of bias. In contrast, awareness/knowledge is comparatively lower (2.73), suggesting that respondents acknowledge the issue yet feel less confident in their own understanding—an actionable gap for education and outreach efforts.
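As a transparency check, the snippet below reproduces each domain mean in Table 3 as the arithmetic average of the corresponding item means reported in Table 2.

```python
item_means = {
    "Q6": 3.21, "Q7": 2.89, "Q8": 2.46, "Q9": 2.64, "Q10": 2.46,
    "Q11": 3.11, "Q12": 3.11, "Q13": 3.07, "Q14": 3.07, "Q15": 3.26,
    "Q16": 3.04, "Q17": 3.39, "Q18": 3.36, "Q19": 3.07, "Q20": 3.32,
    "Q21": 3.14, "Q22": 3.18, "Q23": 3.18, "Q24": 3.36, "Q25": 3.32,
}
domains = {
    "Awareness/Knowledge": ["Q6", "Q7", "Q8", "Q9", "Q10"],
    "Harms and Fairness": ["Q11", "Q12", "Q13", "Q14", "Q15"],
    "Solutions/Regulation": ["Q16", "Q17", "Q18", "Q19", "Q20"],
    "Trust and Expectations": ["Q21", "Q22", "Q23"],
    "Prioritization": ["Q24", "Q25"],
}
for name, items in domains.items():
    mean = sum(item_means[q] for q in items) / len(items)
    print(f"{name}: {mean:.2f}")
# Output matches Table 3: 2.73, 3.12, 3.24, 3.17, 3.34
```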

4.2. Awareness and Understanding of Algorithmic Bias

Mean values (M) were calculated for each item. Most respondents (89%) agreed that algorithms in everyday technology can exhibit bias (Q6; M = 3.21). Roughly 79% indicated that they understood how bias is introduced (Q7; M = 2.89), though only 53% felt confident in their knowledge (Q8; M = 2.46). Two-thirds were familiar with examples of algorithmic bias (Q9; M = 2.64), yet fewer than half considered themselves fully informed (Q10; M = 2.46). This mirrors broader findings in the literature that public awareness of algorithmic harms is rising, but the depth of understanding remains limited [24,38].

4.3. Perceived Harms and Fairness Concerns

Participants expressed strong concerns about harms from algorithmic bias. Nearly 90% agreed that biased algorithms can lead to unfair treatment (Q11), and 93% agreed that they undermine fairness (Q15). Concerns about harms to vulnerable communities (Q14; 82%) align with documented inequities in domains such as healthcare [5] and employment [6]. These perceptions underscore broad recognition of bias as a meaningful fairness issue.

4.4. Support for Solutions, Regulation, and Accountability

Participants showed strong support for solutions and oversight. Nearly all (97%) supported regulations ensuring fairness (Q17), aligning with current trends toward formal governance frameworks such as the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (NIST AI RMF) [14], the Institute of Electrical and Electronics Engineers (IEEE) 7003 standard [15], and EU/US regulatory initiatives. Developer responsibility (96%, Q18) and transparency (89%, Q20) also received substantial support, consistent with the emphasis on accountability in the audit literature [39].

4.5. Trust in Technology and Fairness Expectations

Fairness was strongly linked to trust. A large majority (89%) reported that it is important for the technology they use to be bias-free (Q21). Roughly 82% said that they would lose trust if a system used biased algorithms (Q22), consistent with prior findings linking algorithmic bias to diminished public trust. Almost all respondents agreed that fairness is as important as accuracy (Q24) and that addressing algorithmic bias should be a societal priority (Q25), echoing broader social equity principles [11,12].
Figure 2 presents participants’ self-reported knowledge and awareness of algorithmic bias. The majority indicated general awareness (approximately 90%) and understanding (around 80%) of the concept. However, confidence in explaining algorithmic bias was notably lower (just above 50%). Similarly, while about two-thirds of respondents reported being able to provide examples, fewer than half felt fully informed. These findings suggest that although recognition of algorithmic bias is widespread, there remains a clear gap between surface-level awareness and the depth of knowledge required for confident explanation and informed engagement.

4.6. Item-Level Statistics: Perceived Harms and Fairness Concerns

Strong concerns emerged about algorithmic bias impacts: 89% agreed or strongly agreed that biased algorithms can lead to unfair treatment (Q11; M = 3.11, σ = 0.68), and 82% worried about harm to vulnerable communities (Q14; M = 3.07, σ = 0.77). A total of 93% felt that biased algorithms undermine fairness (Q15; M = 3.26, σ = 0.52). Further, 82% indicated that algorithmic bias negatively affects people’s lives (Q12; M = 3.11, σ = 0.68), and 85% saw bias as a serious fairness issue (Q13; M = 3.07, σ = 0.60).
Figure 3 shows participants’ views on the harms and fairness concerns of algorithmic bias. Most agreed that bias in algorithms is unfair (about 90%) and that it can undermine trust (over 90%). A strong majority also saw it as harmful, negative, and a serious issue, with each of these items receiving agreement above 80%. Taken together, the results indicate that participants recognize algorithmic bias as both unfair and potentially damaging, and they view it as an issue that deserves serious attention.

4.7. Item-Level Statistics: Support for Solutions, Regulation, and Accountability

Participants strongly supported interventions: A total of 79% believed that bias can be reduced through careful design and testing (Q16; M = 3.04, σ = 0.68), and 86% endorsed technical solutions (Q19; M = 3.07, σ = 0.70). Nearly all respondents (97%) supported regulations ensuring algorithmic fairness (Q17; M = 3.39, σ = 0.56), and 96% agreed that developers are responsible for minimizing bias (Q18; M = 3.36, σ = 0.55). Transparency was endorsed by 89% (Q20; M = 3.32, σ = 0.66), and 79% felt that companies should be held accountable for bias (Q23; M = 3.18, σ = 0.85).
Figure 4 shows participants’ support for different solutions and accountability measures related to algorithmic bias. Support was strongest for regulation (about 97%) and for developer responsibility (around 96%), indicating strong agreement that oversight and responsibility at the creation stage are essential. High levels of agreement were also reported for technical solutions (about 86%) and transparency (close to 89%). Support was somewhat lower for design-focused approaches and for accountability in general, with both receiving agreement around 79%. Overall, participants expressed broad support for measures to address algorithmic bias, with a particular emphasis on regulatory oversight and the responsibility of developers.

4.8. Item-Level Statistics: Trust in Technology and Fairness Expectations

Concerns about fairness strongly influenced trust: A total of 89% indicated that it was important that the technology they use is free from algorithmic bias (Q21; M = 3.14, σ = 0.58), and 82% said that they would lose trust if they learned a system used biased algorithms (Q22; M = 3.18, σ = 0.89). Additionally, 93% agreed that fairness is as important as accuracy in algorithmic decisions (Q24; M = 3.36, σ = 0.61), and 97% believed that addressing algorithmic bias should be a high societal priority (Q25; M = 3.32, σ = 0.54).
Figure 5 shows participants’ trust and fairness expectations regarding algorithmic systems. A large majority expected algorithms to be bias-free (about 89%) and reported that they would lose trust if biases were present (around 82%). Even higher agreement was found for the belief that fairness is as important as accuracy (about 93%) and that fairness should be a priority (nearly 97%). These results highlight that participants place a strong value on fairness in algorithmic systems and view it as essential for maintaining trust.

5. Discussion

The descriptive patterns observed in this study indicate that participants approach algorithmic systems with a consistent emphasis on equity, fairness, and accountability. Respondents expressed the view that fairness, transparency, and responsible governance are essential—not optional—conditions for trustworthy technology. These findings align closely with long-standing equity frameworks in public administration [11,12] and reinforce more recent analyses showing that public expectations for fairness extend into digital and cloud-based systems [6,13].
The results also align with empirical work demonstrating that biased algorithms can cause disproportionate harm to vulnerable groups, particularly in healthcare [5,9] and employment [20]. Although this study does not draw causal inferences or estimate population-level effects, the consistency of responses across domains underscores a clear pattern: the public expects organizations to anticipate and address fairness risks in cloud-based machine learning pipelines.
Furthermore, participants expressed strong support for formal oversight, which aligns with emerging governance frameworks such as the NIST AI Risk Management Framework [14], the IEEE 7003 Standard for Algorithmic Bias [15], and broader policy trends in the United States and globally. This reinforces the need for cloud engineering teams to operationalize fairness expectations—not merely as ethical add-ons but as integrated components of reliable, compliant AI systems.

Integration with Cloud-Based Data Query Systems

Emerging cloud-based verification technologies offer promising pathways for implementing these expectations. Systems such as the Verifiable Query Layer (VQL) and related blockchain-supported audit mechanisms provide ways to document and verify the provenance of datasets, model inputs, and fairness checks across distributed cloud environments. Prior work on transparency artifacts—such as model cards [23] and datasheets for datasets [22]—provides conceptual foundations for these tools. Integrating fairness auditing with verifiable queries may strengthen confidence in cloud-based AI systems by enabling independent validation of the information that guides model development and deployment.
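This paper does not implement VQL or any particular ledger service. As a purely illustrative sketch of the underlying idea, the snippet below hash-chains fairness-audit records so that each entry commits to its predecessor and tampering with earlier entries becomes detectable; all field names and values are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_record(log, record):
    """Append a fairness-audit record whose hash covers the previous entry.

    Illustrative only: a real deployment would anchor these hashes in a
    verifiable store (e.g., a ledger or transparency log), not a Python list.
    """
    previous_hash = log[-1]["hash"] if log else "genesis"
    payload = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "previous_hash": previous_hash,
        "record": record,
    }
    payload["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    log.append(payload)
    return payload

audit_log = []
append_audit_record(audit_log, {"dataset": "train_v3", "check": "demographic_parity_gap", "value": 0.04})
append_audit_record(audit_log, {"model": "risk_model_1.2", "check": "equal_opportunity_gap", "value": 0.06})
print(audit_log[-1]["previous_hash"] == audit_log[0]["hash"])  # True: records are chained
```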
While this study does not empirically evaluate cloud verification systems, the alignment between public expectations and the goals of verifiability technologies suggests a valuable direction for future interdisciplinary work connecting human perceptions, governance frameworks, and technical infrastructure.

6. Recommendations

Building on the survey findings and the broader literature, we propose the following recommendations for organizations and policymakers developing or regulating cloud-based machine learning systems.

6.1. Education and Awareness

Participants exhibited strong awareness of algorithmic bias yet comparatively low confidence in explaining how it arises. Similar gaps are documented across multiple studies of algorithmic literacy [24,38]. To address this,
  • Expand public-facing algorithmic literacy initiatives.
  • Integrate fairness and ethics modules into computer science, data science, public policy, and engineering curricula.
  • Provide practitioner training on bias measurement, mitigation, and governance standards.

6.2. Transparency and Communication

Participants strongly supported transparency, consistent with audit frameworks in the literature [15,39]. We recommend
  • Mandating standardized documentation for datasets (e.g., datasheets [22]) and models (e.g., model cards [23]); a minimal machine-readable example follows this list.
  • Encouraging organizations to publicly disclose algorithmic audit results.
  • Using communication strategies that make fairness metrics understandable to non-expert audiences.
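As a hedged illustration of the documentation practice in the first bullet (not the canonical Datasheets [22] or Model Cards [23] schema), a minimal machine-readable model card might record fields such as the following; all names and values are hypothetical.

```python
import json

# Minimal, hypothetical model-card record; field names are illustrative only.
model_card = {
    "model_name": "loan_risk_classifier",           # hypothetical model
    "version": "1.2.0",
    "intended_use": "Pre-screening support only; final decisions made by humans.",
    "training_data": {
        "source": "internal_applications_2024",     # hypothetical dataset identifier
        "datasheet": "datasheets/internal_applications_2024.md",
    },
    "evaluation": {
        "overall_accuracy": 0.87,
        "subgroup_metrics": {                       # per-group results support fairness audits
            "group_A": {"tpr": 0.91, "fpr": 0.08},
            "group_B": {"tpr": 0.84, "fpr": 0.11},
        },
    },
    "fairness_checks": ["demographic_parity_gap", "equal_opportunity_gap"],
    "known_limitations": "Not validated for populations outside the training data.",
}
print(json.dumps(model_card, indent=2))
```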

6.3. Regulation and Policy

Support for regulatory measures in our study aligns with evolving national and international standards. Policymakers should
  • Enforce regular audits of high-stakes algorithms.
  • Establish independent oversight bodies for algorithmic accountability.
  • Require lifecycle fairness assessments as part of cloud compliance regimes.

6.4. Technical Measures

The literature provides a large toolbox for bias mitigation, including pre-processing, in-processing, and post-processing techniques [3,21]. Cloud-native pipelines should
  • Incorporate fairness checks into CI/CD workflows (a minimal gate is sketched after this list).
  • Use representative, high-quality datasets aligned with STANDING Together recommendations for dataset governance [16].
  • Employ robustness tests that explicitly evaluate subgroup performance [18].
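A minimal sketch of the CI/CD fairness gate referenced in the first bullet is shown below; the accuracy metric, the 0.10 threshold, and the group labels are assumptions for illustration rather than requirements of any cited standard.

```python
import sys

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy per group for a batch of evaluation predictions."""
    acc = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        acc[g] = sum(t == p for t, p in pairs) / len(pairs)
    return acc

def fairness_gate(y_true, y_pred, groups, max_gap=0.10):
    """Fail the pipeline step if the accuracy gap between groups exceeds max_gap."""
    acc = subgroup_accuracy(y_true, y_pred, groups)
    gap = max(acc.values()) - min(acc.values())
    print(f"Per-group accuracy: {acc}, gap = {gap:.3f}")
    return gap <= max_gap

if __name__ == "__main__":
    # Toy evaluation batch; in a real pipeline these would come from a held-out set.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    # Exit non-zero so a CI runner marks the job as failed when the gate is violated.
    sys.exit(0 if fairness_gate(y_true, y_pred, groups) else 1)
```

In practice, the metric and threshold would be chosen per application and documented alongside the model card and audit trail described above.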

6.5. Developer and Organizational Responsibility

Participants assigned high responsibility to developers—consistent with the governance literature emphasizing accountability [23,39]. Organizations should
  • Elevate fairness as a key performance metric on par with accuracy and latency.
  • Build cross-functional teams including ethicists, domain experts, and affected stakeholders.
  • Maintain documentation and audit trails to support traceability and incident responses.

7. Conclusions

This study examined public perceptions of algorithmic bias in cloud-based decision systems. Respondents showed consistent agreement that biased algorithms undermine fairness, erode trust, and warrant stronger oversight. These findings align with extensive work documenting the harms of biased algorithms in domains such as healthcare [5,9], employment [6], and public services.
Participants expressed strong support for transparency, developer accountability, and regulation—mirroring the direction of modern AI governance frameworks [14,15]. Although the sample size was small and exploratory, the consistency of responses across domains provides useful early insight into how fairness concerns shape public expectations of cloud-based AI.
The results underscore the importance of integrating fairness safeguards—such as representative training data, regular audits, and clear documentation—into cloud-native machine learning pipelines. Ensuring equity in cloud-enabled systems is essential for maintaining public trust, meeting regulatory obligations, and preventing the amplification of historical inequities at scale.
Future work should build on this exploratory study by examining more diverse samples, combining perception data with technical case studies, and applying fairness auditing frameworks to real-world cloud deployments in sectors such as healthcare, hiring, finance, and public administration.

Author Contributions

Conceptualization, A.A., R.G. and H.A.-A.; methodology, A.A.; software, A.A., R.G. and H.A.-A.; validation, A.A., R.G. and H.A.-A.; formal analysis, H.A.-A.; investigation, R.G.; resources, R.G.; data curation, A.A., R.G. and H.A.-A.; writing—original draft preparation, A.A.; writing—review and editing, A.A.; visualization, A.A., R.G. and H.A.-A.; supervision, A.A., R.G. and H.A.-A.; project administration, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the University’s Institutional Review Board (HUM00273162; approval date: 20 May 2025).

Informed Consent Statement

Informed consent was obtained electronically from all participants prior to participation.

Data Availability Statement

The data presented in this study are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Breck, E.; Polyzotis, N.; Roy, S.; Whang, S.E.; Zinkevich, M. Data Validation for Machine Learning. 2019. Available online: https://mlsys.org/Conferences/2019/doc/2019/167.pdf (accessed on 20 September 2025).
  2. Breck, E.; Cai, S.; Nielsen, E.; Salib, M.; Sculley, D. The ML Test Score: A Rubric for ML Production Readiness. In Proceedings of the Machine Learning Systems Workshop at NIPS, Boston, MA, USA, 11–14 December 2017. [Google Scholar]
  3. Barocas, S.; Hardt, M.; Narayanan, A. Fairness and Machine Learning: Limitations and Opportunities; MIT Press: Cambridge, MA, USA, 2023. [Google Scholar]
  4. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 2021, 54, 1–35. [Google Scholar] [CrossRef]
  5. Siddique, S.M.; Tipton, K.; Leas, B.; Jepson, C.; Aysola, J.; Cohen, J.B.; Flores, E.; Harhay, M.O.; Schmidt, H.; Weissman, G.E. The Impact of Health Care Algorithms on Racial and Ethnic Disparities: A Systematic Review. Ann. Intern. Med. 2024, 177, 484–496. [Google Scholar] [CrossRef] [PubMed]
  6. Dailey, J. Algorithmic Bias: AI and the Challenge of Modern Employment Practices. UC Law Bus. J. 2025, 21, 215–240. [Google Scholar]
  7. Danks, D.; London, A.J. Algorithmic Bias in Autonomous Systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), Melbourne, Australia, 19–25 August 2017; pp. 4691–4697. [Google Scholar]
  8. Bolukbasi, T.; Chang, K.W.; Zou, J.; Saligrama, V.; Kalai, A. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016. [Google Scholar]
  9. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting Racial Bias in an Algorithm Used To Manage the Health of Populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef]
  10. Friedler, S.; Scheidegger, C.; Venkatasubramanian, S.; Choudhary, S.; Hamilton, E.P.; Roth, D. A Comparative Study of Fairness-Enhancing Interventions in Machine Learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency, New York, NY, USA, 29–31 January 2019; pp. 329–338. [Google Scholar]
  11. Frederickson, G. The State of Social Equity in American Public Administration. Natl. Civ. Rev. 2005, 94, 31–38. [Google Scholar] [CrossRef]
  12. Guy, M.E.; McCandless, S.A. Social Equity: Its Legacy, Its Promise. Public Adm. Rev. 2012, 72, S5–S13. [Google Scholar] [CrossRef]
  13. Kordzadeh, N.; Ghasemaghaei, M. Algorithmic Bias: Review, Synthesis, and Future Research Directions. Eur. J. Inf. Syst. 2022, 31, 388–409. [Google Scholar] [CrossRef]
  14. National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0); Technical Report NIST AI 100-1; NIST: Gaithersburg, MD, USA, 2023. [Google Scholar]
  15. IEEE Std 7003™–2024; IEEE Standard for Algorithmic Bias Considerations. IEEE Systems and Software Engineering Standards Committee: New York, NY, USA, 2025. [CrossRef]
  16. Alderman, J.E.; Palmer, J.; Laws, E.; McCradden, M.D.; Ordish, J.; Ghassemi, M.; Pfohl, S.R.; Rostamzadeh, N.; Cole-Lewis, H.; Glocker, B. Tackling Algorithmic Bias and Promoting Transparency in Health Datasets: The STANDING Together Consensus Recommendations. Lancet Digit. Health 2025, 7, e64–e88. [Google Scholar] [CrossRef]
  17. Schelter, S.; Lange, D.; Schmidt, P.; Böhm, S. Automating Large-Scale Data Quality Verification. In Proceedings of the IEEE ICDE, Paris, France, 16–18 April 2018. [Google Scholar]
  18. Davis, S.E.; Dorn, C.; Park, D.J.; Matheny, M.E. Emerging Algorithmic Bias: Fairness Drift as the Next Dimension of Model Maintenance and Sustainability. J. Am. Med. Inform. Assoc. 2025, 32, 845–854. [Google Scholar] [CrossRef]
  19. Aquino, Y.S.J.; Carter, S.M.; Houssami, N.; Braunack-Mayer, A.; Win, K.T.; Degeling, C.; Wang, L.; Rogers, W.A. Practical, Epistemic, and Normative Implications of Algorithmic Bias in Healthcare AI: A Qualitative Study of Expert Perspectives. J. Med. Ethics 2025, 51, 420–428. [Google Scholar] [CrossRef]
  20. Soleimani, M.; Intezari, A.; Arrowsmith, J.; Pauleen, D.J.; Taskin, N. Reducing AI Bias in Recruitment and Selection: An Integrative Grounded Approach. Int. J. Hum. Resour. Manag. 2025, 36, 2480–2515. [Google Scholar] [CrossRef]
  21. Pessach, D.; Shmueli, E. Algorithmic Fairness. In Machine Learning for Data Science Handbook; Springer: Berlin/Heidelberg, Germany, 2023; pp. 867–886. [Google Scholar] [CrossRef]
  22. Gebru, T.; Morgenstern, J.; Vecchione, B.; Wortman Vaughan, J.; Wallach, H.; Daumé, H., III; Crawford, K. Datasheets for Datasets. Commun. ACM 2021, 64, 86–92. [Google Scholar] [CrossRef]
  23. Mitchell, M.; Wu, S.; Zaldivar, A.; Barnes, P.; Vasserman, L.; Hutchinson, B.; Spitzer, E.; Raji, I.D.; Gebru, T. Model Cards for Model Reporting. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAT*), Atlanta, GA, USA, 29–31 January 2019; pp. 220–229. [Google Scholar] [CrossRef]
  24. O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Crown: New York, NY, USA, 2016. [Google Scholar]
  25. Fraile-Rojas, B.; De-Pablos-Heredero, C.; Mendez-Suarez, M. Female Perspectives on Algorithmic Bias: Implications for AI Researchers and Practitioners. Manag. Decis. 2025, 63, 3042–3065. [Google Scholar] [CrossRef]
  26. Mittelstadt, B.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. The Ethics of Algorithms: Mapping the Debate. Big Data Soc. 2016, 3, 1–21. [Google Scholar] [CrossRef]
  27. Jobin, A.; Ienca, M.; Vayena, E. The Global Landscape of AI Ethics Guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
  28. Wachter, S.; Mittelstadt, B.; Floridi, L. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the GDPR. Int. Data Privacy Law 2017, 7, 76–99. [Google Scholar] [CrossRef]
  29. Jonas, E.; Schleier-Smith, J.; Sreekanti, V.; Tsai, C.C.; Khandelwal, A.; Pu, Q.; Shankar, V.; Carreira, J.; Krauth, K.; Yadwadkar, N.; et al. Serverless Computing: State of the Art and Research Challenges. arXiv 2019, arXiv:1902.03383. [Google Scholar]
  30. Hardt, M.; Price, E.; Srebro, N. Equality of Opportunity in Supervised Learning. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016. [Google Scholar]
  31. Dressel, J.; Farid, H. The Accuracy, Fairness, and Limits of Predicting Recidivism. Sci. Adv. 2018, 4, eaao5580. [Google Scholar] [CrossRef]
  32. Koenecke, A.; Nam, A.; Lake, E.; Nudell, J.; Quartey, M.; Mengesha, Z.; Toups, C.; Rickford, J.R.; Jurafsky, D.; Goel, S. Racial Disparities in Automated Speech Recognition. Proc. Natl. Acad. Sci. USA 2020, 117, 7684–7689. [Google Scholar]
  33. Cowgill, B.; Dell’Acqua, F.; Matz, S. The Managerial Effects of Algorithmic Fairness Activism. In AEA Papers and Proceedings; American Economic Association: Nashville, TN, USA, 2020. [Google Scholar]
  34. Panch, T.; Mattie, H.; Atun, R. Artificial Intelligence and Algorithmic Bias: Implications for Health Systems. J. Glob. Health 2019, 9, 020318. [Google Scholar] [CrossRef]
  35. Johanson, G.A.; Brooks, G.P. Initial Scale Development: Sample Size for Pilot Studies. Educ. Psychol. Meas. 2010, 70, 394–400. [Google Scholar] [CrossRef]
  36. Hertzog, M.A. Considerations in Determining Sample Size for Pilot Studies. Res. Nurs. Health 2008, 31, 180–191. [Google Scholar] [CrossRef]
  37. Van Teijlingen, E.; Hundley, V. How Large Should a Pilot Study Be? Soc. Res. Updat. 1998, 35, 1–4. [Google Scholar]
  38. Sambasivan, N.; Kapania, S.; Highfill, E.; Akrong, D.; Paritosh, P.; Aroyo, L. Everyone Wants To Do the Model Work, Not the Data Work: Data Cascades in High-Stakes AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 8–13 May 2021. [Google Scholar] [CrossRef]
  39. Raji, I.D.; Smart, A.; White, R.N.; Mitchell, M.; Gebru, T.; Hutchinson, B.; Smith-Loud, J.; Theron, D.; Barnes, P. Closing the AI Accountability Gap: Defining, Evaluating, and Achieving Audits. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), New York, NY, USA, 27–30 January 2020. [Google Scholar] [CrossRef]
Figure 1. Process of reviewing and synthesizing the literature on algorithmic bias.
Figure 2. Knowledge and awareness of algorithmic bias across five questions: awareness, understanding, confidence, exposure to examples, and self-perceived informedness. Percentages represent agreement or strong agreement.
Figure 3. Perceived harms and fairness concerns related to algorithmic bias. Responses demonstrate broad recognition of unfair treatment, harm to vulnerable communities, and concerns about undermining fairness.
Figure 4. Support for solutions and accountability measures addressing algorithmic bias, including regulations, developer responsibility, transparency, and technical interventions.
Figure 5. Trust in technology and the importance of fairness. High agreement indicates the need for bias-free technology, the maintenance of trust, and the prioritization of fairness alongside accuracy.
Table 1. Summary of the key literature.
Paper | Research Question | Method & Citation
Algorithmic Bias: Review, Synthesis, and Future Research Directions | What are the origins, effects, and mitigation strategies of algorithmic bias? | Systematic review [13]
Algorithmic Bias: AI and the Challenge of Modern Employment Practices (UC Law Business Journal) | How does algorithmic hiring impact social justice and employment law? | Legal analysis, case study [6]
The Impact of Health Care Algorithms on Racial and Ethnic Disparities: A Systematic Review (Annals of Internal Medicine) | How do healthcare algorithms affect racial and ethnic disparities? | Systematic review [5]
IEEE Standard for Algorithmic Bias Considerations | What is a comprehensive standard for identifying and mitigating algorithmic bias? | Standards development (IEEE Standards Association) [15]
Reducing AI Bias in Recruitment and Selection: An Integrative Grounded Approach | How can AI bias be reduced in recruitment and selection processes? | Grounded theory [20]
Algorithmic Fairness | How can algorithmic fairness be defined, measured, and achieved? | Technical survey and taxonomy [21]
Table 2. Item-level results (Likert 1–4).
Question | Statement | Mean | σ | 95% CI
6 | Algorithms in everyday tech can be biased | 3.21 | 0.62 | [2.98, 3.44]
7 | Understands how algorithmic biases are introduced | 2.89 | 0.77 | [2.60, 3.18]
8 | Confident in personal knowledge about bias | 2.46 | 0.82 | [2.15, 2.77]
9 | Has read/heard examples of algorithmic bias | 2.64 | 0.77 | [2.35, 2.93]
10 | Considers self informed about algorithmic-bias issues | 2.46 | 0.73 | [2.19, 2.73]
11 | Biased algorithms can lead to unfair treatment | 3.11 | 0.68 | [2.86, 3.36]
12 | Algorithmic bias negatively affects people’s lives | 3.11 | 0.68 | [2.86, 3.36]
13 | Bias is a serious fairness issue | 3.07 | 0.60 | [2.85, 3.29]
14 | Worried about harm to vulnerable communities | 3.07 | 0.77 | [2.78, 3.36]
15 | Biased algorithms undermine fairness | 3.26 | 0.52 | [3.07, 3.45]
16 | Bias can be reduced via careful design/testing | 3.04 | 0.68 | [2.79, 3.29]
17 | Support regulations ensuring algorithmic fairness | 3.39 | 0.56 | [3.18, 3.60]
18 | Developers are responsible for minimizing bias | 3.36 | 0.55 | [3.15, 3.57]
19 | Endorse technical solutions to reduce bias | 3.07 | 0.70 | [2.81, 3.33]
20 | Endorse transparency about algorithms and data | 3.32 | 0.66 | [3.07, 3.57]
21 | Important that technology used is bias-free | 3.14 | 0.58 | [2.92, 3.36]
22 | Would lose trust if a system used biased algorithms | 3.18 | 0.89 | [2.85, 3.51]
23 | Companies should be held accountable for bias | 3.18 | 0.85 | [2.86, 3.50]
24 | Fairness is as important as accuracy | 3.36 | 0.61 | [3.13, 3.59]
25 | Addressing algorithmic bias should be a high societal priority | 3.32 | 0.54 | [3.12, 3.52]
Table 3. Domain-level descriptives (Likert 1–4). Domain means are the arithmetic averages of the item means in Table 2.
Domain | Questions | Mean
Awareness/Knowledge | Q6–Q10 | 2.73
Harms and Fairness | Q11–Q15 | 3.12
Solutions/Regulation | Q16–Q20 | 3.24
Trust and Expectations | Q21–Q23 | 3.17
Prioritization | Q24–Q25 | 3.34
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
