Search Results (786)

Search Parameters:
Keywords = algorithmic trust

22 pages, 541 KB  
Article
Perceiving AI as an Epistemic Authority or Algority: A User Study on the Human Attribution of Authority to AI
by Frida Milella and Federico Cabitza
Mach. Learn. Knowl. Extr. 2026, 8(2), 36; https://doi.org/10.3390/make8020036 (registering DOI) - 5 Feb 2026
Abstract
The increasing integration of artificial intelligence (AI) in decision-making processes has amplified discussions surrounding algorithmic authority—the perceived epistemic legitimacy of AI systems over human judgment. This study investigates how individuals attribute epistemic authority to AI, focusing on psychological, contextual, and sociotechnical factors. Existing research highlights the importance of trust in automation, perceived performance, and moral frameworks in shaping such attributions. Unlike prior conceptual or philosophical accounts of algorithmic authority, our study adopts a relational and empirically grounded perspective by operationalizing algority through psychometric measures and contextual assessments. To address knowledge gaps in the micro-level dynamics of this phenomenon, we conducted an empirical study using psychometric tools and scenario-based assessments. Here, we report key findings from a survey of 610 participants, revealing significant correlations between trust in automation (TiA), perceptions of automated performance (PAS), and the propensity to defer to AI, particularly in high-stakes scenarios like criminal justice and job-matching. Trust in automation emerged as a primary factor, while moral attitudes moderated deference in ethically sensitive contexts. Our findings highlight the practical relevance of transparency and explainability for supporting critical engagement with AI outputs and for informing the design of contextually appropriate decision support. This study contributes to understanding algorithmic authority as a multidimensional construct, offering empirically grounded insights for designing AI systems that are trustworthy and context-sensitive. Full article
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)

34 pages, 2216 KB  
Review
Big Data Analytics and AI for Consumer Behavior in Digital Marketing: Applications, Synthetic and Dark Data, and Future Directions
by Leonidas Theodorakopoulos, Alexandra Theodoropoulou and Christos Klavdianos
Big Data Cogn. Comput. 2026, 10(2), 46; https://doi.org/10.3390/bdcc10020046 - 2 Feb 2026
Viewed by 77
Abstract
In the big data era, understanding and influencing consumer behavior in digital marketing increasingly relies on large-scale data and AI-driven analytics. This narrative, concept-driven review examines how big data technologies and machine learning reshape consumer behavior analysis across key decision-making areas. After outlining the theoretical foundations of consumer behavior in digital settings and the main data and AI capabilities available to marketers, this paper discusses five application domains: personalized marketing and recommender systems, dynamic pricing, customer relationship management, data-driven product development and fraud detection. For each domain, it highlights how algorithmic models affect targeting, prediction, consumer experience and perceived fairness. This review then turns to synthetic data as a privacy-oriented way to support model development, experimentation and scenario analysis, and to dark data as a largely underused source of behavioral insight in the form of logs, service interactions and other unstructured records. A discussion section integrates these strands, outlines implications for digital marketing practice and identifies research needs related to validation, governance and consumer trust. Finally, this paper sketches future directions, including deeper integration of AI in real-time decision systems, increased use of edge computing, stronger consumer participation in data use, clearer ethical frameworks and exploratory work on quantum methods. Full article
(This article belongs to the Section Big Data)

20 pages, 13249 KB  
Article
Multimodal Dynamic Weighted Authentication Trust Evaluation Under Zero Trust Architecture
by Jianhua Gu, Jianhua Feng and Zefang Gao
Electronics 2026, 15(3), 592; https://doi.org/10.3390/electronics15030592 - 29 Jan 2026
Viewed by 138
Abstract
With the improvement of computing power in terminal devices and their widespread application in emerging technology fields, ensuring secure access to terminals has become an important challenge in the current network environment. Traditional security authentication and trust evaluation methods have many shortcomings in dealing with dynamic and complex network environments, such as limited ability to respond to new threats and inability to adjust evaluation strategies in real time. In response to these issues, this article proposes a dynamic weighted authentication trust evaluation method driven by multimodal data under zero trust architecture. The method introduces user operation risk values and time coefficients, which can dynamically reflect the behavior changes of users and devices in different times and environments, achieving more flexible and accurate trust evaluation. In order to further improve the accuracy of the evaluation, this article also uses the dynamic entropy weight method to calculate the weights of the evaluation indicators. By coupling with the evaluation values, the terminal access security authentication trust score is obtained, and the current authentication trust level is determined to ensure the overall balance of the trust evaluation results. The experimental results show that compared with traditional evaluation algorithms based on information entropy and collaborative reputation, the average error of the method proposed in this study has been reduced by 87.5% and 75%, respectively. It has significant advantages in dealing with complex network attacks, reducing security vulnerabilities, and improving system adaptability. Full article
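
The weighting step mentioned above builds on the classical entropy-weight method. As a minimal sketch of that step only (the paper's dynamic variant additionally folds in user operation-risk values and time coefficients, which are not reproduced here), with hypothetical indicator data:

```python
import numpy as np

def entropy_weight_scores(X):
    """Rows = access requests, columns = evaluation indicators.
    Returns (indicator weights, per-request trust scores)."""
    X = np.asarray(X, dtype=float)
    # Min-max normalize each indicator to [0, 1] (all oriented so larger = more trustworthy).
    norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    # Column-wise proportions used to compute each indicator's entropy.
    p = norm / (norm.sum(axis=0) + 1e-12)
    plogp = np.zeros_like(p)
    mask = p > 0
    plogp[mask] = p[mask] * np.log(p[mask])
    e = plogp.sum(axis=0) / -np.log(X.shape[0])  # entropy per indicator, in [0, 1]
    d = 1.0 - e                                  # lower entropy -> more discriminating indicator
    w = d / d.sum()
    return w, norm @ w                           # weights and weighted-sum trust scores

# Hypothetical benefit-type indicators: behavior score, device posture,
# inverted operation risk, time coefficient (one row per access request).
X = [[0.9, 0.8, 0.2, 0.7],
     [0.4, 0.6, 0.9, 0.3],
     [0.7, 0.7, 0.4, 0.6]]
weights, scores = entropy_weight_scores(X)
print(weights.round(3), scores.round(3))
```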

46 pages, 8562 KB  
Article
Quantifying AI Model Trust as a Model Sureness Measure by Bidirectional Active Processing and Visual Knowledge Discovery
by Alice Williams and Boris Kovalerchuk
Electronics 2026, 15(3), 580; https://doi.org/10.3390/electronics15030580 - 29 Jan 2026
Viewed by 119
Abstract
Trust in machine-learning models is critical for deployment by users, especially for high-risk tasks such as healthcare. Model trust involves much more than performance metrics such as accuracy, precision, or recall. It includes user readiness to allow a model to make decisions. Model trust is a multifaceted concept commonly associated with the stability of model predictions under variations in training data, noise, algorithmic parameters, and model explanations. This paper extends existing model trust concepts by introducing a novel Model Sureness measure. Some alternatively purposed Model Sureness measures have been proposed. Here, Model Sureness quantitatively measures the model accuracy stability under training data variations. For any model, this is carried out by combining the proposed Bidirectional Active Processing and Visual Knowledge Discovery. The proposed Bidirectional Active Processing method iteratively retrains a model on varied training data until a user-defined stopping criterion is met; in this work, this criterion is set to a 95% accuracy when the model is evaluated on the test data. This process further finds a minimal sufficient training dataset required for a model to satisfy this criterion. Accordingly, the proposed Model Sureness measure is defined as the ratio of the number of unnecessary cases to all cases in the training data along with variations of these ratios. Higher ratios indicate a greater Model Sureness under this measure, while trust in a model is ultimately a human decision based on multiple measures. Case studies conducted on three benchmark datasets from biology, medicine, and handwritten digit recognition demonstrate a well-preserved model accuracy with Model Sureness scores that reflect the capabilities of the evaluated models. Specifically, unnecessary case removal ranged from 20% to 80%, with an average reduction of approximately 50% of the training data. Full article
(This article belongs to the Special Issue Women's Special Issue Series: Artificial Intelligence)
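
To make the headline quantity concrete: Model Sureness, as defined above, is the share of training cases a model does not need in order to stay above an accuracy criterion. The sketch below illustrates that ratio only, using a stand-in dataset, model, and naive random-removal loop rather than the paper's Bidirectional Active Processing or its visual knowledge discovery component:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

criterion = 0.95                     # test-accuracy threshold, as in the abstract
rng = np.random.default_rng(0)
kept = np.arange(len(y_tr))          # indices of training cases still in use

while len(kept) > 10:
    # Tentatively drop ~10% of the remaining cases and retrain.
    candidate = rng.permutation(kept)[: int(0.9 * len(kept))]
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr[candidate], y_tr[candidate])
    if model.score(X_te, y_te) >= criterion:
        kept = candidate             # the dropped cases were unnecessary
    else:
        break                        # stop at the last subset that met the criterion

sureness = 1 - len(kept) / len(y_tr)  # fraction of training cases the model did not need
print(f"kept {len(kept)}/{len(y_tr)} cases; sureness ratio ≈ {sureness:.2f}")
```

Read this way, a run that can discard roughly half of its training cases while holding the 95% test-accuracy criterion would report a sureness ratio of about 0.5, consistent with the 20-80% removal range quoted above.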

21 pages, 649 KB  
Review
Smart Lies and Sharp Eyes: Pragmatic Artificial Intelligence for Cancer Pathology: Promise, Pitfalls, and Access Pathways
by Mohamed-Amine Bani
Cancers 2026, 18(3), 421; https://doi.org/10.3390/cancers18030421 - 28 Jan 2026
Viewed by 106
Abstract
Background: Whole-slide imaging and algorithmic advances have moved computational pathology from research to routine consideration. Despite notable successes, real-world deployment remains limited by generalization, validation gaps, and human-factor risks, which can be amplified in resource-constrained settings. Content/Scope: This narrative review and implementation perspective summarizes clinically proximate AI capabilities in cancer pathology, including lesion detection, metastasis triage, mitosis counting, immunomarker quantification, and prediction of selected molecular alterations from routine histology. We also summarize recurring failure modes (dataset leakage, stain/batch/site shifts, misleading explanation overlays, calibration errors, and automation bias) and distinguish applications supported by external retrospective validation, prospective reader-assistance or real-world studies, and regulatory-cleared use. We translate these evidence patterns into a practical checklist covering dataset design, external and temporal validation, robustness testing, calibration and uncertainty handling, explainability sanity checks, and workflow-safety design. Equity Focus: We propose a stepwise adoption pathway for low- and middle-income countries: prioritize narrow, high-impact use cases; match compute and storage requirements to local infrastructure; standardize pre-analytics; pool validation cohorts; and embed quality management, privacy protections, and audit trails. Conclusions: AI can already serve as a reliable second reader for selected tasks, reducing variance and freeing expert time. Safe, equitable deployment requires disciplined validation, calibrated uncertainty, and guardrails against human-factor failure. With pragmatic scoping and shared infrastructure, pathology programs can realize benefits while preserving trust and accountability. Full article

23 pages, 4070 KB  
Article
Formal Verification of Trust in Multi-Agent Systems Under Generalized Possibility Theory
by Ruiqi Huang, Zhanyou Ma and Nana He
Mathematics 2026, 14(3), 456; https://doi.org/10.3390/math14030456 - 28 Jan 2026
Viewed by 108
Abstract
In multi-agent systems, the interactions between autonomous agents within dynamic and uncertain environments are crucial for achieving their objectives. Current research leverages model checking techniques to verify these interactions, with social accessibility relations commonly used to formalize agent interactions. In multi-agent systems that incorporate generalized possibility measures, the quantification, computation, and model checking of trust properties present significant challenges. This paper introduces an indirect model checking algorithm designed to transform social trust under uncertainty into quantifiable properties for verification. A Generalized Possibilistic Trust Interpreted System (GPTIS) is proposed to model and characterize multi-agent systems with trust-related uncertainties. Subsequently, the trust operators are extended based on Generalized Possibilistic Computation Tree Logic (GPoCTL) to develop the Generalized Possibilistic Trust Computation Tree Logic (GPTCTL), which is employed to express the trust properties of the system. Then, a model checking algorithm that maps trust accessibility relations to trust actions is introduced, thereby transforming the model checking of GPTCTL on GPTIS into model checking of GPoCTL on Generalized Possibility Kripke Structures (GPKSs). The proposed algorithm is provided with a correctness proof and complexity analysis, followed by an example demonstrating its practical feasibility. Full article
(This article belongs to the Section D2: Operations Research and Fuzzy Decision Making)
42 pages, 4980 KB  
Article
Socially Grounded IoT Protocol for Reliable Computer Vision in Industrial Applications
by Gokulnath Chidambaram, Shreyanka Subbarayappa and Sai Baba Magapu
Future Internet 2026, 18(2), 69; https://doi.org/10.3390/fi18020069 - 27 Jan 2026
Viewed by 178
Abstract
The Social Internet of Things (SIoT) enables collaborative service provisioning among interconnected devices by leveraging socially inspired trust relationships. This paper proposes a socially driven SIoT protocol for trust-aware service selection, enabling dynamic friendship formation and ranking among distributed service-providing devices based on observed execution behavior. The protocol integrates detection accuracy, round-trip time (RTT), processing time, and device characteristics within a graph-based friendship model and employs PageRank-based scoring to guide service selection. Industrial computer vision workloads are used as a representative testbed to evaluate the proposed SIoT trust-evaluation framework under realistic execution and network constraints. In homogeneous environments with comparable service-provider capabilities, friendship scores consistently favor higher-accuracy detection pipelines, with F1-scores in the range of approximately 0.25–0.28, while latency and processing-time variations remain limited. In heterogeneous environments comprising resource-diverse devices, trust differentiation reflects the combined influence of algorithm accuracy and execution feasibility, resulting in clear service-provider ranking under high-resolution and high-frame-rate workloads. Experimental results further show that reducing available network bandwidth from 100 Mbps to 10 Mbps increases round-trip communication latency by approximately one order of magnitude, while detection accuracy remains largely invariant. The evaluation is conducted on a physical SIoT testbed with three interconnected devices, forming an 11-node, 22-edge logical trust graph, and on synthetic trust graphs with up to 50 service-providing nodes. Across all settings, service-selection decisions remain stable, and PageRank-based friendship scoring is completed in approximately 20 ms, incurring negligible overhead relative to inference and communication latency. Full article
(This article belongs to the Special Issue Social Internet of Things (SIoT))
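
For readers unfamiliar with the scoring step, the sketch below shows how PageRank over a small weighted trust graph can rank service providers. The devices, observed metrics, and weight formula are invented for illustration and do not reproduce the paper's friendship-formation protocol or its 11-node testbed graph:

```python
import networkx as nx

def friendship_weight(accuracy, rtt_ms, proc_ms):
    # Illustrative only: favor accurate, low-latency providers.
    return accuracy / (1.0 + 0.001 * (rtt_ms + proc_ms))

G = nx.DiGraph()
observations = [
    # (requesting device, service provider, detection accuracy, RTT ms, processing ms)
    ("cam-1", "edge-A", 0.91, 40, 120),
    ("cam-1", "edge-B", 0.86, 15, 60),
    ("cam-2", "edge-A", 0.93, 55, 130),
    ("cam-2", "edge-C", 0.78, 10, 45),
]
for src, dst, acc, rtt, proc in observations:
    G.add_edge(src, dst, weight=friendship_weight(acc, rtt, proc))

scores = nx.pagerank(G, alpha=0.85, weight="weight")
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # higher score -> preferred service provider for new requests
```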

41 pages, 4245 KB  
Article
Blockchain-Integrated Stackelberg Model for Real-Time Price Regulation and Demand-Side Optimization in Microgrids
by Abdullah Umar, Prashant Kumar Jamwal, Deepak Kumar, Nitin Gupta, Vijayakumar Gali and Ajay Kumar
Energies 2026, 19(3), 643; https://doi.org/10.3390/en19030643 - 26 Jan 2026
Viewed by 178
Abstract
Renewable-driven microgrids require transparent and adaptive coordination mechanisms to manage variability in distributed generation and flexible demand. Conventional pricing schemes and centralized demand-side programs are often insufficient to regulate real-time imbalances, leading to inefficient renewable utilization and limited prosumer participation. This work proposes a blockchain-integrated Stackelberg pricing model that combines real-time price regulation, optimal demand-side management, and peer-to-peer energy exchange within a unified operational framework. The Microgrid Energy Management System (MEMS) acts as the Stackelberg leader, setting hourly prices and demand response incentives, while prosumers and consumers respond through optimal export and load-shifting decisions derived from quadratic cost models. A distributed supply–demand balancing algorithm iteratively updates prices to reach the Stackelberg equilibrium, ensuring system-level feasibility. To enable trust and tamper-proof execution, smart-contract architecture is deployed on the Polygon Proof-of-Stake network, supporting participant registration, day-ahead commitments, real-time measurement logging, demand-response validation, and automated settlement with negligible transaction fees. Experimental evaluation using real-world demand and PV profiles shows improved peak-load reduction, higher renewable utilization, and increased user participation. Results demonstrate that the proposed framework enhances operational reliability while enabling transparent and verifiable microgrid energy transactions. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)
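
The distributed balancing idea summarized above can be pictured as a loop in which the leader nudges the price in proportion to the supply-demand imbalance while followers re-optimize against simple quadratic cost and utility curves. All coefficients below are invented, and the blockchain settlement layer is out of scope; this is a toy illustration, not the paper's model:

```python
# Followers' best responses derived from quadratic utility/cost:
# consumer i demands max(0, (a_i - p) / b_i); prosumer j exports max(0, (p - c_j) / d_j).
consumers = [(10.0, 0.8), (8.0, 0.5), (12.0, 1.0)]   # hypothetical (a_i, b_i)
prosumers = [(2.0, 0.4), (3.0, 0.6)]                 # hypothetical (c_j, d_j)

price, step = 5.0, 0.05
for _ in range(500):
    demand = sum(max(0.0, (a - price) / b) for a, b in consumers)
    supply = sum(max(0.0, (price - c) / d) for c, d in prosumers)
    imbalance = demand - supply
    if abs(imbalance) < 1e-6:
        break
    price += step * imbalance        # raise the price when demand exceeds supply

print(f"clearing price ≈ {price:.3f}, demand ≈ {demand:.2f}, supply ≈ {supply:.2f}")
```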

20 pages, 652 KB  
Review
Trust as Behavioral Architecture: How E-Commerce Platforms Shape Consumer Judgment and Agency
by Anupama Peter Mattathil, Babu George and Tony L. Henthorne
Platforms 2026, 4(1), 2; https://doi.org/10.3390/platforms4010002 - 26 Jan 2026
Viewed by 216
Abstract
In digital marketplaces, trust in e-commerce platforms has evolved from a protective heuristic into a powerful mechanism of behavioral conditioning. This review interrogates how trust cues such as star ratings, fulfillment badges, and platform reputation shape consumer cognition, systematically displace critical evaluation, and create asymmetries in perceived quality. Drawing on over 47 high-quality studies across experimental, survey, and modeling methodologies, we identify seven interlocking dynamics: (1) cognitive outsourcing via platform trust, (2) reputational arbitrage by low-quality sellers, (3) consumer loyalty despite disappointment, (4) heuristic conditioning through trust signals, (5) trust inflation through ratings saturation, (6) false security masking structural risks, and (7) the shift in consumer trust from brands to platforms. Anchored in dual process theory, this synthesis positions trust not merely as a transactional enabler but as a socio-technical artifact engineered by platforms to guide attention, reduce scrutiny, and manage decision-making at scale. Eventually, platform trust functions as both lubricant and leash: streamlining choice while subtly constraining agency, with profound implications for digital commerce, platform governance, and consumer autonomy. Full article

25 pages, 3825 KB  
Review
Balancing Personalization, Privacy, and Value: A Systematic Literature Review of AI-Enabled Customer Experience Management
by Ristianawati Dwi Utami and Wang Aimin
Information 2026, 17(2), 115; https://doi.org/10.3390/info17020115 - 26 Jan 2026
Viewed by 344
Abstract
Artificial intelligence (AI) is transforming customer experience management (CXM) by enabling real-time, data-driven, and personalized interactions across digital touchpoints, including chatbots, voice assistants, generative AI, and immersive platforms. This study presents a PRISMA-based systematic literature review of 59 peer-reviewed studies published between 2021 and 2026, examining how AI-enabled personalization, privacy concerns, and customer value interact within AI-mediated customer experiences. Drawing on the Personalization–Privacy–Value (PPV) framework, the review synthesizes evidence on how AI-driven personalization enhances utilitarian, hedonic, experiential, relational, and emotional value, thereby strengthening satisfaction, engagement, loyalty, and behavioral intentions. At the same time, the findings reveal persistent tensions, as privacy concerns, perceived surveillance, algorithmic bias, and contextual moderators—including generational differences, cultural expectations, and technological literacy—frequently constrain value creation and erode trust. The review highlights that personalization benefits are highly contingent on transparency, perceived control, and ethical alignment, rather than personalization intensity alone. The study contributes by integrating ethical AI considerations into CXM research and clarifying conditions under which AI-enabled personalization leads to value creation versus value destruction. Managerially, the findings underscore the importance of ethical governance, transparent data practices, and customer-centered AI design to sustain trust and long-term customer relationships. Future research should prioritize longitudinal analyses of trust development, demographic heterogeneity, and cross-sector comparisons of AI governance as AI technologies become increasingly embedded in service ecosystems. Full article
(This article belongs to the Section Artificial Intelligence)

15 pages, 436 KB  
Article
Artificial Intelligence in Sustainable Marketing: How AI Personalization Impacts Consumer Purchase Decisions
by Enas Alsaffarini and Bahaa Subhi Awwad
Sustainability 2026, 18(2), 1123; https://doi.org/10.3390/su18021123 - 22 Jan 2026
Viewed by 417
Abstract
The study explores how consumer buying behavior is influenced by artificial intelligence (AI) personalization, with a specific focus on responsible and sustainability-aligned digital marketing. Using an explanatory sequential mixed-methods design, the study analyzes a quantitative survey and qualitative interviews. Results show that purchase behavior is strongly affected by exposure to AI messages—especially when recommendations are relevant, timely, and emotionally appealing—and by trust in AI, while perceived lack of trust inhibits purchasing. Qualitative findings underscore affective responses alongside ethical concerns, perceived transparency, and perceived control over data. Overall, the study shows that effective personalization depends not only on algorithmic sophistication but also on users’ sense of relevance and autonomy and on ethical data governance. The conclusions highlight sustainability-consistent implications for marketers: increase data transparency, segment customers by privacy sensitivity, and adopt accountable, consent-based personalization to build durable trust and loyalty. Future research should examine longitudinal effects and cultural differences, acknowledging limits of small purposive qualitative samples for generalization and exploring how consumer trust, ethical perceptions, and responses to AI personalization evolve over time. Full article
(This article belongs to the Special Issue Sustainable Digital Marketing Policy and Studies of Consumer Behavior)

21 pages, 583 KB  
Article
Beyond Accuracy: The Cognitive Economy of Trust and Absorption in the Adoption of AI-Generated Forecasts
by Anne-Marie Sassenberg, Nirmal Acharya, Padmaja Kar and Mohammad Sadegh Eshaghi
Forecasting 2026, 8(1), 8; https://doi.org/10.3390/forecast8010008 - 21 Jan 2026
Viewed by 174
Abstract
AI Recommender Systems (RecSys) function as personalised forecasting engines, predicting user preferences to reduce information overload. However, the efficacy of these systems is often bottlenecked by the “Last Mile” of forecasting: the end-user’s willingness to adopt and rely on the prediction. While the existing literature often assumes that algorithmic accuracy (e.g., low RMSE) automatically drives utilisation, empirical evidence suggests that users frequently reject accurate forecasts due to a lack of trust or cognitive friction. This study challenges the utilitarian view that users adopt systems simply because they are useful, instead proposing that sustainable adoption requires a state of Cognitive Absorption—a psychological flow state enabled by the Cognitive Economy of trust. Grounded in the Motivation–Opportunity–Ability (MOA) framework, we developed the Trust–Absorption–Intention (TAI) model. We analysed data from 366 users of a major predictive platform using Partial Least Squares Structural Equation Modelling (PLS-SEM). The Disjoint Two-Stage Approach was employed to model the reflective–formative Higher-Order Constructs. The results demonstrate that Cognitive Trust (specifically the relational dimensions of Benevolence and Integrity) operates via a dual pathway. It drives adoption directly, serving as a mechanism of Cognitive Economy where users suspend vigilance to rely on the AI as a heuristic, while simultaneously freeing mental resources to enter a state of Cognitive Absorption. Affective Trust further drives this immersion by fostering curiosity. Crucially, Cognitive Absorption partially mediates the relationship between Cognitive Trust and adoption intention, whereas it fully mediates the impact of Affective Trust. This indicates that while Cognitive Trust can drive reliance directly as a rational shortcut, Affective Trust translates to adoption only when it successfully triggers a flow state. This study bridges the gap between algorithmic forecasting and behavioural adoption. It introduces the Cognitive Economy perspective: Trust reduces the cognitive cost of verifying predictions, allowing users to outsource decision-making to the AI and enter a state of effortless immersion. For designers of AI forecasting agents, the findings suggest that maximising accuracy may be less effective than minimising cognitive friction for sustaining long-term adoption. To solve the cold start problem, platforms should be designed for flow by building emotional rapport and explainability, thereby converting sporadic users into continuous data contributors. Full article
(This article belongs to the Section AI Forecasting)

19 pages, 1580 KB  
Article
Truth and Trust in the News: How Young People in Portugal and Finland Perceive Information Operations in the Media
by Niina Meriläinen and Ana Melro
Journal. Media 2026, 7(1), 13; https://doi.org/10.3390/journalmedia7010013 - 20 Jan 2026
Viewed by 380
Abstract
This study explores how young people in Finland and Portugal perceive media trust and vulnerability to information operations in the digital era. While both groups rely heavily on digital platforms for news, they view online sources as less reliable due to disinformation and fake news, especially on TikTok and Instagram. Trust and truth appear emotionally driven, with influencers and entertainment content often considered credible, increasing susceptibility to manipulation. Despite identifying as ‘digital natives’, participants rarely question source credibility or algorithmic influence, leaving them exposed to adversarial actors, such as Russia. Full article

21 pages, 549 KB  
Article
Employee Comfort with AI-Driven Algorithmic Decision-Making: Evidence from the GCC and Lebanon
by Soha El Achi, Dani Aoun, Wael Lahad and Nada Jabbour Al Maalouf
Adm. Sci. 2026, 16(1), 49; https://doi.org/10.3390/admsci16010049 - 18 Jan 2026
Viewed by 428
Abstract
In this digital era, many companies are integrating new solutions involving Artificial Intelligence (AI)-based automation systems to optimize processes, reach higher efficiency, and help them with decision-making. While implementing these changes, various challenges may arise, including resistance to AI integration from employees. This study examines how employees’ perceived benefits, concerns, and trust regarding AI-driven algorithmic decision-making influence their comfort with AI-driven algorithmic decision-making in the workplace. This study employed a quantitative method by surveying employees in the Gulf Cooperation Council (GCC) and Lebanon with a final sample size of 388 participants. The results demonstrate that employees are more likely to feel comfortable with AI-driven algorithmic decision-making in the workplace if they believe AI will increase efficiency, promote fairness, and decrease errors. Unexpectedly, employee concerns were positively associated with comfort, suggesting an adaptive response to AI adoption. Lastly, comfort with AI-driven algorithmic decision-making is positively correlated with greater levels of trust in AI systems. These findings provide actionable guidance to organizations, underscoring the need to communicate clearly about AI’s role, address employees’ concerns through transparency and human oversight, and invest in training and reskilling initiatives that build trust and foster responsible, employee-centered adoption of AI. Full article

14 pages, 250 KB  
Article
Exploring an AI-First Healthcare System
by Ali Gates, Asif Ali, Scott Conard and Patrick Dunn
Bioengineering 2026, 13(1), 112; https://doi.org/10.3390/bioengineering13010112 - 17 Jan 2026
Viewed by 490
Abstract
Artificial intelligence (AI) is now embedded across many aspects of healthcare, yet most implementations remain fragmented, task-specific, and layered onto legacy workflows. This paper does not review AI applications in healthcare per se; instead, it examines what an AI-first healthcare system would look like, one in which AI functions as a foundational organizing principle of care delivery rather than an adjunct technology. We synthesize evidence across ambulatory, inpatient, diagnostic, post-acute, and population health settings to assess where AI capabilities are sufficiently mature to support system-level integration and where critical gaps remain. Across domains, the literature demonstrates strong performance for narrowly defined tasks such as imaging interpretation, documentation support, predictive surveillance, and remote monitoring. However, evidence for longitudinal orchestration, cross-setting integration, and sustained impact on outcomes, costs, and equity remains limited. Key barriers include data fragmentation, workflow misalignment, algorithmic bias, insufficient governance, and lack of prospective, multi-site evaluations. We argue that advancing toward AI-first healthcare requires shifting evaluation from accuracy-centric metrics to system-level outcomes, emphasizing human-enabled AI, interoperability, continuous learning, and equity-aware design. Using hypertension management and patient journey exemplars, we illustrate how AI-first systems can enable proactive risk stratification, coordinated intervention, and continuous support across the care continuum. We further outline architectural and governance requirements, including cloud-enabled infrastructure, interoperability, operational machine learning practices, and accountability frameworks—necessary to operationalize AI-first care safely and at scale, subject to prospective validation, regulatory oversight, and post-deployment surveillance. This review contributes a system-level framework for understanding AI-first healthcare, identifies priority research and implementation gaps, and offers practical considerations for clinicians, health systems, researchers, and policymakers. By reframing AI as infrastructure rather than isolated tools, the AI-first approach provides a pathway toward more proactive, coordinated, and equitable healthcare delivery while preserving the central role of human judgment and trust. Full article
(This article belongs to the Special Issue AI and Data Science in Bioengineering: Innovations and Applications)