Search Results (306)

Search Parameters:
Keywords = privacy risk analysis

16 pages, 358 KiB  
Article
Artificial Intelligence in Curriculum Design: A Data-Driven Approach to Higher Education Innovation
by Thai Son Chu and Mahfuz Ashraf
Knowledge 2025, 5(3), 14; https://doi.org/10.3390/knowledge5030014 - 29 Jul 2025
Abstract
This paper shows that artificial intelligence is fundamentally transforming college curricula by enabling data-driven personalization, which enhances student outcomes and better aligns educational programs with evolving workforce demands. Specifically, predictive analytics, machine learning algorithms, and natural language processing were applied here, grounded in constructivist learning theory and Human–Computer Interaction principles, to evaluate student performance and identify at-risk students to propose personalized learning pathways. Results indicated that the AI-based curriculum achieved a much higher course completion rate (89.72%) and retention rate (91.44%), and a lower dropout rate (4.98%), than the traditional model. Sentiment analysis of learner feedback showed a more positive learning experience, while regression and ANOVA analyses confirmed the impact of AI on academic performance. Therefore, the learning content delivery for each student was continuously improved based on individual learner characteristics and industry trends by AI-enabled recommender systems and adaptive learning models. Its advantages notwithstanding, the study emphasizes the need to address ethical concerns, ensure data privacy safeguards, and mitigate algorithmic bias before an equitable outcome can be claimed. These findings can inform institutions aspiring to adopt AI-driven models for curriculum innovation to build a more dynamic, responsive, and learner-centered educational ecosystem.
(This article belongs to the Special Issue Knowledge Management in Learning and Education)

31 pages, 528 KiB  
Article
An Exploratory Factor Analysis Approach on Challenging Factors for Government Cloud Service Adoption Intention
by Ndukwe Ukeje, Jairo A. Gutierrez, Krassie Petrova and Ugochukwu Chinonso Okolie
Future Internet 2025, 17(8), 326; https://doi.org/10.3390/fi17080326 - 23 Jul 2025
Abstract
This study explores the challenges hindering the government’s adoption of cloud computing despite its benefits in improving services, reducing costs, and enhancing collaboration. Key barriers include information security, privacy, compliance, and perceived risks. Using the Unified Theory of Acceptance and Use of Technology (UTAUT) model, the study conceptualises a model incorporating privacy, governance framework, performance expectancy, and information security as independent variables, with perceived risk as a moderator and government intention as the dependent variable. The study employs exploratory factor analysis (EFA) based on survey data from 71 participants in Nigerian government organisations to validate the measurement scale for these factors. The analysis evaluates variable validity, factor relationships, and measurement reliability. Cronbach’s alpha values range from 0.807 to 0.950, confirming high reliability. Measurement items with a common variance above 0.40 were retained, explaining 70.079% of the total variance of the measurement items and demonstrating reliability and accuracy in evaluating the challenging factors. These findings establish a validated scale for assessing government cloud adoption challenges and highlight complex relationships among influencing factors. This study provides a reliable measurement scale and model for future research and policymakers on the government’s intention to adopt cloud services.
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
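The reliability figures quoted above are Cronbach's alpha values. For readers unfamiliar with the statistic, a minimal sketch of its computation; the Likert-style item matrix below is illustrative, not data from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 respondents, 4 items)
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(scores), 3))  # ≈ 0.93, in the paper's reported range
```

Values above roughly 0.8, as reported in the study, are conventionally read as high internal consistency.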

22 pages, 845 KiB  
Article
Bridging Cities and Citizens with Generative AI: Public Readiness and Trust in Urban Planning
by Adnan Alshahrani
Buildings 2025, 15(14), 2494; https://doi.org/10.3390/buildings15142494 - 16 Jul 2025
Abstract
As part of its modernisation and economic diversification policies, Saudi Arabia is building smart, sustainable cities intended to improve quality of life and meet environmental goals. However, involving the public in urban planning remains complex, with traditional methods often proving expensive, time-consuming, and inaccessible to many groups. Integrating artificial intelligence (AI) into public participation may help to address these limitations. This study explores whether Saudi residents are ready to engage with AI-driven tools in urban planning, how they prefer to interact with them, and what ethical concerns may arise. Using a quantitative, survey-based approach, the study collected data from 232 Saudi residents using non-probability stratified sampling. The survey assessed demographic influences on AI readiness, preferred engagement methods, and perceptions of ethical risks. The results showed a strong willingness among participants (200 respondents, 86%)—especially younger and university-educated respondents—to engage through AI platforms. Visual tools such as image and video analysis were the most preferred (96 respondents, 41%), while chatbots were less favoured (16 respondents, 17%). However, concerns were raised about privacy (76 respondents, 33%), bias (52 respondents, 22%), and over-reliance on technology (84 respondents, 36%). By exploring the intersection of generative AI and participatory urban governance, this study contributes directly to the discourse on inclusive smart city development. The research also offers insights into how AI-driven public engagement tools can be integrated into urban planning workflows to enhance the design, governance, and performance of the built environment. The findings suggest that AI has the potential to improve inclusivity and responsiveness in urban planning, but that its success depends on public trust, ethical safeguards, and the thoughtful design of accessible, user-friendly engagement platforms.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

21 pages, 877 KiB  
Article
Identity-Based Provable Data Possession with Designated Verifier from Lattices for Cloud Computing
by Mengdi Zhao and Huiyan Chen
Entropy 2025, 27(7), 753; https://doi.org/10.3390/e27070753 - 15 Jul 2025
Abstract
Provable data possession (PDP) is a technique that enables the verification of data integrity in cloud storage without the need to download the data. PDP schemes are generally categorized into public and private verification. Public verification allows third parties to assess the integrity of outsourced data, offering good openness and flexibility, but it may lead to privacy leakage and security risks. In contrast, private verification restricts the auditing capability to the data owner, providing better privacy protection but often resulting in higher verification costs and operational complexity due to limited local resources. Moreover, most existing PDP schemes are based on classical number-theoretic assumptions, making them vulnerable to quantum attacks. To address these challenges, this paper proposes an identity-based PDP with a designated verifier over lattices, utilizing a specially leveled identity-based fully homomorphic signature (IB-FHS) scheme. We provide a formal security proof of the proposed scheme under the small-integer solution (SIS) and learning with errors (LWE) assumptions within the random oracle model. Theoretical analysis confirms that the scheme achieves security guarantees while maintaining practical feasibility. Furthermore, simulation-based experiments show that for a 1 MB file and lattice dimension of n = 128, the computation times for core algorithms such as TagGen, GenProof, and CheckProof are approximately 20.76 s, 13.75 s, and 3.33 s, respectively. Compared to existing lattice-based PDP schemes, the proposed scheme introduces additional overhead due to the designated verifier mechanism; however, it achieves a well-balanced optimization among functionality, security, and efficiency.
(This article belongs to the Section Information Theory, Probability and Statistics)

14 pages, 223 KiB  
Article
Balancing Privacy and Risk: A Critical Analysis of Personal Data Use as Governed by Saudi Insurance Law
by Mutaz Abdulaziz Alkhedhairy
Laws 2025, 14(4), 47; https://doi.org/10.3390/laws14040047 - 6 Jul 2025
Abstract
The Kingdom of Saudi Arabia (KSA) Personal Data Protection Law (PDPL) was enacted in 2021. In its brief three-year existence, the PDPL has attracted significant academic and legal practitioner attention. This critical analysis focuses on three key questions: (1) What are the key PDPL objectives? (2) How does this legislation compare with privacy–data protection approaches adopted in other jurisdictions (notably the European Union General Data Protection Regulation 2016 (GDPR))? and (3) Does the PDPL achieve a reasonable, workable balance between personal data protection (‘data subjects’ interests) and risks associated with personal data being shared with KSA insurers? The analysis confirms that these PDPL measures appear sound, but a definitive assessment of the ‘balance’ objectives highlighted here requires ongoing attention—three years of PDPL use is an insufficient basis to reach final conclusions regarding PDPL fitness for purpose. However, a tentative ‘soundness’ conclusion has reasonable support when the relevant authorities are collectively assessed, particularly regarding the treatment of personal data by KSA insurers in the context of personal insurance policies.
20 pages, 2947 KiB  
Article
Personal Data Value Realization and Symmetry Enhancement Under Social Service Orientation: A Tripartite Evolutionary Game Approach
by Dandan Wang and Junhao Yu
Symmetry 2025, 17(7), 1069; https://doi.org/10.3390/sym17071069 - 5 Jul 2025
Abstract
In the digital economy, information asymmetry among individuals, data users, and governments limits the full realization of personal data value. To address this, “symmetry enhancement” strategies aim to reduce information gaps, enabling more balanced decision-making and facilitating efficient data flow. This study establishes a tripartite evolutionary game model based on personal data collection and development, conducts simulations using MATLAB R2024a, and proposes countermeasures based on equilibrium analysis and simulation results. The results highlight that individual participation is pivotal, influenced by perceived benefits, management costs, and privacy risks. Meanwhile, data users’ compliance hinges on economic incentives and regulatory burdens, with excessive costs potentially discouraging adherence. Governments must carefully weigh social benefits against regulatory expenditures. Based on these findings, this paper proposes the following recommendations: use personal data application scenarios as a guide, rely on the construction of personal trustworthy data spaces, explore and improve personal data revenue distribution mechanisms, strengthen the management of data users, and promote the maximization of personal data value through multi-party collaborative ecological incentives.
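Evolutionary game models of this kind are typically simulated with replicator dynamics. A minimal one-population sketch under hypothetical payoffs (the study's actual MATLAB model is tripartite and considerably richer):

```python
def replicator_step(x: float, payoff_adopt: float, payoff_reject: float,
                    dt: float = 0.01) -> float:
    """One Euler step of the replicator equation dx/dt = x(1-x)(u_adopt - u_reject)."""
    return x + dt * x * (1 - x) * (payoff_adopt - payoff_reject)

# Hypothetical payoffs: individuals' benefit from participating in data
# sharing vs. opting out (values chosen for illustration only)
x = 0.3  # initial share of individuals who participate
for _ in range(1000):
    x = replicator_step(x, payoff_adopt=1.2, payoff_reject=0.8)
print(round(x, 3))  # drifts toward 1 when participation pays off
```

In the tripartite setting, each of the three populations (individuals, data users, governments) gets its own equation of this form, with payoffs that depend on the other two populations' current strategy shares.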

49 pages, 1388 KiB  
Review
Evaluating Trustworthiness in AI: Risks, Metrics, and Applications Across Industries
by Aleksandra Nastoska, Bojana Jancheska, Maryan Rizinski and Dimitar Trajanov
Electronics 2025, 14(13), 2717; https://doi.org/10.3390/electronics14132717 - 4 Jul 2025
Abstract
Ensuring the trustworthiness of artificial intelligence (AI) systems is critical as they become increasingly integrated into domains like healthcare, finance, and public administration. This paper explores frameworks and metrics for evaluating AI trustworthiness, focusing on key principles such as fairness, transparency, privacy, and security. This study is guided by two central questions: how can trust in AI systems be systematically measured across the AI lifecycle, and what are the trade-offs involved when optimizing for different trustworthiness dimensions? By examining frameworks such as the NIST AI Risk Management Framework (AI RMF), the AI Trust Framework and Maturity Model (AI-TMM), and ISO/IEC standards, this study bridges theoretical insights with practical applications. We identify major risks across the AI lifecycle stages and outline various metrics to address challenges in system reliability, bias mitigation, and model explainability. This study includes a comparative analysis of existing standards and their application across industries to illustrate their effectiveness. Real-world case studies, including applications in healthcare, financial services, and autonomous systems, demonstrate approaches to applying trust metrics. The findings reveal that achieving trustworthiness involves navigating trade-offs between competing metrics, such as fairness versus efficiency or privacy versus transparency, and emphasize the importance of interdisciplinary collaboration for robust AI governance. Emerging trends suggest the need for adaptive frameworks for AI trustworthiness that evolve alongside advancements in AI technologies. This paper contributes to the field by proposing a comprehensive review of existing frameworks with guidelines for building resilient, ethical, and transparent AI systems, ensuring their alignment with regulatory requirements and societal expectations.

14 pages, 1418 KiB  
Article
Privacy-Preserving Data Sharing via PCA-Based Dimensionality Reduction in Non-IID Environments
by Yeon-Ji Lee, Na-Yeon Shin and Il-Gu Lee
Electronics 2025, 14(13), 2711; https://doi.org/10.3390/electronics14132711 - 4 Jul 2025
Abstract
The proliferation of mobile devices has generated exponential data growth, driving efforts to extract value. However, mobile data often presents non-independent and identically distributed (non-IID) challenges owing to varying device, environmental, and user factors. While data sharing can mitigate non-IID issues, direct raw data transmission poses significant security risks like privacy breaches and man-in-the-middle attacks. This paper proposes a secure data-sharing mechanism using principal component analysis (PCA). Each node independently builds a local PCA model to reduce data dimensionality before sharing. Receiving nodes then recover data using a similarly constructed local PCA model. Sharing only dimensionally reduced data instead of raw data enhances transmission privacy. The method’s effectiveness was evaluated from both legitimate user and attacker perspectives. Experimental results demonstrated stable accuracy for legitimate users post-sharing, while attacker accuracy significantly dropped. The optimal number of principal components was also experimentally determined. Under optimal configuration, the proposed method achieves up to 42 times greater memory efficiency and superior privacy metrics compared with conventional approaches, demonstrating its advantages.
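The reduce-then-share step can be illustrated with a plain-NumPy PCA sketch. The toy data, component count, and single shared model below are simplifying assumptions for illustration, not the paper's exact protocol (which has each node fit its own local model):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy local dataset: 100 samples of 10 correlated features (rank ~3 plus noise)
latent = rng.normal(size=(100, 3))
data = latent @ rng.normal(size=(3, 10)) + 0.01 * rng.normal(size=(100, 10))

# Sender: fit a local PCA and keep only the top k principal components
k = 3
mean = data.mean(axis=0)
_, _, components = np.linalg.svd(data - mean, full_matrices=False)
reduced = (data - mean) @ components[:k].T  # only this projection is transmitted

# A receiver with a matching local model can approximate the original data;
# an eavesdropper intercepting the link sees only the k-dimensional projection.
reconstructed = reduced @ components[:k] + mean
err = np.abs(reconstructed - data).max()
print(err < 0.1)  # low-rank structure survives the dimensionality reduction
```

The privacy/utility trade-off the paper tunes experimentally corresponds here to the choice of `k`: fewer components mean less information leaked but also a coarser reconstruction for legitimate receivers.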

31 pages, 1411 KiB  
Article
Entropy-Based Correlation Analysis for Privacy Risk Assessment in IoT Identity Ecosystem
by Kai-Chih Chang and Suzanne Barber
Entropy 2025, 27(7), 723; https://doi.org/10.3390/e27070723 - 3 Jul 2025
Abstract
As the Internet of Things (IoT) expands, robust tools for assessing privacy risk are increasingly critical. This research introduces a quantitative framework for evaluating IoT privacy risks, centered on two algorithmically derived scores: the Personalized Privacy Assistant (PPA) score and the PrivacyCheck score, both developed by the Center for Identity at The University of Texas. We analyze the correlation between these scores across multiple types of sensitive data—including email, social security numbers, and location—to understand their effectiveness in detecting privacy vulnerabilities. Our approach leverages Bayesian networks with cycle decomposition to capture complex dependencies among risk factors and applies entropy-based metrics to quantify informational uncertainty in privacy assessments. Experimental results highlight the strengths and limitations of each tool and demonstrate the value of combining data-driven risk scoring, information-theoretic analysis, and network modeling for privacy evaluation in IoT environments.
(This article belongs to the Section Multidisciplinary Applications)
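The entropy-based metrics mentioned above reduce, at their core, to Shannon entropy over a discrete distribution. A minimal sketch with hypothetical risk-level probabilities (the probability values are illustrative, not from the study):

```python
import math

def shannon_entropy(probs) -> float:
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical distributions over four privacy-risk levels for one attribute
uniform = [0.25, 0.25, 0.25, 0.25]  # maximal uncertainty in the assessment
skewed = [0.85, 0.05, 0.05, 0.05]   # assessment is nearly certain

print(shannon_entropy(uniform))  # 2.0 bits
print(round(shannon_entropy(skewed), 3))  # lower entropy, less uncertainty
```

Higher entropy flags attributes where the risk assessment itself is uncertain, which is the informational-uncertainty signal the framework quantifies.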

22 pages, 1732 KiB  
Article
GSIDroid: A Suspicious Subgraph-Driven and Interpretable Android Malware Detection System
by Hong Huang, Weitao Huang and Feng Jiang
Sensors 2025, 25(13), 4116; https://doi.org/10.3390/s25134116 - 1 Jul 2025
Abstract
In recent years, the growing threat of Android malware has caused significant economic losses and posed serious risks to user security and privacy. Machine learning-based detection approaches have improved the accuracy of malware identification, thereby providing more effective protection for Android users. However, graph-based detection methods rely on whole-graph computations instead of subgraph-level analyses, and they often ignore the semantic information of individual nodes. Moreover, limited attention has been paid to the interpretability of these models, hindering a deeper understanding of malicious behaviors and restricting their utility in supporting cybersecurity professionals for further in-depth research. To address these challenges, we propose GSIDroid, a novel subgraph-driven and interpretable Android malware detection framework designed to enhance detection performance, reduce computational overhead, protect user security, and assist security experts in rigorous malware analysis. GSIDroid focuses on extracting suspicious subgraphs, integrating deep and shallow-semantic features with permission information, and incorporating both global and local interpretability modules to ensure transparent, trustworthy, and analyzable detection results. Experiments conducted on 14,520 samples demonstrate that GSIDroid achieves an F1 score of 97.14%, and its interpretability module successfully identifies critical nodes and permission features that influence detection decisions, thereby enhancing practical deployment and supporting further security research.
(This article belongs to the Section Intelligent Sensors)

22 pages, 2027 KiB  
Article
Blockchain-Based Identity Management System Prototype for Enhanced Privacy and Security
by Haifa Mohammed Alanzi and Mohammad Alkhatib
Electronics 2025, 14(13), 2605; https://doi.org/10.3390/electronics14132605 - 27 Jun 2025
Abstract
An Identity Management System (IDMS) is responsible for managing and organizing identities and credentials exchanged between users, Identity Providers (IDPs), and Service Providers (SPs). The primary goal of IDMS is to ensure the confidentiality and privacy of users’ personal data. Traditional IDMS relies on a third party to store user information and authenticate the user. However, this approach poses threats to user privacy and increases the risk of single point of failure (SPOF), user tracking, and data unavailability. In contrast, decentralized IDMSs that use blockchain technology offer potential solutions to these issues as they offer powerful features including immutability, transparency, anonymity, and decentralization. Despite its advantages, blockchain technology also suffers from limitations related to performance, third-party control, weak authentication, and data leakages. Furthermore, some blockchain-based IDMSs still exhibit centralization issues, which can compromise user privacy and create SPOF risks. This study proposes a decentralized IDMS that leverages blockchain and smart contract technologies to address the shortcomings of traditional IDMSs. The proposed system also utilizes the InterPlanetary File System (IPFS) to enhance scalability and performance by reducing the on-chain storage load. Additionally, the proposed IDMS employs the Elliptic Curve Integrated Encryption Scheme (ECIES) to provide an extra layer of security to protect users’ sensitive information while improving the performance of the system’s transactions. Security analysis and experimental results demonstrated that the proposed IDMS offers significant security and performance advantages compared to its counterparts.

24 pages, 1501 KiB  
Review
Large Language Models in Medical Chatbots: Opportunities, Challenges, and the Need to Address AI Risks
by James C. L. Chow and Kay Li
Information 2025, 16(7), 549; https://doi.org/10.3390/info16070549 - 27 Jun 2025
Abstract
Large language models (LLMs) are transforming the capabilities of medical chatbots by enabling more context-aware, human-like interactions. This review presents a comprehensive analysis of their applications, technical foundations, benefits, challenges, and future directions in healthcare. LLMs are increasingly used in patient-facing roles, such as symptom checking, health information delivery, and mental health support, as well as in clinician-facing applications, including documentation, decision support, and education. However, as a study from 2024 warns, there is a need to manage “extreme AI risks amid rapid progress”. We examine transformer-based architectures, fine-tuning strategies, and evaluation benchmarks specific to medical domains to identify their potential to transfer and mitigate AI risks when using LLMs in medical chatbots. While LLMs offer advantages in scalability, personalization, and 24/7 accessibility, their deployment in healthcare also raises critical concerns. These include hallucinations (the generation of factually incorrect or misleading content by an AI model), algorithmic biases, privacy risks, and a lack of regulatory clarity. Ethical and legal challenges, such as accountability, explainability, and liability, remain unresolved. Importantly, this review integrates broader insights on AI safety, drawing attention to the systemic risks associated with rapid LLM deployment. As highlighted in recent policy research, including work on managing extreme AI risks, there is an urgent need for governance frameworks that extend beyond technical reliability to include societal oversight and long-term alignment. We advocate for responsible innovation and sustained collaboration among clinicians, developers, ethicists, and regulators to ensure that LLM-powered medical chatbots are deployed safely, equitably, and transparently within healthcare systems.

24 pages, 429 KiB  
Systematic Review
Advances in NLP Techniques for Detection of Message-Based Threats in Digital Platforms: A Systematic Review
by José Saias
Electronics 2025, 14(13), 2551; https://doi.org/10.3390/electronics14132551 - 24 Jun 2025
Abstract
Users of all ages face risks on social media and messaging platforms. When encountering suspicious messages, legitimate concerns arise about a sender’s malicious intent. This study examines recent advances in Natural Language Processing for detecting message-based threats in digital communication. We conducted a systematic review following PRISMA guidelines to address four research questions. After applying a rigorous search and screening pipeline, 30 publications were selected for analysis. Our work assessed the NLP techniques and evaluation methods employed in recent threat detection research, revealing that large language models appear in only 20% of the reviewed works. We further categorized detection input scopes and discussed ethical and privacy implications. The results show that AI ethical aspects are not systematically addressed in the reviewed scientific literature.

21 pages, 444 KiB  
Review
The Role of ChatGPT in Dermatology Diagnostics
by Ziad Khamaysi, Mahdi Awwad, Badea Jiryis, Naji Bathish and Jonathan Shapiro
Diagnostics 2025, 15(12), 1529; https://doi.org/10.3390/diagnostics15121529 - 16 Jun 2025
Abstract
Artificial intelligence (AI), especially large language models (LLMs) like ChatGPT, has disrupted different medical disciplines, including dermatology. This review explores the application of ChatGPT in dermatological diagnosis, emphasizing its role in natural language processing (NLP) for clinical data interpretation, differential diagnosis assistance, and patient communication enhancement. ChatGPT can enhance a diagnostic workflow when paired with image analysis tools, such as convolutional neural networks (CNNs), by merging text and image data. While it boasts great capabilities, it still faces some issues, such as its inability to perform any direct image analyses and the risk of inaccurate suggestions. Ethical considerations, including patient data privacy and the responsibilities of the clinician, are discussed. Future perspectives include an integrated multimodal model and AI-assisted framework for diagnosis, which should improve dermatology practice.
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)

37 pages, 3151 KiB  
Systematic Review
Effectiveness, Adoption Determinants, and Implementation Challenges of ICT-Based Cognitive Support for Older Adults with MCI and Dementia: A PRISMA-Compliant Systematic Review and Meta-Analysis (2015–2025)
by Ashrafe Alam, Md Golam Rabbani and Victor R. Prybutok
Healthcare 2025, 13(12), 1421; https://doi.org/10.3390/healthcare13121421 - 13 Jun 2025
Abstract
Background: The increasing prevalence of dementia and mild cognitive impairment (MCI) among the elderly population is a global health issue. Information and Communication Technology (ICT)-based interventions hold promise for maintaining cognition, but their viability is affected by several challenges. Objectives: This study aimed to assess the effectiveness of ICT-based cognitive and memory aid technology for individuals with MCI or dementia, identify adoption drivers, and develop an implementation model to inform practice. Methods: A PRISMA-based systematic literature review, with the protocol registered in PROSPERO (CRD420251051515), was conducted across seven electronic databases, covering studies published between January 2015 and January 2025, following the PECOS framework. Random effects models were used for meta-analysis, and risk of bias was assessed using the Joanna Briggs Institute (JBI) Critical Appraisal Checklists. Results: A total of ten forms of ICT intervention proved effective in supporting older adults with MCI and dementia. Barriers to adoption included digital literacy differences, usability issues, privacy concerns, and a lack of caregiver support. Facilitators were individualized design, caregiver involvement, and culturally appropriate implementation. ICT-based interventions showed moderate improvements in cognitive outcomes (pooled Cohen’s d = 0.49, 95% CI: 0.14–1.03). A sensitivity analysis excluding high-risk studies yielded a comparable effect size (Cohen’s d = 0.50), indicating robust findings. However, trim-and-fill analysis suggested a slightly reduced corrected effect (Cohen’s d = 0.39, 95% CI: 0.28–0.49), reflecting potential small-study bias. Heterogeneity was moderate (I2 = 46%) and increased to 55% after excluding high-risk studies. Subgroup analysis showed that tablet-based interventions tended to produce higher effect sizes.
Conclusions: ICT-based interventions considerably enhance cognitive status, autonomy, and social interaction in older adults with MCI and dementia. To ensure long-term scalability, future initiatives must prioritize user-centered design, caregiver education, equitable access to technology, accessible infrastructure, and supportive policy frameworks.
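The pooled effect sizes reported above are Cohen's d values. For reference, a minimal sketch of the two-group computation with pooled standard deviation; the group summaries below are hypothetical, not the meta-analysis data:

```python
import math

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    """Cohen's d between two groups using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical cognitive-score summaries (intervention vs. control)
d = cohens_d(mean1=27.1, sd1=4.0, n1=40, mean2=25.2, sd2=3.8, n2=40)
print(round(d, 2))  # → 0.49, conventionally read as a moderate effect
```

By the usual rule of thumb, d near 0.2 is small, 0.5 moderate, and 0.8 large, which is why the review characterizes its pooled estimate of 0.49 as a moderate improvement.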
