Search Results (270)

Search Parameters:
Keywords = decision trustworthiness

15 pages, 675 KiB  
Article
A Trusted Multi-Cloud Brokerage System for Validating Cloud Services Using Ranking Heuristics
by Rajganesh Nagarajan, Vinothiyalakshmi Palanichamy, Ramkumar Thirunavukarasu and J. Arun Pandian
Future Internet 2025, 17(8), 348; https://doi.org/10.3390/fi17080348 - 31 Jul 2025
Abstract
Cloud computing offers a broad spectrum of services to users, particularly in multi-cloud environments where service-centric features are introduced to support users from multiple endpoints. To improve service availability and optimize the utilization of required services, cloud brokerage has been integrated into multi-cloud systems. The primary objective of a cloud broker is to ensure the quality and outcomes of services offered to customers. However, traditional cloud brokers face limitations in measuring service trust, ensuring validity, and anticipating future enhancements of services across different cloud platforms. To address these challenges, the proposed intelligent cloud broker (ICB) integrates an intelligence mechanism that enhances decision-making within a multi-cloud environment. This broker performs a comprehensive validation and verification of service trustworthiness by analyzing various trust factors, including service response time, sustainability, suitability, accuracy, transparency, interoperability, availability, reliability, stability, cost, throughput, efficiency, and scalability. Customer feedback is also incorporated to assess these trust factors prior to service recommendation. The proposed model calculates service ranking (SR) values for available cloud services and dynamically includes newly introduced services during the validation process by mapping them with existing entries in the Service Collection Repository (SCR). Performance evaluation using the Google cluster-usage traces dataset demonstrates that the ICB outperforms existing approaches such as the Clustering-Based Trust Degree Computation (CBTDC) algorithm and the Service Context-Aware QoS Prediction and Recommendation (SCAQPR) model. Results confirm that the ICB significantly enhances the effectiveness and reliability of cloud service recommendations for users. Full article
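For intuition, a service-ranking step of this kind can be sketched as a weighted aggregation of normalized trust-factor scores and customer feedback. The abstract does not give the SR formula, so the weighted-average form, the factor names, and the weights below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a service-ranking (SR) computation for a cloud broker.
# The weighted-average form and factor subset below are assumptions.

TRUST_FACTORS = ["response_time", "availability", "reliability", "cost", "throughput"]

def service_rank(scores: dict[str, float], feedback: float,
                 weights: dict[str, float], feedback_weight: float = 0.2) -> float:
    """Combine normalized trust-factor scores (0..1) with mean customer feedback."""
    factor_part = sum(weights[f] * scores[f] for f in TRUST_FACTORS)
    total_w = sum(weights[f] for f in TRUST_FACTORS)
    return (1 - feedback_weight) * factor_part / total_w + feedback_weight * feedback

# Rank two toy candidate services and recommend the higher-scoring one.
services = {
    "svc_a": ({"response_time": 0.9, "availability": 0.99, "reliability": 0.95,
               "cost": 0.6, "throughput": 0.8}, 0.85),
    "svc_b": ({"response_time": 0.7, "availability": 0.97, "reliability": 0.90,
               "cost": 0.9, "throughput": 0.7}, 0.70),
}
weights = {f: 1.0 for f in TRUST_FACTORS}
ranked = sorted(services, key=lambda s: service_rank(*services[s], weights), reverse=True)
print(ranked)  # highest-SR service first
```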

26 pages, 14606 KiB  
Review
Attribution-Based Explainability in Medical Imaging: A Critical Review on Explainable Computer Vision (X-CV) Techniques and Their Applications in Medical AI
by Kazi Nabiul Alam, Pooneh Bagheri Zadeh and Akbar Sheikh-Akbari
Electronics 2025, 14(15), 3024; https://doi.org/10.3390/electronics14153024 - 29 Jul 2025
Viewed by 222
Abstract
One of the largest future applications of computer vision is in the healthcare industry. Computer vision tasks are generally implemented in diverse medical imaging scenarios, including detecting or classifying diseases, predicting potential disease progression, analyzing cancer data for advancing future research, and conducting genetic analysis for personalized medicine. However, a critical drawback of using Computer Vision (CV) approaches is their limited reliability and transparency. Clinicians and patients must comprehend the rationale behind predictions or results to ensure trust and ethical deployment in clinical settings. This motivates the adoption of Explainable Computer Vision (X-CV), which enhances the interpretability of vision-based models. Among various methodologies, attribution-based approaches are widely employed by researchers to explain medical imaging outputs by identifying influential features. This article aims solely to explore how attribution-based X-CV methods work in medical imaging, what they are good for in real-world use, and what their main limitations are. This study evaluates X-CV techniques by conducting a thorough review of relevant reports, peer-reviewed journals, and methodological approaches to obtain an adequate understanding of attribution-based approaches. It explores how these techniques tackle computational complexity issues, improve diagnostic accuracy, and aid clinical decision-making processes. This article intends to present a path that generalizes the concept of trustworthiness towards AI-based healthcare solutions. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)
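As a minimal illustration of one attribution-based technique in the family this review covers, the sketch below computes a vanilla-gradient saliency map with PyTorch. The model and input are generic placeholders (a randomly initialized ResNet-18 and a dummy image), not the paper's experimental setup.

```python
# Vanilla-gradient saliency: attribute a class score to input pixels.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a medical-imaging classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy scan
logits = model(image)
score = logits[0, logits.argmax()]     # score of the predicted class
score.backward()                       # gradient of class score w.r.t. pixels

# Attribution map: max absolute gradient over channels -> (224, 224) heat map
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # pixels with large values most influenced the prediction
```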

29 pages, 429 KiB  
Article
Matching Game Preferences Through Dialogical Large Language Models: A Perspective
by Renaud Fabre, Daniel Egret and Patrice Bellot
Appl. Sci. 2025, 15(15), 8307; https://doi.org/10.3390/app15158307 - 25 Jul 2025
Viewed by 187
Abstract
This perspective paper explores the future potential of “conversational intelligence” by examining how Large Language Models (LLMs) could be combined with GRAPHYP’s network system to better understand human conversations and preferences. Using recent research and case studies, we propose a conceptual framework that could make AI reasoning transparent and traceable, allowing humans to see and understand how AI reaches its conclusions. We present the conceptual perspective of “Matching Game Preferences through Dialogical Large Language Models (D-LLMs),” a proposed system that would allow multiple users to share their different preferences through structured conversations. This approach envisions personalizing LLMs by embedding individual user preferences directly into how the model makes decisions. The proposed D-LLM framework would require three main components: (1) reasoning processes that could analyze different search experiences and guide performance, (2) classification systems that would identify user preference patterns, and (3) dialogue approaches that could help humans resolve conflicting information. This perspective framework aims to create an interpretable AI system where users could examine, understand, and combine the different human preferences that influence AI responses, detected through GRAPHYP’s search experience networks. The goal of this perspective is to envision AI systems that would not only provide answers but also show users how those answers were reached, making artificial intelligence more transparent and trustworthy for human decision-making. Full article

19 pages, 1040 KiB  
Systematic Review
A Systematic Review on Risk Management and Enhancing Reliability in Autonomous Vehicles
by Ali Mahmood and Róbert Szabolcsi
Machines 2025, 13(8), 646; https://doi.org/10.3390/machines13080646 - 24 Jul 2025
Viewed by 278
Abstract
Autonomous vehicles (AVs) hold the potential to revolutionize transportation by improving safety, operational efficiency, and environmental impact. However, ensuring reliability and safety in real-world conditions remains a major challenge. Based on an in-depth examination of 33 peer-reviewed studies (2015–2025), this systematic review organizes advancements across five key domains: fault detection and diagnosis (FDD), collision avoidance and decision making, system reliability and resilience, validation and verification (V&V), and safety evaluation. It integrates both hardware- and software-level perspectives, with a focus on emerging techniques such as Bayesian behavior prediction, uncertainty-aware control, and set-based fault detection to enhance operational robustness. Despite these advances, this review identifies persistent challenges, including limited cross-layer fault modeling, lack of formal verification for learning-based components, and the scarcity of scenario-driven validation datasets. To address these gaps, this paper proposes future directions such as verifiable machine learning, unified fault propagation models, digital twin-based reliability frameworks, and cyber-physical threat modeling. This review offers a comprehensive reference for developing certifiable, context-aware, and fail-operational autonomous driving systems, contributing to the broader goal of ensuring safe and trustworthy AV deployment. Full article

21 pages, 4519 KiB  
Article
Determining the Authenticity of Information Uploaded by Blockchain Based on Neural Networks—For Seed Traceability
by Kenan Zhao, Meng Zhang, Xiaofei Fan, Bo Peng, Huanyue Wang, Dongfang Zhang, Dongxiao Li and Xuesong Suo
Agriculture 2025, 15(15), 1569; https://doi.org/10.3390/agriculture15151569 - 22 Jul 2025
Viewed by 224
Abstract
Traditional seed supply chains face several hidden risks. Certain regulatory departments tend to focus primarily on entity circulation while neglecting the origin and accuracy of data in seed quality supervision, resulting in limited precision and low credibility of traceability information related to quality and safety. Blockchain technology offers a systematic solution to key issues such as data source distortion and insufficient regulatory penetration in the seed supply chain by enabling data rights confirmation, tamper-proof traceability, smart contract execution, and multi-node consensus mechanisms. In this study, we developed a system that integrates blockchain and neural networks to provide seed traceability services. When seed traceability information is uploaded, neural network models are employed to verify the authenticity of the information provided by humans, and the resulting tags are saved on the blockchain. Various neural network architectures, including the Multilayer Perceptron, Recurrent Neural Network, Fully Convolutional Neural Network, and Long Short-Term Memory (LSTM) model, were tested for determining the authenticity of seed traceability information. Among these, the LSTM architecture demonstrated the highest accuracy, at 90.65%. The results demonstrate that neural networks have significant research value and potential for assessing the authenticity of information in a blockchain. In the application scenario of seed quality traceability, using blockchain and neural networks to determine the authenticity of seed traceability information provides a new solution for seed traceability. This system empowers farmers by providing trustworthy seed quality information, enabling better purchasing decisions and reducing risks from counterfeit or substandard seeds. Furthermore, this mechanism fosters market circulation of certified high-quality seeds, elevates crop yields, and contributes to the sustainable growth of agricultural systems. Full article
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)
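The abstract does not specify the LSTM architecture, so the sketch below shows only the general shape such an authenticity classifier could take: tokenized traceability records in, a binary authentic/forged logit out. All dimensions and the tokenization scheme are assumptions.

```python
# Sketch of an LSTM authenticity classifier for traceability records.
import torch
import torch.nn as nn

class AuthenticityLSTM(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # authentic vs. not

    def forward(self, token_ids):                 # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, hidden)
        return self.head(h[:, -1])                # last time step -> class logits

model = AuthenticityLSTM()
batch = torch.randint(0, 5000, (8, 32))           # 8 dummy records, 32 tokens each
print(model(batch).shape)                         # torch.Size([8, 2])
```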

40 pages, 1540 KiB  
Review
A Survey on Video Big Data Analytics: Architecture, Technologies, and Open Research Challenges
by Thi-Thu-Trang Do, Quyet-Thang Huynh, Kyungbaek Kim and Van-Quyet Nguyen
Appl. Sci. 2025, 15(14), 8089; https://doi.org/10.3390/app15148089 - 21 Jul 2025
Viewed by 477
Abstract
The exponential growth of video data across domains such as surveillance, transportation, and healthcare has raised critical challenges in scalability, real-time processing, and privacy preservation. While existing studies have addressed individual aspects of Video Big Data Analytics (VBDA), an integrated, up-to-date perspective remains limited. This paper presents a comprehensive survey of system architectures and enabling technologies in VBDA. It categorizes system architectures into four primary types: centralized, cloud-based, edge, and hybrid cloud–edge. It also analyzes key enabling technologies, including real-time streaming, scalable distributed processing, intelligent AI models, and advanced storage for managing large-scale multimodal video data. In addition, the study provides a functional taxonomy of core video processing tasks, including object detection, anomaly recognition, and semantic retrieval, and maps these tasks to real-world applications. Based on the survey findings, the paper proposes ViMindXAI, a hybrid AI-driven platform that combines edge and cloud orchestration, adaptive storage, and privacy-aware learning to support scalable and trustworthy video analytics. The analysis highlights emerging trends such as the shift toward hybrid cloud–edge architectures, the growing importance of explainable AI and federated learning, and the urgent need for secure and efficient video data management. These findings point to key directions for designing next-generation VBDA platforms that enhance real-time, data-driven decision-making in domains such as public safety, transportation, and healthcare, facilitating timely insights, rapid response, and regulatory alignment through scalable and explainable analytics. This work provides a robust conceptual foundation for future research on adaptive and efficient decision-support systems in video-intensive environments. Full article

15 pages, 420 KiB  
Article
The Impact of Greenwashing Awareness and Green Perceived Benefits on Green Purchase Propensity: The Mediating Role of Green Consumer Confusion
by Nikolaos Apostolopoulos, Ilias Makris, Georgios A. Deirmentzoglou and Sotiris Apostolopoulos
Sustainability 2025, 17(14), 6589; https://doi.org/10.3390/su17146589 - 18 Jul 2025
Viewed by 378
Abstract
In response to the increasing demand for environmentally friendly products and the parallel rise of deceptive green marketing practices, this study examines the impact of greenwashing awareness and green perceived benefits on consumers’ propensity to purchase green products, with a focus on the mediating role of green consumer confusion. Drawing upon data collected from 300 consumers in Greece through an online questionnaire, this study employed validated measurement scales and used multiple regression analyses to test its hypotheses. The findings reveal that both greenwashing awareness and green perceived benefits positively influence green purchase propensity. Additionally, green consumer confusion mediates the relationship between greenwashing awareness and green purchase propensity, indicating that the awareness of greenwashing reduces confusion and enhances consumers’ likelihood to choose genuinely green products. This study contributes to the literature by offering an integrated model that connects greenwashing awareness, green consumer confusion, and green perceived benefits in shaping green purchase propensity. Finally, the findings offer valuable insights for organizations to design clearer, more trustworthy green marketing strategies that minimize consumer confusion and foster informed green purchasing decisions. Full article
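For readers unfamiliar with the mediation test used here, the sketch below runs the classic two-step OLS mediation logic (predictor → mediator, then predictor + mediator → outcome) on synthetic data. It illustrates the method only; the coefficients and data are not the study's.

```python
# Two-step OLS mediation sketch: greenwashing awareness -> confusion -> propensity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
awareness = rng.normal(size=n)                     # greenwashing awareness
confusion = -0.5 * awareness + rng.normal(size=n)  # assumed: awareness reduces confusion
propensity = 0.4 * awareness - 0.3 * confusion + rng.normal(size=n)

# Step 1: predictor -> mediator
m1 = sm.OLS(confusion, sm.add_constant(awareness)).fit()
# Step 2: predictor + mediator -> outcome
X = sm.add_constant(np.column_stack([awareness, confusion]))
m2 = sm.OLS(propensity, X).fit()

a = m1.params[1]       # path a: awareness -> confusion
b = m2.params[2]       # path b: confusion -> propensity, controlling for awareness
direct = m2.params[1]  # direct path: awareness -> propensity
print(f"indirect effect a*b = {a * b:.3f}, direct effect = {direct:.3f}")
```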

26 pages, 4067 KiB  
Article
Performance-Based Classification of Users in a Containerized Stock Trading Application Environment Under Load
by Tomasz Rak, Jan Drabek and Małgorzata Charytanowicz
Electronics 2025, 14(14), 2848; https://doi.org/10.3390/electronics14142848 - 16 Jul 2025
Viewed by 198
Abstract
Emerging digital technologies are transforming how consumers participate in financial markets, yet their benefits depend critically on the speed, reliability, and transparency of the underlying platforms. Online stock trading platforms must maintain high efficiency under load to ensure a good user experience. This paper presents a performance analysis of a containerized stock exchange system under various load conditions. A comprehensive data logging pipeline was implemented, capturing metrics such as API response times, database query times, and resource utilization. We analyze the collected data to identify performance patterns, using both statistical analysis and machine learning techniques. Preliminary analysis reveals correlations between application processing time and database load, as well as the impact of user behavior on system performance. Association rule mining is applied to uncover relationships among performance metrics, and multiple classification algorithms are evaluated for their ability to predict user activity class patterns from system metrics. The insights from this work can guide optimizations in similar distributed web applications to improve scalability and reliability under heavy load. By framing performance not merely as a technical property but as a determinant of financial decision-making and well-being, the study contributes actionable insights for designers of consumer-facing fintech services seeking to meet sustainable development goals through trustworthy, resilient digital infrastructure. Full article
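A minimal sketch of the classification task described, predicting a user activity class from system metrics, under assumed features (API response time, database query time, CPU utilization), a hypothetical labeling rule, and a random-forest classifier; the paper's actual features, labels, and algorithms may differ.

```python
# Classify user activity level from synthetic system performance metrics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.gamma(2.0, 40.0, n),   # API response time (ms)
    rng.gamma(2.0, 15.0, n),   # DB query time (ms)
    rng.uniform(0, 1, n),      # CPU utilization (0..1)
])
# Hypothetical ground truth: heavy users drive up response time and CPU together
y = ((X[:, 0] > 80) & (X[:, 2] > 0.5)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # accuracy of activity-class prediction
```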

44 pages, 2807 KiB  
Review
Artificial Intelligence in Dermatology: A Review of Methods, Clinical Applications, and Perspectives
by Agnieszka M. Zbrzezny and Tomasz Krzywicki
Appl. Sci. 2025, 15(14), 7856; https://doi.org/10.3390/app15147856 - 14 Jul 2025
Viewed by 894
Abstract
The use of artificial intelligence (AI) in dermatology is skyrocketing, but a comprehensive overview integrating regulatory, ethical, validation, and clinical issues is lacking. This work aims to review current research, map applicable legal regulations, identify ethical challenges and methods of verifying AI models in dermatology, assess publication trends, compare the most popular neural network architectures and datasets, and identify good practices in creating AI-based applications for dermatological use. A systematic literature review is conducted in accordance with the PRISMA guidelines, utilising Google Scholar, PubMed, Scopus, and Web of Science and employing bibliometric analysis. Since 2016, there has been exponential growth in deep learning research in dermatology, revealing gaps in EU and US regulations and significant differences in model performance across different datasets. The decision-making process in clinical dermatology is analysed, focusing on how AI is augmenting skin imaging techniques such as dermatoscopy and histology. Further demonstration is provided regarding how AI is a valuable tool that supports dermatologists by automatically analysing skin images, enabling faster diagnosis and the more accurate identification of skin lesions. These advances enhance the precision and efficiency of dermatological care, showcasing the potential of AI to revolutionise the speed of diagnosis in modern dermatology, sparking excitement and curiosity. Then, we discuss the regulatory framework for AI in medicine, as well as the ethical issues that may arise. Additionally, this article addresses the critical challenge of ensuring the safety and trustworthiness of AI in dermatology, presenting classic examples of safety issues that can arise during its implementation. The review provides recommendations for regulatory harmonisation, the standardisation of validation metrics, and further research on data explainability and representativeness, which can accelerate the safe implementation of AI in dermatological practice. Full article
(This article belongs to the Special Issue Machine Learning in Biomedical Sciences)

26 pages, 3252 KiB  
Article
Interactive Mitigation of Biases in Machine Learning Models for Undergraduate Student Admissions
by Kelly Van Busum and Shiaofen Fang
AI 2025, 6(7), 152; https://doi.org/10.3390/ai6070152 - 9 Jul 2025
Viewed by 495
Abstract
Bias and fairness issues in artificial intelligence (AI) algorithms are major concerns, as people do not want to use software they cannot trust. Because these issues are intrinsically subjective and context-dependent, creating trustworthy software requires human input and feedback. (1) Introduction: This work introduces an interactive method for mitigating the bias introduced by machine learning models by allowing the user to adjust bias and fairness metrics iteratively to make the model more fair in the context of undergraduate student admissions. (2) Related Work: The social implications of bias in AI systems used in education are nuanced and can affect university reputation and student retention rates motivating a need for the development of fair AI systems. (3) Methods and Dataset: Admissions data over six years from a large urban research university was used to create AI models to predict admissions decisions. These AI models were analyzed to detect biases they may carry with respect to three variables chosen to represent sensitive populations: gender, race, and first-generation college students. We then describe a method for bias mitigation that uses a combination of machine learning and user interaction. (4) Results and Discussion: We use three scenarios to demonstrate that this interactive bias mitigation approach can successfully decrease the biases towards sensitive populations. (5) Conclusion: Our approach allows the user to examine a model and then iteratively and incrementally adjust bias and fairness metrics to change the training dataset and generate a modified AI model that is more fair, according to the user’s own determination of fairness. Full article
(This article belongs to the Special Issue Exploring the Use of Artificial Intelligence in Education)
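One way to make the bias metrics in such an interactive loop concrete: the sketch below computes a demographic parity gap on synthetic admission decisions and derives Kamiran and Calders-style reweighing weights as one possible mitigation step. This is a generic illustration, not the authors' interactive method or data.

```python
# Demographic parity gap + reweighing on synthetic admissions data.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)                                # sensitive attribute
admitted = rng.random(1000) < np.where(group == 0, 0.55, 0.40)  # biased decisions

def parity_gap(pred, grp):
    """Demographic parity difference: gap in positive-decision rates."""
    return pred[grp == 0].mean() - pred[grp == 1].mean()

print(f"parity gap: {parity_gap(admitted, group):+.3f}")

# Reweighing: weight each (group, label) cell so the training distribution
# becomes independent of the sensitive attribute before retraining.
w = np.ones(len(group))
for g in (0, 1):
    for yv in (False, True):
        cell = (group == g) & (admitted == yv)
        expected = (group == g).mean() * (admitted == yv).mean()
        w[cell] = expected / cell.mean()
print("per-cell sample weights:", np.unique(w.round(3)))
```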

42 pages, 3505 KiB  
Review
Computer Vision Meets Generative Models in Agriculture: Technological Advances, Challenges and Opportunities
by Xirun Min, Yuwen Ye, Shuming Xiong and Xiao Chen
Appl. Sci. 2025, 15(14), 7663; https://doi.org/10.3390/app15147663 - 8 Jul 2025
Viewed by 812
Abstract
The integration of computer vision (CV) and generative artificial intelligence (GenAI) into smart agriculture has revolutionised traditional farming practices by enabling real-time monitoring, automation, and data-driven decision-making. This review systematically examines the applications of CV in key agricultural domains, such as crop health monitoring, precision farming, harvesting automation, and livestock management, while highlighting the transformative role of GenAI in addressing data scarcity and enhancing model robustness. Advanced techniques, including convolutional neural networks (CNNs), YOLO variants, and transformer-based architectures, are analysed for their effectiveness in tasks like pest detection, fruit maturity classification, and field management. The survey reveals that generative models, such as generative adversarial networks (GANs) and diffusion models, significantly improve dataset diversity and model generalisation, particularly in low-resource scenarios. However, challenges persist, including environmental variability, edge deployment limitations, and the need for interpretable systems. Emerging trends, such as vision–language models and federated learning, offer promising avenues for future research. The study concludes that the synergy of CV and GenAI holds immense potential for advancing smart agriculture, though scalable, adaptive, and trustworthy solutions remain critical for widespread adoption. This comprehensive analysis provides valuable insights for researchers and practitioners aiming to harness AI-driven innovations in agricultural ecosystems. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

19 pages, 1130 KiB  
Article
RE-BPFT: An Improved PBFT Consensus Algorithm for Consortium Blockchain Based on Node Credibility and ID3-Based Classification
by Junwen Ding, Xu Wu, Jie Tian and Yuanpeng Li
Appl. Sci. 2025, 15(13), 7591; https://doi.org/10.3390/app15137591 - 7 Jul 2025
Viewed by 238
Abstract
Practical Byzantine Fault Tolerance (PBFT) has been widely used in consortium blockchain systems; however, it suffers from performance degradation and susceptibility to Byzantine faults in complex environments. To overcome these limitations, this paper proposes RE-BPFT, an enhanced consensus algorithm that integrates a nuanced node credibility model considering direct interactions, indirect reputations, and historical behavior. Additionally, we adopt an optimized ID3 decision-tree method for node classification, dynamically identifying high-performing, trustworthy, ordinary, and malicious nodes based on real-time data. To address issues related to centralization risk in leader selection, we introduce a weighted random primary node election mechanism. We implemented a prototype of the RE-BPFT algorithm in Python and conducted extensive evaluations across diverse network scales and transaction scenarios. Experimental results indicate that RE-BPFT markedly reduces consensus latency and communication costs while achieving higher throughput and better scalability than classical PBFT, RBFT, and PPoR algorithms. Thus, RE-BPFT demonstrates significant advantages for large-scale and high-demand consortium blockchain use cases, particularly in areas like digital traceability and forensic data management. The insights gained from this study offer valuable improvements for ensuring node reliability, consensus performance, and overall system resilience. Full article
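The weighted random primary election can be illustrated compactly: sample a leader with probability proportional to node credibility, using a shared seed so all honest nodes elect the same primary. The credibility values and the seeding scheme below are assumptions for illustration, not the RE-BPFT protocol's exact mechanism.

```python
# Credibility-weighted, deterministically seeded primary-node election sketch.
import random

credibility = {"node1": 0.92, "node2": 0.81, "node3": 0.34, "node4": 0.67}

def elect_primary(cred: dict[str, float], seed: int) -> str:
    """Weighted draw; a shared seed (e.g., derived from the last committed
    block) lets every honest replica compute the same leader locally."""
    rng = random.Random(seed)
    nodes = sorted(cred)  # fixed iteration order for reproducibility
    return rng.choices(nodes, weights=[cred[n] for n in nodes], k=1)[0]

print(elect_primary(credibility, seed=42))  # same seed -> same primary everywhere
```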

20 pages, 374 KiB  
Article
Hotel Guest Satisfaction: A Predictive and Discriminant Study Using TripAdvisor Ratings
by Quiviny Jorge De Oliveira-Cardoso, José Alberto Martínez-González and Carmen D. Álvarez-Albelo
Adm. Sci. 2025, 15(7), 264; https://doi.org/10.3390/admsci15070264 - 7 Jul 2025
Viewed by 636
Abstract
Understanding and promoting guest satisfaction is central to the economic sustainability of the hospitality industry. Satisfaction influences consumers’ booking intentions, hotel choice, loyalty, and the reputation and performance of accommodation establishments. Thus, accurate decision making by hotel managers relies on trustworthy and easily accessible information on the variables that affect guest satisfaction. Nowadays, this information is available through reviews and ratings provided by online platforms, such as TripAdvisor. Indeed, much research into guest satisfaction uses TripAdvisor reviews. However, this study aims to analyse guest satisfaction using only TripAdvisor ratings. These ratings can be more succinct and tractable indicators than reviews. A sample of 118 hotels in Cape Verde and the Azores, two archipelagos belonging to Macaronesia, and a descriptive, predictive, and discriminant methodology are employed for this purpose. Four main results are obtained. First, the rated items on TripAdvisor are consistent with the scientific literature on this topic. Second, TripAdvisor ratings are valid and reliable. Third, TripAdvisor ratings can predict guest satisfaction based on the perceived quality of hotel services. Fourth, there are significant differences in ratings depending on the tourism destination chosen. These results are of interest to researchers, tourists, as well as hotel, destination, and platform managers. Full article
(This article belongs to the Section Strategic Management)
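The predictive and discriminant steps described map naturally onto standard tools; the sketch below fits a linear regression (item ratings → overall satisfaction) and a linear discriminant analysis (item ratings → destination) on synthetic stand-in data. Only the 118-hotel sample size mirrors the study; the ratings, items, and coefficients are invented.

```python
# Predictive + discriminant sketch on synthetic hotel-rating data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
n = 118                                      # study sample size: 118 hotels
items = rng.uniform(3.0, 5.0, size=(n, 4))   # e.g., rooms, service, value, cleanliness
overall = items @ np.array([0.3, 0.3, 0.2, 0.2]) + rng.normal(0, 0.1, n)
destination = rng.integers(0, 2, n)          # 0 = Cape Verde, 1 = Azores

reg = LinearRegression().fit(items, overall)
print("R^2:", reg.score(items, overall))     # how well item ratings predict satisfaction

lda = LinearDiscriminantAnalysis().fit(items, destination)
print("destination accuracy:", lda.score(items, destination))  # tests rating differences
```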
20 pages, 2947 KiB  
Article
Personal Data Value Realization and Symmetry Enhancement Under Social Service Orientation: A Tripartite Evolutionary Game Approach
by Dandan Wang and Junhao Yu
Symmetry 2025, 17(7), 1069; https://doi.org/10.3390/sym17071069 - 5 Jul 2025
Viewed by 253
Abstract
In the digital economy, information asymmetry among individuals, data users, and governments limits the full realization of personal data value. To address this, “symmetry enhancement” strategies aim to reduce information gaps, enabling more balanced decision-making and facilitating efficient data flow. This study establishes a tripartite evolutionary game model based on personal data collection and development, conducts simulations using MATLAB R2024a, and proposes countermeasures based on equilibrium analysis and simulation results. The results highlight that individual participation is pivotal, influenced by perceived benefits, management costs, and privacy risks. Meanwhile, data users’ compliance hinges on economic incentives and regulatory burdens, with excessive costs potentially discouraging adherence. Governments must carefully weigh social benefits against regulatory expenditures. Based on these findings, this paper proposes the following recommendations: use personal data application scenarios as a guide, rely on the construction of personal trustworthy data spaces, explore and improve personal data revenue distribution mechanisms, strengthen the management of data users, and promote the maximization of personal data value through multi-party collaborative ecological incentives. Full article
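The paper runs its tripartite evolutionary game in MATLAB R2024a; as a rough Python analogue, the sketch below iterates toy replicator-dynamics equations for the three players (individuals, data users, governments). The payoff terms are placeholder values, not the paper's parameters.

```python
# Toy tripartite replicator-dynamics iteration.
import numpy as np

def replicator_step(x, y, z, dt=0.01):
    """x, y, z: probabilities that individuals participate, data users comply,
    and governments regulate. Payoff advantages below are assumed toy values."""
    dx = x * (1 - x) * (0.6 * y + 0.3 * z - 0.4)   # individual: participate
    dy = y * (1 - y) * (0.5 * x - 0.3 * z + 0.1)   # data user: comply
    dz = z * (1 - z) * (0.4 * (1 - y) - 0.2)       # government: regulate
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 0.5, 0.5, 0.5
for _ in range(5000):
    x, y, z = replicator_step(x, y, z)
print(round(x, 3), round(y, 3), round(z, 3))       # approximate stable strategy mix
```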

20 pages, 5480 KiB  
Article
Model-Data Hybrid-Driven Real-Time Optimal Power Flow: A Physics-Informed Reinforcement Learning Approach
by Ximing Zhang, Xiyuan Ma, Yun Yu, Duotong Yang, Zhida Lin, Changcheng Zhou, Huan Xu and Zhuohuan Li
Energies 2025, 18(13), 3483; https://doi.org/10.3390/en18133483 - 1 Jul 2025
Viewed by 311
Abstract
With the rapid development of artificial intelligence technology, deep reinforcement learning (DRL) has shown great potential in solving complex real-time optimal power flow problems in modern power systems. Nevertheless, traditional DRL methodologies confront dual bottlenecks: (a) suboptimal coordination between exploratory behavior policies and experience-based data exploitation in practical applications, compounded by (b) user distrust stemming from the opacity of model decision mechanisms. To address these, a model–data hybrid-driven physics-informed reinforcement learning (PIRL) algorithm is proposed in this paper. Specifically, the proposed methodology uses the proximal policy optimization (PPO) algorithm as the agent's foundational framework and constructs a PI-actor network by embedding prior model knowledge, derived from power flow sensitivity, into the agent's actor network via the PINN method. This achieves dual optimization objectives: (a) enhanced environmental perceptibility that improves experience utilization efficiency via gradient awareness from model knowledge during actor network updates, and (b) improved user trustworthiness through mathematically constrained action gradient information derived from explicit model knowledge, ensuring actor updates adhere to safety boundaries. The simulation and validation results show that the PIRL algorithm outperforms the baseline PPO algorithm in terms of training stability, exploration efficiency, economy, and security. Full article
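A highly simplified sketch of the physics-informed actor idea: augment a PPO-style actor loss with a penalty computed from a linearized power-flow sensitivity matrix, so the action gradient carries model-based safety information. The shapes, sensitivity matrix, PPO ratio, and penalty form are all illustrative assumptions, not the paper's implementation.

```python
# PPO-style actor loss plus a model-knowledge (sensitivity-based) penalty.
import torch

S = torch.tensor([[0.8, -0.2], [0.1, 0.5]])  # assumed sensitivity d(flow)/d(action)
flow0 = torch.tensor([0.9, 0.4])             # current line loadings (p.u.)
limit = torch.tensor([1.0, 1.0])             # thermal limits

action = torch.tensor([0.3, -0.1], requires_grad=True)  # actor output (mean action)
ratio = torch.tensor(1.05)                   # placeholder PPO probability ratio
advantage = torch.tensor(2.0)                # placeholder advantage estimate

ppo_loss = -torch.min(ratio * advantage,
                      torch.clamp(ratio, 0.8, 1.2) * advantage)
# Physics penalty: predicted line overloads from the linearized model knowledge
overload = torch.relu(flow0 + S @ action - limit)
loss = ppo_loss + 10.0 * overload.sum()

loss.backward()     # the action gradient now carries a model-based safety signal
print(action.grad)  # points away from actions that overload lines
```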
