Search Results (409)

Search Parameters:
Keywords = trust components

19 pages, 487 KB  
Article
Trust-Aware Causal Consistency Routing for Quantum Key Distribution Networks Against Malicious Nodes
by Yi Luo and Qiong Li
Entropy 2025, 27(11), 1100; https://doi.org/10.3390/e27111100 - 24 Oct 2025
Viewed by 65
Abstract
Quantum key distribution (QKD) networks promise information-theoretic security for multiple nodes by leveraging the fundamental laws of quantum mechanics. In practice, QKD networks require dedicated routing protocols to coordinate secure key distribution among distributed nodes. However, most existing routing protocols operate under the assumption that all relay nodes are honest and fully trustworthy, an assumption that may not hold in realistic scenarios. Malicious nodes may tamper with routing updates, causing inconsistent key-state views or divergent routing plans across the network. Such inconsistencies increase routing failure rates and lead to severe waste of valuable secret keys. To address these challenges, we propose a distributed routing framework that combines two key components: (i) Causal Consistency Key-State Update, which prevents malicious nodes from propagating inconsistent key states and routing plans; and (ii) Trust-Aware Multi-path Flow Optimization, which incorporates trust metrics derived from discrepancies in reported states into the path-selection objective, penalizing suspicious links and filtering fabricated demands. Across 50-node topologies with up to 30% malicious relays and under all three attack modes, our protocol sustains a high demand completion ratio (DCR) (mean 0.90, range 0.81–0.98) while keeping key utilization low (16.6 keys per demand), decisively outperforming the baselines—Multi-Path Planned (DCR 0.48, 30.8 keys per demand) and OSPF (DCR 0.12, 296 keys per demand; max 1601). These results highlight that our framework balances reliability and efficiency, providing a practical and resilient foundation for secure QKD networking in adversarial environments.
(This article belongs to the Section Quantum Information)
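
To make the trust-aware path-selection idea concrete, here is a minimal sketch, assuming a simple additive penalty form and hypothetical trust scores; the paper's actual objective is a multi-path flow optimization, which this single-path example does not reproduce:

```python
# Illustrative sketch (not the authors' code): trust-weighted path selection
# for a QKD relay network. Trust values and the penalty form are assumptions.
import networkx as nx

def trust_penalized_weight(keys_available: int, trust_u: float, trust_v: float,
                           alpha: float = 5.0) -> float:
    """Link cost: scarcer keys cost more, and low endpoint trust (e.g., derived
    from discrepancies in reported key states) adds a penalty."""
    key_cost = 1.0 / max(keys_available, 1)          # prefer links with more keys
    distrust = (1.0 - trust_u) + (1.0 - trust_v)     # 0 if both endpoints fully trusted
    return key_cost + alpha * distrust

G = nx.Graph()
trust = {"A": 1.0, "B": 0.4, "C": 0.95, "D": 0.9}    # hypothetical trust metrics
links = [("A", "B", 80), ("B", "D", 90), ("A", "C", 30), ("C", "D", 40)]
for u, v, keys in links:
    G.add_edge(u, v, weight=trust_penalized_weight(keys, trust[u], trust[v]))

# The key-rich A-B-D route loses to A-C-D once node B's low trust is penalized.
print(nx.shortest_path(G, "A", "D", weight="weight"))  # ['A', 'C', 'D']
```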

10 pages, 1615 KB  
Article
Virtual Clinics in Cardiology: Do They Provide Equivalent Care and Reduce Travel?
by Matthew Farrier, Brian Wood, Zoubaida Yahia and Martin Farrier
J. Clin. Med. 2025, 14(20), 7363; https://doi.org/10.3390/jcm14207363 - 17 Oct 2025
Viewed by 244
Abstract
Objective: To evaluate whether virtual clinic appointments in cardiology are equivalent to face-to-face appointments in terms of investigations ordered as a consequence of the appointment and a reduction in travel for the whole care episode. Design: Retrospective observational cohort study of 9445 patients. Setting: Wrightington, Wigan and Leigh Teaching Hospitals NHS Foundation Trust, a medium-sized NHS trust in the north-west of England. Participants: 9445 patients referred for new cardiology appointments between 2023 and 2025. Methods: Data were extracted from electronic records and test ordering systems, and appointments with corresponding investigations were retrieved. The data were validated using random samples, and the extraction was modified until accuracy was achieved. Principal component analysis was used to compare groups, and Welch's t-test was used for statistical comparison. Distance travelled was calculated using postcodes, and the number of visits was calculated from investigations conducted on separate days. Results: Patients who had virtual appointments showed no statistical difference in the number of investigations or visits for investigations. The care provided via virtual and face-to-face appointments was comparable in terms of clinical effectiveness and quality of care. The distance travelled for both types of appointment is therefore not different, but once the initial appointment is taken into account, where virtual appointment patients did not travel at all, the reduction in distance travelled is 5002 km, a carbon saving of 784 kgCO2eq. Conclusions: Virtual clinics in cardiology offer an equitable service but only a small reduction in travel.
(This article belongs to the Section Cardiovascular Medicine)
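
As a quick arithmetic check on those figures, the per-kilometre emission factor they imply (roughly that of an average car) can be recovered directly from the two numbers in the abstract:

```python
distance_saved_km = 5002   # initial-appointment travel avoided (from the abstract)
carbon_saved_kg = 784      # reported saving in kgCO2eq (from the abstract)

factor = carbon_saved_kg / distance_saved_km
print(f"implied emission factor: {factor:.3f} kgCO2eq per km")   # ~0.157
```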

17 pages, 414 KB  
Article
DQMAF—Data Quality Modeling and Assessment Framework
by Razan Al-Toq and Abdulaziz Almaslukh
Information 2025, 16(10), 911; https://doi.org/10.3390/info16100911 - 17 Oct 2025
Viewed by 392
Abstract
In today’s digital ecosystem, where millions of users interact with diverse online services and generate vast amounts of textual, transactional, and behavioral data, ensuring the trustworthiness of this information has become a critical challenge. Low-quality data—manifesting as incompleteness, inconsistency, duplication, or noise—not only undermines analytics and machine learning models but also exposes unsuspecting users to unreliable services, compromised authentication mechanisms, and biased decision-making processes. Traditional data quality assessment methods, largely based on manual inspection or rigid rule-based validation, cannot cope with the scale, heterogeneity, and velocity of modern data streams. To address this gap, we propose DQMAF (Data Quality Modeling and Assessment Framework), a generalized machine learning–driven approach that systematically profiles, evaluates, and classifies data quality to protect end-users and enhance the reliability of Internet services. DQMAF introduces an automated profiling mechanism that measures multiple dimensions of data quality—completeness, consistency, accuracy, and structural conformity—and aggregates them into interpretable quality scores. Records are then categorized into high, medium, and low quality, enabling downstream systems to filter or adapt their behavior accordingly. A distinctive strength of DQMAF lies in integrating profiling with supervised machine learning models, producing scalable and reusable quality assessments applicable across domains such as social media, healthcare, IoT, and e-commerce. The framework incorporates modular preprocessing, feature engineering, and classification components using Decision Trees, Random Forest, XGBoost, AdaBoost, and CatBoost to balance performance and interpretability. We validate DQMAF on a publicly available Airbnb dataset, showing its effectiveness in detecting and classifying data issues with high accuracy. The results highlight its scalability and adaptability for real-world big data pipelines, supporting user protection, document and text-based classification, and proactive data governance while improving trust in analytics and AI-driven applications.
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
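
A minimal sketch of the profiling-and-labelling step the abstract describes, with hypothetical field names, thresholds, and equal dimension weights (the framework itself feeds such scores into supervised classifiers, which is not shown here):

```python
# Sketch of per-record quality profiling in the spirit of DQMAF; the fields,
# range check, and score cut-offs below are illustrative assumptions.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    q = pd.DataFrame(index=df.index)
    q["completeness"] = df.notna().mean(axis=1)                      # share of non-null fields
    q["conformity"] = df["price"].between(0, 10_000).astype(float)   # structural/range check
    q["consistency"] = (~df.duplicated(keep=False)).astype(float)    # duplicate detection
    q["score"] = q.mean(axis=1)                                      # aggregate quality score
    q["label"] = pd.cut(q["score"], [0, 0.5, 0.8, 1.0],
                        labels=["low", "medium", "high"], include_lowest=True)
    return q

rows = pd.DataFrame({"price": [120, None, -5, 120], "beds": [2, 1, None, 2]})
print(profile(rows))   # duplicates, nulls, and out-of-range values lower the label
```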

56 pages, 732 KB  
Review
The Erosion of Cybersecurity Zero-Trust Principles Through Generative AI: A Survey on the Challenges and Future Directions
by Dan Xu, Iqbal Gondal, Xun Yi, Teo Susnjak, Paul Watters and Timothy R. McIntosh
J. Cybersecur. Priv. 2025, 5(4), 87; https://doi.org/10.3390/jcp5040087 - 15 Oct 2025
Viewed by 766
Abstract
Generative artificial intelligence (AI) and persistent empirical gaps are reshaping the cyber threat landscape faster than Zero-Trust Architecture (ZTA) research can respond. We reviewed 10 recent ZTA surveys and 136 primary studies (2022–2024) and found that 98% provided only partial or no real-world validation, leaving several core controls largely untested. Our critique therefore proceeds on two axes: first, mainstream ZTA research is empirically under-powered and operationally unproven; second, generative-AI attacks exploit these very weaknesses, accelerating policy bypass and detection failure. To expose this compounding risk, we contribute the Cyber Fraud Kill Chain (CFKC), a seven-stage attacker model (target identification, preparation, engagement, deception, execution, monetization, and cover-up) that maps specific generative techniques to the NIST SP 800-207 components they erode. The CFKC highlights how synthetic identities, context manipulation, and adversarial telemetry drive up false-negative rates, extend dwell time, and sidestep audit trails, thereby undermining the Zero-Trust principles of verify explicitly and assume breach. Existing guidance offers no systematic countermeasures for AI-scaled attacks, and compliance regimes struggle to audit content that AI can mutate on demand. Finally, we outline research directions for adaptive, evidence-driven ZTA and argue that incremental extensions of current ZTA are insufficient; only a generative-AI-aware redesign will sustain defensive parity in the coming threat cycle.
(This article belongs to the Section Security Engineering & Applications)
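
The seven CFKC stages lend themselves to a compact encoding; the stage list below comes from the abstract, while the technique annotations are illustrative pairings rather than the paper's full mapping to NIST SP 800-207 components:

```python
from enum import Enum

class CFKCStage(Enum):
    TARGET_IDENTIFICATION = 1
    PREPARATION = 2
    ENGAGEMENT = 3
    DECEPTION = 4
    EXECUTION = 5
    MONETIZATION = 6
    COVER_UP = 7

# Illustrative pairings of stages with generative techniques named in the abstract.
generative_techniques = {
    CFKCStage.PREPARATION: "synthetic identities",
    CFKCStage.DECEPTION: "context manipulation",
    CFKCStage.COVER_UP: "adversarial telemetry",
}

for stage in CFKCStage:
    label = stage.name.replace("_", " ").title()
    print(f"{stage.value}. {label}: {generative_techniques.get(stage, 'n/a')}")
```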

18 pages, 312 KB  
Entry
The Psychology of Ocean Literacy
by Brianna Le Busque
Encyclopedia 2025, 5(4), 164; https://doi.org/10.3390/encyclopedia5040164 - 13 Oct 2025
Viewed by 253
Definition
Ocean Literacy (OL) can be broadly defined as a framework for understanding the complex and evolving relationships between people and the ocean. It is increasingly recognized as a vital component of marine conservation and sustainability efforts. OL is inherently interdisciplinary, and psychology, though particularly relevant, remains underutilized in this space. This paper demonstrates how psychological theories, frameworks, and validated measures can meaningfully inform OL strategies across its ten proposed dimensions: knowledge, awareness, attitudes, behavior, activism, communication, emotional connections, access and experience, adaptive capacity, and trust and transparency.
(This article belongs to the Collection Encyclopedia of Social Sciences)

11 pages, 621 KB  
Article
Using Conversations, Listening and Leadership to Support Staff Wellness: The CALM Framework
by Usman Iqbal, Natalie Wilson, Robyn Taylor, Louise Smith and Friedbert Kohler
Int. J. Environ. Res. Public Health 2025, 22(10), 1558; https://doi.org/10.3390/ijerph22101558 - 13 Oct 2025
Viewed by 347
Abstract
Healthcare workers’ (HCWs) wellness is a critical concern, particularly following the COVID-19 pandemic. Staff Wellness Rounding (SWR) has emerged as a leadership-driven strategy to support HCWs, but research on its effectiveness remains limited. This study examines the impact of SWR within a large healthcare organisation in Australia and introduces the CALM (Conversation, Active Listening, Leadership Engagement, Mechanism for Feedback) Framework to enhance leadership-driven wellness initiatives. SWR was implemented across six acute hospitals and 14 community health centres in New South Wales, Australia (July to October 2021). A sequential mixed-methods design was used to evaluate SWR effectiveness, leadership engagement, and the key components of a structured wellness approach. Phase One included a survey of 169 HCWs to capture their experiences; Phases Two and Three comprised semi-structured interviews with SWR leaders and participants, and an analysis of 342 SWR records. Findings showed that informal conversations foster trust, active listening supports emotional well-being, and leadership engagement facilitates issue escalation. However, feedback mechanisms require improvement: 77.5% of HCWs felt able to escalate concerns, but only 32.5% believed feedback was effectively addressed. These insights directly informed the development of the CALM Framework, with implications for leadership training and digital wellness integration in healthcare settings.

14 pages, 612 KB  
Article
Towards Trustful Machine Learning for Antimicrobial Therapy Using an Explainable Artificial Intelligence Dashboard
by Thomas De Corte, Jarne Verhaeghe, Femke Ongenae, Jan J. De Waele and Sofie Van Hoecke
Appl. Sci. 2025, 15(20), 10933; https://doi.org/10.3390/app152010933 - 11 Oct 2025
Viewed by 252
Abstract
The application of machine learning (ML) in healthcare has surged, yet its adoption in high-stakes clinical domains, such as the Intensive Care Unit (ICU), remains low. This gap is largely driven by a lack of clinician trust in AI decision support. Explainable AI (XAI) techniques aim to address this by explaining how an AI reaches its decisions, thereby improving transparency. However, rigorous evaluation of XAI methods in clinical settings is lacking. We therefore evaluated the perceived explainability of a dashboard incorporating three XAI methods for an ML model that predicts piperacillin plasma concentrations. The dashboard was evaluated by seven ICU clinicians using five distinct patient cases. We assessed the interpretation and perceived explainability of each XAI component through a targeted survey. The overall dashboard received a median score of seven out of ten for completeness of explainability, with Ceteris Paribus profiles identified as the most preferred XAI method. Our findings provide a practical framework for evaluating XAI in critical care, offering crucial insights into clinician preferences that can guide the future development and implementation of trustworthy AI in the ICU.
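
Ceteris Paribus profiles, the method the clinicians preferred, are what-if curves: hold one patient's features fixed and sweep a single feature through the model. A minimal sketch follows; the model, features, and target are synthetic stand-ins, not the paper's piperacillin model:

```python
# Synthetic stand-in for the paper's model: features and target are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))              # columns: creatinine, weight, dose (synthetic)
y = 40 - 10 * X[:, 0] + 5 * X[:, 2] + rng.normal(scale=2, size=300)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Ceteris Paribus profile for one patient: sweep feature 0, hold the rest fixed.
patient = X[0]
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 25)
queries = np.column_stack([grid, np.tile(patient[1:], (25, 1))])
profile = model.predict(queries)           # plotted against 'grid' in a dashboard panel
```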

15 pages, 1374 KB  
Article
Stylometric Analysis of Sustainable Central Bank Communications: Revealing Authorial Signatures in Monetary Policy Statements
by Hakan Emekci and İbrahim Özkan
Sustainability 2025, 17(20), 8979; https://doi.org/10.3390/su17208979 - 10 Oct 2025
Viewed by 259
Abstract
Sustainable economic development requires transparent and consistent institutional communication from monetary authorities to maintain long-term financial stability and public trust. This study investigates the latent authorial structure and stylistic heterogeneity of central bank communications by applying stylometric analysis and unsupervised machine learning to official announcements of the Central Bank of the Republic of Turkey (CBRT). Using a dataset of 557 press releases from 2006 to 2017, we extract a range of linguistic features at both sentence and document levels—including sentence length, punctuation density, word length, and type–token ratios. These features are reduced using Principal Component Analysis (PCA) and clustered via Hierarchical Clustering on Principal Components (HCPC), revealing three distinct authorial groups within the CBRT’s communications. The robustness of these clusters is validated using multidimensional scaling (MDS) on character-level and word-level n-gram distances. The analysis finds consistent stylistic differences between clusters, with implications for authorship attribution, tone variation, and communication strategy. Notably, sentiment analysis indicates that one authorial cluster tends to exhibit more negative tonal features, suggesting potential bias or divergence in internal communication style. These findings challenge the conventional assumption of institutional homogeneity and highlight the presence of distinct communicative voices within the central bank. Furthermore, the results suggest that stylistic variation—though often subtle—may convey unintended policy signals to markets, especially in contexts where linguistic shifts are closely scrutinized. This research contributes to the emerging intersection of natural language processing, monetary economics, and institutional transparency. It demonstrates the efficacy of stylometric techniques in revealing the hidden structure of policy discourse and suggests that linguistic analytics can offer valuable insights into the internal dynamics, credibility, and effectiveness of monetary authorities. These findings contribute to sustainable financial governance by demonstrating how AI-driven analysis can enhance institutional transparency, promote consistent policy communication, and support long-term economic stability—key pillars of sustainable development.
(This article belongs to the Special Issue Public Policy and Economic Analysis in Sustainability Transitions)
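
A condensed sketch of this pipeline, with toy documents in place of the 557 CBRT press releases and Ward agglomerative clustering standing in for HCPC:

```python
# Stylometric features -> PCA -> hierarchical clustering; a sketch, not the
# authors' code. Feature set mirrors those named in the abstract.
import re
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

def stylometric_features(text: str) -> list[float]:
    words = re.findall(r"[\w']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [
        len(words) / max(len(sentences), 1),                  # mean sentence length
        sum(c in ",;:()" for c in text) / max(len(text), 1),  # punctuation density
        float(np.mean([len(w) for w in words])),              # mean word length
        len({w.lower() for w in words}) / max(len(words), 1), # type-token ratio
    ]

docs = ["The Committee has decided to keep the policy rate constant.",
        "Inflation expectations, however, continue to deteriorate; risks remain elevated.",
        "Rates were left unchanged. The outlook is stable. Growth remains firm."]
X = np.array([stylometric_features(d) for d in docs])
Z = PCA(n_components=2).fit_transform(X)
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(Z)
print(labels)    # cluster labels suggesting distinct authorial voices
```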

25 pages, 737 KB  
Systematic Review
A Systematic Literature Review on the Implementation and Challenges of Zero Trust Architecture Across Domains
by Sadaf Mushtaq, Muhammad Mohsin and Muhammad Mujahid Mushtaq
Sensors 2025, 25(19), 6118; https://doi.org/10.3390/s25196118 - 3 Oct 2025
Viewed by 1116
Abstract
The Zero Trust Architecture (ZTA) model has emerged as a foundational cybersecurity paradigm that eliminates implicit trust and enforces continuous verification across users, devices, and networks. This study presents a systematic literature review of 74 peer-reviewed articles published between 2016 and 2025, spanning domains such as cloud computing (24 studies), Internet of Things (11), healthcare (7), enterprise and remote work systems (6), industrial and supply chain networks (5), mobile networks (5), artificial intelligence and machine learning (5), blockchain (4), big data and edge computing (3), and other emerging contexts (4). The analysis shows that authentication, authorization, and access control are the most consistently implemented ZTA components, whereas auditing, orchestration, and environmental perception remain underexplored. Across domains, the main challenges include scalability limitations, insufficient lightweight cryptographic solutions for resource-constrained systems, weak orchestration mechanisms, and limited alignment with regulatory frameworks such as GDPR and HIPAA. Cross-domain comparisons reveal that cloud and enterprise systems demonstrate relatively mature implementations, while IoT, blockchain, and big data deployments face persistent performance and compliance barriers. Overall, the findings highlight both the progress and the gaps in ZTA adoption, underscoring the need for lightweight cryptography, context-aware trust engines, automated orchestration, and regulatory integration. This review provides a roadmap for advancing ZTA research and practice, offering implications for researchers, industry practitioners, and policymakers seeking to enhance cybersecurity resilience.

36 pages, 2113 KB  
Article
Self-Sovereign Identities and Content Provenance: VeriTrust—A Blockchain-Based Framework for Fake News Detection
by Maruf Farhan, Usman Butt, Rejwan Bin Sulaiman and Mansour Alraja
Future Internet 2025, 17(10), 448; https://doi.org/10.3390/fi17100448 - 30 Sep 2025
Viewed by 930
Abstract
The widespread circulation of digital misinformation exposes a critical shortcoming in prevailing detection strategies, namely, the absence of robust mechanisms to confirm the origin and authenticity of online content. This study addresses this gap by introducing VeriTrust, a conceptual, provenance-centric framework designed to establish content-level trust by integrating Self-Sovereign Identity (SSI), blockchain-based anchoring, and AI-assisted decentralized verification. The proposed system is designed to operate through three key components: (1) issuing Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) through Hyperledger Aries and Indy; (2) anchoring cryptographic hashes of content metadata to an Ethereum-compatible blockchain using Merkle trees and smart contracts; and (3) enabling a community-led verification model enhanced by federated learning, with future extensibility toward zero-knowledge proof techniques. Theoretical projections, derived from established performance benchmarks, suggest the framework offers low latency and high scalability for content anchoring and minimal on-chain transaction fees. It also prioritizes user privacy by ensuring no on-chain exposure of personal data. VeriTrust redefines misinformation mitigation by shifting from reactive content-based classification to proactive provenance-based verification, forming a verifiable link between digital content and its creator. While currently at the conceptual and theoretical validation stage, VeriTrust holds promise for enhancing transparency, accountability, and resilience against misinformation attacks across journalism, academia, and online platforms.
(This article belongs to the Special Issue AI and Blockchain: Synergies, Challenges, and Innovations)
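
The anchoring component (2) can be illustrated with a generic Merkle construction; this is not the project's actual Hyperledger or Ethereum code, and the DID-bearing metadata records below are hypothetical. Only the root digest would be written on-chain:

```python
# Generic Merkle-root construction over content-metadata hashes; a sketch of
# the anchoring step, with SHA-256 assumed as the hash function.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

metadata = [b'{"author_did":"did:example:123","content_hash":"ab12...","ts":1727700000}',
            b'{"author_did":"did:example:456","content_hash":"cd34...","ts":1727700042}']
root = merkle_root(metadata)
print(root.hex())   # only this 32-byte digest needs to go into the on-chain transaction
```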

24 pages, 1149 KB  
Article
Sustainable Development of Smart Regions via Cybersecurity of National Infrastructure: A Fuzzy Risk Assessment Approach
by Oleksandr Korchenko, Oleksandr Korystin, Volodymyr Shulha, Svitlana Kazmirchuk, Serhii Demediuk and Serhii Zybin
Sustainability 2025, 17(19), 8757; https://doi.org/10.3390/su17198757 - 29 Sep 2025
Viewed by 315
Abstract
This article proposes a scientifically grounded approach to risk assessment for the infrastructural and functional systems that underpin the development of digitally transformed regional territories under conditions of high threat dynamics and sociotechnical instability. The core methodology is based on modeling multifactorial threats through fuzzy set theory and logic–linguistic analysis, enabling consideration of parameter uncertainty, fragmented expert input, and the lack of a unified risk landscape within complex infrastructure environments. Special emphasis is placed on components of technogenic, informational, and mobile infrastructure that ensure regional viability across planning, response, and recovery phases. The results confirm the relevance of the approach for assessing infrastructure resilience risks in regional spatial–functional systems and demonstrate its potential for integration into sustainable development strategies at the level of regional governance, cross-sectoral planning, and a cultural reevaluation of the role of analytics as an ethically grounded practice for cultivating trust, transparency, and professional maturity.
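
A toy illustration of the fuzzy machinery such an assessment builds on, with assumed linguistic terms, supports, and aggregation rule (the article's actual model is far richer):

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership with support [a, c] and peak at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed linguistic terms over a normalized [0, 1] scale, and their peaks.
terms = {"low": (0.0, 0.0, 0.5), "medium": (0.2, 0.5, 0.8), "high": (0.5, 1.0, 1.0)}
peaks = {"low": 0.1, "medium": 0.5, "high": 0.9}

def defuzzify(rating: float) -> float:
    """Weighted-centroid score for one expert rating in [0, 1]."""
    mu = {t: tri(rating, *abc) for t, abc in terms.items()}
    total = sum(mu.values())
    return sum(mu[t] * peaks[t] for t in terms) / total if total else rating

# Fragmented expert input: likelihood and impact rated separately;
# min() acts as the fuzzy AND combining them into a single risk estimate.
likelihood, impact = defuzzify(0.7), defuzzify(0.4)
print(f"risk = {min(likelihood, impact):.2f}")
```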

29 pages, 1728 KB  
Article
How Rituals Can Contribute to Co-Governance: Evidence from the Reconstruction of Water Pipes of Old Housing Estates in Shanghai
by Wenda Xie, Zhujie Chu and Lei Li
Systems 2025, 13(10), 860; https://doi.org/10.3390/systems13100860 - 29 Sep 2025
Viewed by 407
Abstract
Water is the source of life and the lifeline of cities. The reconstruction of secondary water supply systems is a key component of urban renewal reforms, and the collaborative governance of such projects has become a focal topic in academic research. In this article, we seek the path to successful “bottom-up” collaborative water governance using Collins’s theory of interaction ritual chains (IRC), through a case study of a secondary water supply reconstruction program in J Estate, Jinshan District, Shanghai. The case study covered a total of 104 households; we employed convenience sampling across all households through door-to-door inquiries, combining semi-structured interviews and non-participant observation, and 15 households took part in our interviews. This study demonstrates that repeated social interaction rituals, such as bodily co-presence, rhythmic synchronization, and shared signs, can stimulate the accumulation of residents’ emotional energy, which provides the initial impetus for community water governance and, in turn, the driving force for sustained collective action and mutual trust. Drawing on Collins’s theory of IRC, this article fills a gap by explaining the emotion- and relationship-driven symbolic mechanism that macro-level accounts of governance ignore. We also demonstrate the spillover effects of such social rituals and propose policy recommendations for governments to use these rituals to mobilize and consolidate residents’ emotions, creating a virtuous circle of collaborative governance.

38 pages, 1612 KB  
Review
Microengineered Breast Cancer Models: Shaping the Future of Personalized Oncology
by Tudor-Alexandru Popoiu, Anca Maria Cimpean, Florina Bojin, Simona Cerbu, Miruna-Cristiana Gug, Catalin-Alexandru Pirvu, Stelian Pantea and Adrian Neagu
Cancers 2025, 17(19), 3160; https://doi.org/10.3390/cancers17193160 - 29 Sep 2025
Viewed by 884
Abstract
Background: Breast cancer remains the most prevalent malignancy in women worldwide, characterized by remarkable genetic, molecular, and clinical heterogeneity. Traditional preclinical models have significantly advanced our understanding of tumor biology, yet consistently fall short in recapitulating the complexity of the human tumor microenvironment (TME), immune interactions, and metastatic behavior. In recent years, breast cancer-on-a-chip (BCOC) platforms have emerged as powerful microengineered systems that integrate patient-derived cells, stromal and immune components, and physiological stimuli such as perfusion, hypoxia, and acidic milieu within controlled three-dimensional microenvironments. Aim: To comprehensively review BCOC development and applications, encompassing fabrication materials, biological modeling of key subtypes (DCIS, luminal A, triple-negative), dynamic tumor–stroma–immune crosstalk, and organotropic metastasis to bone, liver, brain, lungs, and lymph nodes. Methods: We selected papers from trusted academic databases (PubMed, Web of Science, Google Scholar) using Breast Cancer, Microfluidic System, and Breast Cancer on a Chip as the main search terms. Results: We critically discuss and highlight how microfluidic systems replicate essential features of disease progression—such as epithelial-to-mesenchymal transition, vascular invasion, immune evasion, and therapy resistance—with unprecedented physiological relevance. Special attention has been paid to the integration of liquid biopsy technologies within microfluidic platforms for non-invasive, real-time analysis of circulating tumor cells, cell-free nucleic acids, and exosomes. Conclusions: In light of regulatory momentum toward reducing animal use in drug development, BCOC platforms stand at the forefront of a new era in precision oncology. By bridging biological fidelity with engineering innovation, these systems hold immense potential to transform cancer research, therapy screening, and personalized medicine.
(This article belongs to the Section Methods and Technologies Development)

19 pages, 316 KB  
Article
Psychometric Validation of Trust, Commitment, and Satisfaction Scales to Measure Marital Relationship Quality Among Newly Married Women in Nepal
by Lakshmi Gopalakrishnan, Nadia Diamond-Smith and Hannah H. Leslie
Int. J. Environ. Res. Public Health 2025, 22(9), 1457; https://doi.org/10.3390/ijerph22091457 - 20 Sep 2025
Viewed by 804
Abstract
Marital relationship quality significantly influences health outcomes, but validated measurement tools for South Asian populations remain limited. To validate scales measuring trust, commitment, and satisfaction as key components of marital relationship quality among newly married women in Nepal, we conducted a two-wave psychometric validation study in rural Nawalparasi district. The study included 200 newly married women aged 18–25 years, with 192 participants (96% retention) completing 6-month follow-up. We assessed factor structure, internal consistency, test-retest reliability, and criterion validity of trust (eight items), commitment (five items), and satisfaction (seven items) scales using exploratory and confirmatory factor analysis. Exploratory factor analysis identified single-factor solutions for trust and commitment scales and a two-factor model for satisfaction. Confirmatory factor analysis confirmed these structures, with satisfaction comprising marital conflict/dissatisfaction (four items) and general satisfaction (two items) subscales. All scales demonstrated good internal consistency (Cronbach’s α: 0.79–0.96) and significant criterion validity correlations with relationship happiness (r = 0.63–0.72, p < 0.001). Test-retest reliability showed moderate to low stability (r = 0.21–0.51), likely reflecting genuine relationship changes in early marriage. The validated scales provide reliable tools for assessing relationship quality in South Asian contexts, enabling research on marriage-health associations and evidence-based interventions.
(This article belongs to the Section Behavioral and Mental Health)
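
The internal-consistency figures reported above are Cronbach's alpha values; a minimal sketch of that computation follows, run on fabricated responses rather than study data:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                       # shared "trust" factor
items = latent + rng.normal(scale=0.6, size=(200, 5))    # 5 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")            # high, as with the paper's scales
```
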
23 pages, 1262 KB  
Article
Confidential Kubernetes Deployment Models: Architecture, Security, and Performance Trade-Offs
by Eduardo Falcão, Fernando Silva, Carlos Pamplona, Anderson Melo, A S M Asadujjaman and Andrey Brito
Appl. Sci. 2025, 15(18), 10160; https://doi.org/10.3390/app151810160 - 17 Sep 2025
Viewed by 1260
Abstract
Cloud computing brings numerous advantages that can be leveraged through containerized workloads to deliver agile, dependable, and cost-effective microservices. However, the security of such cloud-based services depends on the assumption of trusting potentially vulnerable components, such as code installed on the host. The addition of confidential computing technology to the cloud computing landscape brings the possibility of stronger security guarantees by removing such assumptions. Nevertheless, the merger of containerization and confidential computing technologies creates a complex ecosystem. In this work, we show how Kubernetes workloads can be secured despite these challenges. In addition, we design, analyze, and evaluate five different Kubernetes deployment models using the infrastructure of three of the most popular cloud providers with CPUs from two major vendors. Our evaluation shows that performance can vary significantly across the possible deployment models while remaining similar across CPU vendors and cloud providers. Our security analysis highlights the trade-offs between different workload isolation levels, trusted computing base size, and measurement reproducibility. Through a comprehensive performance, security, and financial analysis, we identify the deployment models best suited to different scenarios.
(This article belongs to the Special Issue Secure Cloud Computing Infrastructures)
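
One operational knob that distinguishes such deployment models is the pod's runtime class: the workload opts into a VM-isolated, confidential runtime instead of the default host-shared one. A sketch of such a manifest follows; the runtime class name "kata-cc", the pod name, and the image reference are environment-specific assumptions, not values from the paper:

```python
# Minimal pod manifest selecting a confidential runtime via runtimeClassName
# (a standard Kubernetes pod-spec field); names here are illustrative only.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "confidential-workload"},
    "spec": {
        "runtimeClassName": "kata-cc",      # assumed VM-isolated, confidential runtime
        "containers": [{
            "name": "app",
            "image": "registry.example.com/app:1.0",
        }],
    },
}
print(json.dumps(pod, indent=2))   # e.g., pipe to `kubectl apply -f -`
```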