Search Results (174)

Search Parameters:
Keywords = healthcare data mining

25 pages, 4050 KiB  
Review
Network Pharmacology-Driven Sustainability: AI and Multi-Omics Synergy for Drug Discovery in Traditional Chinese Medicine
by Lifang Yang, Hanye Wang, Zhiyao Zhu, Ye Yang, Yin Xiong, Xiuming Cui and Yuan Liu
Pharmaceuticals 2025, 18(7), 1074; https://doi.org/10.3390/ph18071074 - 21 Jul 2025
Viewed by 431
Abstract
Traditional Chinese medicine (TCM), a holistic medical system rooted in dialectical theories and natural product-based therapies, has served as a cornerstone of healthcare systems for millennia. While its empirical efficacy is widely recognized, the polypharmacological mechanisms stemming from its multi-component nature remain poorly characterized. The conventional trial-and-error approaches for bioactive compound screening from herbs raise sustainability concerns, including excessive resource consumption and suboptimal temporal efficiency. The integration of artificial intelligence (AI) and multi-omics technologies with network pharmacology (NP) has emerged as a transformative methodology aligned with TCM’s inherent “multi-component, multi-target, multi-pathway” therapeutic characteristics. This convergent review provides a computational framework to decode complex bioactive compound–target–pathway networks through two synergistic strategies, (i) NP-driven dynamics interaction network modeling and (ii) AI-enhanced multi-omics data mining, thereby accelerating drug discovery and reducing experimental costs. Our analysis of 7288 publications systematically maps NP-AI–omics integration workflows for natural product screening. The proposed framework enables sustainable drug discovery through data-driven compound prioritization, systematic repurposing of herbal formulations via mechanism-based validation, and the development of evidence-based novel TCM prescriptions. This paradigm bridges empirical TCM knowledge with mechanism-driven precision medicine, offering a theoretical basis for reconciling traditional medicine with modern pharmaceutical innovation. Full article
(This article belongs to the Special Issue Sustainable Approaches and Strategies for Bioactive Natural Compounds)

27 pages, 4187 KiB  
Article
Assessing Occupational Work-Related Stress and Anxiety of Healthcare Staff During COVID-19 Using Fuzzy Natural Language-Based Association Rule Mining
by Abdulaziz S. Alkabaa, Osman Taylan, Hanan S. Alqabbaa and Bulent Guloglu
Healthcare 2025, 13(14), 1745; https://doi.org/10.3390/healthcare13141745 - 18 Jul 2025
Viewed by 237
Abstract
Background/Objective: Frontline healthcare staff who contend with diseases and mitigate their transmission were repeatedly exposed to high-risk conditions during the COVID-19 pandemic. They were at risk of mental health issues, in particular, psychological stress, depression, anxiety, financial stress, and/or burnout. This study aimed to investigate and evaluate the occupational stress of medical doctors, nurses, pharmacists, physiotherapists, and other hospital support staff during the COVID-19 pandemic in Saudi Arabia. Methods: We collected both qualitative and quantitative data from a survey given to public and private hospitals, using methods such as correspondence analysis, cluster analysis, and structural equation models to investigate the work-related stress (WRS) and anxiety of the staff. Since health-related factors are unclear and uncertain, a fuzzy association rule mining (FARM) method was created to address these problems and determine the levels of WRS and anxiety. The statistical results and the K-means clustering method were used to find the best number of fuzzy rules and the level of fuzziness in clusters to create the FARM approach and to predict the WRS and anxiety of healthcare staff. This approach allows for a more nuanced appraisal of the factors contributing to WRS and anxiety, ultimately enabling healthcare organizations to implement targeted interventions. By leveraging these insights, management can foster a healthier work environment that supports staff well-being and enhances overall productivity. This study also aimed to identify the relevant health factors that are the root causes of WRS and anxiety to facilitate better preparation and motivation of the staff for reorganizing resources and equipment. Results: The findings show that when the financial burden (FIN) of healthcare staff increased, WRS and anxiety increased.
Similarly, a rise in psychological stress caused an increase in WRS and anxiety. The psychological impact (PCG) ratio and financial impact (FIN) were the most influential factors for the staff’s anxiety. The FARM results and findings revealed that improving the financial situation of healthcare staff alone was not sufficient during the COVID-19 pandemic. Conclusions: This study found that while the impact of PCG was significant, its combined effect with FIN was more influential on staff’s work-related stress and anxiety. This difference was due to the mutual effects of PCG and FIN on the staff’s motivation. The findings will help healthcare managers make decisions to reduce or eliminate the WRS and anxiety experienced by healthcare staff in the future. Full article
(This article belongs to the Special Issue Depression, Anxiety and Emotional Problems Among Healthcare Workers)
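The FARM construction above uses K-means to choose the number and shape of fuzzy-rule clusters. As a minimal, generic sketch of the underlying algorithm only (plain Lloyd's iterations on invented 1-D Likert-style stress scores, not the study's data or its fuzzy extension):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm for 1-D data (e.g., stress-scale scores)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            clusters[i].append(p)
        # Update step: each center moves to its cluster's mean.
        new = [sum(c) / len(c) if c else centers[j] for j, c in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return sorted(centers)

# Toy Likert-style scores with two obvious groups (low stress vs. high stress).
scores = [1, 1, 2, 2, 1, 8, 9, 9, 8, 10]
print(kmeans(scores, 2))  # one low center, one high center
```

A real FARM pipeline would then fit fuzzy membership functions around each center rather than using hard assignments.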

37 pages, 2921 KiB  
Article
A Machine-Learning-Based Data Science Framework for Effectively and Efficiently Processing, Managing, and Visualizing Big Sequential Data
by Alfredo Cuzzocrea, Islam Belmerabet, Abderraouf Hafsaoui and Carson K. Leung
Computers 2025, 14(7), 276; https://doi.org/10.3390/computers14070276 - 14 Jul 2025
Viewed by 600
Abstract
In recent years, the open data initiative has led to the willingness of many governments, researchers, and organizations to share their data and make it publicly available. Healthcare, disease, and epidemiological data, such as privacy statistics on patients who have suffered from epidemic diseases such as the Coronavirus disease 2019 (COVID-19), are examples of open big data. Therefore, huge volumes of valuable data have been generated and collected at high speed from a wide variety of rich data sources. Analyzing these open big data can be of social benefit. For example, people gain a better understanding of disease by analyzing and mining disease statistics, which can inspire them to participate in disease prevention, detection, control, and combat. Visual representation further improves data understanding and corresponding results for analysis and mining, as a picture is worth a thousand words. In this paper, we present a visual data science solution for the visualization and visual analysis of large sequence data. These ideas are illustrated by the visualization and visual analysis of sequences of real epidemiological data of COVID-19. Through our solution, we enable users to visualize the epidemiological data of COVID-19 over time. It also allows people to visually analyze data and discover relationships between popular features associated with COVID-19 cases. The effectiveness of our visual data science solution in improving the user experience of visualization and visual analysis of large sequence data is demonstrated by the real-life evaluation of these sequenced epidemiological data of COVID-19. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))

18 pages, 533 KiB  
Article
Prediction of Metastasis in Paragangliomas and Pheochromocytomas Using Machine Learning Models: Explainability Challenges
by Carmen García-Barceló, David Gil, David Tomás and David Bernabeu
Sensors 2025, 25(13), 4184; https://doi.org/10.3390/s25134184 - 4 Jul 2025
Viewed by 378
Abstract
One of the main issues with paragangliomas and pheochromocytomas is that these tumors have up to a 20% rate of metastatic disease, which cannot be reliably predicted. While machine learning models hold great promise for enhancing predictive accuracy, their often opaque nature limits trust and adoption in critical fields such as healthcare. Understanding the factors driving predictions is essential not only for validating their reliability but also for enabling their integration into clinical decision-making. In this paper, we propose an architecture that combines data mining, machine learning, and explainability techniques to improve predictions of metastatic disease in these types of cancer and enhance trust in the models. A wide variety of algorithms have been applied for the development of predictive models, with a focus on interpreting their outputs to support clinical insights. Our methodology involves a comprehensive preprocessing phase to prepare the data, followed by the application of classification algorithms. Explainability techniques were integrated to provide insights into the key factors driving predictions. Additionally, a feature selection process was performed to identify the most influential variables and explore how their inclusion affects model performance. The best-performing algorithm, Random Forest, achieved an accuracy of 96.3%, precision of 96.5%, and AUC of 0.963, among other metrics, combining strong predictive capability with explainability that fosters trust in clinical applications. Full article
(This article belongs to the Section Biomedical Sensors)

28 pages, 4584 KiB  
Article
Fast Track Design Using Process Mining: Does It Improve Saturation and Times in Emergency Departments?
by Angeles Celda-Moret, Gema Ibanez-Sanchez, Javier Garijo, Mirela Pop-Llut, Miriam Faus-Lluquet and Carlos Fernandez-Llatas
Appl. Sci. 2025, 15(13), 7367; https://doi.org/10.3390/app15137367 - 30 Jun 2025
Viewed by 294
Abstract
Emergency department overcrowding disproportionately affects complex patients, such as older adults and those with comorbidities, who consume significant resources and experience prolonged delays. This study integrates process mining and predictive simulation to identify key factors influencing length of stay and to propose a data-driven solution: a tailored fast-track pathway for high-risk patients. Using data from 94,489 emergency episodes, a predictive formula was developed based on clinically relevant variables, including age (>65 years); triage levels (II and III); frequent emergency department visits; need for mobility aids; and specific reasons for consultation such as dyspnea, abdominal pain, and poor general condition. Simulation results demonstrated that implementing this fast-track pathway reduces length of stay by up to 21% and emergency department saturation by 35%, even with minimal resource allocation (five beds). The manual predictive formula showed comparable prediction performance to machine learning models while maintaining transparency and traceability, ensuring greater acceptability among healthcare professionals. This approach represents a paradigm shift in emergency department management, offering a scalable tool to optimise resource allocation, improve patient outcomes, and reduce operational inefficiencies. Future multicenter validations could establish this model as an essential component of emergency department management strategies. Full article
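The abstract names the variables in the manual predictive formula but not the formula itself. A hypothetical additive score over those variables might look like the following; the weights and threshold are invented for illustration, not the study's fitted values:

```python
def fast_track_eligible(age, triage_level, frequent_visitor,
                        needs_mobility_aid, complaint, threshold=3):
    """Hypothetical additive risk score over the variables the abstract names.
    All weights and the threshold are illustrative assumptions."""
    score = 0
    score += 2 if age > 65 else 0                      # age > 65 years
    score += 1 if triage_level in (2, 3) else 0        # triage levels II and III
    score += 1 if frequent_visitor else 0              # frequent ED visits
    score += 1 if needs_mobility_aid else 0            # mobility aids
    # Specific high-risk reasons for consultation listed in the abstract.
    score += 1 if complaint in {"dyspnea", "abdominal pain",
                                "poor general condition"} else 0
    return score >= threshold

print(fast_track_eligible(72, 3, True, False, "dyspnea"))   # high-risk profile
print(fast_track_eligible(40, 4, False, False, "sprain"))   # low-risk profile
```

A transparent score of this shape is the kind of formula the authors argue clinicians accept more readily than an opaque model.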

17 pages, 699 KiB  
Article
Secure K-Means Clustering Scheme for Confidential Data Based on Paillier Cryptosystem
by Zhengqi Zhang, Zixin Xiong and Jun Ye
Appl. Sci. 2025, 15(12), 6918; https://doi.org/10.3390/app15126918 - 19 Jun 2025
Viewed by 209
Abstract
In this paper, we propose a secure homomorphic K-means clustering protocol based on the Paillier cryptosystem to address the urgent need for privacy-preserving clustering techniques in sensitive domains such as healthcare and finance. The protocol uses the additive homomorphism property of the Paillier cryptosystem to perform K-means clustering on the encrypted data, which ensures the confidentiality of the data during the whole calculation process. The protocol consists of three main components: secure computation distance (SCD) protocol, secure cluster assignment (SCA) protocol and secure cluster center update (SUCC) protocol. The SCD protocol securely computes the squared Euclidean distance between the encrypted data point and the encrypted cluster center. The SCA protocol securely assigns data points to clusters based on these cryptographic distances. Finally, the SUCC protocol securely updates the cluster centers without leaking the actual data points as well as the number of intermediate sums. Through security analysis and experimental verification, the effectiveness and practicability of the protocol are proved. This work provides a practical solution for secure clustering based on homomorphic encryption and contributes to the research in the field of privacy-preserving data mining. Although this protocol solves the key problems of secure distance computation, cluster assignment and centroid update, there are still areas for further research. These include optimizing the computational efficiency of the protocol, exploring other homomorphic encryption schemes that may provide better performance, and extending the protocol to handle more complex clustering algorithms. Full article
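The additive homomorphism that the SCD, SCA, and SUCC protocols build on can be demonstrated with textbook Paillier. This toy uses tiny fixed primes and is cryptographically insecure; the parameters are illustrative only, not the paper's protocol:

```python
import math
import random

# Textbook Paillier with tiny fixed primes -- insecure, for illustration only.
p, q = 499, 547
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael lambda(n)
g = n + 1                          # standard simplification g = n + 1
mu = pow(lam, -1, n)               # since L(g^lam mod n^2) = lam mod n

def encrypt(m, rng=random.Random(42)):
    # Fresh randomness r coprime to n makes encryption probabilistic.
    while True:
        r = rng.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n  # L(x) = (x - 1) / n
    return (L * mu) % n

a, b = encrypt(20), encrypt(22)
print(decrypt((a * b) % n2))       # E(20) * E(22) decrypts to 20 + 22 = 42
print(decrypt(pow(a, 3, n2)))      # E(20)^3 decrypts to 3 * 20 = 60
```

Multiplying ciphertexts adds plaintexts, and exponentiating by a constant scales them; those two operations are enough to accumulate encrypted squared-distance terms and cluster sums without decrypting individual points.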

21 pages, 721 KiB  
Article
Benchmarking Variants of Recursive Feature Elimination: Insights from Predictive Tasks in Education and Healthcare
by Okan Bulut, Bin Tan, Elisabetta Mazzullo and Ali Syed
Information 2025, 16(6), 476; https://doi.org/10.3390/info16060476 - 6 Jun 2025
Viewed by 703
Abstract
Originally developed as an effective feature selection method in healthcare predictive analytics, Recursive Feature Elimination (RFE) has gained increasing popularity in Educational Data Mining (EDM) due to its ability to handle high-dimensional data and support interpretable modeling. Over time, various RFE variants have emerged, each introducing methodological enhancements. To help researchers better understand and apply RFE more effectively, this study organizes existing variants into four methodological categories: (1) integration with different machine learning models, (2) combinations of multiple feature importance metrics, (3) modifications to the original RFE process, and (4) hybridization with other feature selection or dimensionality reduction techniques. Rather than conducting a systematic review, we present a narrative synthesis supported by illustrative studies from EDM to demonstrate how different variants have been applied in practice. We also conduct an empirical evaluation of five representative RFE variants across two domains: a regression task using a large-scale educational dataset and a classification task using a clinical dataset on chronic heart failure. Our evaluation benchmarks predictive accuracy, feature selection stability, and runtime efficiency. Results show that the evaluation metrics vary significantly across RFE variants. For example, while RFE wrapped with tree-based models such as Random Forest and Extreme Gradient Boosting (XGBoost) yields strong predictive performance, these methods tend to retain large feature sets and incur high computational costs. In contrast, a variant known as Enhanced RFE achieves substantial feature reduction with only marginal accuracy loss, offering a favorable balance between efficiency and performance. 
These findings underscore the trade-offs among accuracy, interpretability, and computational cost across RFE variants, providing practical guidance for selecting the most appropriate algorithm based on domain-specific needs and constraints. Full article

16 pages, 680 KiB  
Review
Revolutionizing Utility of Big Data Analytics in Personalized Cardiovascular Healthcare
by Praneel Sharma, Pratyusha Sharma, Kamal Sharma, Vansh Varma, Vansh Patel, Jeel Sarvaiya, Jonsi Tavethia, Shubh Mehta, Anshul Bhadania, Ishan Patel and Komal Shah
Bioengineering 2025, 12(5), 463; https://doi.org/10.3390/bioengineering12050463 - 27 Apr 2025
Cited by 1 | Viewed by 867
Abstract
The term “big data analytics (BDA)” defines the computational techniques to study complex datasets that are too large for common data processing software, encompassing techniques such as data mining (DM), machine learning (ML), and predictive analytics (PA) to find patterns, correlations, and insights in massive datasets. Cardiovascular diseases (CVDs) are attributed to a combination of various risk factors, including sedentary lifestyle, obesity, diabetes, dyslipidaemia, and hypertension. We searched PubMed and published research using the Google and Cochrane search engines to evaluate existing models of BDA that have been used for CVD prediction models. We critically analyse the pitfalls and advantages of various BDA models using artificial intelligence (AI), machine learning (ML), and artificial neural networks (ANN). BDA with the integration of wide-ranging data sources, such as genomic, proteomic, and lifestyle data, could help understand the complex biological mechanisms behind CVD, including risk stratification in risk-exposed individuals. Predictive modelling is proposed to help in the development of personalized medicines, particularly in pharmacogenomics; understanding genetic variation might help to guide drug selection and dosing, with the consequent improvement in patient outcomes. To summarize, incorporating BDA into cardiovascular research and treatment represents a paradigm shift in our approach to CVD prevention, diagnosis, and management. By leveraging the power of big data, researchers and clinicians can gain deeper insights into disease mechanisms, improve patient care, and ultimately reduce the burden of cardiovascular disease on individuals and healthcare systems. Full article

48 pages, 6422 KiB  
Review
Modern Trends and Recent Applications of Hyperspectral Imaging: A Review
by Ming-Fang Cheng, Arvind Mukundan, Riya Karmakar, Muhamed Adil Edavana Valappil, Jumana Jouhar and Hsiang-Chen Wang
Technologies 2025, 13(5), 170; https://doi.org/10.3390/technologies13050170 - 23 Apr 2025
Cited by 3 | Viewed by 4278
Abstract
Hyperspectral imaging (HSI) is an advanced imaging technique that captures detailed spectral information across multiple fields. This review explores its applications in counterfeit detection, remote sensing, agriculture, medical imaging, cancer detection, environmental monitoring, mining, mineralogy, and food processing, specifically highlighting significant achievements from the past five years, providing a timely update across several fields. It also presents a cross-disciplinary classification framework to systematically categorize applications in medical, agriculture, environment, and industry. In counterfeit detection, HSI identified fake currency with high accuracy in the 400–500 nm range and achieved a 99.03% F1-score for counterfeit alcohol detection. Remote sensing applications include hyperspectral satellites, which improve forest classification accuracy by 50%, and soil organic matter, with the prediction reaching R2 = 0.6. In agriculture, the HSI-TransUNet model achieved 86.05% accuracy for crop classification, and disease detection reached 98.09% accuracy. Medical imaging benefits from HSI’s non-invasive diagnostics, distinguishing skin cancer with 87% sensitivity and 88% specificity. In cancer detection, colorectal cancer identification reached 86% sensitivity and 95% specificity. Environmental applications include PM2.5 pollution detection with 85.93% accuracy and marine plastic waste detection with 70–80% accuracy. In food processing, egg freshness prediction achieved R2 = 91%, and pine nut classification reached 100% accuracy. Despite its advantages, HSI faces challenges like high costs and complex data processing. Advances in artificial intelligence and miniaturization are expected to improve accessibility and real-time applications. Future advancements are anticipated to concentrate on the integration of deep learning models for automated feature extraction and decision-making in hyperspectral imaging analysis. 
The development of lightweight, portable HSI devices will enable more on-site applications in agriculture, healthcare, and environmental monitoring. Moreover, real-time processing methods will enhance efficiency for field deployment. These improvements seek to enhance the accessibility, practicality, and efficacy of HSI in both industrial and clinical environments. Full article

26 pages, 951 KiB  
Article
Leveraging Kaizen with Process Mining in Healthcare Settings: A Conceptual Framework for Data-Driven Continuous Improvement
by Mohammad Najeh Samara and Kimberly D. Harry
Healthcare 2025, 13(8), 941; https://doi.org/10.3390/healthcare13080941 - 19 Apr 2025
Viewed by 1367
Abstract
Background/Objectives: Healthcare systems face persistent challenges in improving efficiency, optimizing resources, and delivering high-quality care. Traditional continuous improvement methodologies often rely on subjective assessments, while data-driven approaches typically lack human-centered adaptability. This study aims to develop an integrated framework combining Kaizen principles with Process Mining capabilities to address these limitations in healthcare process optimization. Methods: This research employed a structured literature review approach to identify key concepts, methodologies, and applications of both Kaizen and Process Mining in healthcare settings. The study synthesized insights from the peer-reviewed literature published in the last two decades to develop a conceptual framework integrating these approaches for healthcare process improvement. Results: The proposed framework combines Kaizen’s employee-driven approach to eliminating inefficiencies with Process Mining’s ability to analyze workflow data and identify process deviations. The integration is structured into four key phases: data collection, process analysis, Kaizen events, and continuous monitoring. This structure creates a feedback loop where data-driven insights inform collaborative problem-solving, resulting in sustained improvements validated through objective process analysis. Conclusions: The integration of Kaizen and Process Mining offers a promising approach to enhancing workflow efficiency, reducing operational errors, and improving resource utilization in healthcare settings. While challenges such as data quality concerns, resource constraints, and potential resistance to change must be addressed, the framework provides a foundation for more effective process optimization. Future research should focus on empirical validation, AI-enhanced analytics, and assessing adaptability across diverse healthcare contexts. Full article

15 pages, 774 KiB  
Article
Using Measles Outbreaks to Identify Under-Resourced Health Systems in Low- and Middle-Income Countries: A Predictive Model
by Gabrielle P. D. MacKechnie, Milena Dalton, Dominic Delport and Stefanie Vaccher
Vaccines 2025, 13(4), 367; https://doi.org/10.3390/vaccines13040367 - 30 Mar 2025
Viewed by 1134
Abstract
Background/Objectives: Measles is a vaccine-preventable disease with a high level of transmissibility. Outbreaks of measles continue globally, with gaps in healthcare and immunisation resulting in pockets of susceptible individuals. Measles outbreaks have been proposed as a “canary in the coal mine” of under-resourced health systems, uncovering broader system weaknesses. We aim to understand whether under-resourced health systems are associated with increased odds of large measles outbreaks in low- and middle-income countries (LMICs). Methods: We used an ecological study design to identify measles outbreaks that occurred in LMICs between 2010 and 2020. Health systems were represented using a set of health system indicators for the corresponding outbreak country, guided by the World Health Organization’s building blocks of health systems framework. These indicators were: the proportion of births delivered in a health facility, the number of nurses and midwives per 10,000 population, and domestic general government health expenditure per capita in USD. We analysed the associations using a predictive model and assessed the accuracy of this model. Results: The analysis included 78 outbreaks. We found an absence of any association between the included health system indicators and large measles outbreaks. When testing predictive accuracy, the model obtained a Brier score of 0.21, which indicates that the model is not informative in predicting large measles outbreaks. We found that missing data did not affect the results of the model. Conclusions: Large measles outbreaks were not able to be used to identify under-resourced health systems in LMICs. However, further research is required to understand whether this association may exist when taking other factors, including smaller outbreaks, into account. Full article
(This article belongs to the Section Epidemiology and Vaccination)
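The Brier score cited above (0.21, close to the 0.25 produced by always forecasting 0.5) is simply the mean squared error of probability forecasts against binary outcomes. A quick sketch, with invented outbreak probabilities for illustration:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes.
    0 is a perfect forecaster; 0.25 matches always predicting 0.5."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical forecasts for four outbreaks (1 = large outbreak occurred).
print(brier_score([0.9, 0.2, 0.8, 0.3], [1, 0, 1, 0]))  # ~0.045: informative
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))  # 0.25: uninformative
```

A model scoring near the 0.25 baseline, as in the study, adds essentially no information beyond a coin flip.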

24 pages, 1329 KiB  
Article
Personalised Risk Modelling for Older Adult Cancer Survivors: Combining Wearable Data and Self-Reported Measures to Address Time-Varying Risks
by Zoe Valero-Ramon, Gema Ibanez-Sanchez, Antonio Martinez-Millana and Carlos Fernandez-Llatas
Sensors 2025, 25(7), 2097; https://doi.org/10.3390/s25072097 - 27 Mar 2025
Cited by 1 | Viewed by 655
Abstract
Recent advancements in wearable devices have significantly enhanced remote patient monitoring, enabling healthcare professionals to evaluate conditions within home settings. While electronic health records (EHRs) offer extensive clinical data, they often lack crucial contextual information about patients’ daily lives and symptoms. By integrating continuous self-reported outcomes related to vulnerability, anxiety, and depression from older adult cancer survivors with objective data from wearables, we can develop personalised risk models that address time-varying risk factors in cancer care. Our study combines real-world data from wearable devices with self-reported information, employing process mining techniques to analyse dynamic risk models for vulnerability and anxiety. Unlike traditional static assessments, this approach recognises that risk factors evolve. Collaborating with healthcare professionals, we analysed data from the LifeChamps study to create two dynamic risk models. This collaborative effort revealed how activity and sleep patterns influence self-reported vulnerability and anxiety among participants. It underscored the potential of wearable sensors and artificial intelligence techniques for deeper analysis and understanding, making us all part of a larger effort in cancer care. Overall, patients with prolonged sedentary activity had a higher risk of vulnerability, while those with highly dynamic sleep patterns were more likely to report anxiety and depression. Prostate-metastatic patients showed an increased risk of vulnerability compared to other cancer types. Full article
(This article belongs to the Special Issue Wearable Technologies and Sensors for Healthcare and Wellbeing)
24 pages, 2927 KiB  
Article
Text Mining Approaches for Exploring Research Trends in the Security Applications of Generative Artificial Intelligence
by Jinsick Kim, Byeongsoo Koo, Moonju Nam, Kukjin Jang, Jooyeoun Lee, Myoungsug Chung and Youngseo Song
Appl. Sci. 2025, 15(6), 3355; https://doi.org/10.3390/app15063355 - 19 Mar 2025
Abstract
This study examines the security implications of generative artificial intelligence (GAI), focusing on models such as ChatGPT. As GAI technologies are increasingly integrated into industries such as healthcare, education, and media, concerns are growing regarding security vulnerabilities, ethical challenges, and the potential for misuse. This study not only synthesizes existing research but also conducts an original scientometric analysis using text mining techniques. To address these concerns, it analyzes 1047 peer-reviewed academic articles from the SCOPUS database using scientometric methods, including Term Frequency–Inverse Document Frequency (TF-IDF) analysis, keyword centrality analysis, and Latent Dirichlet Allocation (LDA) topic modeling. The results highlight significant contributions from countries such as the United States, China, and India, with leading institutions such as the Chinese Academy of Sciences and the National University of Singapore driving research on GAI security. In the keyword centrality analysis, “ChatGPT” emerged as a highly central term, reflecting its prominence in the research discourse. Despite its frequent mention, however, “ChatGPT” showed lower proximity centrality than terms like “model” and “AI”, suggesting that while ChatGPT is broadly associated with other key themes, it has a less direct connection to specific research subfields. Topic modeling identified six major themes, including AI and security in education, language models, data processing, and risk management. The analysis emphasizes the need for robust security frameworks that incorporate not only technical solutions to address vulnerabilities but also ethical accountability, regulatory compliance, and continuous risk management for the safe deployment of AI systems. This study underscores the importance of interdisciplinary research that integrates technical, legal, and ethical perspectives to ensure the responsible and secure deployment of GAI technologies. Full article
(This article belongs to the Special Issue New Advances in Computer Security and Cybersecurity)
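The TF-IDF and LDA pipeline the abstract describes can be sketched as follows. This is a minimal illustration with toy documents, not the study's corpus, vocabulary, or parameters; scikit-learn is assumed:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for article abstracts.
docs = [
    "chatgpt security vulnerabilities in education",
    "language model data processing risk",
    "generative ai model security risk management",
    "ethical deployment of ai models in education",
]

# TF-IDF weights terms by how distinctive they are across the corpus.
tfidf = TfidfVectorizer().fit_transform(docs)

# LDA is fit on raw term counts, not TF-IDF weights.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)  # one topic distribution per document
```

Each row of `doc_topics` is a probability distribution over the learned topics; inspecting the top-weighted terms per topic is how labels such as "AI and security in education" are typically assigned.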

15 pages, 3440 KiB  
Article
Contribution of Structure Learning Algorithms in Social Epidemiology: Application to Real-World Data
by Helene Colineaux, Benoit Lepage, Pierre Chauvin, Chloe Dimeglio, Cyrille Delpierre and Thomas Lefèvre
Int. J. Environ. Res. Public Health 2025, 22(3), 348; https://doi.org/10.3390/ijerph22030348 - 27 Feb 2025
Abstract
Epidemiologists often handle large datasets with numerous variables and now have access to a growing wealth of data-analysis techniques, such as machine learning. Critical aspects involve addressing causality, often based on observational data, and dealing with the complex relationships between variables to uncover the overall structure of variable interactions, causal or not. Structure learning (SL) methods aim to automatically or semi-automatically reveal the structure of relationships among variables. The objective of this study is to delineate some of the potential contributions and limitations of SL methods when applied to social epidemiology topics and the search for determinants of healthcare system access. We applied SL techniques to a real-world dataset, namely the 2010 wave of the SIRS cohort, which included a sample of 3006 adults from the Paris region, France. Healthcare utilization, encompassing both direct and indirect access to care, was the primary outcome. Candidate determinants included health status, demographic characteristics, and socio-cultural and economic positions. We present two approaches: a non-automated epidemiological method (an initial expert-knowledge network and stepwise logistic regression models) and three SL techniques using various algorithms, with and without knowledge constraints. We compared the results based on the presence, direction, and strength of specific links within the produced networks. Although the interdependencies and relative strengths identified by both approaches were similar, the SL algorithms detected fewer associations with the outcome than the non-automated method, and relationships between variables were sometimes incorrectly oriented when a purely data-driven approach was used. SL algorithms can be valuable in exploratory stages, helping to generate new hypotheses or mine novel databases. However, results should be validated against prior knowledge and supplemented with additional confirmatory analyses. Full article
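The abstract does not name the specific SL algorithms used; as a generic illustration, constraint-based structure learning starts by building an undirected skeleton, keeping an edge between two variables only when a statistical test rejects their independence. A minimal sketch on toy discrete data (variable names and data are invented for illustration):

```python
from itertools import combinations
from scipy.stats import chi2_contingency
import pandas as pd

def skeleton(df, alpha=0.01):
    """First step of constraint-based structure learning: keep an
    undirected edge between two discrete variables only if a
    chi-square test rejects their marginal independence."""
    edges = []
    for x, y in combinations(df.columns, 2):
        table = pd.crosstab(df[x], df[y])
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            edges.append((x, y))
    return edges

# Toy data: B copies A (strongly dependent); C alternates independently.
df = pd.DataFrame({
    "A": [0, 0, 1, 1] * 50,
    "B": [0, 0, 1, 1] * 50,
    "C": [0, 1] * 100,
})
edges = skeleton(df)  # only the A–B edge survives
```

Orienting the surviving edges is the harder step, and, as the abstract notes, purely data-driven orientation can point edges the wrong way; this is precisely where knowledge constraints from domain experts help.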

10 pages, 2464 KiB  
Proceeding Paper
CAVE Automatic Virtual Environment Technology: A Patent Analysis
by Fatma Beji, William de Paula Ferreira, Isabelle Pivotto Dabat and Vitor Matias
Eng. Proc. 2025, 89(1), 9; https://doi.org/10.3390/engproc2025089009 - 24 Feb 2025
Abstract
Cave automatic virtual environment (CAVE) technology provides a highly immersive virtual reality (VR) experience, transcending the traditional boundaries of VR head-mounted devices. CAVE is applied in many fields, including education, construction, healthcare, and manufacturing. Despite its relevance, studies examining the evolution of CAVE technology and its research directions are still lacking. To address this gap, we analyzed patents involving CAVE technology to understand its development and identify opportunities for future research, development, and innovation. Patent data were collected from the Lens database and analyzed using data mining techniques. An increasing number of CAVE patents have been granted, reflecting significant growth and investment in this field. The results highlight emerging trends in the development of CAVE systems, emphasizing varied technical configurations and innovative applications across a wide range of fields. Full article
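The core of a patent trend analysis like the one described above is a per-year count of granted patents. A minimal pandas sketch (the field names below are assumptions for illustration, not the actual Lens export schema):

```python
import pandas as pd

# Hypothetical records shaped like a patent-database export.
patents = pd.DataFrame({
    "lens_id": ["p1", "p2", "p3", "p4", "p5"],
    "publication_year": [2018, 2020, 2020, 2021, 2021],
    "title": ["CAVE display rig", "Projection system", "Immersive room",
              "Tracking for CAVE", "Multi-wall VR"],
})

# Patents per year: the basic trend series behind a
# "growing number of patents granted" claim.
per_year = patents.groupby("publication_year").size()
```

Plotting `per_year` (or fitting a trend line to it) is the usual next step for claims about growth over time.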