Search Results (2,875)

Search Parameters:
Keywords = personal computers

24 pages, 1380 KiB  
Article
Critical Smart Functions for Smart Living Based on User Perspectives
by Benjamin Botchway, Frank Ato Ghansah, David John Edwards, Ebenezer Kumi-Amoah and Joshua Amo-Larbi
Buildings 2025, 15(15), 2727; https://doi.org/10.3390/buildings15152727 (registering DOI) - 1 Aug 2025
Abstract
Smart living is strongly promoted to enhance the quality of life via the application of innovative solutions, and this is driven by domain specialists and policymakers, including designers, urban planners, computer engineers, and property developers. Nonetheless, the actual user, whose views ought to be considered during the design and development of smart living systems, has received little attention. Thus, this study aims to identify and examine the critical smart functions to achieve smart living in smart buildings based on occupants’ perceptions. The aim is achieved using a sequential quantitative research method involving a literature review and 221 valid survey responses gathered from a case of a smart student residence in Hong Kong. The method is further integrated with descriptive statistics, the Kruskal–Wallis test, and the criticality test. The results were validated via a post-survey with related experts. Twenty-six critical smart functions for smart living were revealed, with the top three including the ability to protect personal data and information privacy, provide real-time safety and security, and the ability to be responsive to users’ needs. A need was discovered to consider the context of buildings during the design of smart living systems, and the recommendation is for professionals to understand the kind of digital technology to be integrated into a building by strongly considering the context of the building and how smart living will be achieved within it based on users’ perceptions. The study provides valuable insights into the occupants’ perceptions of critical smart features/functions for policymakers and practitioners to consider in the construction of smart living systems, specifically students’ smart buildings. This study contributes to knowledge by identifying the critical smart functions to achieve smart living based on occupants’ perceptions of smart living by considering the specific context of a smart student building facility constructed in Hong Kong. Full article
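
As an illustration of the kind of analysis described above, the sketch below runs a Kruskal–Wallis test on hypothetical Likert ratings of one candidate smart function and applies a simple criticality cutoff; the groups, ratings, and 0.80 threshold are invented for illustration and are not taken from the study.

```python
# Hypothetical Likert ratings (1-5) of one smart function from three occupant groups.
import numpy as np
from scipy import stats

undergrad = np.array([5, 4, 5, 3, 4, 5, 4, 4])
postgrad  = np.array([4, 4, 5, 5, 3, 4, 5])
staff     = np.array([3, 4, 4, 3, 5, 4])

# Kruskal-Wallis test: do the three groups rate this function differently?
h_stat, p_value = stats.kruskal(undergrad, postgrad, staff)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")

# Toy criticality screen: normalized mean rating against an arbitrary 0.80 cutoff.
all_ratings = np.concatenate([undergrad, postgrad, staff])
criticality = all_ratings.mean() / 5.0
print(f"criticality index = {criticality:.2f} ->",
      "critical" if criticality > 0.80 else "not critical")
```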

34 pages, 2929 KiB  
Review
Recent Advances in PET and Radioligand Therapy for Lung Cancer: FDG and FAP
by Eun Jeong Lee, Hyun Woo Chung, Young So, In Ae Kim, Hee Joung Kim and Kye Young Lee
Cancers 2025, 17(15), 2549; https://doi.org/10.3390/cancers17152549 (registering DOI) - 1 Aug 2025
Abstract
Lung cancer is one of the most common cancers and the leading cause of cancer-related death worldwide. Despite advancements, the overall survival rate for lung cancer remains between 10% and 20% in most countries. However, recent progress in diagnostic tools and therapeutic strategies has led to meaningful improvements in survival outcomes, highlighting the growing importance of personalized management based on accurate disease assessment. 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) has become essential in the management of lung cancer, serving as a key imaging modality for initial diagnosis, staging, treatment response assessment, and follow-up evaluation. Recent developments in radiomics and artificial intelligence (AI), including machine learning and deep learning, have revolutionized the analysis of complex imaging data, enhancing the diagnostic and predictive capabilities of FDG PET/CT in lung cancer. However, the limitations of FDG, including its low specificity for malignancy, have driven the development of novel oncologic radiotracers. One such target is fibroblast activation protein (FAP), a type II transmembrane glycoprotein that is overexpressed in activated cancer-associated fibroblasts within the tumor microenvironment of various epithelial cancers. As a result, FAP-targeted radiopharmaceuticals represent a novel theranostic approach, offering the potential to integrate PET imaging with radioligand therapy (RLT). In this review, we provide a comprehensive overview of FDG PET/CT in lung cancer, along with recent advances in AI. Additionally, we discuss FAP-targeted radiopharmaceuticals for PET imaging and their potential application in RLT for the personalized management of lung cancer. Full article
(This article belongs to the Special Issue Molecular PET Imaging in Cancer Metabolic Studies)
19 pages, 2196 KiB  
Article
User-Centered Design of a Computer Vision System for Monitoring PPE Compliance in Manufacturing
by Luis Alberto Trujillo-Lopez, Rodrigo Alejandro Raymundo-Guevara and Juan Carlos Morales-Arevalo
Computers 2025, 14(8), 312; https://doi.org/10.3390/computers14080312 (registering DOI) - 1 Aug 2025
Abstract
In manufacturing environments, the proper use of Personal Protective Equipment (PPE) is essential to prevent workplace accidents. Despite this need, existing PPE monitoring methods remain largely manual and suffer from limited coverage, significant errors, and inefficiencies. This article focuses on addressing this deficiency by designing a computer vision desktop application for automated monitoring of PPE use. This system uses lightweight YOLOv8 models, developed to run on the local system and operate even in industrial locations with limited network connectivity. Using a Lean UX approach, the development of the system involved creating empathy maps, assumptions, product backlog, followed by high-fidelity prototype interface components. C4 and physical diagrams helped define the system architecture to facilitate modifiability, scalability, and maintainability. Usability was verified using the System Usability Scale (SUS), with a score of 87.6/100 indicating “excellent” usability. The findings demonstrate that a user-centered design approach, considering user experience and technical flexibility, can significantly advance the utility and adoption of AI-based safety tools, especially in small- and medium-sized manufacturing operations. This article delivers a validated and user-centered design solution for implementing machine vision systems into manufacturing safety processes, simplifying the complexities of utilizing advanced AI technologies and their practical application in resource-limited environments. Full article
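
For readers unfamiliar with the SUS figure quoted above, the following minimal sketch applies the standard SUS scoring rule (odd items contribute score − 1, even items contribute 5 − score, and the sum is scaled by 2.5) to one hypothetical respondent; the responses are invented and are not the study’s data.

```python
# Minimal sketch of standard System Usability Scale (SUS) scoring.
def sus_score(responses):
    """responses: list of ten 1-5 Likert answers, item 1 first."""
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive, even items negative
    return total * 2.5                                # scale 0-40 contribution to 0-100

print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))      # one hypothetical participant -> 87.5
```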

25 pages, 2082 KiB  
Article
XTTS-Based Data Augmentation for Profanity Keyword Recognition in Low-Resource Speech Scenarios
by Shin-Chi Lai, Yi-Chang Zhu, Szu-Ting Wang, Yen-Ching Chang, Ying-Hsiu Hung, Jhen-Kai Tang and Wen-Kai Tsai
Appl. Syst. Innov. 2025, 8(4), 108; https://doi.org/10.3390/asi8040108 - 31 Jul 2025
Abstract
As voice cloning technology rapidly advances, the risk of personal voices being misused by malicious actors for fraud or other illegal activities has significantly increased, making the collection of speech data increasingly challenging. To address this issue, this study proposes a data augmentation method based on XText-to-Speech (XTTS) synthesis to tackle the challenges of small-sample, multi-class speech recognition, using profanity as a case study to achieve high-accuracy keyword recognition. Two models were therefore evaluated: a CNN model (Proposed-I) and a CNN-Transformer hybrid model (Proposed-II). Proposed-I leverages local feature extraction, improving accuracy on a real human speech (RHS) test set from 55.35% without augmentation to 80.36% with XTTS-enhanced data. Proposed-II integrates CNN’s local feature extraction with Transformer’s long-range dependency modeling, further boosting test set accuracy to 88.90% while reducing the parameter count by approximately 41%, significantly enhancing computational efficiency. Compared to a previously proposed incremental architecture, the Proposed-II model achieves an 8.49% higher accuracy while reducing parameters by about 98.81% and MACs by about 98.97%, demonstrating exceptional resource efficiency. By utilizing XTTS and public corpora to generate a novel keyword speech dataset, this study enhances sample diversity and reduces reliance on large-scale original speech data. Experimental analysis reveals that an optimal synthetic-to-real speech ratio of 1:5 significantly improves the overall system accuracy, effectively addressing data scarcity. Additionally, the Proposed-I and Proposed-II models achieve accuracies of 97.54% and 98.66%, respectively, in distinguishing real from synthetic speech, demonstrating their strong potential for speech security and anti-spoofing applications. Full article
(This article belongs to the Special Issue Advancements in Deep Learning and Its Applications)
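
The 1:5 synthetic-to-real ratio reported above can be illustrated with a small data-assembly sketch; the file names, label scheme, and sampling policy below are placeholders rather than the authors’ pipeline.

```python
import random

# Hypothetical clip lists: (file name, keyword id). Real recordings vs. XTTS output.
real_clips = [(f"real_{i:04d}.wav", i % 10) for i in range(5000)]
synthetic_clips = [(f"xtts_{i:04d}.wav", i % 10) for i in range(20000)]

random.seed(0)
n_synth = len(real_clips) // 5                       # 1 synthetic clip per 5 real clips
train_set = real_clips + random.sample(synthetic_clips, n_synth)
random.shuffle(train_set)
print(f"{len(train_set)} training clips, {n_synth} synthetic")
```
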
24 pages, 2325 KiB  
Review
Personalization of AI-Based Digital Twins to Optimize Adaptation in Industrial Design and Manufacturing—Review
by Izabela Rojek, Dariusz Mikołajewski, Ewa Dostatni, Jan Cybulski and Mirosław Kozielski
Appl. Sci. 2025, 15(15), 8525; https://doi.org/10.3390/app15158525 (registering DOI) - 31 Jul 2025
Abstract
The growing scale of big data and artificial intelligence (AI)-based models has heightened the urgency of developing real-time digital twins (DTs), particularly those capable of simulating personalized behavior in dynamic environments. In this study, we examine the personalization of AI-based digital twins (DTs), with a focus on overcoming computational latencies that hinder real-time responses—especially in complex, large-scale systems and networks. We use bibliometric analysis to map current trends, prevailing themes, and technical challenges in this field. The key findings highlight the growing emphasis on scalable model architectures, multimodal data integration, and the use of high-performance computing platforms. While existing research has focused on model decomposition, structural optimization, and algorithmic integration, there remains a need for fast DT platforms that support diverse user requirements. This review synthesizes these insights to outline new directions for accelerating adaptation and enhancing personalization. By providing a structured overview of the current research landscape, this study contributes to a better understanding of how AI and edge computing can drive the development of the next generation of real-time personalized DTs. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

23 pages, 5770 KiB  
Article
Assessment of Influencing Factors and Robustness of Computable Image Texture Features in Digital Images
by Diego Andrade, Howard C. Gifford and Mini Das
Tomography 2025, 11(8), 87; https://doi.org/10.3390/tomography11080087 (registering DOI) - 31 Jul 2025
Abstract
Background/Objectives: There is significant interest in using texture features to extract hidden image-based information. In medical imaging applications using radiomics, AI, or personalized medicine, the quest is to extract patient or disease specific information while being insensitive to other system or processing variables. While we use digital breast tomosynthesis (DBT) to show these effects, our results would be generally applicable to a wider range of other imaging modalities and applications. Methods: We examine factors in texture estimation methods, such as quantization, pixel distance offset, and region of interest (ROI) size, that influence the magnitudes of these readily computable and widely used image texture features (specifically Haralick’s gray level co-occurrence matrix (GLCM) textural features). Results: Our results indicate that quantization is the most influential of these parameters, as it controls the size of the GLCM and range of values. We propose a new multi-resolution normalization (by either fixing ROI size or pixel offset) that can significantly reduce quantization magnitude disparities. We show reduction in mean differences in feature values by orders of magnitude; for example, reducing it to 7.34% between quantizations of 8–128, while preserving trends. Conclusions: When combining images from multiple vendors in a common analysis, large variations in texture magnitudes can arise due to differences in post-processing methods like filters. We show that significant changes in GLCM magnitude variations may arise simply due to the filter type or strength. These trends can also vary based on estimation variables (like offset distance or ROI) that can further complicate analysis and robustness. We show pathways to reduce sensitivity to such variations due to estimation methods while increasing the desired sensitivity to patient-specific information such as breast density. Finally, we show that our results obtained from simulated DBT images are consistent with what we see when applied to clinical DBT images. Full article
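
A rough sketch of the quantization effect discussed above, using scikit-image’s GLCM utilities on a stand-in ROI; the ROI, offsets, and feature choices are assumptions for illustration, not the study’s configuration.

```python
# Compare GLCM texture features at two quantization levels on the same ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)   # stand-in ROI

for levels in (8, 128):
    q = (roi.astype(np.float64) * levels / 256).astype(np.uint8)   # re-quantize to `levels` gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
    print(f"levels={levels:3d}  contrast={contrast:8.2f}  homogeneity={homogeneity:.3f}")
```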

15 pages, 608 KiB  
Article
A Personal Privacy-Ensured User Authentication Scheme
by Ya-Fen Chang, Wei-Liang Tai and Ting-Yu Chang
Electronics 2025, 14(15), 3072; https://doi.org/10.3390/electronics14153072 (registering DOI) - 31 Jul 2025
Abstract
User authentication verifies the legitimacy of users and prevents service providers from offering services to unauthorized parties. The concept is widely applied in various scenarios, including everyday access control systems and IoT applications. With growing concerns about personal privacy, ensuring user anonymity has become increasingly important. In addition to privacy, user convenience is also a key factor influencing the willingness to adopt a system. To address these concerns, we propose a user authentication scheme that ensures personal privacy. The system consists of a backend server, multiple users, and multiple control units. Each user is issued or equipped with an authentication unit. An authorized user can be authenticated by a control unit, with assistance from the backend server, without revealing their identity to the control unit. The scheme is suitable for applications requiring privacy-preserving authentication. Furthermore, to enhance generality, the proposed design ensures computational efficiency and allows the authentication unit to adapt to specific application requirements. Full article
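
The paper’s protocol is not specified in the abstract; purely as a generic illustration of server-assisted, pseudonymous authentication in a three-party setting (authentication unit, control unit, backend server), a hypothetical HMAC-based exchange might look like the sketch below. It is not the proposed scheme.

```python
import hmac, hashlib, os, secrets

# Backend server: maps rotating pseudonyms to shared keys (provisioned out of band).
SERVER_KEYS = {"pseudonym-42": os.urandom(32)}
UNIT_KEY = SERVER_KEYS["pseudonym-42"]        # the authentication unit holds its own copy

def auth_unit_response(pseudonym, key, nonce):
    # The unit proves possession of the key; no real identity leaves the unit.
    return pseudonym, hmac.new(key, nonce, hashlib.sha256).digest()

def backend_verify(pseudonym, nonce, tag):
    key = SERVER_KEYS.get(pseudonym)
    expected = hmac.new(key, nonce, hashlib.sha256).digest() if key else b""
    return key is not None and hmac.compare_digest(tag, expected)

# Control unit: issues a fresh nonce, forwards the response, learns only pass/fail.
nonce = secrets.token_bytes(16)
pseudonym, tag = auth_unit_response("pseudonym-42", UNIT_KEY, nonce)
print("access granted" if backend_verify(pseudonym, nonce, tag) else "access denied")
```
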
31 pages, 2007 KiB  
Review
Artificial Intelligence-Driven Strategies for Targeted Delivery and Enhanced Stability of RNA-Based Lipid Nanoparticle Cancer Vaccines
by Ripesh Bhujel, Viktoria Enkmann, Hannes Burgstaller and Ravi Maharjan
Pharmaceutics 2025, 17(8), 992; https://doi.org/10.3390/pharmaceutics17080992 - 30 Jul 2025
Abstract
The convergence of artificial intelligence (AI) and nanomedicine has transformed cancer vaccine development, particularly in optimizing RNA-loaded lipid nanoparticles (LNPs). Stability and targeted delivery are major obstacles to the clinical translation of promising RNA-LNP vaccines for cancer immunotherapy. This systematic review analyzes the AI’s impact on LNP engineering through machine learning-driven predictive models, generative adversarial networks (GANs) for novel lipid design, and neural network-enhanced biodistribution prediction. AI reduces the therapeutic development timeline through accelerated virtual screening of millions of lipid combinations, compared to conventional high-throughput screening. Furthermore, AI-optimized LNPs demonstrate improved tumor targeting. GAN-generated lipids show structural novelty while maintaining higher encapsulation efficiency; graph neural networks predict RNA-LNP binding affinity with high accuracy vs. experimental data; digital twins reduce lyophilization optimization from years to months; and federated learning models enable multi-institutional data sharing. We propose a framework to address key technical challenges: training data quality (min. 15,000 lipid structures), model interpretability (SHAP > 0.65), and regulatory compliance (21CFR Part 11). AI integration reduces manufacturing costs and makes personalized cancer vaccine affordable. Future directions need to prioritize quantum machine learning for stability prediction and edge computing for real-time formulation modifications. Full article

13 pages, 894 KiB  
Article
Enhancing and Not Replacing Clinical Expertise: Improving Named-Entity Recognition in Colonoscopy Reports Through Mixed Real–Synthetic Training Sources
by Andrei-Constantin Ioanovici, Andrei-Marian Feier, Marius-Ștefan Mărușteri, Alina-Dia Trâmbițaș-Miron and Daniela-Ecaterina Dobru
J. Pers. Med. 2025, 15(8), 334; https://doi.org/10.3390/jpm15080334 - 30 Jul 2025
Abstract
Background/Objectives: In routine practice, colonoscopy findings are saved as unstructured free text, limiting secondary use. Accurate named-entity recognition (NER) is essential to unlock these descriptions for quality monitoring, personalized medicine and research. We compared named-entity recognition (NER) models trained on real, synthetic, and mixed data to determine whether privacy preserving synthetic reports can boost clinical information extraction. Methods: Three Spark NLP biLSTM CRF models were trained on (i) 100 manually annotated Romanian colonoscopy reports (ModelR), (ii) 100 prompt-generated synthetic reports (ModelS), and (iii) a 1:1 mix (ModelM). Performance was tested on 40 unseen reports (20 real, 20 synthetic) for seven entities. Micro-averaged precision, recall, and F1-score values were computed; McNemar tests with Bonferroni correction assessed pairwise differences. Results: ModelM outperformed single-source models (precision 0.95, recall 0.93, F1 0.94) and was significantly superior to ModelR (F1 0.70) and ModelS (F1 0.64; p < 0.001 for both). ModelR maintained high accuracy on real text (F1 = 0.90), but its accuracy fell when tested on synthetic data (0.47); the reverse was observed for ModelS (F1 = 0.99 synthetic, 0.33 real). McNemar χ2 statistics (64.6 for ModelM vs. ModelR; 147.0 for ModelM vs. ModelS) greatly exceeded the Bonferroni-adjusted significance threshold (α = 0.0167), confirming that the observed performance gains were unlikely to be due to chance. Conclusions: Synthetic colonoscopy descriptions are a valuable complement, but not a substitute for real annotations, while AI is helping human experts, not replacing them. Training on a balanced mix of real and synthetic data can help to obtain robust, generalizable NER models able to structure free-text colonoscopy reports, supporting large-scale, privacy-preserving colorectal cancer surveillance and personalized follow-up. Full article
(This article belongs to the Special Issue Clinical Updates on Personalized Upper Gastrointestinal Endoscopy)
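
The McNemar comparison with a Bonferroni-adjusted threshold described above can be sketched as follows; the contingency counts are invented, and only the α = 0.05/3 ≈ 0.0167 threshold mirrors the abstract.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Paired per-entity outcomes for two models (counts are invented):
# rows = ModelM correct / wrong, columns = ModelR correct / wrong.
table = [[950, 80],
         [15, 55]]
result = mcnemar(table, exact=False, correction=True)    # chi-square version of McNemar's test
alpha = 0.05 / 3                                          # Bonferroni over three pairwise comparisons
print(f"chi2 = {result.statistic:.1f}, p = {result.pvalue:.2e}, "
      f"significant at alpha = {alpha:.4f}: {result.pvalue < alpha}")
```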

22 pages, 1588 KiB  
Article
Scaffold-Free Functional Deconvolution Identifies Clinically Relevant Metastatic Melanoma EV Biomarkers
by Shin-La Shu, Shawna Benjamin-Davalos, Xue Wang, Eriko Katsuta, Megan Fitzgerald, Marina Koroleva, Cheryl L. Allen, Flora Qu, Gyorgy Paragh, Hans Minderman, Pawel Kalinski, Kazuaki Takabe and Marc S. Ernstoff
Cancers 2025, 17(15), 2509; https://doi.org/10.3390/cancers17152509 - 30 Jul 2025
Abstract
Background: Melanoma metastasis, driven by tumor microenvironment (TME)-mediated crosstalk facilitated by extracellular vesicles (EVs), remains a major therapeutic challenge. A critical barrier to clinical translation is the overlap in protein cargo between tumor-derived and healthy cell EVs. Objective: To address this, we developed Scaffold-free Functional Deconvolution (SFD), a novel computational approach that leverages a comprehensive healthy cell EV protein database to deconvolute non-oncogenic background signals. Methods: Beginning with 1915 proteins (identified by MS/MS analysis on an Orbitrap Fusion Lumos Mass Spectrometer using the IonStar workflow) from melanoma EVs isolated using REIUS, SFD applies four sequential filters: exclusion of normal melanocyte EV proteins, prioritization of metastasis-linked entries (HCMDB), refinement via melanocyte-specific databases, and validation against TCGA survival data. Results: This workflow identified 21 high-confidence targets implicated in metabolism-associated acidification, immune modulation, and oncogenesis, which were analyzed for associations with reduced disease-free and overall survival. SFD’s versatility was further demonstrated by surfaceome profiling, confirming enrichment of B7-H3 (CD276), ICAM1, and MIC-1 (GDF-15) in metastatic melanoma EVs via Western blot and flow cytometry. Meta-analysis using Vesiclepedia and STRING categorized these targets into metabolic, immune, and oncogenic drivers, revealing a dense interaction network. Conclusions: Our results highlight SFD as a powerful tool for identifying clinically relevant biomarkers and therapeutic targets within melanoma EVs, with potential applications in drug development and personalized medicine. Full article
(This article belongs to the Section Methods and Technologies Development)
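
The four sequential SFD filters read naturally as set operations; the toy sketch below illustrates that flow with placeholder protein identifiers and reference sets, not the study’s databases.

```python
# Toy sketch of sequential filtering: each step shrinks the melanoma-EV protein list.
melanoma_ev = {"CD276", "ICAM1", "GDF15", "ACTB", "GAPDH", "TYR", "PMEL"}
normal_melanocyte_ev = {"ACTB", "GAPDH"}                 # filter 1: subtract healthy-EV background
metastasis_linked = {"CD276", "ICAM1", "GDF15", "TYR"}   # filter 2: keep metastasis-linked entries (e.g., HCMDB)
melanocyte_specific = {"CD276", "ICAM1", "GDF15"}        # filter 3: refine with melanocyte-specific resources
survival_associated = {"CD276", "ICAM1", "GDF15"}        # filter 4: keep entries associated with survival (e.g., TCGA)

candidates = melanoma_ev - normal_melanocyte_ev
candidates &= metastasis_linked
candidates &= melanocyte_specific
candidates &= survival_associated
print(sorted(candidates))   # high-confidence targets surviving all four filters
```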

26 pages, 14606 KiB  
Review
Attribution-Based Explainability in Medical Imaging: A Critical Review on Explainable Computer Vision (X-CV) Techniques and Their Applications in Medical AI
by Kazi Nabiul Alam, Pooneh Bagheri Zadeh and Akbar Sheikh-Akbari
Electronics 2025, 14(15), 3024; https://doi.org/10.3390/electronics14153024 - 29 Jul 2025
Abstract
One of the largest future applications of computer vision is in the healthcare industry. Computer vision tasks are generally implemented in diverse medical imaging scenarios, including detecting or classifying diseases, predicting potential disease progression, analyzing cancer data for advancing future research, and conducting genetic analysis for personalized medicine. However, a critical drawback of using Computer Vision (CV) approaches is their limited reliability and transparency. Clinicians and patients must comprehend the rationale behind predictions or results to ensure trust and ethical deployment in clinical settings. This demonstrates the adoption of the idea of Explainable Computer Vision (X-CV), which enhances vision-relative interpretability. Among various methodologies, attribution-based approaches are widely employed by researchers to explain medical imaging outputs by identifying influential features. This article solely aims to explore how attribution-based X-CV methods work in medical imaging, what they are good for in real-world use, and what their main limitations are. This study evaluates X-CV techniques by conducting a thorough review of relevant reports, peer-reviewed journals, and methodological approaches to obtain an adequate understanding of attribution-based approaches. It explores how these techniques tackle computational complexity issues, improve diagnostic accuracy and aid clinical decision-making processes. This article intends to present a path that generalizes the concept of trustworthiness towards AI-based healthcare solutions. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)
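
As a concrete and deliberately minimal example of one attribution-based technique the review covers, vanilla gradient saliency can be computed as below; the model and input are placeholders, not a medical imaging network.

```python
# Vanilla gradient saliency: gradient of the class score with respect to the input pixels.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

image = torch.randn(1, 1, 64, 64, requires_grad=True)   # stand-in for a scan
score = model(image)[0, 1]                               # score of the "disease" class
score.backward()
saliency = image.grad.abs().squeeze()                    # per-pixel attribution map
print(saliency.shape, float(saliency.max()))
```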

11 pages, 275 KiB  
Article
Polygenic Score for Body Mass Index Is Associated with Weight Loss and Lipid Outcomes After Metabolic and Bariatric Surgery
by Luana Aldegheri, Chiara Cipullo, Natalia Rosso, Eulalia Catamo, Biagio Casagranda, Pablo Giraudi, Nicolò de Manzini, Silvia Palmisano and Antonietta Robino
Int. J. Mol. Sci. 2025, 26(15), 7337; https://doi.org/10.3390/ijms26157337 - 29 Jul 2025
Abstract
Metabolic and bariatric surgery (MBS) is an effective treatment for severe obesity, though individual responses vary widely, partly due to genetic predisposition. This study investigates the association of a body mass index (BMI) polygenic score (PGS) with weight loss and metabolic outcomes following surgery. A cohort of 225 patients undergoing MBS was analyzed at baseline (T0), six (T6), and twelve (T12) months, with anthropometric and biochemical parameters recorded at each time point. Total weight loss (TWL) and excess weight loss (EWL) percentages were calculated. PGS was computed using the LDpred-grid Bayesian method. The mean age was 45.9 ± 9.4 years. Males had a higher baseline prevalence of type 2 diabetes (T2D) and comorbidities (p < 0.001). Linear regression analysis confirmed an association between PGS and baseline BMI (p = 0.012). Moreover, mediation analysis revealed that baseline BMI mediated the effect of the PGS on %TWL at T12, with an indirect effect (p-value = 0.018). In contrast, high-density lipoprotein-cholesterol (HDL-C) at T6 and triglycerides (TG) at T12 showed direct associations with the PGS (p-value = 0.004 and p-value = 0.08, respectively), with no significant mediation by BMI. This study showed a BMI-mediated association of PGS with %TWL and a direct association with lipid changes, suggesting its potential integration into personalized obesity treatment. Full article
(This article belongs to the Special Issue Genetic and Molecular Mechanisms of Obesity)
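
The mediation result described above (PGS → baseline BMI → %TWL) can be illustrated with a simple Baron–Kenny-style sketch on synthetic data; the coefficients, noise, and data are invented, and the study’s LDpred-grid scoring and mediation procedure are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 225                                                   # cohort size from the abstract
pgs = rng.normal(size=n)                                  # standardized polygenic score (synthetic)
bmi0 = 42 + 2.0 * pgs + rng.normal(scale=3, size=n)       # mediator: baseline BMI (synthetic)
twl12 = 45 - 0.3 * bmi0 + rng.normal(scale=4, size=n)     # outcome: %TWL at 12 months (synthetic)

a = sm.OLS(bmi0, sm.add_constant(pgs)).fit().params[1]                             # PGS -> BMI
b = sm.OLS(twl12, sm.add_constant(np.column_stack([bmi0, pgs]))).fit().params[1]   # BMI -> %TWL, PGS held fixed
print(f"indirect (mediated) effect a*b = {a * b:.2f} %TWL per SD of PGS")
```
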
17 pages, 1777 KiB  
Article
Reduced-Order Model Based on Neural Network of Roll Bending
by Dmytro Svyetlichnyy
Appl. Sci. 2025, 15(15), 8418; https://doi.org/10.3390/app15158418 - 29 Jul 2025
Abstract
Effective real-time control systems require fast and accurate models. The roll bending models presented in this paper are proposed for a real-time control system for the design of the rolling schedule. Roll bending, together with other factors, defines the shape and convexity of the roll surface and, ultimately, the shape, convexity, and flatness of the final flat-rolled product. This paper presents accurate finite element (FE) models for a four-high mill. The models provide accurate solutions to the roll bending problem, taking into account the rolling force, the width of the rolled sheet (strip), the initial shape of the roll surface, and the anti-bending force. The results of the FE simulations are used to train three neural network (NN) models that solve one direct and two inverse tasks. The pre-trained NN models give accurate results and are much faster than the FE model (FEM): the calculation time on a personal computer for one case is 1 to 2 min for 3D FEM, 1 s for 2D FEM, and less than 1 ms for the NN. The results can be used immediately by other models of the real-time control system. A novelty of the research is the combined application of the FE method and an NN as a reduced-order model (ROM) to predict roll bending and to calculate the sheet (strip) convexity and the rolling and anti-bending forces needed to obtain the required convexity. Full article
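
A minimal sketch of the reduced-order-model idea, assuming FE-generated training samples are available: a small neural-network regressor maps rolling inputs to a bending value and then predicts almost instantly at run time. The features, ranges, and toy target function below are placeholders, not the paper’s FE data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000                                          # stand-in for FE simulation results
force = rng.uniform(5e6, 2e7, n)                  # rolling force [N]
width = rng.uniform(0.8, 2.0, n)                  # strip width [m]
antibend = rng.uniform(0.0, 2e6, n)               # anti-bending force [N]
X = np.column_stack([force, width, antibend])
y = 1e-9 * force * width - 4e-9 * antibend + rng.normal(0, 1e-5, n)   # toy "roll bending" [m]

rom = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
rom.fit(X, y)                                     # trained offline, like the FE runs themselves
print(rom.predict(X[:1]))                         # run-time prediction is essentially instantaneous
```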

21 pages, 602 KiB  
Review
Transforming Cancer Care: A Narrative Review on Leveraging Artificial Intelligence to Advance Immunotherapy in Underserved Communities
by Victor M. Vasquez, Molly McCabe, Jack C. McKee, Sharon Siby, Usman Hussain, Farah Faizuddin, Aadil Sheikh, Thien Nguyen, Ghislaine Mayer, Jennifer Grier, Subramanian Dhandayuthapani, Shrikanth S. Gadad and Jessica Chacon
J. Clin. Med. 2025, 14(15), 5346; https://doi.org/10.3390/jcm14155346 - 29 Jul 2025
Abstract
Purpose: Cancer immunotherapy has transformed oncology, but underserved populations face persistent disparities in access and outcomes. This review explores how artificial intelligence (AI) can help mitigate these barriers. Methods: We conducted a narrative review based on peer-reviewed literature selected for relevance to artificial intelligence, cancer immunotherapy, and healthcare challenges, without restrictions on publication date. We searched three major electronic databases: PubMed, IEEE Xplore, and arXiv, covering both biomedical and computational literature. The search included publications from January 2015 through April 2024 to capture contemporary developments in AI and cancer immunotherapy. Results: AI tools such as machine learning, natural language processing, and predictive analytics can enhance early detection, personalize treatment, and improve clinical trial representation for historically underrepresented populations. Additionally, AI-driven solutions can aid in managing side effects, expanding telehealth, and addressing social determinants of health (SDOH). However, algorithmic bias, privacy concerns, and data diversity remain major challenges. Conclusions: With intentional design and implementation, AI holds the potential to reduce disparities in cancer immunotherapy and promote more inclusive oncology care. Future efforts must focus on ethical deployment, inclusive data collection, and interdisciplinary collaboration. Full article
(This article belongs to the Special Issue Recent Advances in Immunotherapy of Cancer)

16 pages, 358 KiB  
Article
Artificial Intelligence in Curriculum Design: A Data-Driven Approach to Higher Education Innovation
by Thai Son Chu and Mahfuz Ashraf
Knowledge 2025, 5(3), 14; https://doi.org/10.3390/knowledge5030014 - 29 Jul 2025
Abstract
This paper shows that artificial intelligence is fundamentally transforming college curricula by enabling data-driven personalization, which enhances student outcomes and better aligns educational programs with evolving workforce demands. Specifically, predictive analytics, machine learning algorithms, and natural language processing were applied here, grounded in constructivist learning theory and Human–Computer Interaction principles, to evaluate student performance, identify at-risk students, and propose personalized learning pathways. Results indicated that the AI-based curriculum achieved a much higher course completion rate (89.72%) and retention rate (91.44%), and a lower dropout rate (4.98%), than the traditional model. Sentiment analysis of learner feedback showed a more positive learning experience, while regression and ANOVA analyses supported a genuine impact of AI on academic performance. Learning content delivery for each student was continuously improved based on individual learner characteristics and industry trends by AI-enabled recommender systems and adaptive learning models. Its advantages notwithstanding, the study emphasizes the need to address ethical concerns, ensure data privacy safeguards, and mitigate algorithmic bias before an equitable outcome can be claimed. These findings can inform institutions aspiring to adopt AI-driven models for curriculum innovation to build a more dynamic, responsive, and learner-centered educational ecosystem. Full article
(This article belongs to the Special Issue Knowledge Management in Learning and Education)
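
The predictive-analytics step described above (flagging at-risk students from engagement data) can be sketched with a simple classifier; the features, synthetic labels, and threshold below are invented for illustration and do not reflect the study’s models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
logins = rng.poisson(20, n)                   # LMS logins per month
submitted = rng.uniform(0, 1, n)              # share of assignments submitted
quiz = rng.uniform(40, 100, n)                # average quiz score
risk = (0.04 * (20 - logins) + 2.0 * (0.5 - submitted) + 0.02 * (70 - quiz)
        + rng.normal(0, 0.3, n)) > 0.3        # synthetic "at-risk" label

X = np.column_stack([logins, submitted, quiz])
X_tr, X_te, y_tr, y_te = train_test_split(X, risk, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # flagged students get tailored pathways
```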
