Search Results (944)

Search Parameters:
Keywords = medication information extraction

23 pages, 5770 KiB  
Article
Assessment of Influencing Factors and Robustness of Computable Image Texture Features in Digital Images
by Diego Andrade, Howard C. Gifford and Mini Das
Tomography 2025, 11(8), 87; https://doi.org/10.3390/tomography11080087 (registering DOI) - 31 Jul 2025
Abstract
Background/Objectives: There is significant interest in using texture features to extract hidden image-based information. In medical imaging applications using radiomics, AI, or personalized medicine, the quest is to extract patient- or disease-specific information while being insensitive to other system or processing variables. While we use digital breast tomosynthesis (DBT) to show these effects, our results would be generally applicable to a wide range of other imaging modalities and applications. Methods: We examine factors in texture estimation methods, such as quantization, pixel distance offset, and region of interest (ROI) size, that influence the magnitudes of these readily computable and widely used image texture features (specifically Haralick's gray level co-occurrence matrix (GLCM) textural features). Results: Our results indicate that quantization is the most influential of these parameters, as it controls the size of the GLCM and the range of its values. We propose a new multi-resolution normalization (by either fixing ROI size or pixel offset) that can significantly reduce quantization-related magnitude disparities. We show reductions in mean feature-value differences of orders of magnitude, for example, down to 7.34% between quantizations of 8 and 128, while preserving trends. Conclusions: When combining images from multiple vendors in a common analysis, large variations in texture magnitudes can arise due to differences in post-processing methods such as filters. We show that significant changes in GLCM magnitude variations may arise simply due to the filter type or strength. These trends can also vary based on estimation variables (such as offset distance or ROI size) that can further complicate analysis and robustness. We show pathways to reduce sensitivity to such variations due to estimation methods while increasing the desired sensitivity to patient-specific information such as breast density. Finally, we show that our results obtained from simulated DBT images are consistent with what we observe in clinical DBT images. Full article
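
As a rough illustration of the quantization dependence discussed above, the sketch below (a minimal example with a synthetic ROI and scikit-image, not the authors' pipeline) computes two Haralick GLCM features at different gray-level quantizations:

```python
# A minimal sketch, not the authors' pipeline: compute Haralick/GLCM features from the
# same synthetic ROI at several quantization levels to see how feature magnitudes shift.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.normal(0.5, 0.1, size=(64, 64))            # stand-in for a DBT ROI

for levels in (8, 32, 128):                           # candidate quantization levels (abstract cites 8-128)
    q = np.clip((roi - roi.min()) / (np.ptp(roi) + 1e-12), 0, 1)
    q = (q * (levels - 1)).astype(np.uint8)           # re-quantize the ROI
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    energy = graycoprops(glcm, "energy")[0, 0]
    print(f"levels={levels:3d}  contrast={contrast:8.3f}  energy={energy:.4f}")
```

Contrast typically grows and energy shrinks as the number of gray levels increases, which is the kind of magnitude disparity the proposed multi-resolution normalization is meant to reduce.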

19 pages, 7161 KiB  
Article
Dynamic Snake Convolution Neural Network for Enhanced Image Super-Resolution
by Weiqiang Xin, Ziang Wu, Qi Zhu, Tingting Bi, Bing Li and Chunwei Tian
Mathematics 2025, 13(15), 2457; https://doi.org/10.3390/math13152457 - 30 Jul 2025
Viewed by 118
Abstract
Image super-resolution (SR) is essential for enhancing image quality in critical applications, such as medical imaging and satellite remote sensing. However, existing methods are often limited in their ability to effectively process and integrate multi-scale information from fine textures to global structures. To address these limitations, this paper proposes DSCNN, a dynamic snake convolution neural network for enhanced image super-resolution. DSCNN optimizes feature extraction and network architecture to enhance both performance and efficiency. To improve feature extraction, the core innovation is a feature extraction and enhancement module with dynamic snake convolution that dynamically adjusts the convolution kernel's shape and position to better fit the image's geometric structures, significantly improving feature extraction. To optimize the network's structure, DSCNN employs an enhanced residual network framework. This framework utilizes parallel convolutional layers and a global feature fusion mechanism to further strengthen feature extraction capability and gradient flow efficiency. Additionally, the network incorporates a SwishReLU-based activation function and a multi-scale convolutional concatenation structure. This multi-scale design effectively captures both local details and global image structure, enhancing SR reconstruction. In summary, the proposed DSCNN outperforms existing methods in both objective metrics and visual perception (e.g., our method achieved optimal PSNR and SSIM results on the Set5 ×4 dataset). Full article
(This article belongs to the Special Issue Structural Networks for Image Application)
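
As a small, self-contained illustration of the reported evaluation metrics (not of DSCNN itself), the following computes PSNR and SSIM with scikit-image on stand-in images:

```python
# A minimal sketch of the evaluation metrics only (not of DSCNN): PSNR and SSIM between
# a reference image and a perturbed stand-in for a super-resolved output.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
hr = rng.random((128, 128))                               # stand-in ground-truth image
sr = np.clip(hr + rng.normal(0, 0.02, hr.shape), 0, 1)    # stand-in model output

print(f"PSNR = {peak_signal_noise_ratio(hr, sr, data_range=1.0):.2f} dB")
print(f"SSIM = {structural_similarity(hr, sr, data_range=1.0):.4f}")
```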

35 pages, 4940 KiB  
Article
A Novel Lightweight Facial Expression Recognition Network Based on Deep Shallow Network Fusion and Attention Mechanism
by Qiaohe Yang, Yueshun He, Hongmao Chen, Youyong Wu and Zhihua Rao
Algorithms 2025, 18(8), 473; https://doi.org/10.3390/a18080473 - 30 Jul 2025
Viewed by 199
Abstract
Facial expression recognition (FER) is a critical research direction in artificial intelligence that is widely used in intelligent interaction, medical diagnosis, security monitoring, and other domains. These applications highlight its considerable practical value and social significance. FER models often need to run efficiently on mobile or edge devices, so research on lightweight facial expression recognition is particularly important. However, the feature extraction and classification methods of the lightweight convolutional neural network recognition algorithms in common use are not specifically optimized for the characteristics of facial expression images and fail to make full use of the feature information they contain. To address the lack of FER models that are both lightweight and effectively optimized for expression-specific feature extraction, this study proposes a novel network design tailored to the characteristics of facial expressions. Starting from the backbone architecture of the MobileNet V2 network, we design LightExNet, a lightweight convolutional neural network based on the fusion of deep and shallow layers, an attention mechanism, and a joint loss function. In the LightExNet architecture, deep and shallow features are first fused to fully extract the shallow features in the original image, reduce information loss, alleviate the vanishing-gradient problem as the number of convolutional layers increases, and achieve multi-scale feature fusion; the MobileNet V2 architecture is also streamlined to seamlessly integrate the deep and shallow networks. Second, drawing on the characteristics of facial expression features, a new channel and spatial attention mechanism is proposed to encode as much feature information from the different expression regions as possible, thereby effectively improving recognition accuracy. Finally, an improved center loss function is added to further improve the accuracy of expression classification, with corresponding measures taken to significantly reduce the computational cost of the joint loss function. LightExNet is tested on three mainstream facial expression datasets (Fer2013, CK+, and RAF-DB); it has 3.27 M parameters and 298.27 M FLOPs, and achieves accuracies of 69.17%, 97.37%, and 85.97% on the three datasets, respectively. Its comprehensive performance is better than current mainstream lightweight expression recognition algorithms such as MobileNet V2, IE-DBN, Self-Cure Net, Improved MobileViT, MFN, Ada-CM, and Parallel CNN (Convolutional Neural Network). The experimental results confirm that LightExNet effectively improves recognition accuracy and computational efficiency while reducing energy consumption and enhancing deployment flexibility. These advantages underscore its strong potential for real-world applications in lightweight facial expression recognition. Full article
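
Since the abstract highlights a joint objective with an improved center loss, here is a hedged sketch of the generic center-loss formulation in PyTorch (the standard version, assumed for illustration; LightExNet's specific improvements are not reproduced):

```python
# A minimal sketch of a generic center loss (an assumption; LightExNet's improved
# variant and its cost-reduction measures are not reproduced here).
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # half the mean squared distance between each embedding and its class center
        return 0.5 * (feats - self.centers[labels]).pow(2).sum(dim=1).mean()

# joint objective: cross-entropy + lambda * center loss (7 expression classes assumed)
ce, center, lam = nn.CrossEntropyLoss(), CenterLoss(7, 128), 0.01
feats, logits = torch.randn(32, 128), torch.randn(32, 7)
labels = torch.randint(0, 7, (32,))
loss = ce(logits, labels) + lam * center(feats, labels)
print(loss.item())
```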

16 pages, 589 KiB  
Article
CT-Based Radiomics Enhance Respiratory Function Analysis for Lung SBRT
by Alice Porazzi, Mattia Zaffaroni, Vanessa Eleonora Pierini, Maria Giulia Vincini, Aurora Gaeta, Sara Raimondi, Lucrezia Berton, Lars Johannes Isaksson, Federico Mastroleo, Sara Gandini, Monica Casiraghi, Gaia Piperno, Lorenzo Spaggiari, Juliana Guarize, Stefano Maria Donghi, Łukasz Kuncman, Roberto Orecchia, Stefania Volpe and Barbara Alicja Jereczek-Fossa
Bioengineering 2025, 12(8), 800; https://doi.org/10.3390/bioengineering12080800 - 25 Jul 2025
Viewed by 385
Abstract
Introduction: Radiomics is the extraction of non-invasive and reproducible quantitative imaging features, which may yield mineable information for clinical practice implementation. Quantification of lung function through radiomics could play a role in the management of patients with pulmonary lesions. The aim of this study is to test the capability of radiomic features to predict pulmonary function parameters, focusing on the diffusing capacity of the lungs for carbon monoxide (DLCO). Methods: Retrospective data were retrieved from electronic medical records of patients treated with Stereotactic Body Radiation Therapy (SBRT) at a single institution. Inclusion criteria were as follows: (1) SBRT treatment performed for primary early-stage non-small cell lung cancer (ES-NSCLC) or oligometastatic lung nodules, (2) availability of a simulation four-dimensional computed tomography (4DCT) scan, (3) availability of baseline spirometry data, (4) availability of baseline clinical data, and (5) written informed consent for the anonymized use of data. The gross tumor volume (GTV) was segmented on the 4DCT reconstructed phases representing maximum inhalation and maximum exhalation (Phase 0 and Phase 50, respectively), and radiomic features were extracted from the lung parenchyma after excluding the lesion(s). Features were clustered based on correlation using an iterative algorithm, keeping only those most associated with baseline and post-treatment DLCO. Three models were built to predict DLCO abnormality: the clinical model (clinical information only), the radiomic model (the radiomic score only), and the clinical-radiomic model (clinical information plus the radiomic score). For each, Model 1 was based on the features in Phase 0, Model 2 on the features in Phase 50, and Model 3 on the difference between the two phases. The AUC was used to compare their performances. Results: A total of 98 patients met the inclusion criteria. The Charlson Comorbidity Index (CCI) was the clinical variable most associated with baseline DLCO (p = 0.014), while the most associated radiomic features were mainly texture features and were similar between the two phases. Clinical-radiomic models were the best at predicting both baseline and post-treatment abnormal DLCO. In particular, the performances of the three clinical-radiomic models at predicting baseline abnormal DLCO were AUC1 = 0.72, AUC2 = 0.72, and AUC3 = 0.75 for Model 1, Model 2, and Model 3, respectively. Regarding the prediction of post-treatment abnormal DLCO, the performances of the three clinical-radiomic models were AUC1 = 0.91, AUC2 = 0.91, and AUC3 = 0.95, respectively. Conclusions: This study demonstrates that radiomic features extracted from healthy lung parenchyma on a 4DCT scan are associated with baseline pulmonary function parameters, showing that radiomics can add a layer of information in surrogate models for lung function assessment. Preliminary results suggest the potential applicability of these models for predicting post-SBRT lung function, warranting validation in larger, prospective cohorts. Full article
(This article belongs to the Special Issue Engineering the Future of Radiotherapy: Innovations and Challenges)
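
As a synthetic-data sketch of the model comparison described above (hypothetical variables, not the study's cohort), the following contrasts a clinical-only logistic model with a clinical-plus-radiomic-score model by AUC:

```python
# A synthetic-data sketch of the model comparison only (hypothetical variables, not the
# study's cohort): clinical-only vs. clinical + radiomic-score logistic models scored by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 98
cci = rng.poisson(3, n).astype(float)                  # clinical covariate (e.g., CCI)
rad_score = rng.normal(0, 1, n)                        # aggregated radiomic score
y = (0.4 * cci + 1.2 * rad_score + rng.normal(0, 1, n) > 1.5).astype(int)  # abnormal DLCO flag

for name, X in [("clinical", cci.reshape(-1, 1)),
                ("clinical + radiomic", np.column_stack([cci, rad_score]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)
    auc = roc_auc_score(yte, LogisticRegression().fit(Xtr, ytr).predict_proba(Xte)[:, 1])
    print(f"{name:20s} AUC = {auc:.2f}")
```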

17 pages, 2072 KiB  
Article
Barefoot Footprint Detection Algorithm Based on YOLOv8-StarNet
by Yujie Shen, Xuemei Jiang, Yabin Zhao and Wenxin Xie
Sensors 2025, 25(15), 4578; https://doi.org/10.3390/s25154578 - 24 Jul 2025
Viewed by 271
Abstract
This study proposes an optimized footprint recognition model based on an enhanced StarNet architecture for biometric identification in the security, medical, and criminal investigation fields. Conventional image recognition algorithms exhibit limitations in processing barefoot footprint images characterized by concentrated feature distributions and rich texture patterns. To address this, our framework integrates an improved StarNet into the backbone of the YOLOv8 architecture. Leveraging the unique advantages of element-wise multiplication, the redesigned backbone efficiently maps inputs to a high-dimensional nonlinear feature space without increasing channel dimensions, achieving enhanced representational capacity with low computational latency. Subsequently, an Encoder layer facilitates feature interaction within the backbone through multi-scale feature fusion and attention mechanisms, effectively extracting rich semantic information while maintaining computational efficiency. In the feature fusion part, a feature modulation block processes multi-scale features by synergistically combining global and local information, thereby reducing redundant computations and decreasing both the parameter count and computational complexity to achieve a lightweight model. Experimental evaluations on a proprietary barefoot footprint dataset demonstrate that the proposed model exhibits significant advantages in terms of parameter efficiency, recognition accuracy, and computational complexity. The number of parameters is reduced by 0.73 million, further improving the model's speed, and GFLOPs are reduced by 1.5, lowering the hardware requirements for model deployment. Recognition accuracy reaches 99.5%, with further improvements in model precision. Future research will explore how to capture shoeprint images with complex backgrounds from shoes worn at crime scenes, aiming to further enhance the model's recognition capabilities in more forensic scenarios. Full article
(This article belongs to the Special Issue Transformer Applications in Target Tracking)
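
The "unique advantages of element-wise multiplication" refer to the StarNet-style star operation; a minimal PyTorch sketch of a generic star block (an assumption about the general mechanism, not the authors' redesigned backbone) is shown below:

```python
# A minimal sketch of a generic StarNet-style block (an assumption, not the authors' backbone):
# two linear branches combined by element-wise multiplication, which lifts features into a
# higher-order space without adding channels.
import torch
import torch.nn as nn

class StarBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.f1 = nn.Conv2d(channels, channels, 1)     # branch 1
        self.f2 = nn.Conv2d(channels, channels, 1)     # branch 2
        self.act = nn.ReLU6()
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.dw(x)
        y = self.act(self.f1(y)) * self.f2(y)          # the "star": element-wise product
        return x + self.proj(y)                        # residual connection

print(StarBlock(32)(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```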

26 pages, 6051 KiB  
Article
A Novel Sound Coding Strategy for Cochlear Implants Based on Spectral Feature and Temporal Event Extraction
by Behnam Molaee-Ardekani, Rafael Attili Chiea, Yue Zhang, Julian Felding, Aswin Adris Wijetillake, Peter T. Johannesen, Enrique A. Lopez-Poveda and Manuel Segovia-Martínez
Technologies 2025, 13(8), 318; https://doi.org/10.3390/technologies13080318 - 23 Jul 2025
Viewed by 319
Abstract
This paper presents a novel cochlear implant (CI) sound coding strategy called Spectral Feature Extraction (SFE). The SFE is a novel Fast Fourier Transform (FFT)-based Continuous Interleaved Sampling (CIS) strategy that provides less-smeared spectral cues to CI patients compared to Crystalis, a predecessor strategy used in Oticon Medical devices. The study also explores how the SFE can be enhanced into a Temporal Fine Structure (TFS)-based strategy named Spectral Event Extraction (SEE), combining spectral sharpness with temporal cues. Background/Objectives: Many CI recipients understand speech in quiet settings but struggle with music and complex environments, increasing cognitive effort. De-smearing the power spectrum and extracting spectral peak features can reduce this load. The SFE targets feature extraction from spectral peaks, while the SEE enhances TFS-based coding by tracking these features across frames. Methods: The SFE strategy extracts spectral peaks and models them with synthetic pure tone spectra characterized by instantaneous frequency, phase, energy, and peak resemblance. This deblurs input peaks by estimating their center frequency. In SEE, synthetic peaks are tracked across frames to yield reliable temporal cues (e.g., zero-crossings) aligned with stimulation pulses. Strategy characteristics are analyzed using electrodograms. Results: A flexible Frequency Allocation Map (FAM) can be applied to both SFE and SEE strategies without being limited by FFT bandwidth constraints. Electrodograms of Crystalis and SFE strategies showed that SFE reduces spectral blurring and provides detailed temporal information of harmonics in speech and music. Conclusions: SFE and SEE are expected to enhance speech understanding, lower listening effort, and improve temporal feature coding. These strategies could benefit CI users, especially in challenging acoustic environments. Full article
(This article belongs to the Special Issue The Challenges and Prospects in Cochlear Implantation)
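
As a generic illustration of FFT-based spectral peak extraction in the spirit of SFE (not Oticon Medical's implementation), the following picks peaks in one windowed frame and refines each center frequency by parabolic interpolation:

```python
# A generic sketch of FFT-based spectral peak extraction (not Oticon Medical's SFE code):
# pick peaks in one windowed frame and refine each center frequency by parabolic interpolation.
import numpy as np
from scipy.signal import find_peaks

fs, n = 16000, 512
t = np.arange(n) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)  # toy harmonics

spec = np.abs(np.fft.rfft(frame * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)
peaks, _ = find_peaks(spec, height=spec.max() * 0.1)

for k in peaks:
    # parabolic interpolation around the peak bin de-blurs the bin-limited estimate
    a, b, c = np.log(spec[k - 1] + 1e-12), np.log(spec[k] + 1e-12), np.log(spec[k + 1] + 1e-12)
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    print(f"peak near {freqs[k]:7.1f} Hz -> refined {(k + delta) * fs / n:7.1f} Hz")
```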

15 pages, 1006 KiB  
Article
Framework for a Modular Emergency Departments Registry: A Case Study of the Tasmanian Emergency Care Outcomes Registry (TECOR)
by Viet Tran, Lauren Thurlow, Simone Page and Giles Barrington
Hospitals 2025, 2(3), 18; https://doi.org/10.3390/hospitals2030018 - 23 Jul 2025
Viewed by 216
Abstract
Background: The emergency department (ED) often represents the entry point to care for patients who require urgent medical attention or have no alternative for medical treatment. This has implications for the scope of practice and for how quality of care is measured. A diverse array of methodologies has been developed to evaluate the quality of clinical care, broadly including quality improvement (QI), quality assurance (QA), observational research (OR) and clinical quality registries (CQRs). Considering the overlap between QI, QA, OR and CQRs, we conceptualized a modular framework for TECOR to effectively and efficiently streamline clinical quality evaluations. Streamlining is both appropriate and justified as it reduces redundancy, enhances clarity and optimizes resource utilization, thereby allowing clinicians to focus on delivering high-quality patient care without being overwhelmed by excessive data and procedural complexities. The objective of this study is to describe the process for designing a modular framework for ED CQRs using TECOR as a case study. Methods: We performed a scoping audit of all quality projects performed in our ED over a 1-year period (1 January 2021 to 31 December 2021) as well as data mapping and categorical formulation of key themes from the TECOR dataset with clinical data sources. Both of these processes then informed the design of TECOR. Results: For the audit of quality projects, we identified 29 projects. The quality evaluation methodologies for these projects included 12 QI projects, 5 CQRs and 12 OR projects. Data mapping identified that clinical information was fragmented across 11 distinct data sources. Through thematic analysis during data mapping, we identified three extraction techniques: self-extractable, manual entry and on request. Conclusions: The modular framework for TECOR aims to provide an efficient, streamlined approach that caters to all aspects of clinical quality evaluation and enables higher throughput of clinician-led quality evaluations and improvements. TECOR is also an essential component in the development of a learning health system to drive evidence-based practice and will be the subject of future research. Full article

14 pages, 730 KiB  
Article
Opportunities and Limitations of Wrist-Worn Devices for Dyskinesia Detection in Parkinson’s Disease
by Alexander Johannes Wiederhold, Qi Rui Zhu, Sören Spiegel, Adrin Dadkhah, Monika Pötter-Nerger, Claudia Langebrake, Frank Ückert and Christopher Gundler
Sensors 2025, 25(14), 4514; https://doi.org/10.3390/s25144514 - 21 Jul 2025
Viewed by 324
Abstract
During the in-hospital optimization of dopaminergic dosage for Parkinson’s disease, drug-induced dyskinesias emerge as a common side effect. Wrist-worn devices present a substantial opportunity for continuous movement recording and the supportive identification of these dyskinesias. To bridge the gap between dyskinesia assessment and machine learning-enabled detection, the recorded information requires meaningful data representations. This study evaluates and compares two distinct representations of sensor data: a task-dependent, semantically grounded approach and automatically extracted large-scale time-series features. Each representation was assessed on public datasets to identify the best-performing machine learning model and subsequently applied to our own collected dataset to assess generalizability. Data representations incorporating semantic knowledge demonstrated comparable or superior performance to reported works, with peak F1 scores of 0.68. Generalization to our own dataset from clinical practice resulted in an observed F1 score of 0.53 using both setups. These results highlight the potential of semantic movement data analysis for dyskinesia detection. Dimensionality reduction in accelerometer-based movement data positively impacts performance, and models trained with semantically obtained features avoid overfitting. Expanding cohorts with standardized neurological assessments labeled by medical experts is essential for further improvements. Full article
(This article belongs to the Section Wearables)
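
A minimal sketch of the semantically grounded representation idea (synthetic accelerometer windows and hypothetical features such as a 1-4 Hz band-power ratio, not the study's pipeline), followed by an F1-scored classifier:

```python
# A minimal sketch with synthetic windows and hypothetical, interpretable features;
# not the study's actual feature set or cohort.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

fs = 50  # Hz, assumed wrist-sensor sampling rate

def window_features(acc: np.ndarray) -> np.ndarray:
    mag = np.linalg.norm(acc, axis=1)                  # magnitude of the 3-axis signal
    spec = np.abs(np.fft.rfft(mag - mag.mean()))
    freqs = np.fft.rfftfreq(mag.size, 1 / fs)
    band = (freqs >= 1) & (freqs <= 4)                 # band assumed relevant to dyskinetic movement
    return np.array([mag.std(), spec[band].sum() / (spec.sum() + 1e-12), freqs[spec.argmax()]])

rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(0, 1 + 0.5 * (i % 2), (fs * 5, 3))) for i in range(200)])
y = np.arange(200) % 2                                 # 1 = synthetic "dyskinetic" window
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1, stratify=y)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print("F1 =", round(f1_score(yte, clf.predict(Xte)), 2))
```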

15 pages, 4874 KiB  
Article
A Novel 3D Convolutional Neural Network-Based Deep Learning Model for Spatiotemporal Feature Mapping for Video Analysis: Feasibility Study for Gastrointestinal Endoscopic Video Classification
by Mrinal Kanti Dhar, Mou Deb, Poonguzhali Elangovan, Keerthy Gopalakrishnan, Divyanshi Sood, Avneet Kaur, Charmy Parikh, Swetha Rapolu, Gianeshwaree Alias Rachna Panjwani, Rabiah Aslam Ansari, Naghmeh Asadimanesh, Shiva Sankari Karuppiah, Scott A. Helgeson, Venkata S. Akshintala and Shivaram P. Arunachalam
J. Imaging 2025, 11(7), 243; https://doi.org/10.3390/jimaging11070243 - 18 Jul 2025
Viewed by 422
Abstract
Accurate analysis of medical videos remains a major challenge in deep learning (DL) due to the need for effective spatiotemporal feature mapping that captures both spatial detail and temporal dynamics. Despite advances in DL, most existing models in medical AI focus on static images, overlooking critical temporal cues present in video data. To bridge this gap, a novel DL-based framework is proposed for spatiotemporal feature extraction from medical video sequences. As a feasibility use case, this study focuses on gastrointestinal (GI) endoscopic video classification. A 3D convolutional neural network (CNN) is developed to classify upper and lower GI endoscopic videos using the hyperKvasir dataset, which contains 314 lower and 60 upper GI videos. To address data imbalance, 60 matched pairs of videos are randomly selected across 20 experimental runs. Videos are resized to 224 × 224, and the 3D CNN captures spatiotemporal information. A 3D version of the parallel spatial and channel squeeze-and-excitation (P-scSE) is implemented, and a new block called the residual with parallel attention (RPA) block is proposed by combining P-scSE3D with a residual block. To reduce computational complexity, a (2 + 1)D convolution is used in place of full 3D convolution. The model achieves an average accuracy of 0.933, precision of 0.932, recall of 0.944, F1-score of 0.935, and AUC of 0.933. It is also observed that the integration of P-scSE3D increased the F1-score by 7%. This preliminary work opens avenues for exploring various GI endoscopic video-based prospective studies. Full article
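
The (2+1)D factorization used in place of full 3D convolution can be sketched in PyTorch as follows (a generic implementation of the standard decomposition, not the authors' RPA block):

```python
# A minimal sketch of the standard (2+1)D factorization (not the authors' RPA block):
# a spatial 2D convolution followed by a temporal 1D convolution over video clips.
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (N, C, T, H, W)
        return self.act(self.temporal(self.act(self.spatial(x))))

clip = torch.randn(1, 3, 16, 224, 224)                 # 16 frames resized to 224 x 224
print(Conv2Plus1D(3, 32)(clip).shape)                  # torch.Size([1, 32, 16, 224, 224])
```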

13 pages, 1566 KiB  
Article
Turkish Chest X-Ray Report Generation Model Using the Swin Enhanced Yield Transformer (Model-SEY) Framework
by Murat Ucan, Buket Kaya and Mehmet Kaya
Diagnostics 2025, 15(14), 1805; https://doi.org/10.3390/diagnostics15141805 - 17 Jul 2025
Viewed by 285
Abstract
Background/Objectives: Extracting meaningful medical information from chest X-ray images and transcribing it into text is a complex task that requires a high level of expertise and directly affects clinical decision-making processes. Automatic reporting systems for this field in Turkish represent an important gap in scientific research, as they have not been sufficiently addressed in the existing literature. Methods: A deep learning-based approach called Model-SEY was developed with the aim of automatically generating Turkish medical reports from chest X-ray images. The Swin Transformer structure was used in the encoder part of the model to extract image features, while the text generation process was carried out using the cosmosGPT architecture, which was adapted specifically for the Turkish language. Results: With the permission of the ethics committee, a new dataset was created using image–report pairs obtained from Elazıg Fethi Sekin City Hospital and Indiana University Chest X-Ray dataset and experiments were conducted on this new dataset. In the tests conducted within the scope of the study, scores of 0.6412, 0.5335, 0.4395, 0.4395, 0.3716, and 0.2240 were obtained in BLEU-1, BLEU-2, BLEU-3, BLEU-4, and ROUGE word overlap evaluation metrics, respectively. Conclusions: Quantitative and qualitative analyses of medical reports autonomously generated by the proposed model have shown that they are meaningful and consistent. The proposed model is one of the first studies in the field of autonomous reporting using deep learning architectures specific to the Turkish language, representing an important step forward in this field. It will also reduce potential human errors during diagnosis by supporting doctors in their decision-making. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health and Medicine)
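
For readers unfamiliar with the reported word-overlap metrics, the sketch below computes BLEU-1 through BLEU-4 with NLTK on hypothetical Turkish report snippets (illustrative text only, not the study's data):

```python
# A sketch of the word-overlap metrics on hypothetical report snippets (illustrative text
# only, not the study's data), using NLTK's sentence-level BLEU.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["kalp", "boyutu", "normal", "sinirlar", "icerisindedir"]]   # hypothetical reference
candidate = ["kalp", "boyutu", "normal", "olarak", "izlenmistir"]         # hypothetical generation

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple(1.0 / n if i < n else 0.0 for i in range(4))
    score = sentence_bleu(reference, candidate, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.4f}")
```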

17 pages, 3753 KiB  
Article
LSA-DDI: Learning Stereochemistry-Aware Drug Interactions via 3D Feature Fusion and Contrastive Cross-Attention
by Shanshan Wang, Chen Yang and Lirong Chen
Int. J. Mol. Sci. 2025, 26(14), 6799; https://doi.org/10.3390/ijms26146799 - 16 Jul 2025
Viewed by 252
Abstract
Accurate prediction of drug–drug interactions (DDIs) is essential for ensuring medication safety and optimizing combination-therapy strategies. However, existing DDI models face limitations in handling interactions related to stereochemistry and precisely locating drug interaction sites. These limitations reduce the prediction accuracy for conformation-dependent interactions and the interpretability of molecular mechanisms, potentially posing risks to clinical safety. To address these challenges, we introduce LSA-DDI, a Spatial-Contrastive-Attention-Based Drug–Drug Interaction framework. Our 3D feature extraction method captures the spatial structure of molecules through three features (coordinates, distances, and angles) and fuses them to enhance the modeling of molecular spatial structure. Concurrently, we design and implement a Dynamic Feature Exchange (DFE) mechanism that dynamically regulates the flow of information across modalities via an attention mechanism, achieving bidirectional enhancement and semantic alignment of 2D topological and 3D spatial structure features. Additionally, we incorporate a dynamic temperature-regulated multiscale contrastive learning framework that effectively aligns multiscale features and enhances the model's generalizability. Experiments conducted on public drug databases under both warm-start and cold-start scenarios demonstrated that LSA-DDI achieved competitive performance, with consistent improvements over existing methods. Full article
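
A minimal sketch of temperature-scaled contrastive alignment between 2D and 3D drug embeddings (a standard InfoNCE-style loss assumed for illustration; LSA-DDI's dynamic temperature and multiscale scheme are not reproduced):

```python
# A minimal sketch assuming a standard InfoNCE-style objective with a temperature parameter:
# align 2D and 3D embeddings of the same drug, push apart embeddings of different drugs.
import torch
import torch.nn.functional as F

def contrastive_loss(z_2d: torch.Tensor, z_3d: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    z_2d, z_3d = F.normalize(z_2d, dim=1), F.normalize(z_3d, dim=1)
    logits = z_2d @ z_3d.t() / temperature             # pairwise cross-modal similarities
    targets = torch.arange(z_2d.size(0))               # matching 2D/3D pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

print(contrastive_loss(torch.randn(16, 64), torch.randn(16, 64), temperature=0.07).item())
```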

15 pages, 16898 KiB  
Article
Cross-Scale Hypergraph Neural Networks with Inter–Intra Constraints for Mitosis Detection
by Jincheng Li, Danyang Dong, Yihui Zhan, Guanren Zhu, Hengshuo Zhang, Xing Xie and Lingling Yang
Sensors 2025, 25(14), 4359; https://doi.org/10.3390/s25144359 - 12 Jul 2025
Viewed by 406
Abstract
Mitotic figures in tumor tissues are an important criterion for diagnosing malignant lesions, and physicians often search for the presence of mitosis in whole slide imaging (WSI). However, prolonged visual inspection by doctors may increase the likelihood of human error. With the advancement of deep learning, AI-based automatic cytopathological diagnosis has been increasingly applied in clinical settings. Nevertheless, existing diagnostic models often suffer from high computational costs and suboptimal detection accuracy. More importantly, when assessing cellular abnormalities, doctors frequently compare target cells with their surrounding cells—an aspect that current models fail to capture due to their lack of intercellular information modeling, leading to the loss of critical medical insights. To address these limitations, we conducted an in-depth analysis of existing models and propose an Inter–Intra Hypergraph Neural Network (II-HGNN). Our model introduces a block-based feature extraction mechanism to efficiently capture deep representations. Additionally, we leverage hypergraph convolutional networks to process both intracellular and intercellular information, leading to more precise diagnostic outcomes. We evaluate our model on publicly available datasets under varying imaging conditions, and experimental results demonstrate that our approach consistently outperforms baseline models in terms of accuracy. Full article
(This article belongs to the Special Issue Recent Advances in Biomedical Imaging Sensors and Processing)
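
A minimal sketch of the standard hypergraph convolution rule that underlies such models (an assumption for illustration; II-HGNN's inter/intra constraints and block-based extractor are not reproduced):

```python
# A minimal sketch of the standard hypergraph convolution (an assumption, not II-HGNN):
# hyperedges group a cell with its neighbours, so intercellular context is mixed into
# each cell's features.
import torch

def hypergraph_conv(X, H, Theta):
    """X: (cells, d) features, H: (cells, edges) incidence matrix, Theta: (d, d_out) weights."""
    w = torch.ones(H.shape[1])                          # hyperedge weights
    Dv = torch.diag((H @ w).clamp(min=1) ** -0.5)       # node degrees^(-1/2)
    De = torch.diag(H.sum(dim=0).clamp(min=1) ** -1)    # edge degrees^(-1)
    return torch.relu(Dv @ H @ torch.diag(w) @ De @ H.t() @ Dv @ X @ Theta)

X = torch.randn(6, 16)                                  # 6 cells, 16-dim features
H = torch.tensor([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [0, 1]], dtype=torch.float)
print(hypergraph_conv(X, H, torch.randn(16, 8)).shape)  # torch.Size([6, 8])
```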

22 pages, 726 KiB  
Review
Advancing Women’s Health: A Scoping Review of Pharmaceutical Therapies for Female Sexual Dysfunction
by Alissa I. Elanjian, Sesilia Kammo, Lyndsey Braman and Aron Liaw
Sexes 2025, 6(3), 38; https://doi.org/10.3390/sexes6030038 - 11 Jul 2025
Viewed by 421
Abstract
Background: Female Sexual Dysfunction (FSD) encompasses a range of conditions that can profoundly impact quality of life and intimate relationships. The primary classifications of FSD include female sexual interest and arousal disorder (FSIAD), genitopelvic pain and penetration disorder (GPPPD), female orgasmic disorder (FOD), and substance or medication-induced sexual dysfunction (SM-ISD). Despite its prevalence, FSD is often underdiagnosed and undertreated. Objectives: This scoping review follows Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to evaluate the existing literature on both U.S. Food and Drug Administration (FDA)-approved and off-label pharmacotherapies for FSD by study type, outcomes, and limitations. Eligibility Criteria: Eligible studies comprised randomized controlled trials (RCTs), systematic reviews, and cohort studies involving adult women (≥18 years) with any subtype of FSD. These studies assessed pharmacologic interventions against a comparator and reported at least one treatment efficacy outcome. Studies outside this scope were excluded. Sources of Evidence: A 25-year literature search was conducted using PubMed/MEDLINE, the Cochrane Library, reference lists of relevant articles, academic handbooks, and targeted journals. Charting Methods: Three independent reviewers screened and extracted data. Risk of bias was assessed using the Cochrane Risk of Bias Tool. Findings were organized into summary tables and categorized by pharmaceutical agent, pertinent study information, outcomes, and limitations. Results: A total of 44 human-based pharmacologic studies met inclusion criteria. FDA-approved agents were the most thoroughly studied pharmacotherapies. Hormonal, topical, and adjunctive agents demonstrated less robust evidence. Heterogeneity in outcome measures and inadequate long-term data were common limitations. Conclusions: Pharmacologic treatment for FSD shows promise but requires further research. Individualized, multifaceted care is essential for optimizing FSD outcomes. Full article
(This article belongs to the Section Women's Health and Gynecology)

21 pages, 5069 KiB  
Article
A Patent-Based Technology Roadmap for AI-Powered Manipulators: An Evolutionary Analysis of the B25J Classification
by Yujia Zhai, Zehao Liu, Rui Zhao, Xin Zhang and Gengfeng Zheng
Informatics 2025, 12(3), 69; https://doi.org/10.3390/informatics12030069 - 11 Jul 2025
Viewed by 514
Abstract
Technology roadmapping is conducted by systematic mapping of technological evolution through patent analytics to inform innovation strategies. This study proposes an integrated framework combining hierarchical Latent Dirichlet Allocation (LDA) modeling with multiphase technology lifecycle theory, analyzing 113,449 Derwent patent abstracts (2008–2022) across three dimensions: technological novelty, functional applications, and competitive advantages. By segmenting innovation stages via logistic growth curve modeling and optimizing topic extraction through perplexity validation, we constructed dynamic technology roadmaps to decode latent evolutionary patterns in AI-powered programmable manipulators (B25J classification) within an innovation trajectory. Key findings revealed: (1) a progressive transition from electromechanical actuation to sensor-integrated architectures, evidenced by 58% compound annual growth in embedded sensing patents; (2) application expansion from industrial automation (72% early stage patents) to precision medical operations, with surgical robotics growing 34% annually since 2018; and (3) continuous advancements in adaptive control algorithms, showing 2.7× growth in reinforcement learning implementations. The methodology integrates quantitative topic modeling (via pyLDAvis visualization and cosine similarity analysis) with qualitative lifecycle theory, addressing the limitations of conventional technology analysis methods by reconciling semantic granularity with temporal dynamics. The results identify core innovation trajectories—precision control, intelligent detection, and medical robotics—while highlighting emerging opportunities in autonomous navigation and human–robot collaboration. This framework provides empirically grounded strategic intelligence for R&D prioritization, cross-industry investment, and policy formulation in Industry 4.0. Full article
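
A toy sketch of two quantitative steps named above, perplexity-guided LDA topic selection and logistic growth-curve fitting, using scikit-learn and SciPy on synthetic data (illustrative corpus and counts, not the Derwent patents):

```python
# A toy sketch: choose the LDA topic count by perplexity and fit a logistic growth curve
# to yearly counts (illustrative corpus and synthetic counts, not the Derwent patent data).
import numpy as np
from scipy.optimize import curve_fit
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["robot arm gripper control", "surgical robot navigation assistance",
        "force sensor feedback actuator", "reinforcement learning manipulator planning"] * 25
X = CountVectorizer().fit_transform(docs)
for k in (2, 4, 8):                                     # candidate topic counts
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    print(f"k={k}: perplexity={lda.perplexity(X):.1f}")

def logistic(t, K, r, t0):                              # cumulative patent counts over time
    return K / (1 + np.exp(-r * (t - t0)))

years = np.arange(2008, 2023)
counts = logistic(years, 100000, 0.45, 2018)            # synthetic cumulative counts
(K, r, t0), _ = curve_fit(logistic, years, counts, p0=(counts.max(), 0.3, 2017))
print(f"fitted saturation K={K:.0f}, growth midpoint t0={t0:.1f}")
```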

12 pages, 567 KiB  
Article
Toxicity Profiles of Antibody–Drug Conjugates: Synthesis and Graphical Insights to Optimize Patient-Centered Treatment Strategies for HER2-Negative Metastatic Breast Cancer
by Bérénice Collineau, Anthony Gonçalves, Marie Domon, Damien Bruyat, François Bertucci and Alexandre de Nonneville
Cancers 2025, 17(14), 2307; https://doi.org/10.3390/cancers17142307 - 11 Jul 2025
Viewed by 376
Abstract
Background: The treatment options for HER2-negative metastatic breast cancer include targeted therapies, cytotoxic chemotherapies, and immunotherapy. However, limited specificity and inevitable resistance highlight the need for novel agents. Antibody–drug conjugates (ADCs), such as trastuzumab deruxtecan (T-DXd) and sacituzumab govitecan (SG), represent a breakthrough by selectively delivering cytotoxic agents to tumor cells, potentially improving the therapeutic index. Despite demonstrated efficacy, ADCs present toxicity profiles similar to conventional chemotherapy, alongside unique adverse events. In clinical practice, oncologists may face scenarios where both T-DXd and SG are treatment options in HER2-negative mBC. To enable shared decision-making, it is crucial to present a comprehensive overview that includes both efficacy data and detailed toxicity profiles. Our objective was to provide a pooled and informative synthesis of toxicities from pivotal studies, including graphical representations, to support informed, patient-centered medical decisions. Methods: We reviewed safety data from phase 3 clinical trials in HER2-negative mBC: DESTINY-Breast04/DESTINY-Breast06 for T-DXd and ASCENT/TROPICS-02 for SG. Adverse event (AE) profiles, including frequency and severity, were extracted, and weighted means were calculated. Emerging ADCs such as datopotamab deruxtecan and patritumab deruxtecan were considered to contextualize future therapeutic decisions. Results: Tables, bar plots and radar plots were generated. T-DXd demonstrated high rates of nausea (69.2%), fatigue (47.2%), and neutropenia (35.6%), with 52.7% experiencing grade ≥ 3 AEs. Notably, pneumonitis occurred in 10.7%, with grade ≥ 3 in 2.6%. SG showed a distinct AE profile, with higher incidences of neutropenia (67.1%), with grade ≥ 3 in 51.3%, and diarrhea (60.8%). Conclusions: The choice between ADCs in HER2-negative metastatic BC when both T-DXd and SG are treatment options should consider toxicity profiles to optimize patient-centered treatment strategies. Tailoring ADC selection based on individual tolerance and preferences is critical for shared decision-making, and future research should focus on assessing the utility and acceptability of such clinical tools to guide treatment selection. Full article
(This article belongs to the Section Cancer Drug Development)
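
The pooled adverse-event rates were obtained as weighted means across trials; a minimal sketch with illustrative numbers (not the actual trial data):

```python
# A minimal sketch of weighted-mean pooling of adverse-event rates (illustrative numbers
# only): each trial's rate is weighted by its safety-population size.
import numpy as np

rates = np.array([73.0, 65.0])                          # hypothetical per-trial nausea rates (%)
n_patients = np.array([371, 434])                       # hypothetical safety-population sizes
print(f"pooled rate = {np.average(rates, weights=n_patients):.1f}%")
```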