Search Results (14)

Search Parameters:
Keywords = precision medical assistance pattern

16 pages, 1769 KB  
Article
Advanced Brain Tumor Segmentation Using SAM2-UNet
by Rohit Viswakarma Pidishetti, Maaz Amjad and Victor S. Sheng
Appl. Sci. 2025, 15(6), 3267; https://doi.org/10.3390/app15063267 - 17 Mar 2025
Cited by 2 | Viewed by 2183
Abstract
Image segmentation is one of the key factors in diagnosing glioma patients with brain tumors. It helps doctors identify the type of tumor a patient is carrying and supports a prognosis that can help save patients' lives. The analysis of medical images is a specialized domain in computer vision and image processing. This process extracts meaningful information from medical images that helps in treatment planning and monitoring the condition of patients. Deep learning models such as CNNs have shown promising results in image segmentation by identifying complex patterns in image data. These methods have also shown great results in tumor segmentation and the identification of anomalies, which assist healthcare professionals in treatment planning. Despite advancements in deep learning for medical image segmentation, the precise segmentation of tumors remains challenging because tumor structures vary in complex ways across patients. Existing models, such as traditional U-Net- and SAM-based architectures, either lack efficiency in handling class-specific segmentation or require extensive computational resources. This study aims to bridge this gap by proposing Segment Anything Model 2-UNetwork (SAM2-UNet), a hybrid model that leverages the strengths of both architectures to improve segmentation accuracy while consuming fewer computational resources. The proposed model performs particularly well on scarce data, and we trained it on the Brain Tumor Segmentation Challenge 2020 (BraTS) dataset. The architecture is inspired by the U-Net encoder-decoder design. The Hiera pre-trained model serves as the backbone of this architecture to capture multi-scale features. Adapters are embedded into the encoder to achieve parameter-efficient fine-tuning.
The dataset contains four channels of MRI scans of 369 glioma patients (T1, T1ce, T2, and T2-FLAIR) and a segmentation mask for each patient consisting of non-tumor (NT), necrotic and non-enhancing tumor (NCR/NET), and peritumoral edema or GD-enhancing tumor (ET) as the ground-truth value. The experiments yielded good, balanced segmentation performance for each tumor region based on the metrics reported below. With minimal hardware resources, i.e., 16 GB RAM and 30 epochs, we achieved a mean Dice score (mDice) of 0.771, a mean Intersection over Union (mIoU) of 0.569, an Sα score of 0.692, a weighted F-beta score (Fβw) of 0.267, an F-beta score (Fβ) of 0.261, an Eϕ score of 0.857, and a Mean Absolute Error (MAE) of 0.04 on the BraTS 2020 dataset. Full article
(This article belongs to the Special Issue Artificial Intelligence Techniques for Medical Data Analytics)
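The Dice and IoU figures reported for this paper are standard segmentation overlap metrics. As a minimal illustration (toy binary masks, not the paper's code), they can be computed per tumor region as:

```python
import numpy as np

def dice(pred, gt):
    """Dice = 2|P & G| / (|P| + |G|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """IoU = |P & G| / |P | G| for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy 4x4 prediction and ground truth for a single tumor region
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
```

The mDice/mIoU values in the abstract would then be the averages of these scores over all tumor classes.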

20 pages, 1619 KB  
Systematic Review
A Breakthrough in Producing Personalized Solutions for Rehabilitation and Physiotherapy Thanks to the Introduction of AI to Additive Manufacturing
by Emilia Mikołajewska, Dariusz Mikołajewski, Tadeusz Mikołajczyk and Tomasz Paczkowski
Appl. Sci. 2025, 15(4), 2219; https://doi.org/10.3390/app15042219 - 19 Feb 2025
Cited by 4 | Viewed by 3308
Abstract
The integration of artificial intelligence (AI) with additive manufacturing (AM) is driving breakthroughs in personalized rehabilitation and physical therapy solutions, enabling precise customization to individual patient needs. This article presents the current state of knowledge on, and perspectives for, personalized rehabilitation and physiotherapy solutions made possible by introducing AI into AM. Advanced AI algorithms analyze patient-specific data such as body scans, movement patterns, and medical history to design customized assistive devices, orthoses, and prosthetics. This synergy enables the rapid prototyping and production of highly optimized solutions, improving comfort, functionality, and therapeutic outcomes. Machine learning (ML) models further streamline the process by anticipating biomechanical needs and adapting designs based on feedback, providing iterative refinement. Cutting-edge techniques leverage generative design and topology optimization to create lightweight yet durable structures that are ideally suited to the patient's anatomy and rehabilitation goals. AI-based AM also facilitates the production of multi-material devices that combine flexibility, strength, and sensory capabilities, enabling improved monitoring and support during physical therapy. New perspectives include integrating smart sensors with printed devices, enabling real-time data collection and feedback loops for adaptive therapy. Additionally, these solutions are becoming increasingly accessible as AM technology improves and costs fall, democratizing personalized healthcare. Future advances could lead to the widespread use of digital twins for the real-time simulation and customization of rehabilitation devices before production. AI-based virtual reality (VR) and augmented reality (AR) tools are also expected to combine with AM to provide immersive, patient-specific training environments along with physical aids.
Collaborative platforms based on federated learning can enable healthcare providers and researchers to securely share AI insights, accelerating innovation. However, challenges such as regulatory approval, data security, and ensuring equity in access to these technologies must be addressed to fully realize their potential. One of the major gaps is the lack of large, diverse datasets to train AI models, which limits their ability to design solutions that span different demographics and conditions. Integration of AI–AM systems into personalized rehabilitation and physical therapy should focus on improving data collection and processing techniques. Full article
(This article belongs to the Special Issue Additive Manufacturing in Material Processing)

11 pages, 1081 KB  
Review
Is Artificial Intelligence the Next Co-Pilot for Primary Care in Diagnosing and Recommending Treatments for Depression?
by Inbar Levkovich
Med. Sci. 2025, 13(1), 8; https://doi.org/10.3390/medsci13010008 - 11 Jan 2025
Cited by 6 | Viewed by 4495
Abstract
Depression poses significant challenges to global healthcare systems and impacts the quality of life of individuals and their family members. Recent advancements in artificial intelligence (AI) have had a transformative impact on the diagnosis and treatment of depression. These innovations have the potential to significantly enhance clinical decision-making processes and improve patient outcomes in healthcare settings. AI-powered tools can analyze extensive patient data—including medical records, genetic information, and behavioral patterns—to identify early warning signs of depression, thereby enhancing diagnostic accuracy. By recognizing subtle indicators that traditional assessments may overlook, these tools enable healthcare providers to make timely and precise diagnostic decisions that are crucial in preventing the onset or escalation of depressive episodes. In terms of treatment, AI algorithms can assist in personalizing therapeutic interventions by predicting the effectiveness of various approaches for individual patients based on their unique characteristics and medical history. This includes recommending tailored treatment plans that consider the patient’s specific symptoms. Such personalized strategies aim to optimize therapeutic outcomes and improve the overall efficiency of healthcare. This theoretical review uniquely synthesizes current evidence on AI applications in primary care depression management, offering a comprehensive analysis of both diagnostic and treatment personalization capabilities. Alongside these advancements, we also address the conflicting findings in the field and the presence of biases that necessitate important limitations. Full article

18 pages, 2813 KB  
Article
Multimodal Data Fusion for Depression Detection Approach
by Mariia Nykoniuk, Oleh Basystiuk, Nataliya Shakhovska and Nataliia Melnykova
Computation 2025, 13(1), 9; https://doi.org/10.3390/computation13010009 - 2 Jan 2025
Cited by 9 | Viewed by 5893
Abstract
Depression is one of the most common mental health disorders in the world, affecting millions of people. Early detection of depression is crucial for effective medical intervention. Multimodal networks can greatly assist in the detection of depression, especially in situations in which patients are not always aware of, or able to express, their symptoms. By analyzing text and audio data, such networks can automatically identify patterns in speech and behavior that indicate a depressive state. In this study, we propose two multimodal information fusion networks: early and late fusion. These networks were developed using convolutional neural network (CNN) layers to learn local patterns, a bidirectional LSTM (Bi-LSTM) to process sequences, and a self-attention mechanism to improve focus on key parts of the data. The DAIC-WOZ and EDAIC-WOZ datasets were used for the experiments. The experiments compared the precision, recall, F1-score, and accuracy metrics for early and late multimodal data fusion and found that the early-fusion multimodal network achieved higher classification accuracy. On the test dataset, this network achieved an F1-score of 0.79 and an overall classification accuracy of 0.86, indicating its effectiveness in detecting depression. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
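Early and late fusion, the two strategies this paper compares, differ only in where the modalities are combined: before the classifier, or after per-modality predictions. A minimal NumPy sketch (random stand-in features and weights, not the authors' CNN/Bi-LSTM network) illustrates the distinction:

```python
import numpy as np

rng = np.random.default_rng(0)
text_feat = rng.normal(size=(4, 8))    # 4 samples, 8 hypothetical text features
audio_feat = rng.normal(size=(4, 5))   # 4 samples, 5 hypothetical audio features

# Early fusion: concatenate modality features, then feed one joint model.
early_input = np.concatenate([text_feat, audio_feat], axis=1)  # shape (4, 13)

def score(x, w):
    """Stand-in for a per-modality network: a single sigmoid unit."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Late fusion: score each modality separately, then combine the outputs.
w_text = rng.normal(size=8)
w_audio = rng.normal(size=5)
late_scores = (score(text_feat, w_text) + score(audio_feat, w_audio)) / 2
```

In the paper's setup the joint model (early fusion) can learn cross-modal interactions that per-modality scoring cannot, which is consistent with early fusion winning in their experiments.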

13 pages, 890 KB  
Review
Artificial Intelligence in Head and Neck Cancer Diagnosis: A Comprehensive Review with Emphasis on Radiomics, Histopathological, and Molecular Applications
by Giuseppe Broggi, Antonino Maniaci, Mario Lentini, Andrea Palicelli, Magda Zanelli, Maurizio Zizzo, Nektarios Koufopoulos, Serena Salzano, Manuel Mazzucchelli and Rosario Caltabiano
Cancers 2024, 16(21), 3623; https://doi.org/10.3390/cancers16213623 - 27 Oct 2024
Cited by 18 | Viewed by 4254
Abstract
The present review discusses the transformative role of AI in the diagnosis and management of head and neck cancers (HNCs). Methods: It explores how AI technologies, including ML, DL, and CNNs, are applied in various diagnostic tasks, such as medical imaging, molecular profiling, and predictive modeling. Results: This review highlights AI’s ability to improve diagnostic accuracy and efficiency, particularly in analyzing medical images like CT, MRI, and PET scans, where AI sometimes outperforms human radiologists. This paper also emphasizes AI’s application in histopathology, where algorithms assist in whole-slide image (WSI) analysis, tumor-infiltrating lymphocytes (TILs) quantification, and tumor segmentation. AI shows promise in identifying subtle or rare histopathological patterns and enhancing the precision of tumor grading and treatment planning. Furthermore, the integration of AI with molecular and genomic data aids in mutation analysis, prognosis, and personalized treatment strategies. Conclusions: Despite these advancements, the review identifies challenges in AI adoption, such as data standardization and model interpretability, and calls for further research to fully integrate AI into clinical practice for improved patient outcomes. Full article
(This article belongs to the Special Issue Head and Neck Cancers—Novel Approaches and Future Outlook)

7 pages, 1170 KB  
Proceeding Paper
Development of an Artificial Neural Network-Based Image Retrieval System for Lung Disease Classification and Identification
by Atul Pratap Singh, Ajeet Singh, Amit Kumar, Himanshu Agarwal, Sapna Yadav and Mohit Gupta
Eng. Proc. 2024, 62(1), 2; https://doi.org/10.3390/engproc2024062002 - 28 Feb 2024
Cited by 10 | Viewed by 1631
Abstract
The rapid advancement of medical imaging technologies has propelled the development of automated systems for the identification and classification of lung diseases. This study presents the design and implementation of an innovative image retrieval system utilizing artificial neural networks (ANNs) to enhance the accuracy and efficiency of diagnosing lung diseases. The proposed system focuses on addressing the challenges associated with the accurate recognition and classification of lung diseases from medical images, such as X-rays and CT scans. Leveraging the capabilities of ANNs, specifically convolutional neural networks (CNNs), the system aims to capture intricate patterns and features within images that are often imperceptible to human observers. This enables the system to learn discriminative representations of normal lung anatomy and various disease manifestations. The design of the system involves multiple stages. Initially, a robust dataset of annotated lung images is curated, encompassing a diverse range of lung diseases and their corresponding healthy states. Subsequently, a pre-processing pipeline is implemented to standardize the images, ensuring consistent quality and facilitating feature extraction. The CNN architecture is then meticulously constructed, with attention to layer configurations, activation functions, and optimization algorithms to facilitate effective learning and classification. The system also incorporates image retrieval techniques, enabling the efficient searching and retrieval of relevant medical images from the database based on query inputs. This retrieval functionality assists medical practitioners in accessing similar cases for comparative analysis and reference, ultimately supporting accurate diagnosis and treatment planning. To evaluate the system’s performance, comprehensive experiments are conducted using benchmark datasets, and performance metrics such as accuracy, precision, recall, and F1-score are measured. 
The experimental results demonstrate the system’s capability to distinguish between various lung diseases and healthy states with a high degree of accuracy and reliability. The proposed system exhibits substantial potential in revolutionizing lung disease diagnosis by assisting medical professionals in making informed decisions and improving patient outcomes. This study presents a novel image retrieval system empowered by artificial neural networks for the identification and classification of lung diseases. By leveraging advanced deep learning techniques, the system showcases promising results in automating the diagnosis process, facilitating the efficient retrieval of relevant medical images, and thereby contributing to the advancement of pulmonary healthcare practices. Full article
(This article belongs to the Proceedings of The 2nd Computing Congress 2023)
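The retrieval functionality described in this abstract typically rests on nearest-neighbor search over learned feature vectors. A minimal sketch using cosine similarity (toy 2-D vectors standing in for CNN embeddings; not the authors' system):

```python
import numpy as np

def retrieve(query, db, k=3):
    """Return indices of the k database feature vectors most similar to the query (cosine)."""
    qn = query / np.linalg.norm(query)
    dbn = db / np.linalg.norm(db, axis=1, keepdims=True)
    sims = dbn @ qn                  # cosine similarity to each stored image
    return np.argsort(-sims)[:k]     # indices sorted by decreasing similarity

# Hypothetical embeddings for three stored lung images
db = np.array([[1.0, 0.0],    # image 0
               [0.0, 1.0],    # image 1
               [0.9, 0.1]])   # image 2
query = np.array([1.0, 0.0])
```

A practitioner querying with a new scan's embedding would get back the indices of the most similar stored cases for comparative review.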

14 pages, 1281 KB  
Article
Design and Development of IoT and Deep Ensemble Learning Based Model for Disease Monitoring and Prediction
by Mareeswari Venkatachala Appa Swamy, Jayalakshmi Periyasamy, Muthamilselvan Thangavel, Surbhi B. Khan, Ahlam Almusharraf, Prasanna Santhanam, Vijayan Ramaraj and Mahmoud Elsisi
Diagnostics 2023, 13(11), 1942; https://doi.org/10.3390/diagnostics13111942 - 1 Jun 2023
Cited by 14 | Viewed by 3057
Abstract
With the rapidly increasing reliance on advances in IoT, we continue to push technology to new heights. From ordering food online to gene-editing-based personalized healthcare, disruptive technologies like ML and AI continue to grow beyond our wildest dreams. Early detection and treatment through AI-assisted diagnostic models have outperformed human intelligence. In many cases, these tools can act upon structured data containing probable symptoms, offer medication schedules based on the appropriate diagnosis codes, and predict adverse drug effects, if any, in accordance with medications. Utilizing AI and IoT in healthcare has brought innumerable benefits, such as minimizing cost, reducing hospital-acquired infections, and decreasing mortality and morbidity. DL algorithms have opened up several frontiers by contributing to healthcare opportunities through their ability to understand and learn from different levels of demonstration and generalization, which is significant in data analysis and interpretation. In contrast to ML, which relies more on structured, labeled data and domain expertise to facilitate feature extraction, DL employs human-like cognitive abilities to extract hidden relationships and patterns from uncategorized data. Through the efficient application of DL techniques to medical datasets, infectious and rare diseases can be predicted and classified precisely, preventable surgeries avoided, and the over-dosage of harmful contrast agents for scans and biopsies greatly reduced in the future. Our study focuses on deploying ensemble deep learning algorithms and IoT devices to design and develop a diagnostic model that can effectively analyze medical Big Data and diagnose diseases by identifying abnormalities in early stages through medical images provided as input.
This AI-assisted diagnostic model based on Ensemble Deep learning aims to be a valuable tool for healthcare systems and patients through its ability to diagnose diseases in the initial stages and present valuable insights to facilitate personalized treatment by aggregating the prediction of each base model and generating a final prediction. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
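The abstract describes aggregating the prediction of each base model into a final prediction. One common way to do this is probability averaging; a minimal sketch with toy numbers (the paper's actual aggregation scheme may differ):

```python
import numpy as np

# Class-probability outputs from three hypothetical base models
# (2 samples, 3 disease classes each)
p1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.6, 0.3]])
p3 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])

# Aggregate by averaging the base predictions, then take the argmax class.
final_probs = np.stack([p1, p2, p3]).mean(axis=0)
final_labels = final_probs.argmax(axis=1)
```

Averaging dampens the errors of any single base model, which is the usual motivation for ensembling diagnostic classifiers.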

24 pages, 2779 KB  
Review
Machine-Learning-Based Prediction Modelling in Primary Care: State-of-the-Art Review
by Adham H. El-Sherbini, Hafeez Ul Hassan Virk, Zhen Wang, Benjamin S. Glicksberg and Chayakrit Krittanawong
AI 2023, 4(2), 437-460; https://doi.org/10.3390/ai4020024 - 23 May 2023
Cited by 14 | Viewed by 7874
Abstract
Primary care has the potential to be transformed by artificial intelligence (AI) and, in particular, machine learning (ML). This review summarizes the potential of ML and its subsets to influence two domains of primary care: pre-operative care and screening. ML can be utilized in pre-operative care to forecast post-operative results and assist physicians in selecting surgical interventions. Clinicians can modify their strategy to reduce risk and enhance outcomes by using ML algorithms to examine patient data and discover factors that increase the risk of worsened health outcomes. ML can also enhance the precision and effectiveness of screening tests. Healthcare professionals can identify diseases at an early and curable stage by using ML models to examine medical images and other diagnostic modalities and spot patterns that may suggest disease or anomalies. Before the onset of symptoms, ML can be used to identify people at increased risk of developing specific disorders or diseases. ML algorithms can assess patient data such as medical history, genetics, and lifestyle factors to identify those at higher risk. This enables targeted interventions such as lifestyle adjustments or early screening. In general, using ML in primary care offers the potential to enhance patient outcomes, reduce healthcare costs, and boost productivity. Full article

37 pages, 4539 KB  
Review
Recent Process in Microrobots: From Propulsion to Swarming for Biomedical Applications
by Ruoxuan Wu, Yi Zhu, Xihang Cai, Sichen Wu, Lei Xu and Tingting Yu
Micromachines 2022, 13(9), 1473; https://doi.org/10.3390/mi13091473 - 5 Sep 2022
Cited by 33 | Viewed by 9362
Abstract
Recently, robots have assisted and contributed to the biomedical field. Scaling down the size of robots to micro/nanoscale can increase the accuracy of targeted medications and decrease the danger of invasive operations in human surgery. Inspired by the motion pattern and collective behaviors of the tiny biological motors in nature, various kinds of sophisticated and programmable microrobots are fabricated with the ability for cargo delivery, bio-imaging, precise operation, etc. In this review, four types of propulsion—magnetically, acoustically, chemically/optically and hybrid driven—and their corresponding features have been outlined and categorized. In particular, the locomotion of these micro/nanorobots, as well as the requirement of biocompatibility, transportation efficiency, and controllable motion for applications in the complex human body environment should be considered. We discuss applications of different propulsion mechanisms in the biomedical field, list their individual benefits, and suggest their potential growth paths. Full article
(This article belongs to the Special Issue Medical Micro/Nanorobots)

13 pages, 3469 KB  
Article
Impoverishment Effect of Hydatid Disease and Precision Medical Assistance Pattern of Government: Evidence from Yushu in China
by Yaozu Xue
Int. J. Environ. Res. Public Health 2022, 19(16), 9990; https://doi.org/10.3390/ijerph19169990 - 13 Aug 2022
Cited by 1 | Viewed by 1670
Abstract
Hydatid disease is one of the 17 neglected tropical diseases recognized by the WHO and causes a huge global disease burden. It poses a great threat to local medical poverty alleviation. In efforts to break the vicious circle of poverty, hydatid disease has attracted wide attention and discussion. In China's poverty alleviation practice, medical poverty alleviation serves the double goal of eliminating poverty and promoting the construction of a healthy China. On the basis of on-the-spot investigation in Yushu Prefecture, this paper conducts a follow-up study of the poverty-causing effect of hydatid disease and the government's precision medical assistance pattern using a field investigation method. The results show that hydatid disease increased poverty among the population of Yushu Prefecture, that precision medical assistance played an obvious role in treating hydatid disease and alleviating poverty, that health services in the study area continue to improve, and that the medical backbone team has further expanded. The main conclusion is that the three-level diagnosis and treatment framework can effectively reduce local poverty and improve people's living environment. Full article
(This article belongs to the Special Issue Environmental Policy and Governance Performance)

31 pages, 3633 KB  
Article
Boosting Unsupervised Dorsal Hand Vein Segmentation with U-Net Variants
by Szidónia Lefkovits, Simina Emerich and László Lefkovits
Mathematics 2022, 10(15), 2620; https://doi.org/10.3390/math10152620 - 27 Jul 2022
Cited by 8 | Viewed by 2648
Abstract
The identification of vascular network structures is one of the key fields of research in medical imaging. The segmentation of dorsal hand vein patterns from NIR images is not only the basis for reliable biometric identification but would also provide a significant tool for assisting medical intervention. Precise vein extraction would help medical workers determine the exact needle entry point to efficiently gain intravenous access for different clinical purposes, such as intravenous therapy, parenteral nutrition, and blood analysis. It would also eliminate repeated needle pricks and could even facilitate an automatic injection procedure in the near future. In this paper, we present a combination of unsupervised and supervised dorsal hand vein segmentation from near-infrared images in the NCUT database. This approach is convenient given the lack of expert annotations in publicly available vein image databases. The novelty of our work is the automatic extraction of the veins in two phases. First, a geometrical approach identifies tubular structures corresponding to veins in the image. This step is considered gross segmentation and provides labels (Label I) for the second, CNN-based segmentation phase. We visually observed that different CNNs obtain better segmentations on different parts of the test set. This is the reason for building an ensemble segmentor based on majority voting by nine different network architectures (U-Net, U-Net++ and U-Net3+, each trained with BCE, Dice and focal losses). The segmentation result of the ensemble is considered the second label (Label II). In our opinion, the new Label II is a better annotation of the NCUT database than the Label I obtained in the first step. The efficiency of computer vision algorithms based on artificial intelligence is determined by the quality and quantity of the labeled data used. Furthermore, we support this statement by training ResNet-UNet in the same manner with the two different label sets.
In our experiments, the Dice scores, sensitivity and specificity with ResNet–UNet trained on Label II are superior to the same classifier trained on Label I. The measured Dice scores of ResNet–UNet on the test set increase from 90.65% to 95.11%. It is worth mentioning that this article is one of very few in the domain of dorsal hand vein segmentation; moreover, it presents a general pipeline that may be applied for different medical image segmentation purposes. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition with Applications)
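Majority voting over the nine architectures' outputs, used here to build Label II, can be sketched pixel-wise as follows (toy binary masks, not the authors' code):

```python
import numpy as np

def majority_vote(masks):
    """Pixel-wise majority vote over a list of binary masks (each H x W)."""
    stacked = np.stack(masks)
    votes = stacked.sum(axis=0)                       # how many models said "vein" per pixel
    return (votes > stacked.shape[0] / 2).astype(np.uint8)

# Three toy 2x2 binary vein masks from hypothetical segmentors
m1 = np.array([[1, 0], [0, 1]])
m2 = np.array([[1, 1], [0, 0]])
m3 = np.array([[0, 0], [0, 1]])
label2 = majority_vote([m1, m2, m3])
```

With nine models, a pixel is kept only when at least five of them mark it as vein, which suppresses the idiosyncratic errors of individual networks.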

23 pages, 25616 KB  
Article
Breast Lesions Screening of Mammographic Images with 2D Spatial and 1D Convolutional Neural Network-Based Classifier
by Chia-Hung Lin, Hsiang-Yueh Lai, Pi-Yun Chen, Jian-Xing Wu, Ching-Chou Pai, Chun-Min Su and Hui-Wen Ho
Appl. Sci. 2022, 12(15), 7516; https://doi.org/10.3390/app12157516 - 26 Jul 2022
Cited by 4 | Viewed by 2281
Abstract
Mammography is a first-line imaging examination that employs low-dose X-rays to rapidly screen for breast tumors, cysts, and calcifications. This study proposes a two-dimensional (2D) spatial and one-dimensional (1D) convolutional neural network (CNN) for the early detection of possible breast lesions (tumors), with the aims of reducing patient mortality and of developing a classifier for regions of interest in mammographic images where breast lesions are likely to occur. The 2D spatial fractional-order convolutional processes are used to strengthen and sharpen the lesions' features, denoise the images, and improve feature extraction. Then, an automatic extraction task is performed using a specific bounding box to sequentially pick out feature patterns from each mammographic image. The multi-round 1D kernel convolutional processes further strengthen and denoise the 1D feature signals and assist in distinguishing levels of normality and abnormality. In the classification layer, a gray relational analysis-based classifier is used to screen for possible lesions across the normal (Nor), benign (B), and malignant (M) classes. This classifier design reduces training time and computational complexity while achieving higher accuracy for clinical/medical purposes. Mammographic images were selected from the Mammographic Image Analysis Society image database for experimental tests on breast lesion screening, and K-fold cross-validations were performed. The experimental results showed promising performance in quantifying the classifier's outcome for medical evaluation in terms of recall (%), precision (%), accuracy (%), and F1 score. Full article
(This article belongs to the Special Issue Advanced Electronics and Digital Signal Processing)
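A gray relational analysis (GRA) classifier of the kind named in this abstract scores a feature sequence against per-class reference sequences by grey relational grade and picks the class with the highest grade. A minimal sketch, assuming hypothetical class templates and the conventional distinguishing coefficient zeta = 0.5 (the paper's exact formulation may differ):

```python
import numpy as np

def gra_classify(x, templates, zeta=0.5):
    """Assign x to the class whose reference sequence has the highest grey relational grade."""
    # Absolute differences between the query and each class reference sequence
    deltas = {c: np.abs(x - t) for c, t in templates.items()}
    all_d = np.concatenate(list(deltas.values()))
    dmin, dmax = all_d.min(), all_d.max()   # global extremes across all references
    # Grey relational coefficient per feature, averaged into a grade per class
    grades = {c: ((dmin + zeta * dmax) / (d + zeta * dmax)).mean()
              for c, d in deltas.items()}
    return max(grades, key=grades.get)

# Hypothetical 3-feature class templates (not the paper's actual features)
templates = {"Nor": np.zeros(3), "M": np.ones(3)}
x = np.array([0.1, 0.0, 0.05])
```

Because GRA needs no iterative training, it keeps training time and computational cost low, which matches the clinical motivation stated in the abstract.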
11 pages, 2793 KB  
Article
A Skin-Conformal, Stretchable, and Breathable Fiducial Marker Patch for Surgical Navigation Systems
by Sangkyu Lee, Duhwan Seong, Jiyong Yoon, Sungjun Lee, Hyoung Won Baac, Deukhee Lee and Donghee Son
Micromachines 2020, 11(2), 194; https://doi.org/10.3390/mi11020194 - 13 Feb 2020
Cited by 5 | Viewed by 4964
Abstract
Augmented reality (AR) surgical navigation systems have attracted considerable attention because they help medical professionals visualize the location of ailments within the human body that are not readily seen with the naked eye. Taking medical imaging with a parallel C-shaped arm (C-arm) as an example, surgical sites are typically targeted in real time using an optical tracking device and a fiducial marker. These markers then guide operators using a multifunctional endoscope apparatus by signaling the direction or distance needed to reach the affected parts of the body. In this way, fiducial markers help accurately protect the vessels and nerves exposed during surgery. Although these systems have already shown potential for precision implantation, delamination of the fiducial marker, a critical component of the system, from human skin remains a challenge due to the mechanical mismatch between the marker and skin, causing registration problems that lead to poor position alignment and degraded surgical outcomes. To overcome this challenge, the mechanical modulus and stiffness of the marker patch should be lowered to approximately 150 kPa, comparable to that of the epidermis, while its functionality is preserved. Herein, we present a skin-conformal, stretchable yet breathable fiducial marker for application in AR-based surgical navigation systems. By adopting pore patterns, we were able to create a fiducial marker with skin-like low modulus and breathability. When attached to the skin, the fiducial marker was easily identified by optical recognition equipment and maintained skin-conformal adhesion under repeated stretching and shrinking. As such, we believe the marker is a good fiducial marker candidate for patients undergoing surgery with navigation systems. Full article
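A back-of-envelope estimate illustrates why pore patterns can bring an elastomer patch down to the ~150 kPa target the abstract cites. This sketch assumes the pore-patterned film follows the Gibson-Ashby open-cell foam scaling (effective modulus proportional to the square of relative density) and assumes a bulk silicone modulus of about 2 MPa; neither value is reported in the paper, so the numbers are illustrative only.

```python
def required_porosity(target_modulus, bulk_modulus, c=1.0):
    """Porosity needed so that E_eff = c * (1 - porosity)**2 * E_bulk
    reaches target_modulus (Gibson-Ashby open-cell scaling)."""
    rel_density = (target_modulus / (c * bulk_modulus)) ** 0.5
    return 1.0 - rel_density

# Assumed bulk silicone modulus of 2 MPa, skin-like target of 150 kPa
p = required_porosity(150e3, 2e6)
print(f"porosity needed: {p:.0%}")  # roughly 73%
```

Under these assumptions, a substantial pore fraction is required, which also explains why the same pore pattern yields the breathability the authors report.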
(This article belongs to the Special Issue Deformable Bioelectronics Based on Functional Micro/nanomaterials)
24 pages, 579 KB  
Review
Application of Stable Isotope-Assisted Metabolomics for Cell Metabolism Studies
by Le You, Baichen Zhang and Yinjie J. Tang
Metabolites 2014, 4(2), 142-165; https://doi.org/10.3390/metabo4020142 - 31 Mar 2014
Cited by 40 | Viewed by 13913
Abstract
The application of stable isotopes in metabolomics has facilitated the study of cell metabolism. Stable isotope-assisted metabolomics requires: (1) properly designed tracer experiments; (2) stringent sampling and quenching protocols to minimize isotopic alterations; (3) efficient metabolite separations; (4) high-resolution mass spectrometry to resolve overlapping peaks and background noise; and (5) data analysis methods and databases to decipher isotopic clusters over a broad m/z (mass-to-charge ratio) range. This paper reviews mass spectrometry-based techniques for the precise determination of metabolites and their isotopologues. It also discusses applications of isotopic approaches to track substrate utilization, identify unknown metabolites and their chemical formulas, measure metabolite concentrations, determine putative metabolic pathways, and investigate microbial community populations and their carbon assimilation patterns. In addition, 13C-metabolite fingerprinting and metabolic models can be integrated to quantify carbon fluxes (enzyme reaction rates). The fluxome, in combination with other “omics” analyses, may give systems-level insights into the regulatory mechanisms underlying gene functions. More importantly, 13C-tracer experiments significantly improve the potential of low-resolution gas chromatography-mass spectrometry (GC-MS) for broad-scope metabolism studies. We foresee isotope-assisted metabolomics becoming an indispensable tool in industrial biotechnology, environmental microbiology, and medical research. Full article
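The isotopic clusters the abstract describes can be modeled with a mass isotopomer distribution. The sketch below assumes the simplest case, where each carbon position of a metabolite is 13C-labeled independently with a fixed probability, so the M+0 through M+n isotopologue abundances follow a binomial distribution; real tracer data also carry natural-abundance corrections and pathway-dependent position effects that this toy model omits.

```python
from math import comb

def isotopologue_distribution(n_carbons, enrichment):
    """Mass isotopomer distribution M+0..M+n for a metabolite with
    n_carbons carbon atoms, assuming each position is 13C-labeled
    independently with probability `enrichment` (binomial model)."""
    return [comb(n_carbons, k) * enrichment**k * (1 - enrichment)**(n_carbons - k)
            for k in range(n_carbons + 1)]

# A three-carbon metabolite (e.g., pyruvate) from a 50% 13C-enriched substrate
dist = isotopologue_distribution(3, 0.5)
print([round(p, 3) for p in dist])  # [0.125, 0.375, 0.375, 0.125]
```

Comparing a measured distribution against such model predictions for candidate pathways is one way isotopologue data constrain carbon fluxes.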
(This article belongs to the Special Issue Cell and Tissue Metabolomics)