Artificial Intelligence in Medical Sensors II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (10 March 2024) | Viewed by 14193

Special Issue Editors


Guest Editor
Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
Interests: disease diagnostics using artificial intelligence methods

Guest Editor
Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong, China
Interests: machine learning; computational intelligence

Special Issue Information

Dear Colleagues,

Medical sensors increasingly rely on artificial intelligence to diagnose patients more accurately and to monitor and treat them more effectively. Artificial-intelligence- and sensor-based devices have proliferated in medicine, especially in medical image analysis and medical monitoring systems, and they bring important new challenges for the practical application of artificial intelligence in medical care. Recent research indicates that artificial intelligence can achieve outstanding performance in many health technology applications, and medical device companies are actively developing artificial intelligence applications within their manufacturing and supply chain operations. Artificial intelligence is widely regarded as a transformative technology: from diagnostic and medical imaging technologies to therapeutic and medical sensor applications, its potential extends to almost every corner of MedTech.

Artificial intelligence has become a powerful support for the medical and health industry. It not only enables the intelligent identification and analysis of data from medical-sensor-based health applications, but also delivers rapid and comprehensive enhancements to medical monitoring systems and diagnostics.

This Special Issue will bring together researchers to report recent findings on the application of artificial intelligence to medical sensors.

The main topics of this Special Issue include, but are not limited to, the following:

  • Information fusion and knowledge transfer in biomedical and health technology applications.
  • Big data analytics on medical sensors.
  • Medical imaging devices.
  • Rehabilitation robotics with multi-sensor systems.
  • Therapeutic applications.
  • Analysis of medical data.
  • Advanced modeling, diagnosis, and treatment using AI and biosensors.
  • Bioinformatics and medical applications with multi-sensor networks.
  • Biomedical signal processing using AI.

Dr. Steve Ling
Prof. Dr. Robertas Damaševičius
Dr. Frank H. Leung
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • biosensors
  • medical devices

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)

Research

25 pages, 2228 KiB  
Article
Improving Text-Independent Forced Alignment to Support Speech-Language Pathologists with Phonetic Transcription
by Ying Li, Bryce Johannas Wohlan, Duc-Son Pham, Kit Yan Chan, Roslyn Ward, Neville Hennessey and Tele Tan
Sensors 2023, 23(24), 9650; https://doi.org/10.3390/s23249650 - 6 Dec 2023
Cited by 1 | Viewed by 1330
Abstract
Problem: Phonetic transcription is crucial in diagnosing speech sound disorders (SSDs) but is susceptible to transcriber experience and perceptual bias. Current forced alignment (FA) tools, which annotate audio files to determine spoken content and its placement, often require manual transcription, limiting their effectiveness. Method: We introduce a novel, text-independent forced alignment model that autonomously recognises individual phonemes and their boundaries, addressing these limitations. Our approach leverages an advanced, pre-trained wav2vec 2.0 model to segment speech into tokens and recognise them automatically. To accurately identify phoneme boundaries, we utilise an unsupervised segmentation tool, UnsupSeg. Labelling of segments employs nearest-neighbour classification with wav2vec 2.0 labels, before connectionist temporal classification (CTC) collapse, determining class labels based on maximum overlap. Additional post-processing, including overfitting cleaning and voice activity detection, is implemented to enhance segmentation. Results: We benchmarked our model against existing methods using the TIMIT dataset for normal speakers and, for the first time, evaluated its performance on the TORGO dataset containing SSD speakers. Our model demonstrated competitive performance, achieving a harmonic mean score of 76.88% on TIMIT and 70.31% on TORGO. Implications: This research presents a significant advancement in the assessment and diagnosis of SSDs, offering a more objective and less biased approach than traditional methods. Our model’s effectiveness, particularly with SSD speakers, opens new avenues for research and clinical application in speech pathology. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Sensors II)
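
For readers who want to prototype the labelling step described in the abstract, the sketch below shows maximum-overlap segment labelling in Python: per-frame phoneme labels (assumed to come from a pre-CTC-collapse wav2vec 2.0 output) are assigned to unsupervised segment boundaries (e.g., from UnsupSeg) by majority vote. This is a minimal illustration; the function and variable names are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of maximum-overlap segment labelling: frame-level
# phoneme labels are assumed to come from a wav2vec 2.0 model before CTC
# collapse, and segment boundaries from an unsupervised segmenter such as
# UnsupSeg. Names are illustrative only.
from collections import Counter

def label_segments(frame_labels, boundaries):
    """Assign each segment the phoneme label that overlaps it the most.

    frame_labels: per-frame phoneme symbols, e.g. ['sil', 'k', 'k', ...]
    boundaries:   sorted frame indices delimiting segments,
                  e.g. [0, 7, 15, len(frame_labels)]
    Returns a list of (start_frame, end_frame, label) tuples.
    """
    segments = []
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        votes = Counter(frame_labels[start:end])   # overlap count per label
        label = votes.most_common(1)[0][0]         # maximum-overlap label
        segments.append((start, end, label))
    return segments

# Toy usage: 10 frames with boundaries at 0/4/7/10.
print(label_segments(['k', 'k', 'k', 'ae', 'ae', 'ae', 'ae', 't', 't', 't'],
                     [0, 4, 7, 10]))
# -> [(0, 4, 'k'), (4, 7, 'ae'), (7, 10, 't')]
```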

13 pages, 5603 KiB  
Article
Comparative Analysis of Predictive Interstitial Glucose Level Classification Models
by Svjatoslavs Kistkins, Timurs Mihailovs, Sergejs Lobanovs, Valdis Pīrāgs, Harald Sourij, Othmar Moser and Dmitrijs Bļizņuks
Sensors 2023, 23(19), 8269; https://doi.org/10.3390/s23198269 - 6 Oct 2023
Cited by 1 | Viewed by 1535
Abstract
Background: New methods of continuous glucose monitoring (CGM) provide real-time alerts for hypoglycemia, hyperglycemia, and rapid fluctuations of glucose levels, thereby improving glycemic control, which is especially crucial during meals and physical activity. However, complex CGM systems pose challenges for individuals with diabetes and healthcare professionals, particularly when interpreting rapid glucose level changes, dealing with sensor delays (approximately a 10 min difference between interstitial and plasma glucose readings), and addressing potential malfunctions. The development of advanced predictive glucose level classification models becomes imperative for optimizing insulin dosing and managing daily activities. Methods: The aim of this study was to investigate the efficacy of three different predictive models for the glucose level classification: (1) an autoregressive integrated moving average model (ARIMA), (2) logistic regression, and (3) long short-term memory networks (LSTM). The performance of these models was evaluated in predicting hypoglycemia (<70 mg/dL), euglycemia (70–180 mg/dL), and hyperglycemia (>180 mg/dL) classes 15 min and 1 h ahead. More specifically, the confusion matrices were obtained and metrics such as precision, recall, and accuracy were computed for each model at each predictive horizon. Results: As expected, ARIMA underperformed the other models in predicting hyper- and hypoglycemia classes for both the 15 min and 1 h horizons. For the 15 min forecast horizon, the performance of logistic regression was the highest of all the models for all glycemia classes, with recall rates of 96% for hyper, 91% for norm, and 98% for hypoglycemia. For the 1 h forecast horizon, the LSTM model turned out to be the best for hyper- and hypoglycemia classes, achieving recall values of 85% and 87% respectively. Conclusions: Our findings suggest that different models may have varying strengths and weaknesses in predicting glucose level classes, and the choice of model should be carefully considered based on the specific requirements and context of the clinical application. The logistic regression model proved to be more accurate for the next 15 min, particularly in predicting hypoglycemia. However, the LSTM model outperformed logistic regression in predicting glucose level class for the next hour. Future research could explore hybrid models or ensemble approaches that combine the strengths of multiple models to further enhance the accuracy and reliability of glucose predictions. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Sensors II)
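
The three-class labelling used throughout the abstract can be sketched in a few lines of Python. Only the thresholds come from the abstract (<70, 70–180, and >180 mg/dL); the function name is illustrative, and the forecasting models themselves (ARIMA, logistic regression, LSTM) are not reproduced here.

```python
# Minimal sketch of the three-class glycemia labelling described above;
# the function name is hypothetical and only the thresholds are taken
# from the abstract.
def glycemia_class(glucose_mg_dl: float) -> str:
    """Map a (predicted) glucose reading in mg/dL to its glycemia class."""
    if glucose_mg_dl < 70:
        return "hypoglycemia"
    if glucose_mg_dl <= 180:
        return "euglycemia"
    return "hyperglycemia"

# Example: classify forecasts 15 min ahead.
print(glycemia_class(65))   # -> hypoglycemia
print(glycemia_class(120))  # -> euglycemia
print(glycemia_class(210))  # -> hyperglycemia
```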

36 pages, 8729 KiB  
Article
A Novel Low-Latency and Energy-Efficient Task Scheduling Framework for Internet of Medical Things in an Edge Fog Cloud System
by Kholoud Alatoun, Khaled Matrouk, Mazin Abed Mohammed, Jan Nedoma, Radek Martinek and Petr Zmij
Sensors 2022, 22(14), 5327; https://doi.org/10.3390/s22145327 - 16 Jul 2022
Cited by 40 | Viewed by 3245
Abstract
In healthcare, there are rapid emergency response systems that necessitate real-time actions where speed and efficiency are critical; this may suffer as a result of cloud latency because of the delay caused by the cloud. Therefore, fog computing is utilized in real-time healthcare applications. There are still limitations in response time, latency, and energy consumption. Thus, a proper fog computing architecture and good task scheduling algorithms should be developed to minimize these limitations. In this study, an Energy-Efficient Internet of Medical Things to Fog Interoperability of Task Scheduling (EEIoMT) framework is proposed. This framework schedules tasks in an efficient way by ensuring that critical tasks are executed in the shortest possible time within their deadline while balancing energy consumption when processing other tasks. In our architecture, Electrocardiogram (ECG) sensors are used to monitor heart health at home in a smart city. ECG sensors send the sensed data continuously to the ESP32 microcontroller through Bluetooth (BLE) for analysis. ESP32 is also linked to the fog scheduler via Wi-Fi to send the results data of the analysis (tasks). The appropriate fog node is carefully selected to execute the task by giving each node a special weight, which is formulated on the basis of the expected amount of energy consumed and latency in executing this task and choosing the node with the lowest weight. Simulations were performed in iFogSim2. The simulation outcomes show that the suggested framework has a superior performance in reducing the usage of energy, latency, and network utilization when weighed against CHTM, LBS, and FNPA models. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Sensors II)
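
The node-selection idea can be illustrated with a short, hypothetical Python sketch: each fog node receives a weight derived from the expected energy consumption and latency of executing the task, and the node with the lowest weight is chosen. The linear combination and the alpha parameter are assumptions made for illustration; the paper's exact weighting formula may differ.

```python
# Illustrative sketch (not the paper's formula): weight each fog node by
# normalized expected energy and latency, then pick the lowest weight.
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    expected_energy_j: float    # estimated energy to execute the task (joules)
    expected_latency_ms: float  # estimated completion latency (milliseconds)

def select_node(nodes, alpha: float = 0.5) -> FogNode:
    """Return the node minimizing a weighted sum of normalized energy and latency."""
    max_e = max(n.expected_energy_j for n in nodes)
    max_l = max(n.expected_latency_ms for n in nodes)

    def weight(n: FogNode) -> float:
        return alpha * n.expected_energy_j / max_e + (1 - alpha) * n.expected_latency_ms / max_l

    return min(nodes, key=weight)

nodes = [FogNode("fog-1", 2.0, 40.0), FogNode("fog-2", 1.2, 55.0), FogNode("fog-3", 3.1, 25.0)]
print(select_node(nodes).name)  # -> fog-1 for these example values
```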

18 pages, 5368 KiB  
Article
Auto-Denoising for EEG Signals Using Generative Adversarial Network
by Yang An, Hak Keung Lam and Sai Ho Ling
Sensors 2022, 22(5), 1750; https://doi.org/10.3390/s22051750 - 23 Feb 2022
Cited by 12 | Viewed by 5907
Abstract
The brain–computer interface (BCI) has many applications in various fields. In EEG-based research, an essential step is signal denoising. In this paper, a generative adversarial network (GAN)-based denoising method is proposed to denoise the multichannel EEG signal automatically. A new loss function is defined to ensure that the filtered signal can retain as much effective original information and energy as possible. This model can imitate and integrate artificial denoising methods, which reduces processing time; hence it can be used for a large amount of data processing. Compared to other neural network denoising models, the proposed model has one more discriminator, which always judges whether the noise is filtered out. The generator is constantly changing the denoising way. To ensure the GAN model generates EEG signals stably, a new normalization method called sample entropy threshold and energy threshold-based (SETET) normalization is proposed to check the abnormal signals and limit the range of EEG signals. After the denoising system is established, although the denoising model uses the different subjects’ data for training, it can still apply to the new subjects’ data denoising. The experiments discussed in this paper employ the HaLT public dataset. Correlation and root mean square error (RMSE) are used as evaluation criteria. Results reveal that the proposed automatic GAN denoising network achieves the same performance as the manual hybrid artificial denoising method. Moreover, the GAN network makes the denoising process automatic, representing a significant reduction in time. Full article
(This article belongs to the Special Issue Artificial Intelligence in Medical Sensors II)
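
The two evaluation criteria named in the abstract, correlation and RMSE between a denoised channel and its reference, can be sketched as follows. The GAN itself is not reproduced; the signal names are illustrative and NumPy is assumed.

```python
# Minimal sketch of the evaluation criteria (Pearson correlation and RMSE)
# for one denoised EEG channel against a reference; names are illustrative.
import numpy as np

def channel_metrics(denoised: np.ndarray, reference: np.ndarray):
    """Return (Pearson correlation, RMSE) for one EEG channel."""
    corr = np.corrcoef(denoised, reference)[0, 1]
    rmse = np.sqrt(np.mean((denoised - reference) ** 2))
    return corr, rmse

# Toy usage on a synthetic 1 s, 250 Hz channel with additive noise.
t = np.linspace(0, 1, 250, endpoint=False)
clean = np.sin(2 * np.pi * 10 * t)             # 10 Hz reference rhythm
noisy = clean + 0.2 * np.random.randn(t.size)  # simulated residual noise
print(channel_metrics(noisy, clean))
```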
