Bioelectronic Technologies and Artificial Intelligence for Medical Diagnosis and Healthcare


Introduction
The application of electronic findings to biology and medicine has significantly impacted health and wellbeing. Recent technological advances have allowed the development of new systems that can provide diagnostic information on portable point-of-care devices or smartphones. The shrinking of electronics technologies down to the atomic scale and the advances in system, cell, and molecular biology have the potential to increase the quality and reduce the costs of healthcare.
Clinicians have pervasive access to new data from complex sensors, imaging tools, and a multitude of other sources, including personal health e-records and smart environments. Humans are far from being able to process this unprecedented volume of available data without advanced tools. Artificial intelligence (AI) can help clinicians to identify patterns from this huge amount of data to inform better choices for patients.
In this Special Issue, some original research papers focusing on recent advances have been collected, covering novel theories, innovative methods, and meaningful applications that could potentially lead to significant advances in the field.

Why Artificial Intelligence
All over the world, healthcare costs are increasing due to, among other factors, population ageing in high-income countries and wider population coverage in lower-income ones.
For instance, in 2018 (i.e., the last meaningful data before the COVID-19 pandemic), according to the World Bank [1], the worldwide per capita health expenditure was $1700 (vs. $500 in 2000), although a huge disparity remains between high-income countries and low- or middle-income countries (LMICs), which is where the majority of the global population lives. In fact, in the highest-spending countries, the 2018 per capita health expenditure reached $10,600. Therefore, new strategies are needed to fill the gap between LMICs and high-income countries. Per capita health expenditure is not the best proxy for quality and equity, as demonstrated by the comparison between the USA and Europe, where the healthcare system is quite effective. Nonetheless, the discrepancy among different regions requires a rethinking of healthcare services. It will take decades and billions of dollars to harmonize the number and quality of infrastructures (e.g., hospitals), healthcare professionals (e.g., Africa has the lowest number of specialized doctors per capita and suffers from a huge brain drain), and lifestyles (e.g., junk food or smoking in LMICs vs. Europe). Conversely, the introduction of AI to support healthcare can be much faster and more affordable. However, several challenges remain open, including the poor understanding of healthcare service organization among the community of AI experts, whose limited contribution to evidence generation still hinders the spread of AI.
Often, chronic diseases are diagnosed late, with poor referral. Medical errors still occur and are recognized late, while their timely perception would prevent unnecessary deaths. For instance, medical errors cause 49,000 to 98,000 deaths in U.S. hospitals each year. A Johns Hopkins study found that up to 40,500 patients die each year in intensive care units in the United States due to misdiagnosis [2].
In this complex scenario, the international scientific community and policy makers are exploring whether artificial intelligence can improve healthcare quality while containing costs and risks.
The use of artificial intelligence in medicine dates back to the 1960s. Several attempts to support medical diagnosis using artificial intelligence were made then, for instance, to identify dangerous bacteria and suggest appropriate therapy. Interestingly, that system was able to propose good therapy with better performance than infectious disease experts [3].
Other attempts were made in the field of primary care to diagnose a disease (among thousands) based on the symptoms reported by the patient (also among thousands), each of which could lead to suspicion of different pathologies [4]. However, these algorithms have been very rarely used in clinical practice.
From the 1990s, neural networks began to gain popularity. Although these models shared many elements with the current ones, there were several issues. For instance, structured data were collected manually, leading to few available features and a limited number of samples for each study, which rendered a machine learning approach ineffective [5]. Therefore, these models did not fit well into clinical workflows, as they could not rely on datasets of sufficient size to train the network and they were not generalizable to an acceptable degree.

The New Bet
In nearly 60 years, many objectives have remained unchanged, but we should ask ourselves what can still encourage this research, which largely failed in previous decades. Is there a greater chance of success now?
One of the key factors is the unprecedented availability of high-quality data. While in the past the main applications of artificial intelligence in medicine were knowledge-based, more recently data-driven approaches have been growing fast [3,4].
Today, the potential availability of data for research is exponentially growing, thanks, for example, to the spread of electronic health records [6]. This represents a new opportunity to design systems based on machine learning algorithms using structured data in electronic format. In fact, new types of patient health records may support physicians in shifting from a medical practice based on their personal experience towards data-driven evidence-based medicine [7]. These new records, allowing patients and care providers to share medical data and clinical information and access them whenever they need, can be considered enabling ambient assisted living technologies [8].
The growing interest around these new technologies poses serious questions regarding data integrity and transaction security. The huge amount of sensitive data stored in these new records attracts the interest of malicious hackers; therefore, it is necessary to guarantee the integrity and the security of servers and transactions. Blockchain technology can be an important turning point in the development of personal health records: in [7], the authors discuss some issues regarding the management and protection of health data exchanged through new medical or diagnostic devices.
In addition to medical record data, a growing amount of data of considerable interest also in clinical practice has become available in digital form from prescriptions and medical reports, including structured imaging, laboratory tests, genomics and proteomics data, and biological data.
In recent years, the performance of machine-learning-based algorithms has been continuously improving and, increasingly, outperforming human skills on specific problems. For example, the collaboration of international researchers in object detection has allowed great progress through competition and the sharing of the rather expensive labeling effort [9]. Of course, the research that led to those advances in object recognition could lead to some improvement in health care, but only up to a point. In those contexts, the availability of big data and large training sets, which are fundamental for machine learning, may allow for the development of reliable algorithms, in particular convolutional neural networks, which have played a huge role. In the healthcare sector, however, despite a large amount of data, we have a limited amount of labeled data, and much more expertise is required to label medical data than everyday objects. In any case, the greater availability of digital data has also resulted in the dissemination of numerous datasets for research, in many cases publicly available (e.g., the PhysioNet platform). Another important contribution comes from the standardization of health data (diseases and symptoms) and drugs.

Let Us See in What Respects Machine Learning Can Transform Healthcare
According to common sense, the first example for relevance and criticality in the health sector would seem to be the emergency room, where the famous 'golden hour' can make the difference between life and death. The emergency room is an interesting clinical context because, from the moment a patient enters the hospital, a diagnosis must be made and therapy started within a very short period of time. A context of this type is typically understaffed, requiring very critical decisions to be made in a short time. So, this is an example where 'intelligent' support systems could potentially be of great help. One can imagine a system that can 'reason' about what is happening to the patient based on the available data (e.g., symptoms) and on the patient's medical history, possibly extracted from the electronic medical record.
The support system should not solely focus on diagnosis, although an early diagnosis would be very useful, but also on a number of other very sensitive issues, for example, a better triage, to understand which patients can be seen first. In fact, an early detection of adverse events, or even highlighting some unusual actions that could lead to medical errors, would be of great support.
Equally interesting would be to work on the diagnosis oriented to chronic pathologies. For example, systems could be dedicated to the analysis of images or signals, with the aim of reducing the need for specialized consultations. In fact, while it is easier and timelier to perform an instrumental test on a patient, it is more complex to obtain specialist advice. For example, performing an X-ray is simple and immediate, but having a radiologist available, in non-urgent contexts, can be expensive or time-consuming. In some nonurgent cases, it may take several days before obtaining a radiological report. The radiology sector has standardized data. A huge amount of freely available labeled chest radiographs can be used to design and develop machine learning image analysis algorithms using convolutional neural networks (CNNs), which have given a strong boost to automatic object recognition [10]. In this case, it would be possible to quickly highlight the suspicion of anomalies in the images and then generate an alert for the doctor.
The prediction of the progression of a chronic disease and precision medicine certainly pose great challenges for the future. Understanding how patients' disease will progress and when that progression will occur could be extremely beneficial, both for the patient and for the healthcare system. For many conditions, there are several different treatments, but it is not possible to know in advance which treatments work best for which patients. If we had predictive algorithms, perhaps based on the results of blood tests, the search for RNA to ascertain the patient's gene expression (for example, from a bone marrow sample) and so on, we could try to predict the patient's response to different treatments. This type of information would be fundamental in the targeted choice of patient therapy [11,12].

What Makes the Application of Algorithms in Medicine Different?
We have seen some examples in which machine learning can have interesting applications in the healthcare sector; we should also look at the peculiarities of the algorithms used in healthcare.
First, healthcare can be about life-or-death decisions for the patient, so we need robust algorithms that do not fail, for example, those tested with formal methods. The problem is that it is not easy to apply formal tests on machine learning algorithms. As we have seen, deep learning in recent years has allowed for the implementation of algorithms with excellent performance in many cases, but this gain in accuracy is offset by a loss in terms of transparency and control. Neural networks are able to perform their tasks well, but it is not easy to control the large number of neurons and parameters responsible for the decisions made by them. This lack of visibility has raised several concerns in the healthcare sector, and in other sectors in which a wrong decision made by an algorithm can cause significant damage. There are many advantages to understanding how or why an AI-enabled system has led to a specific output. Explainability can help developers ensure that the system is working as expected; it might be necessary to meet regulatory standards, or it might be important in allowing those affected by a decision to challenge or change that outcome [13].
Furthermore, sometimes very little data are available: for example, for some rare diseases there are only a few tens or hundreds of cases in the world. In these cases, the important problem for machine learning is learning from imbalanced datasets, where some classes contain a lot of data while minority classes contain very few samples.
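One common mitigation is to weight the training loss inversely to class frequency, so that a rare class contributes as much as a common one. The minimal sketch below (the label names and counts are hypothetical, not taken from any of the cited studies) computes such weights:

```python
# Sketch: inverse-frequency class weights for an imbalanced dataset.
from collections import Counter

def class_weights(labels):
    """Weight each class inversely to its frequency so that rare
    classes (e.g., a rare disease) contribute equally to the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * count) for c, count in counts.items()}

# Hypothetical dataset: 95 healthy samples, 5 rare-disease samples.
labels = ["healthy"] * 95 + ["rare_disease"] * 5
w = class_weights(labels)
# The minority class receives a much larger weight than the majority one.
```

Frameworks such as scikit-learn expose the same idea through a `class_weight="balanced"` option.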
Finally, the last issue we want to mention is privacy and security. We have already discussed the difficulty of obtaining access to data. Another reason, perhaps the most important in recent years, is that obtaining large amounts of public domain data is difficult because some data are sensitive. In many cases, it is easy to anonymize the data, but in many other cases (paper laboratory test results, handwritten prescriptions, nominative imaging, etc.) anonymization is difficult. All this, of course, is a major obstacle to research.

This Special Issue
In this Special Issue, some original research papers focusing on recent advances have been collected, covering novel theories, innovative methods, and meaningful applications that could potentially lead to significant advances in healthcare.

Cell Studies
Some studies on the diagnosis of widespread pathologies through the analysis of the cells of the nasal mucosa are truly innovative. In recent years, cytological observations in the rhinology field have become increasingly common. For example, this development has proven to be important in driving changes to the previous classification of rhinitis. The simplicity of the technique makes nasal cytology a practical diagnostic tool for all rhino-allergology services. Microscopic observation requires prolonged effort by a specialist, but modern scanning systems for cytological preparations and new affordable digital microscopes make it possible to design a software support system, based on deep learning techniques, to relieve the specialist of this tiring activity. By means of the system presented in [14], it is possible to automatically identify and classify the cells present on a nasal cytological preparation based on a digital image of the preparation itself. Thus, an interesting diagnostic support has been made available to the rhino-cytologist, who can quickly verify that the cells have been correctly classified by the software system. Image processing and image segmentation techniques have been used to find cellular elements within the preparation, while the automated detection and classification of cells benefit from the capacity of deep learning techniques to process digital images of the cytological preparation. Specifically, this paper aims to investigate the possible differences between direct smear (SM) and cytological centrifugation (CYT) slide-preparation techniques, in order to preserve image quality during the observation and cell classification phases in rhino-cytology. A comparative study based on image analysis techniques has been put forward. The extraction of densitometric and morphometric features has made it possible to quantify and describe the spatial distribution of the cells in the field images observed under the microscope.
Statistical analysis of the distribution of these features has been used to evaluate the degree of similarity between images acquired from SM and CYT slides. The results prove an important difference in the observation process of the cells prepared with the above-mentioned techniques, with reference to cell density and spatial distribution: the analysis of CYT slides has been more difficult than that of the SM ones due to the spatial distribution of the cells, which results in a lower cell density than in the SM slides.
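As a toy illustration of a densitometric feature of this kind, one could count the connected foreground regions in a binarized microscope field and compare the resulting cell density between SM and CYT slides. The sketch below is purely illustrative (the actual feature extraction of [14] is not reproduced here) and labels regions with a simple breadth-first search:

```python
import numpy as np
from collections import deque

def count_cells(mask):
    """Count connected foreground regions (4-connectivity) in a binary
    mask -- a minimal stand-in for a 'cells per field' density feature."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    count = 0
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        count += 1                      # new, unvisited cell region
        q = deque([(r, c)])
        seen[r, c] = True
        while q:                        # flood-fill the whole region
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return count

# Synthetic field with two separate "cells".
mask = np.zeros((6, 6), dtype=int)
mask[0:2, 0:2] = 1
mask[4:6, 3:6] = 1
n = count_cells(mask)
```

Dividing such a count by the field area gives a per-field cell density directly comparable across slide-preparation techniques.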
Equally important is discovering any infections during slide analysis, based on biofilm detection. A biofilm is a thin layer of microorganisms coating a surface, linked by an extracellular matrix made of polysaccharides synthesized by the microorganisms themselves. Bacteria are involved: if pathogenic, these microorganisms pose a risk to patient health. In the field of nasal cytology, the presence of biofilm in microscopic samples denotes the presence of an infection. In [15], the authors describe the design and testing of an interesting diagnostic support system for the automatic detection of biofilm, based on a convolutional neural network. Texture analysis is used, with Haralick feature extraction and dominant color extraction. The CNN-based biofilm detection system shows high accuracy and is confirmed as the most reliable among the best automatic image recognition technologies in the specific context of that study. The developed system allows the specialist to obtain rapid and accurate identification of the biofilm in the slide images.
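For context, a Haralick texture feature such as 'contrast' is derived from a gray-level co-occurrence matrix (GLCM). The NumPy sketch below illustrates the general technique, not the pipeline of [15]: it builds a GLCM for horizontal neighbors, computes contrast, and uses the most frequent quantized intensity as a simple stand-in for the dominant color:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Haralick 'contrast' from a gray-level co-occurrence matrix
    (horizontal neighbors, distance 1). img: 2-D ints in [0, levels)."""
    glcm = np.zeros((levels, levels), dtype=float)
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1                 # tally co-occurring pairs
    glcm /= glcm.sum()                  # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())

def dominant_gray(img, levels=8):
    """Most frequent quantized intensity (stand-in for 'dominant color')."""
    return int(np.bincount(img.ravel(), minlength=levels).argmax())

flat = np.zeros((4, 4), dtype=int)       # uniform texture -> zero contrast
stripes = np.tile([0, 7], (4, 2))        # alternating stripes -> contrast 49
```

Libraries such as scikit-image provide the full set of Haralick descriptors (energy, homogeneity, correlation, etc.) on top of the same co-occurrence matrix.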
Furthermore, among the cells of the population of the nasal mucosa, ciliated cells are particularly important. In fact, the observation of these cells is essential to investigate primary ciliary dyskinesia, a rare and severe disease associated with other serious conditions such as respiratory diseases, situs inversus, heart disease, and male infertility. Assessment of ciliary function through measurement of the ciliary beating frequency (CBF) is usually required to facilitate diagnosis.
Measuring the ciliary beating frequency manually is practically unfeasible. For this reason, in [16] the authors designed a low-cost and easy-to-use system, based on image processing techniques, with the aim of automatically measuring the CBF. That system performs cell region-of-interest detection based on dense optical flow computation; cell body masking, which focuses on the cilia movement by taking advantage of the structural characteristics of the ciliated cell; and CBF estimation, by applying a fast Fourier transform to extract the frequency with the peak amplitude. The experimental results show that it offers a reliable and fast CBF estimation method and can efficiently run on a consumer-grade smartphone. It can support rhinocytologists during cell observation, significantly reducing their efforts.
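The FFT step of such a pipeline can be illustrated in a few lines: given a per-frame intensity signal sampled at the camera frame rate, the CBF is taken as the frequency with the peak spectral amplitude. This is a self-contained sketch of the general idea, not the exact implementation of [16]:

```python
import numpy as np

def estimate_cbf(signal, fps):
    """Estimate ciliary beating frequency (Hz) as the FFT peak of a
    per-frame intensity signal sampled at `fps` frames per second."""
    signal = np.asarray(signal, dtype=float)
    signal -= signal.mean()                       # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return float(freqs[np.argmax(spectrum)])

# Synthetic check: a 10 Hz oscillation sampled at 120 fps for 2 s.
fps, secs = 120, 2
t = np.arange(fps * secs) / fps
cbf = estimate_cbf(np.sin(2 * np.pi * 10 * t) + 0.1, fps)
```

The frequency resolution is 1/duration (here 0.5 Hz), so longer recordings give finer CBF estimates.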

The Problem of Anemia Requires New Diagnostic Tools
Affecting a quarter of the world's population, anemia is one of the top global health problems. It is mainly caused by nutritional factors, infectious diseases, or genetic factors. Severe anemia can compromise the availability of oxygen supplied to the cells and cause damage to vital organs. A correct diagnosis can be performed by measuring hemoglobin concentration through blood cell counting. Often, the use of invasive methods is not recommended, for example, in the case of infants, the elderly, pregnant women, anemic patients, and patients with sickle cell disease. In addition, frequent blood sampling creates significant discomfort for the patient and is quite expensive, especially in areas of the world with limited economic resources. For this reason, it is of great interest to study methods and design tools that allow hemoglobin concentration to be monitored non-invasively and at reduced cost, both in the laboratory and at home, sometimes even daily.
There is growing interest in non-invasive methods for monitoring and identifying potential risk of anemia [17,18]. Smartphone-based devices are promising in addressing this pathology. However, many critical issues must be discussed, and open problems solved as the authors of paper [19] underline.
Many studies show interest in the pallor of the exposed tissues of the human body to estimate anemia. Pallor is characterized by a lack of color in the skin and mucous membranes due to a low level of circulating hemoglobin. This may be evident on the entire body but is easily observed in areas where blood vessels are close to the surface, such as the palm, the nail bed and mucous membranes such as the tongue or conjunctivae [20,21].
There are numerous non-invasive methods and tools that indirectly measure the value of hemoglobin in the blood and the level of oxygen in human tissues. These include techniques such as photoplethysmography, reflectance spectroscopy, and fluorescence spectroscopy of oral tissue, but many of them are not affordable and are often not available as portable or wearable technologies.
The aim of paper [19] is to highlight some issues that seem worthy of common discussion and of synergistic agreements. The authors discuss the economic and social implications related to these technologies and then, focusing on the more scientific aspects, highlight which exposed human tissues, subject to clinical analysis, are privileged regions for estimating anemia.
In conclusion, a detailed discussion on the critical aspects is reported, which is very interesting for researchers who intend to study new methods and devices.

Medical Image Segmentation
One of the most challenging topics cited in papers [19] and [22] concerns the correct segmentation of the conjunctiva region of interest. Many papers have been published on this topic, as underlined in paper [23]. The aim of that paper is to perform segmentation of the conjunctiva region for non-invasive anemia detection applications using deep learning. The proposed U-Net-based conjunctiva segmentation model uses a fine-tuned U-Net architecture for effective semantic segmentation of the conjunctiva from digital eye images captured by consumer-grade cameras in an uncontrolled environment. The experimentation showed good results, with an intersection-over-union (IoU) score between the ground truth and the segmented mask comparable to existing approaches. This work once again highlights how versatile the use of deep learning techniques in healthcare can be.
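The intersection-over-union metric used to compare a predicted mask against the ground truth is straightforward to compute. A minimal sketch on binary masks (illustrative, independent of the model in [23]):

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union between two binary segmentation masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0   # two empty masks match exactly

# Two 6x6 square masks offset by one pixel.
truth = np.zeros((10, 10), bool); truth[2:8, 2:8] = True
pred = np.zeros((10, 10), bool);  pred[3:9, 3:9] = True
score = iou(pred, truth)
```

An IoU of 1.0 means a perfect match; scores above roughly 0.5 are commonly treated as acceptable overlap in segmentation benchmarks.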
Speaking of segmentation, another extremely interesting study is reported in paper [24]. The histological assessment of glomeruli is fundamental for determining if a kidney is suitable for transplantation. The Karpinski score is essential to evaluate the need for a single or dual kidney transplant and includes the ratio between the number of sclerotic glomeruli and the overall number of glomeruli in a kidney section. The manual evaluation of kidney biopsies performed by pathologists is time-consuming and error-prone, so an automatic framework to delineate all the glomeruli present in a kidney section can be very useful.
Deep learning techniques are very promising for the segmentation of glomeruli, and a variety of approaches exist. However, many methods focus only on semantic segmentation, which classifies individual pixels without separating individual instances, or ignore the problem of discriminating between non-sclerotic and sclerotic glomeruli, so these approaches are suboptimal or inadequate for transplantation assessment.
In paper [24], the authors employed an end-to-end, fully automatic approach based on Mask R-CNN for instance segmentation and classification of glomeruli. With respect to the existing literature, they improved the Mask R-CNN approach in sliding-window contexts by employing a variant of the non-maximum suppression algorithm, which they called non-maximum-area suppression. The proposed method is very promising for instance segmentation and classification of glomeruli, and allows a robust evaluation of global glomerulosclerosis.
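The exact non-maximum-area suppression algorithm of [24] is not reproduced here, but its starting point, greedy suppression of overlapping detections, can be sketched as follows. As an assumption for illustration, overlap is measured as intersection over the smaller box's area rather than the usual IoU:

```python
import numpy as np

def nm_area_suppression(boxes, scores, thresh=0.5):
    """Greedy suppression sketch: keep the highest-scoring box and drop
    boxes whose overlap, measured as intersection / smaller-box area,
    exceeds `thresh`. Boxes are (x1, y1, x2, y2). The exact
    'non-maximum-area suppression' of [24] may differ."""
    boxes = np.asarray(boxes, dtype=float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = np.argsort(scores)[::-1]          # best score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # Intersection of box i with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        smaller = np.minimum(areas[i], areas[order[1:]])
        order = order[1:][inter / smaller <= thresh]
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 9, 9], [20, 20, 30, 30]]
kept = nm_area_suppression(boxes, [0.9, 0.8, 0.7])
```

Normalizing by the smaller area (instead of the union) suppresses detections nested inside larger ones, which is useful when a sliding window produces partial duplicates of the same glomerulus.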

Investigation of the Cardiovascular System
Moving to the cardiovascular system, two interesting papers have been collected in this Special Issue. With the advances in the field of biomedical imaging, digital images play a vital role in the early detection of abnormalities or diseases in the human body. Many intricate systems exist in the human body, such as the nervous, cardiac, and endocrine systems, that are important for survival. Of these, the cardiac system is considered one of the most delicate.
The echocardiogram plays a crucial role in the diagnosis of cardiac diseases and is probably the most frequently used tool in the cardiac field. It is a simple, non-invasive, and inexpensive technique that can precisely show the pressure gradient of heart lesions. Since it uses sound waves instead of radiation, echo is considered safe. Echo uses standard two-dimensional (2D), three-dimensional (3D), and Doppler ultrasound to create images of the heart. In "Deep Learning Methods for Classification of Certain Abnormalities in Echocardiography" [25], the authors experiment with deep learning methodologies in echocardiography. They deal with two different kinds of classification: first, the absence or presence of abnormalities is classified using 2D echo images, 3D Doppler images, and videographic images; then, classification based on mitral, aortic, and tricuspid regurgitation, and a combination of the three types, is performed using videographic echo images.
Two deep-learning methodologies have been used for these purposes: a recurrent neural network-based methodology (long short-term memory, LSTM) and a variational autoencoder (VAE) methodology. The authors found that the deep-learning methodologies perform better than the SVM methodology in normal/abnormal classification. Overall, the VAE performs better on 2D and 3D Doppler images (static images), while the LSTM performs better in the case of videographic images.
A different approach to assessing heart health specifically is the one discussed in the paper 'Advances of ECG Sensors from Hardware, Software, and Format Interoperability Perspectives' [26]. Since fast, prompt, and accurate interpretation and decisions are important in saving patients' lives from sudden heart attack or cardiac arrest, many innovations have been made to ECG sensors, yet traditional ECG sensors still prevail in clinical settings. A comprehensive survey on ECG sensors from the hardware, software, and data format interoperability perspectives is presented there. The hardware perspective outlines a general hardware architecture of an ECG sensor along with a description of its hardware components. The software perspective describes various techniques (denoising, machine learning, deep learning, and privacy preservation) and other computing paradigms used in software development and deployment for ECG sensors. Finally, the format interoperability perspective offers a detailed taxonomy of current ECG formats and the relationships among them. Overall, this paper is very helpful for researchers in identifying room for future improvements from the three perspectives (hardware, software, and format interoperability) and facilitates the development of modern ECG sensors that are suitable and approved for adoption in real clinical settings.
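As a trivial example of the denoising techniques such surveys cover, a moving-average filter smooths a raw ECG trace (illustrative only; clinical pipelines typically rely on band-pass or wavelet filtering):

```python
import numpy as np

def moving_average(signal, window=5):
    """Simple moving-average smoothing of a 1-D signal, one of the most
    basic denoising steps in ECG preprocessing."""
    kernel = np.ones(window) / window            # uniform averaging kernel
    return np.convolve(signal, kernel, mode="same")

# A constant signal passes through unchanged away from the edges.
smooth = moving_average(np.ones(10), window=5)
```

The trade-off is classic: a wider window removes more high-frequency noise but also blurs the sharp QRS complex, which is why practical ECG filters are frequency-selective rather than uniform.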
Last but not least, in this Special Issue the theme of a 'New Bi-Directional Solenoid Actuator for Active Locomotion Capsule Robots' was included [27].
In recent years, several technologies have been studied to promote the development of capsule endoscopes for inspecting small intestinal diseases. The endoscopic capsule, as a revolutionary medical device, was proposed to achieve advanced diagnostic and treatment functionalities for the small bowel tract, which conventional tethered flexible endoscopes can only partially access. Although some commercial capsule products have already been applied in clinical practice, they are propelled through the gastrointestinal tract in a passive manner, purely depending on the peristaltic contractions of the digestive lumen. Because the conventional capsule endoscope lacks an active and controllable locomotion mechanism to steer the scope toward desired areas of the gastrointestinal tract for accurate and repeated inspections, false negative results may occur, reducing its effectiveness and reliability in clinical practice. A new bi-directional, simple-structured solenoid actuator for active locomotion capsule robots (CRs) is investigated in paper [27]. This actuator consists of two permanent magnets (PMs) attached to the two ends of the capsule body and an inner vibrating mass formed by a solenoidal coil with an iron core. The proposed CR, designed as a sealed structure without external legs, wheels, or caterpillars, can achieve both forward and backward motion driven by the internal collision force. This new design concept has been successfully confirmed on a capsule prototype. The measured displacements show that its movement can be easily controlled by changing the amplitude and frequency of the current supplied to the solenoid actuator. To validate the new bi-directional CR prototype, various experimental and finite element analysis results are presented in that paper.
The experimental results also demonstrated that this prototype can be easily controlled to achieve different forward/backward displacements and traveling speeds, while it should be tested in a more complex environment that can mimic the conditions inside the human body.

Conclusions
The research field of 'Bioelectronic Technologies and Artificial Intelligence for Medical Diagnosis and Healthcare' is growing fast. However, research findings are still difficult to integrate into computer-aided diagnosis (CAD) systems to support healthcare. Nevertheless, as the papers presented in this Special Issue and in many other similar issues show, many benefits can be drawn from increasing digitalization and from publishing repositories with labeled data. In this scenario, artificial intelligence and deep learning methods have the potential to provide efficient solutions to many medical problems.
In order to speed up the 'time-to-market', collaborations between researchers, institutions, funders, and entrepreneurs are always welcome.