Machine Learning and Signal Processing in Sensing and Sensor Applications

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Sensing and Imaging".

Viewed by 40461

Editors

Department of Computer Science, University of Salerno, 84084 Fisciano, Italy
Interests: soft computing algorithms; data mining and machine learning; deep learning; knowledge discovery; optimization problems; pervasive computing; trustworthiness modeling; high performance machines; parallel computing; big data analytics
Department of Computer Science, Università degli Studi di Salerno, 84084 Fisciano, Italy
Interests: cryptography; information/data security; computer security; digital watermarking; cloud computing

Topical Collection Information

Dear Colleagues,

In recent decades, machine learning (ML) technologies have made it possible to collect, analyze, and interpret large amounts of sensory information. As a result, a new era of intelligent sensors is emerging that is changing the way we perceive and understand the world. The integration of ML algorithms with artificial intelligence (AI) technology also benefits areas such as Industry 4.0 and the Internet of Things. By leveraging these two technologies, it is possible to design sensors tailored to specific applications. To this end, signal data, such as electrical signals, vibrations, sounds, and accelerometer signals, as well as other kinds of sensory data, such as images and numerical data, need to be analyzed and processed by real-time algorithms to mine useful insights and to embed these algorithms in sensors.

This Topical Collection calls for innovative work that explores new frontiers and challenges in applying ML/AI technologies and algorithms to high-sample-rate sensors. It welcomes new ML and AI models and hybrid systems, as well as case studies and reviews of the state of the art.

The topics of interest include, but are not limited to, the following:

  • ML algorithms in smart sensor systems
  • AI models in smart sensor systems
  • ML/AI-enabled smart sensor systems
  • Practical smart-sensor applications
  • Practical smart-sensing systems
  • Health and disease data management
  • Medical image diagnosis and analysis
  • Biology data analysis
  • Smart visual imaging sensing systems
  • Object detection and recognition
  • Smart sensors for environmental pollution management
  • Smart sensors for precision agriculture and food science
  • Big data analytics for sensor data
  • Intelligent real-time algorithms for sensor data
  • Features for signal classification
  • Feature discovery
  • Applications of AI and ML in sensor domains: energy, IoT, Industry 4.0, etc.

Dr. Gianni D’Angelo
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (15 papers)

2024

18 pages, 5064 KiB  
Article
A Machine-Learning Strategy to Detect Mura Defects in a Low-Contrast Image by Piecewise Gamma Correction
by Zo-Han Lin, Qi-Yuan Lai and Hung-Yuan Li
Sensors 2024, 24(5), 1484; https://doi.org/10.3390/s24051484 - 24 Feb 2024
Viewed by 455
Abstract
A detection and classification machine-learning model for inspecting Thin Film Transistor Liquid Crystal Display (TFT-LCD) Mura is proposed in this study. To improve the capability of the machine-learning model to inspect panels' low-contrast grayscale images, piecewise gamma correction and a Selective Search algorithm are applied to detect and optimize the feature regions based on the Semiconductor Equipment and Materials International Mura (SEMU) specifications. In this process, matching the segment proportions to the gamma values of the piecewise correction is a derivative-free optimization task, which is solved by adaptive particle swarm optimization. The detection accuracy rate (DAR) is approximately 93.75%. An enhanced convolutional neural network model is then applied to classify the Mura type, using the Taguchi experimental design method to identify the optimal combination of convolution kernel and maximum pooling kernel sizes. A remarkable defect classification accuracy rate (CAR) of approximately 96.67% is ultimately achieved. The entire defect detection and classification process can be completed in about 3 milliseconds.
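
A minimal sketch of piecewise gamma correction as described above, applied to a normalized grayscale image with NumPy; the breakpoints and gamma values below are illustrative placeholders, whereas the paper tunes the segment proportions and gammas with adaptive particle swarm optimization.

    import numpy as np

    def piecewise_gamma(img, breakpoints=(0.0, 0.33, 0.66, 1.0),
                        gammas=(0.6, 1.0, 1.8)):
        """Apply a separate gamma curve inside each intensity segment of a
        grayscale image normalized to [0, 1]; the mapping stays continuous
        at the segment boundaries."""
        img = np.clip(img.astype(np.float64), 0.0, 1.0)
        out = np.empty_like(img)
        for (a, b), g in zip(zip(breakpoints[:-1], breakpoints[1:]), gammas):
            mask = (img >= a) & (img <= b)
            out[mask] = a + (b - a) * (((img[mask] - a) / (b - a)) ** g)
        return out

    # Example: stretch a synthetic low-contrast grayscale panel image
    panel = np.clip(0.5 + 0.02 * np.random.randn(256, 256), 0.0, 1.0)
    enhanced = piecewise_gamma(panel)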

2023

21 pages, 5199 KiB  
Article
Automatic Life Detection Based on Efficient Features of Ground-Penetrating Rescue Radar Signals
by Di Shi, Gunnar Gidion, Leonhard M. Reindl and Stefan J. Rupitsch
Sensors 2023, 23(15), 6771; https://doi.org/10.3390/s23156771 - 28 Jul 2023
Cited by 1 | Viewed by 689
Abstract
Good feature engineering is a prerequisite for accurate classification, especially in challenging scenarios such as detecting the breathing of living persons trapped under building rubble using bioradar. Unlike monitoring patients' breathing through the air, the measuring conditions of a rescue bioradar are very complex. The ultimate goal of search and rescue is to determine the presence of a living person, which requires extracting representative features that can distinguish between measurements with and without a person present. To address this challenge, we conducted a bioradar test scenario under laboratory conditions and decomposed the radar signal into different range intervals to derive multiple virtual scenes from the real one. We then extracted physical and statistical quantitative features that represent a measurement, aiming to find those features that are robust to the complexity of rescue-radar measuring conditions, including different rubble sites, breathing rates, signal strengths, and short-duration disturbances. To this end, we utilized two methods, Analysis of Variance (ANOVA) and Minimum Redundancy Maximum Relevance (MRMR), to analyze the significance of the extracted features. We then trained the classification model using a linear-kernel support vector machine (SVM). As the main result of this work, we identified an optimal set of four features based on the feature ranking and the improvement in the classification accuracy of the SVM model. These four features are related to four different physical quantities and are independent of the rubble site.
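
For readers who want to reproduce the general workflow (not the paper's exact features), the sketch below ranks placeholder features with ANOVA F-scores and trains a linear-kernel SVM using scikit-learn; the data are synthetic and the MRMR step is omitted.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Placeholder data: rows are radar measurements, columns are candidate
    # features; y marks presence (1) or absence (0) of breathing.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = rng.integers(0, 2, size=200)

    # Keep the four highest-ranked features (ANOVA F-score), then train a
    # linear-kernel SVM on the reduced feature set.
    model = make_pipeline(
        SelectKBest(score_func=f_classif, k=4),
        StandardScaler(),
        SVC(kernel="linear"),
    )
    scores = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean())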

19 pages, 8157 KiB  
Article
A Data-Driven Based Response Reconstruction Method of Plate Structure with Conditional Generative Adversarial Network
by He Zhang, Chengkan Xu, Jiqing Jiang, Jiangpeng Shu, Liangfeng Sun and Zhicheng Zhang
Sensors 2023, 23(15), 6750; https://doi.org/10.3390/s23156750 - 28 Jul 2023
Viewed by 760
Abstract
Structural-response reconstruction is of great importance for enriching monitoring data to better understand the structural operation status. In this paper, a data-driven structural-response reconstruction approach that generates response data via a convolutional process is proposed. A conditional generative adversarial network (cGAN) is employed to establish the spatial relationship between the global and local response in the form of a response nephogram. In this way, the reconstruction process is independent of the physical modeling of the engineering problem. Validation with a laboratory experiment on a steel frame and an in situ bridge test shows that the reconstructed responses are highly accurate. Theoretical analysis shows that reconstruction accuracy rises as the sensor quantity increases and then levels off once the optimal sensor arrangement is reached.
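
A hedged sketch of what a conditional generator/discriminator pair for this kind of response reconstruction could look like in PyTorch; the layer sizes, the 32x32 "nephogram" resolution, and the 16-dimensional global-response condition are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        """Map a noise vector plus a global-response condition to a local
        response map (a small nephogram); sizes are illustrative."""
        def __init__(self, noise_dim=64, cond_dim=16, out_size=32):
            super().__init__()
            self.out_size = out_size
            self.net = nn.Sequential(
                nn.Linear(noise_dim + cond_dim, 256), nn.ReLU(),
                nn.Linear(256, out_size * out_size), nn.Tanh(),
            )

        def forward(self, z, cond):
            x = torch.cat([z, cond], dim=1)
            return self.net(x).view(-1, 1, self.out_size, self.out_size)

    class ConditionalDiscriminator(nn.Module):
        """Score whether a response map is consistent with the condition."""
        def __init__(self, cond_dim=16, in_size=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_size * in_size + cond_dim, 256), nn.LeakyReLU(0.2),
                nn.Linear(256, 1),
            )

        def forward(self, img, cond):
            return self.net(torch.cat([img.flatten(1), cond], dim=1))

    G, D = ConditionalGenerator(), ConditionalDiscriminator()
    z, cond = torch.randn(8, 64), torch.randn(8, 16)   # cond: measured global responses
    fake = G(z, cond)                                  # (8, 1, 32, 32) local response maps
    score = D(fake, cond)                              # (8, 1) realism scores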

2022

19 pages, 2563 KiB  
Article
A Complement Method for Magnetic Data Based on TCN-SE Model
by Wenqing Chen, Rui Zhang, Chenguang Shi, Ye Zhu and Xiaodong Lin
Sensors 2022, 22(21), 8277; https://doi.org/10.3390/s22218277 - 28 Oct 2022
Cited by 3 | Viewed by 1255
Abstract
The magnetometer is a vital component for attitude measurement of near-Earth satellites and for autonomous magnetic navigation, so monitoring its health is important. However, due to the compact structure of microsatellites, stray magnetic changes caused by the complex working conditions of each subsystem inevitably interfere with magnetometer measurements. In addition, due to the limited capacity of the satellite–ground measurement channels and the telemetry errors caused by the harsh space environment, the magnetic data collected by the ground station are partially missing. Therefore, reconstructing the telemetry data on the ground has become one of the key technologies for establishing a high-precision magnetometer twin model. In this paper, the stray magnetic interference is first eliminated by correcting the installation matrix for different working conditions. Then, the autocorrelation characteristics of the residuals are analyzed, and a TCN-SE (temporal convolutional network with squeeze and excitation) network with long-term memory is designed to model and extrapolate the historical residual data. The mean absolute error (MAE), evaluated on the non-missing data at the corresponding times in the forecast period, decreases to 74.63 nT. These steps realize an accurate mapping from simulated values to actual values, thereby achieving the reconstruction of missing data and establishing a solid foundation for judging the health state of the magnetometer.
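
The sketch below shows one possible TCN-SE building block in PyTorch: a dilated causal convolution followed by squeeze-and-excitation channel reweighting and a residual connection. Channel counts, kernel size, and dilation are illustrative; the paper's actual network layout is not reproduced here.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-excitation: reweight channels by global context."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )

        def forward(self, x):                  # x: (batch, channels, time)
            w = self.fc(x.mean(dim=2))         # squeeze over time
            return x * w.unsqueeze(-1)         # excite channel-wise

    class TCNSEBlock(nn.Module):
        """One dilated causal convolution followed by SE reweighting."""
        def __init__(self, channels, kernel_size=3, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation
            self.conv = nn.Conv1d(channels, channels, kernel_size,
                                  padding=self.pad, dilation=dilation)
            self.se = SEBlock(channels)

        def forward(self, x):
            y = self.conv(x)[..., :-self.pad]    # trim to keep causality
            return torch.relu(self.se(y)) + x    # residual connection

    # Example: a window of residual magnetometer data (batch, channels, time)
    x = torch.randn(8, 16, 128)
    print(TCNSEBlock(channels=16, dilation=2)(x).shape)   # torch.Size([8, 16, 128])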

22 pages, 8574 KiB  
Article
Adaptive Discount Factor for Deep Reinforcement Learning in Continuing Tasks with Uncertainty
by MyeongSeop Kim, Jung-Su Kim, Myoung-Su Choi and Jae-Han Park
Sensors 2022, 22(19), 7266; https://doi.org/10.3390/s22197266 - 25 Sep 2022
Cited by 3 | Viewed by 4569
Abstract
Reinforcement learning (RL) trains an agent by maximizing the sum of discounted rewards. Since the discount factor has a critical effect on the learning performance of the RL agent, it is important to choose it properly. When uncertainties are involved in training, the learning performance with a constant discount factor can be limited. To obtain acceptable learning performance consistently, this paper proposes an adaptive rule for the discount factor based on the advantage function. Additionally, how to use the advantage function in both on-policy and off-policy algorithms is presented. To demonstrate the performance of the proposed adaptive rule, it is applied to PPO (Proximal Policy Optimization) for Tetris to validate the on-policy case, and to SAC (Soft Actor-Critic) for the motion planning of a robot manipulator to validate the off-policy case. In both cases, the proposed method results in better or similar performance compared with the best constant discount factors found by exhaustive search. Hence, the proposed adaptive discount factor automatically finds a discount factor that leads to comparable training performance and can be applied to representative deep reinforcement learning problems.
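
The paper's exact update rule is not given in the abstract, so the snippet below is only an illustrative adaptive rule: it maps the magnitude of recent advantage estimates to a discount factor inside a fixed range.

    import numpy as np

    def adaptive_gamma(advantages, gamma_min=0.90, gamma_max=0.999):
        """Illustrative rule only: shrink the discount factor when recent
        advantage estimates are large/noisy, and grow it toward gamma_max
        as they become small (training has stabilized). The paper's actual
        rule may differ."""
        scale = np.mean(np.abs(advantages))      # magnitude of recent advantages
        return gamma_min + (gamma_max - gamma_min) / (1.0 + scale)

    # Example usage inside a PPO/SAC-style training loop (placeholder data):
    recent_advantages = np.random.randn(2048) * 0.5
    gamma = adaptive_gamma(recent_advantages)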

17 pages, 11455 KiB  
Article
Water Quality Measurement and Modelling Based on Deep Learning Techniques: Case Study for the Parameter of Secchi Disk
by Feng Lin, Libo Gan, Qiannan Jin, Aiju You and Lei Hua
Sensors 2022, 22(14), 5399; https://doi.org/10.3390/s22145399 - 20 Jul 2022
Cited by 2 | Viewed by 2178
Abstract
The Secchi disk is often used to monitor the transparency of water. However, manual measurements are easily affected by subjective experience and environmental conditions, and they are time-consuming. With the rapid development of computer technology, image processing is more objective and accurate than personal observation. A transparency measurement algorithm is proposed that combines deep learning, image processing, and Secchi disk measurement. The white part of the Secchi disk is cropped by image processing. A classification network based on resnet18 is applied to classify the segmentation results and determine the critical position of the Secchi disk. Then, the semantic segmentation network Deeplabv3+ is used to segment the corresponding water gauge at this position, and subsequently to segment the characters on the water gauge. The segmentation results are classified by the resnet18-based classification network. Finally, the transparency value is calculated from the segmentation and classification results. The experiments show the effectiveness of this algorithm, whose results are more accurate and objective than those of personal observation.
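
A rough outline of the two model families named in the abstract, using off-the-shelf torchvision models (resnet18 for classification, DeepLabv3 for segmentation); the backbone choice, class counts, and input sizes are placeholders rather than the authors' configuration.

    import torch
    from torchvision import models

    # Stage 1: a resnet18-based classifier decides, for a given crop, whether
    # the white part of the Secchi disk is still visible.
    classifier = models.resnet18(weights=None)
    classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)
    classifier.eval()

    # Stage 2: a DeepLabv3 network segments the water gauge at the critical
    # position (a second pass could segment the characters on the gauge).
    segmenter = models.segmentation.deeplabv3_resnet50(weights=None, num_classes=2)
    segmenter.eval()

    crop = torch.randn(1, 3, 224, 224)            # placeholder image crop
    with torch.no_grad():
        visible_logits = classifier(crop)         # disk visible / not visible
        gauge_mask = segmenter(crop)["out"]       # (1, 2, 224, 224) segmentation logits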

15 pages, 12301 KiB  
Article
Spatial Attention-Based 3D Graph Convolutional Neural Network for Sign Language Recognition
by Muneer Al-Hammadi, Mohamed A. Bencherif, Mansour Alsulaiman, Ghulam Muhammad, Mohamed Amine Mekhtiche, Wadood Abdul, Yousef A. Alohali, Tareq S. Alrayes, Hassan Mathkour, Mohammed Faisal, Mohammed Algabri, Hamdi Altaheri, Taha Alfakih and Hamid Ghaleb
Sensors 2022, 22(12), 4558; https://doi.org/10.3390/s22124558 - 16 Jun 2022
Cited by 12 | Viewed by 2945
Abstract
Sign language is the main channel for hearing-impaired people to communicate with others. It is a visual language that conveys highly structured components of manual and non-manual parameters and requires considerable effort for hearing people to master. Sign language recognition aims to ease this difficulty and bridge the communication gap between hearing-impaired people and others. This study presents an efficient architecture for sign language recognition based on a graph convolutional network (GCN). The presented architecture consists of a few separable 3DGCN layers, which are enhanced by a spatial attention mechanism. The limited number of layers in the proposed architecture enables it to avoid the common over-smoothing problem in deep graph neural networks. Furthermore, the attention mechanism enhances the spatial context representation of the gestures. The proposed architecture is evaluated on different datasets and shows outstanding results.
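
A hedged sketch of a single graph-convolution layer with a learned spatial-attention term over skeleton joints, in PyTorch; the joint count, feature sizes, and the way attention is added to the adjacency matrix are illustrative assumptions, not the paper's separable 3DGCN design.

    import torch
    import torch.nn as nn

    class SpatialAttentionGCNLayer(nn.Module):
        """One graph-convolution layer over skeleton joints with a learned
        spatial-attention mask added to the adjacency matrix."""
        def __init__(self, in_feats, out_feats, num_joints):
            super().__init__()
            self.fc = nn.Linear(in_feats, out_feats)
            self.attn = nn.Parameter(torch.zeros(num_joints, num_joints))

        def forward(self, x, adj):
            # x: (batch, frames, joints, features), adj: (joints, joints)
            a = torch.softmax(adj + self.attn, dim=-1)
            x = torch.einsum("btjf,jk->btkf", self.fc(x), a)
            return torch.relu(x)

    # Example: 25 skeleton joints with 3D coordinates over 30 frames
    x = torch.randn(4, 30, 25, 3)
    adj = torch.eye(25)
    layer = SpatialAttentionGCNLayer(3, 64, 25)
    print(layer(x, adj).shape)   # torch.Size([4, 30, 25, 64])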

11 pages, 4779 KiB  
Communication
Steganography in IoT: Information Hiding with APDS-9960 Proximity and Gestures Sensor
by Katarzyna Koptyra and Marek R. Ogiela
Sensors 2022, 22(7), 2612; https://doi.org/10.3390/s22072612 - 29 Mar 2022
Cited by 7 | Viewed by 1838
Abstract
This article describes a steganographic system for IoT based on an APDS-9960 gesture sensor. The sensor is used in two modes: as a trigger or data input. In trigger mode, gestures control when to start and finish the embedding process; then, the data come from an external source or are pre-existing. In data input mode, the data to embed come directly from the sensor that may detect gestures or RGB color. The secrets are embedded in time-lapse photographs, which are later converted to videos. Selected hardware and steganographic methods allowed for smooth operation in the IoT environment. The system may cooperate with a digital camera and other sensors.
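
As a simplified illustration of the trigger mode, the sketch below performs least-significant-bit embedding into a grayscale cover image once a gesture is observed; read_gesture() is a hypothetical placeholder for the actual APDS-9960 driver call, and LSB embedding stands in for whatever embedding scheme a deployment actually uses.

    import numpy as np

    def embed_lsb(image, payload_bits):
        """Hide a bit string in the least-significant bits of a grayscale
        image (a simple stand-in for the embedding step)."""
        flat = image.flatten().astype(np.uint8)
        bits = np.array(payload_bits, dtype=np.uint8)
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(image.shape)

    def read_gesture():
        """Hypothetical placeholder for the APDS-9960 driver call."""
        return "LEFT"

    # Trigger mode (illustrative): a gesture decides when embedding starts.
    if read_gesture() == "LEFT":
        cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        secret = [1, 0, 1, 1, 0, 1, 0, 0]
        stego = embed_lsb(cover, secret)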

19 pages, 2399 KiB  
Article
Parallel Genetic Algorithms’ Implementation Using a Scalable Concurrent Operation in Python
by Vladislav Skorpil and Vaclav Oujezsky
Sensors 2022, 22(6), 2389; https://doi.org/10.3390/s22062389 - 20 Mar 2022
Cited by 7 | Viewed by 2544
Abstract
This paper presents an implementation of the parallelization of genetic algorithms. Three models of parallelized genetic algorithms are presented, namely the Master–Slave genetic algorithm, the Coarse-Grained genetic algorithm, and the Fine-Grained genetic algorithm. These models are compared with the basic serial genetic algorithm model. Four modules, Multiprocessing, Celery, PyCSP, and Scalable Concurrent Operation in Python (SCOOP), were investigated among the many parallelization options in Python. SCOOP was selected as the most favorable option, so the models were implemented using the Python programming language, RabbitMQ, and SCOOP. Based on the implementation and testing performed, a comparison of the hardware utilization of each deployed model is provided. The SCOOP-based implementation was investigated from three aspects. The first was the parallelization and integration of the SCOOP module into the resulting Python module. The second was the communication within the genetic algorithm topology. The third was the performance of the parallel genetic algorithm model depending on the hardware.
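
A toy Master–Slave sketch using SCOOP's futures.map to distribute only the fitness evaluations (launch it with python -m scoop); the genetic operators and the objective function are deliberately minimal and are not the paper's implementation.

    # Run with:  python -m scoop master_slave_ga.py
    import random
    from scoop import futures

    def fitness(individual):
        return sum(x * x for x in individual)        # toy objective (minimize)

    def mutate(individual, sigma=0.1):
        return [x + random.gauss(0, sigma) for x in individual]

    if __name__ == "__main__":
        population = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(64)]
        for generation in range(20):
            scores = list(futures.map(fitness, population))   # parallel step
            ranked = [ind for _, ind in sorted(zip(scores, population))]
            parents = ranked[: len(ranked) // 2]               # truncation selection
            population = parents + [mutate(random.choice(parents)) for _ in parents]
        print("best fitness:", min(map(fitness, population)))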

20 pages, 1160 KiB  
Article
An Intelligent Approach for Cloud-Fog-Edge Computing SDN-VANETs Based on Fuzzy Logic: Effect of Different Parameters on Coordination and Management of Resources
by Ermioni Qafzezi, Kevin Bylykbashi, Phudit Ampririt, Makoto Ikeda, Keita Matsuo and Leonard Barolli
Sensors 2022, 22(3), 878; https://doi.org/10.3390/s22030878 - 24 Jan 2022
Cited by 13 | Viewed by 2753
Abstract
The integration of cloud-fog-edge computing in Software-Defined Vehicular Ad hoc Networks (SDN-VANETs) brings a new paradigm that provides the needed resources for supporting a myriad of emerging applications. While an abundance of resources may offer many benefits, it also causes management problems. In this work, we propose an intelligent approach to flexibly and efficiently manage resources in these networks. The proposed approach makes use of an integrated fuzzy logic system that determines the most appropriate resources that vehicles should use when set under various circumstances. These circumstances cover the quality of the network created between the vehicles, its size and longevity, the number of available resources, and the requirements of applications. We evaluated the proposed approach by computer simulations. The results demonstrate the feasibility of the proposed approach in coordinating and managing the available SDN-VANETs resources.
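
An illustrative, two-rule fuzzy-inference sketch in plain Python showing the general idea of mapping link quality, network size, and application demand to a resource preference; the membership functions and rules are invented for illustration and do not reflect the paper's integrated fuzzy logic system.

    def tri(x, a, b, c):
        """Triangular membership function."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def resource_decision(link_quality, cluster_size, app_demand):
        """Two-rule, Sugeno-style inference: how strongly a vehicle should
        prefer edge (1.0) over cloud/fog (0.0) resources. Inputs in [0, 1]."""
        good_link = tri(link_quality, 0.4, 1.0, 1.6)    # right-shouldered
        small_net = tri(cluster_size, -0.6, 0.0, 0.6)   # left-shouldered
        heavy_app = tri(app_demand, 0.4, 1.0, 1.6)

        use_edge = min(good_link, small_net)   # Rule 1: good link AND small network
        use_cloud = heavy_app                  # Rule 2: heavy application demand
        # Defuzzify as a weighted average of the rule outputs
        return (use_edge * 1.0 + use_cloud * 0.0) / max(use_edge + use_cloud, 1e-9)

    print(resource_decision(link_quality=0.9, cluster_size=0.2, app_demand=0.3))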

2021

14 pages, 3887 KiB  
Article
Two-Exposure Image Fusion Based on Optimized Adaptive Gamma Correction
by Yan-Tsung Peng, He-Hao Liao and Ching-Fu Chen
Sensors 2022, 22(1), 24; https://doi.org/10.3390/s22010024 - 22 Dec 2021
Cited by 3 | Viewed by 2538
Abstract
In contrast to conventional digital images, high-dynamic-range (HDR) images have a broader intensity range between the darkest and brightest regions to capture more details in a scene. Such images are produced by fusing images with different exposure values (EVs) of the same scene. Most existing multi-exposure fusion (MEF) algorithms assume that the input images are multi-exposed with small EV intervals. However, with emerging spatially multiplexed exposure technology, which can capture a short- and long-exposure image pair simultaneously, it is essential to deal with two-exposure image fusion. To bring out more well-exposed content, we generate a more helpful intermediate virtual image for fusion using the proposed Optimized Adaptive Gamma Correction (OAGC), which yields better contrast, saturation, and well-exposedness. Fusing the input images with the enhanced virtual image works well even when both inputs are underexposed or overexposed, which other state-of-the-art fusion methods cannot handle. The experimental results show that our method performs favorably against other state-of-the-art image fusion methods in generating high-quality fusion results.
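
A very rough sketch of the overall idea: build a gamma-corrected intermediate virtual image from a short/long exposure pair and fuse the three images with well-exposedness weights. The gamma rule and Gaussian weighting below are simple stand-ins, not the proposed OAGC.

    import numpy as np

    def simple_adaptive_gamma(img):
        """Pick a gamma from the image's mean brightness (a stand-in for the
        paper's Optimized Adaptive Gamma Correction)."""
        gamma = np.log(0.5) / np.log(img.mean() + 1e-6)   # push the mean toward 0.5
        return np.clip(img ** gamma, 0.0, 1.0)

    def fuse_two_exposures(short_exp, long_exp):
        """Fuse a short/long exposure pair plus an enhanced virtual image;
        weights favour pixels close to mid-gray (well exposed)."""
        virtual = simple_adaptive_gamma(0.5 * (short_exp + long_exp))
        stack = np.stack([short_exp, long_exp, virtual])
        weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
        weights /= weights.sum(axis=0, keepdims=True)
        return (weights * stack).sum(axis=0)

    short_exp = np.clip(np.random.rand(64, 64) * 0.3, 0, 1)        # underexposed
    long_exp = np.clip(0.7 + np.random.rand(64, 64) * 0.3, 0, 1)   # overexposed
    fused = fuse_two_exposures(short_exp, long_exp)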

17 pages, 3689 KiB  
Article
Event Detection for Distributed Acoustic Sensing: Combining Knowledge-Based, Classical Machine Learning, and Deep Learning Approaches
by Mugdim Bublin
Sensors 2021, 21(22), 7527; https://doi.org/10.3390/s21227527 - 12 Nov 2021
Cited by 14 | Viewed by 3301
Abstract
Distributed Acoustic Sensing (DAS) is a promising new technology for pipeline monitoring and protection. However, a big challenge is distinguishing between relevant events, like intrusion by an excavator near the pipeline, and interference, like land machines. This paper investigates whether it is possible to achieve adequate detection accuracy with classical machine learning algorithms using simulations and a real system implementation. We then compare classical machine learning with a deep learning approach and analyze the advantages and disadvantages of both. Although acceptable performance can be achieved with both approaches, preliminary results show that deep learning is the more promising approach, eliminating the need for laborious feature extraction and offering a six times lower event detection delay and a twelve times lower execution time. However, we achieved the best results by combining deep learning with the knowledge-based and classical machine learning approaches. At the end of this manuscript, we propose general guidelines for efficient system design that combines knowledge-based, classical machine learning, and deep learning approaches.
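
To make the "laborious feature extraction" of the classical branch concrete, the sketch below computes a few hand-crafted features per DAS signal window and trains a random-forest classifier on synthetic data; the features, sampling rate, and classifier choice are illustrative, not the paper's pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(signal, fs=1000):
        """Hand-crafted features for one DAS channel window: the kind of
        manual feature extraction the deep-learning branch avoids."""
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        return [
            signal.std(),                                # overall energy
            np.abs(signal).max(),                        # peak amplitude
            freqs[spectrum.argmax()],                    # dominant frequency
            spectrum[freqs < 50].sum() / (spectrum.sum() + 1e-9),  # low-band ratio
        ]

    # Placeholder windows: 0 = background/interference, 1 = excavator-like event
    rng = np.random.default_rng(2)
    X = np.array([window_features(rng.normal(size=2000)) for _ in range(100)])
    y = rng.integers(0, 2, size=100)
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)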

14 pages, 1854 KiB  
Communication
Luminance-Degradation Compensation Based on Multistream Self-Attention to Address Thin-Film Transistor-Organic Light Emitting Diode Burn-In
by Seong-Chel Park, Kwan-Ho Park and Joon-Hyuk Chang
Sensors 2021, 21(9), 3182; https://doi.org/10.3390/s21093182 - 3 May 2021
Cited by 1 | Viewed by 2866
Abstract
We propose a deep-learning algorithm that directly compensates for the luminance degradation caused by the deterioration of organic light-emitting diode (OLED) devices, in order to address the burn-in phenomenon of OLED displays. Conventional compensation circuits incur high development and manufacturing costs because of their complexity. However, given that deep-learning algorithms are typically mounted onto systems on chip (SoC), the complexity of the circuit design is reduced, and the circuit can be reused by simply relearning the changed characteristics of a new pixel device. The proposed approach comprises deep-feature generation and multistream self-attention, which capture the importance of, and the correlations between, the burn-in-related variables. It also utilizes a deep neural network that identifies the nonlinear relationship between the extracted features and luminance degradation. Luminance degradation is then estimated from the burn-in-related variables, and the burn-in phenomenon can be addressed by compensating for it. Experimental results revealed that compensation was successfully achieved within an error range of 4.56% and demonstrated the potential of a new approach that mitigates the burn-in phenomenon by directly compensating for pixel-level luminance deviation.
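
A much-simplified, single-stream stand-in for the described idea: embed burn-in-related variables as tokens, apply self-attention, and regress the luminance degradation with a small PyTorch model. The variable count, embedding dimension, and head count are assumptions, not the paper's multistream architecture.

    import torch
    import torch.nn as nn

    class LuminanceCompensator(nn.Module):
        """Illustrative only: treat each burn-in-related variable (e.g., stress
        time, current, temperature) as a token, let self-attention weigh their
        interactions, and regress the luminance-degradation ratio."""
        def __init__(self, num_vars=6, dim=32, heads=4):
            super().__init__()
            self.embed = nn.Linear(1, dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.head = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

        def forward(self, x):                      # x: (batch, num_vars)
            tokens = self.embed(x.unsqueeze(-1))   # (batch, num_vars, dim)
            attended, _ = self.attn(tokens, tokens, tokens)
            return self.head(attended.mean(dim=1))   # predicted degradation

    model = LuminanceCompensator()
    pred = model(torch.randn(16, 6))               # (16, 1)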

15 pages, 1815 KiB  
Article
CAFD: Context-Aware Fault Diagnostic Scheme towards Sensor Faults Utilizing Machine Learning
by Umer Saeed, Young-Doo Lee, Sana Ullah Jan and Insoo Koo
Sensors 2021, 21(2), 617; https://doi.org/10.3390/s21020617 - 17 Jan 2021
Cited by 25 | Viewed by 3585
Abstract
Sensors are a key component of Cyber-Physical Systems, which makes them susceptible to failures due to complex environments, low-quality production, and aging. When defective, sensors either stop communicating or convey incorrect information. These unsteady situations threaten the safety, economy, and reliability of a system. The objective of this study is to construct a lightweight machine learning-based fault detection and diagnostic system within the limited energy, memory, and computational resources of a Wireless Sensor Network (WSN). In this paper, a Context-Aware Fault Diagnostic (CAFD) scheme is proposed based on an ensemble learning algorithm called Extra-Trees. To evaluate the performance of the proposed scheme, a realistic WSN scenario composed of humidity and temperature sensor observations is replicated with extremely low-intensity faults. Six commonly occurring types of sensor fault are considered: drift, hard-over/bias, spike, erratic/precision degradation, stuck, and data-loss. The proposed CAFD scheme demonstrates the ability to accurately detect and diagnose low-intensity sensor faults in a timely manner. Moreover, the efficiency of the Extra-Trees algorithm in terms of diagnostic accuracy, F1-score, ROC-AUC, and training time is demonstrated by comparison with cutting-edge machine learning algorithms: a Support Vector Machine and a Neural Network.
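
A minimal scikit-learn sketch of the diagnostic core (an Extra-Trees classifier over windowed humidity/temperature features); the data here are random placeholders, and the feature engineering and context-awareness of the CAFD scheme are not reproduced.

    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Placeholder data: windows of humidity/temperature readings flattened into
    # feature vectors, labeled with one of the fault classes plus "normal".
    classes = ["normal", "drift", "bias", "spike", "erratic", "stuck", "data-loss"]
    rng = np.random.default_rng(1)
    X = rng.normal(size=(700, 20))
    y = rng.integers(0, len(classes), size=700)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
    clf = ExtraTreesClassifier(n_estimators=200, random_state=1)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te),
                                labels=list(range(len(classes))),
                                target_names=classes))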

15 pages, 4626 KiB  
Article
Automatic Cephalometric Landmark Identification System Based on the Multi-Stage Convolutional Neural Networks with CBCT Combination Images
by Min-Jung Kim, Yi Liu, Song Hee Oh, Hyo-Won Ahn, Seong-Hun Kim and Gerald Nelson
Sensors 2021, 21(2), 505; https://doi.org/10.3390/s21020505 - 12 Jan 2021
Cited by 15 | Viewed by 3778
Abstract
This study was designed to develop and verify a fully automated cephalometric landmark identification system, based on a multi-stage convolutional neural network (CNN) architecture, using a combination dataset. We trained and tested the multi-stage CNNs with 430 lateral cephalograms and 430 MIP lateral cephalograms synthesized from cone-beam computed tomography (CBCT) to form a combination dataset. Fifteen landmarks were manually identified by an experienced examiner during the preprocessing phase. The intra-examiner reliability of the manual identification was high (ICC = 0.99). The system achieved an average mean radial error (MRE) of 1.03 mm with a standard deviation (SD) of 1.29 mm. In conclusion, the type of image data might be one of the factors that affect the prediction accuracy of a fully automated landmark identification system based on multi-stage CNNs.
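
A compact, hypothetical example of heatmap-based landmark localization, a building block commonly used in multi-stage CNN landmark systems: predict one heatmap per landmark and take each peak as the coordinate estimate. The tiny backbone below is illustrative only, not the paper's network.

    import torch
    import torch.nn as nn

    class LandmarkHeatmapNet(nn.Module):
        """Coarse stage of a (much simplified) landmark detector: one heatmap
        per cephalometric landmark; the peak location is the estimate."""
        def __init__(self, num_landmarks=15):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, num_landmarks, 1),
            )

        def forward(self, x):
            return self.backbone(x)               # (batch, num_landmarks, H, W)

    def peak_coordinates(heatmaps):
        """Convert each heatmap to the (row, col) of its maximum response."""
        b, k, h, w = heatmaps.shape
        flat = heatmaps.view(b, k, -1).argmax(dim=-1)
        rows = torch.div(flat, w, rounding_mode="floor")
        return torch.stack([rows, flat % w], dim=-1)

    ceph = torch.randn(2, 1, 128, 128)            # placeholder lateral cephalograms
    coords = peak_coordinates(LandmarkHeatmapNet()(ceph))   # (2, 15, 2)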