Signals, Volume 3, Issue 3 (September 2022) – 14 articles

Cover Story: Our contemporary society has never been more connected and aware of vital information in real time, through the use of innovative technologies. A considerable number of applications have transitioned into the cyber-physical domain, optimizing their routines and processes via the dense network of sensing devices and the immense volumes of data they collect and instantly share. Researchers have proposed an innovative architecture based on the monitoring, analysis, planning, and execution (MAPE) paradigm for network and service performance optimization, driven by learning algorithms trained on datasets enriched with users' empirical opinions.
22 pages, 809 KiB  
Article
GRAPE: Grammatical Algorithms in Python for Evolution
by Allan de Lima, Samuel Carvalho, Douglas Mota Dias, Enrique Naredo, Joseph P. Sullivan and Conor Ryan
Signals 2022, 3(3), 642-663; https://doi.org/10.3390/signals3030039 - 15 Sep 2022
Cited by 4 | Viewed by 2461
Abstract
GRAPE is an implementation of Grammatical Evolution (GE) in DEAP, an Evolutionary Computation framework in Python, which consists of the classes and functions necessary to evolve a population of grammar-based solutions while reporting essential measures. This tool was developed at the Bio-computing and Developmental Systems (BDS) Research Group, the birthplace of GE, as an easy-to-use tool (compared to the canonical C++ implementation, libGE) that inherits all the advantages of DEAP, such as its selection methods, parallelism, and multiple search techniques, all of which can be used with GRAPE. In this paper, we address several problems to exemplify the use of GRAPE and to perform a comparison with PonyGE2, an existing implementation of GE in Python. The results show that GRAPE has similar performance, but is able to avail of all the extra facilities and functionality found in the DEAP framework. We further show that GRAPE enables GE to be applied to system identification problems, and we demonstrate this on two benchmark problems.
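To make the core mechanism concrete: GE maps a linear genome of integer codons to a program via a BNF grammar. The following minimal Python sketch of the classic modulo mapping rule is our own illustration; the grammar and function names are ours, not GRAPE's API.

```python
# Minimal sketch of the classic GE genotype-to-phenotype mapping.
# The grammar and function names are illustrative, not GRAPE's API.
GRAMMAR = {
    "<expr>": ["<expr><op><expr>", "<var>"],
    "<op>": ["+", "-", "*"],
    "<var>": ["x", "y", "1.0"],
}

def map_genotype(codons, start="<expr>", max_wraps=2):
    """Expand the leftmost non-terminal using codon % len(choices)."""
    phenotype, used, wraps = start, 0, 0
    while "<" in phenotype:
        if used == len(codons):          # wrap the genome if codons run out
            used, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None              # invalid individual
        nt_start = phenotype.index("<")  # leftmost non-terminal
        nt_end = phenotype.index(">", nt_start) + 1
        choices = GRAMMAR[phenotype[nt_start:nt_end]]
        choice = choices[codons[used] % len(choices)]
        used += 1
        phenotype = phenotype[:nt_start] + choice + phenotype[nt_end:]
    return phenotype

print(map_genotype([7, 3, 11, 4, 2, 9, 1]))  # an expression over x, y
```
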
22 pages, 1511 KiB  
Article
Verilog Design, Synthesis, and Netlisting of IoT-Based Arithmetic Logic and Compression Unit for 32 nm HVT Cells
by Raj Mouli Jujjavarapu and Alwin Poulose
Signals 2022, 3(3), 620-641; https://doi.org/10.3390/signals3030038 - 13 Sep 2022
Cited by 1 | Viewed by 2069
Abstract
Microprocessor designs have become a revolutionary technology in almost every industry, making automation and modern electronic gadgets a reality. In trying to improve these hardware modules to handle heavy computational loads, designers have substantially reached limits in size, power efficiency, and similar avenues. Due to these constraints, many manufacturers and corporate entities are exploring ways to optimize these devices. One such approach is to design microprocessors tailored to a specified operating system; this approach came into the limelight when many companies launched their own microprocessors. In this paper, we look into one method of using an arithmetic logic unit (ALU) module for internet of things (IoT)-enabled devices. A specific set of operations is added to the classical ALU to support fast computation in IoT-specific programs. We integrated a compression module and a fast multiplier based on the Vedic algorithm into the 16-bit ALU module. The designed ALU module is also synthesized under a 32 nm HVT cell library from the Synopsys database to generate an overview of the area efficiency, logic levels, and layout of the designed module, and it also yields a netlist from this database. The synthesis provides a complete overview of how the module will be manufactured if sent to a foundry.
(This article belongs to the Special Issue Advances of Signal Processing for Signal, Image and Video Technology)
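For readers unfamiliar with the Vedic (Urdhva Tiryakbhyam, "vertically and crosswise") scheme underlying the fast multiplier, the following Python sketch illustrates the column-wise cross-product structure that the hardware evaluates in parallel; it is an illustration of the general method, not the paper's Verilog design.

```python
def urdhva_multiply(a_bits, b_bits):
    """Vertically-and-crosswise multiply two little-endian bit lists.

    Each output column k sums the cross products a[i]*b[j] with i+j == k,
    then carries propagate; hardware computes the columns in parallel.
    """
    n = len(a_bits)
    cols = [0] * (2 * n)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_bits[i] * b_bits[j]
    result, carry = [], 0                # ripple the column carries
    for c in cols:
        total = c + carry
        result.append(total % 2)
        carry = total // 2
    while carry:
        result.append(carry % 2)
        carry //= 2
    return result                        # little-endian product bits

a = [1, 0, 1, 1]   # 13, LSB first
b = [1, 1, 0, 1]   # 11, LSB first
bits = urdhva_multiply(a, b)
print(sum(bit << k for k, bit in enumerate(bits)))  # -> 143
```
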
9 pages, 362 KiB  
Article
Language Inference Using Elman Networks with Evolutionary Training
by Nikolaos Anastasopoulos, Ioannis G. Tsoulos, Evangelos Dermatas and Evangelos Karvounis
Signals 2022, 3(3), 611-619; https://doi.org/10.3390/signals3030037 - 06 Sep 2022
Viewed by 1359
Abstract
In this paper, a novel Elman-type recurrent neural network (RNN) is presented for the binary classification of arbitrary symbol sequences, and a novel training method, combining evolutionary and local search methods, is evaluated using sequence databases from a wide range of scientific areas. An efficient, publicly available software tool, implemented in C++, significantly accelerates (by more than 40 times) the RNN weight estimation process using both SIMD and multi-threading technology. In all databases, the experimental results with the hybrid training method show improvements in the range of 2% to 25% compared with the standard genetic algorithm.
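For context, an Elman network is a simple recurrent network whose hidden state is fed back as a "context" input at the next time step. A minimal NumPy sketch of the forward pass over a one-hot encoded symbol sequence (our own illustration, not the authors' C++ tool) is:

```python
import numpy as np

def elman_forward(seq, W_in, W_rec, W_out, b_h, b_o):
    """Run an Elman RNN over a one-hot encoded sequence.

    seq: (T, n_symbols) one-hot rows; returns a scalar in (0, 1)
    for binary sequence classification.
    """
    h = np.zeros(W_rec.shape[0])                 # context units start at zero
    for x in seq:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)  # hidden state + feedback
    logit = W_out @ h + b_o
    return 1.0 / (1.0 + np.exp(-logit))          # sigmoid output

# Tiny example: 3 symbols, 5 hidden units, random (untrained) weights.
rng = np.random.default_rng(0)
n_sym, n_hid = 3, 5
params = (rng.normal(size=(n_hid, n_sym)),  # W_in
          rng.normal(size=(n_hid, n_hid)),  # W_rec
          rng.normal(size=n_hid),           # W_out
          rng.normal(size=n_hid),           # b_h
          rng.normal())                     # b_o
seq = np.eye(n_sym)[[0, 2, 1, 0]]           # a sequence of 4 symbols
print(elman_forward(seq, *params))
```
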
24 pages, 1172 KiB  
Article
Intelligent Network Service Optimization in the Context of 5G/NFV
by Panagiotis A. Karkazis, Konstantinos Railis, Stelios Prekas, Panagiotis Trakadas and Helen C. Leligou
Signals 2022, 3(3), 587-610; https://doi.org/10.3390/signals3030036 - 02 Sep 2022
Cited by 6 | Viewed by 1585
Abstract
Our contemporary society has never been more connected and aware of vital information in real time, through the use of innovative technologies. A considerable number of applications have transitioned into the cyber-physical domain, automating and optimizing their routines and processes via the dense network of sensing devices and the immense volumes of data they collect and instantly share. In this paper, we propose an innovative architecture based on the monitoring, analysis, planning, and execution (MAPE) paradigm for network and service performance optimization. Our study provides clear evidence that learning algorithms, consuming datasets enriched with the users' empirical opinions during the analysis and planning phases, contribute greatly to the optimization of video streaming quality, especially in handling different packet loss rates, paving the way for the provision of a resilient communications platform for calamity assessment and management.
(This article belongs to the Special Issue B5G/6G Networks: Directions and Advances)
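The MAPE pattern referenced above is a closed control loop over a managed service. The following runnable Python sketch shows the shape of one loop iteration; all names, the toy QoE model, and the scaling action are illustrative placeholders, not the paper's implementation.

```python
import random

def mape_step(collect, predict, execute, qoe_threshold=3.5):
    """One pass of a MAPE loop: Monitor, Analyze, Plan, Execute."""
    metrics = collect()                      # Monitor: loss rate, jitter, ...
    predicted_qoe = predict(metrics)         # Analyze: learned QoE model
    plan = None
    if predicted_qoe < qoe_threshold:        # Plan: adapt if QoE degrades
        plan = {"action": "scale_vnf", "replicas": 1}
    if plan is not None:
        execute(plan)                        # Execute: via the orchestrator
    return predicted_qoe, plan

# Dummy stand-ins so the sketch runs end to end.
collect = lambda: {"packet_loss": random.uniform(0.0, 0.1)}
predict = lambda m: 5.0 - 30.0 * m["packet_loss"]   # toy QoE model (MOS 1-5)
execute = lambda plan: print("executing", plan)

for _ in range(3):
    print(mape_step(collect, predict, execute))
```
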
10 pages, 2623 KiB  
Review
A Survey on Denoising Techniques of Electroencephalogram Signals Using Wavelet Transform
by Maximilian Grobbelaar, Souvik Phadikar, Ebrahim Ghaderpour, Aaron F. Struck, Nidul Sinha, Rajdeep Ghosh and Md. Zaved Iqubal Ahmed
Signals 2022, 3(3), 577-586; https://doi.org/10.3390/signals3030035 - 17 Aug 2022
Cited by 21 | Viewed by 3482
Abstract
Electroencephalogram (EEG) artifacts such as eyeblinks, eye movements, and muscle movements widely contaminate EEG signals. These unwanted artifacts corrupt the information contained in the EEG signals and degrade the performance of qualitative analysis in clinical applications as well as in EEG-based brain–computer interfaces (BCIs). Applications of the wavelet transform in denoising EEG signals are steadily increasing due to its capability of handling non-stationary signals. This paper surveys the reported wavelet denoising techniques for EEG signals in terms of the quality of noise removal and the retention of important information. To express the quality of reconstruction, the techniques were assessed based on the results shown in the respective literature. We also compare certain features relevant to the evaluation of wavelet denoising techniques, such as the requirement for a reference channel, automation, online capability, and performance on a single channel.
(This article belongs to the Special Issue Advancing Signal Processing and Analytics of EEG Signals)
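As a concrete instance of the surveyed family of methods, the sketch below applies soft thresholding to wavelet detail coefficients with PyWavelets; the db4 wavelet, decomposition level, and universal threshold are common choices in this literature, not recommendations drawn from the survey.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients, keep the approximation."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold, with noise sigma estimated from finest details
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    denoised = [coeffs[0]] + [
        pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
    ]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Toy "EEG": a slow oscillation plus noise
t = np.linspace(0, 2, 512)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(wavelet_denoise(x).shape)   # (512,)
```
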
18 pages, 1357 KiB  
Article
Compressive Sensing Based Space Flight Instrument Constellation for Measuring Gravitational Microlensing Parallax
by Asmita Korde-Patel, Richard K. Barry and Tinoosh Mohsenin
Signals 2022, 3(3), 559-576; https://doi.org/10.3390/signals3030034 - 15 Aug 2022
Cited by 1 | Viewed by 1251
Abstract
In this work, we provide a compressive sensing architecture for implementation on a space-based observatory for detecting the transient photometric parallax caused by gravitational microlensing events. Compressive sensing (CS) is a simultaneous data acquisition and compression technique that can greatly reduce the on-board resources required for space flight data storage and ground transmission. We simulate microlensing parallax observations using a space observatory constellation based on CS detectors. Our results show that the average CS error is less than 0.5% using samples at 25% of the Nyquist rate. The error at peak magnification time is significantly lower than the error required to distinguish any two microlensing parallax curves at their peak magnification. Thus, CS is an enabling technology for detecting microlensing parallax without causing any loss in detection accuracy.
(This article belongs to the Special Issue Compressive Sensing and Its Applications)
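For orientation, compressive sensing acquires y = Φx with many fewer measurements than signal samples and recovers x by exploiting sparsity. The toy sketch below (our own setup, not the authors' instrument pipeline) uses orthogonal matching pursuit from scikit-learn at a 25% measurement rate.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5          # signal length, measurements (25%), sparsity

# A k-sparse signal and a random Gaussian measurement matrix Phi
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
phi = rng.normal(size=(m, n)) / np.sqrt(m)

y = phi @ x                    # compressed acquisition: 64 numbers, not 256

# Sparse recovery from the compressed measurements
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(phi, y)
x_hat = omp.coef_

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.2e}")
```
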
9 pages, 2639 KiB  
Case Report
Case Report: Modulation of Effective Connectivity in Brain Networks after Prosthodontic Tooth Loss Repair
by Antonella Muroni, Daniel Barbar, Matteo Fraschini, Marco Monticone, Giovanni Defazio and Francesco Marrosu
Signals 2022, 3(3), 550-558; https://doi.org/10.3390/signals3030033 - 05 Aug 2022
Cited by 1 | Viewed by 1809
Abstract
INTRODUCTION. Recent neuroimaging studies suggest that dental loss replacements induce changes in neuroplasticity as well as in correlated connectivity between brain networks. However, as the typical temporal delay in detecting brain activity by neuroimaging cannot account for the influence one neural system exerts over another during real activation ("effective" connectivity), it is of interest to approach this dynamic aspect of brain networking on the millisecond time scale by exploiting electroencephalographic (EEG) data. MATERIALS AND METHODS. The present study describes one subject who received a new provisional prosthodontic implant in substitution for previous dental repairs. Two EEG sessions were recorded with a portable device before and after positioning the new dental implant. Following MATLAB-EEGLAB processing supported by the FIELDTRIP and SIFT plugins, the independent component analysis (ICA) components derived from the raw EEG signals were rendered as current density fields and interpolated with the dipoles generated by each electrode for a dynamic study of effective connectivity. A further recording session was undertaken six months after the placement of the final implant. RESULTS. Compared to the baseline, the new prosthodontic implant induced a novel modulation of neuroplasticity in sensory-motor areas, which was maintained with the definitive implant after six months. This was revealed by changes in effective connectivity, from a strong baseline dominance of a single brain area over the others to an equilibrated, inter-related connectivity evenly distributed along the frontotemporal regions of both hemispheres. CONCLUSIONS. The rapid shift of effective connectivity after positioning the new prosthodontic implant, and its substantial stability after six months, suggest that synaptic modifications induced by novel sensory-motor conditions modulate neuroplasticity and reshape the final dynamic frame of inter-area connectivity. Moreover, given the practicality of EEG recording, this approach could be of interest in assessing the association between oral pathophysiology and neuronal networking.
(This article belongs to the Special Issue Advancing Signal Processing and Analytics of EEG Signals)
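SIFT's effective-connectivity measures rest on multivariate autoregressive (MVAR) modelling of the component time series. As a hedged illustration of that underlying step only (not the authors' EEGLAB pipeline), one can fit a VAR model to a toy two-channel system and read the directed influence off the coefficient matrix:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T = 1000

# Toy two-channel system in which channel 0 drives channel 1
x = np.zeros((T, 2))
for t in range(1, T):
    x[t, 0] = 0.6 * x[t - 1, 0] + rng.normal(scale=0.5)
    x[t, 1] = 0.8 * x[t - 1, 0] + 0.2 * x[t - 1, 1] + rng.normal(scale=0.5)

# MVAR fit: directed influence appears in the off-diagonal coefficients
results = VAR(x).fit(maxlags=1)
print(results.coefs[0])   # ~[[0.6, 0.0], [0.8, 0.2]]: channel 0 drives 1
```
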
15 pages, 31062 KiB  
Article
Text Line Extraction in Historical Documents Using Mask R-CNN
by Ahmad Droby, Berat Kurar Barakat, Reem Alaasam, Boraq Madi, Irina Rabaev and Jihad El-Sana
Signals 2022, 3(3), 535-549; https://doi.org/10.3390/signals3030032 - 04 Aug 2022
Cited by 6 | Viewed by 2832
Abstract
Text line extraction is an essential preprocessing step in many handwritten document image analysis tasks. It includes detecting text lines in a document image and segmenting the regions of each detected line. Deep learning-based methods are frequently used for text line detection; however, only a limited number of methods tackle detection and segmentation together. This paper proposes a holistic method that applies Mask R-CNN for text line extraction. A Mask R-CNN model is trained to extract text line fragments from document patches, which are then merged to form the text lines of an entire page. The presented method was evaluated on two well-known datasets of historical documents, DIVA-HisDB and ICDAR 2015-HTR, and achieved state-of-the-art results. In addition, we introduce a new, challenging dataset of Arabic historical manuscripts, VML-AHTE, in which numerous diacritics are present. We show that the presented Mask R-CNN-based method can successfully segment text lines even in such a challenging scenario.
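A hedged sketch of the core detection step using torchvision's off-the-shelf Mask R-CNN follows; the two-class setup (background plus text line) mirrors the task, while the patch size, score threshold, and the page-level merging are illustrative or elided.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_text_line_model(num_classes=2):  # background + text line
    """Mask R-CNN with its heads replaced for a text-line class."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256,
                                                       num_classes)
    return model

model = build_text_line_model().eval()
patch = torch.rand(3, 512, 512)            # a document patch in [0, 1]
with torch.no_grad():
    pred = model([patch])[0]               # boxes, labels, scores, masks
keep = pred["scores"] > 0.5                # line fragments to merge page-wide
print(pred["masks"][keep].shape)
```
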
11 pages, 2123 KiB  
Article
Urban Plants Classification Using Deep-Learning Methodology: A Case Study on a New Dataset
by Marina Litvak, Sarit Divekar and Irina Rabaev
Signals 2022, 3(3), 524-534; https://doi.org/10.3390/signals3030031 - 03 Aug 2022
Cited by 4 | Viewed by 3370
Abstract
Plant classification requires the eye of an expert in botany when subtle differences in stems or petals differentiate between species. Hence, accurate automatic plant classification might be of great assistance to a person who studies agriculture, travels, or explores rare species. This paper focuses on the specific task of urban plant classification. A possible practical application of this work is a tool that assists people growing plants at home in recognizing new species and provides the relevant care instructions. Because urban species are barely covered by the benchmark datasets, they cannot be accurately recognized by state-of-the-art pre-trained classification models. This paper introduces a new dataset, Urban Planter, for plant species classification, with 1500 images categorized into 15 categories. The dataset contains 15 urban species, which can be grown at home in any climate (mostly desert) and are barely covered by existing datasets. We performed an extensive analysis of this dataset, aimed at answering the following research questions: (1) Does the Urban Planter dataset provide enough information to train accurate deep learning models? (2) Can pre-trained classification models be successfully applied to Urban Planter, and is pre-training on ImageNet beneficial in comparison with pre-training on a much smaller but more relevant dataset? (3) Does two-step transfer learning further improve the classification accuracy? We report the results of experiments designed to answer these questions. In addition, we provide a link to the installation code of the alpha version and a demo video of the web app for urban plant classification based on the best evaluated model. To conclude, our contribution is threefold: (1) we introduce a new dataset of urban plant images; (2) we report the results of an extensive case study with several state-of-the-art deep networks and different configurations for transfer learning; (3) we provide a web application based on the best evaluated model. We believe that, by extending our dataset in the future to edible plants and assisting people in growing food at home, our research contributes to achieving the United Nations' 2030 Agenda for Sustainable Development.
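The two-step transfer learning examined in the paper can be sketched as follows: fine-tune an ImageNet-pretrained backbone on an intermediate, more relevant dataset, then fine-tune again on the 15 target classes. The backbone choice, hyperparameters, and loader names below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def fine_tune(model, loader, num_classes, epochs=3, lr=1e-4):
    """Replace the head for `num_classes` and train the whole network."""
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model

# Step 0: ImageNet-pretrained backbone (weights download on first use)
net = models.resnet50(weights="IMAGENET1K_V2")
# Step 1: fine-tune on a larger, related plant dataset (hypothetical loader):
# net = fine_tune(net, plant_loader, num_classes=100)
# Step 2: fine-tune on the 15 Urban Planter classes (hypothetical loader):
# net = fine_tune(net, urban_planter_loader, num_classes=15)
```
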
18 pages, 2489 KiB  
Article
Deep Learning Beehive Monitoring System for Early Detection of the Varroa Mite
by George Voudiotis, Anna Moraiti and Sotirios Kontogiannis
Signals 2022, 3(3), 506-523; https://doi.org/10.3390/signals3030030 - 28 Jul 2022
Cited by 9 | Viewed by 4423
Abstract
One of the most critical causes of colony collapse disorder in beekeeping is the Varroa mite. This paper presents an embedded camera module, supported by a deep learning algorithm, for the early detection of Varroa infestations. This is achieved using a deep learning algorithm that tries to identify bees carrying the mite inside the brood frames in real time. The end-node device camera module is placed inside the brood box. It supports offline detection in remote areas with limited network coverage, as well as online imagery data transmission and mite detection in the cloud. The proposed pipeline uses a deep learning network for bee object detection and an image processing step to identify the mite on the previously detected objects. Finally, the authors present a proof-of-concept experiment showing that their approach can offer a total bee and Varroa detection accuracy of close to 70%, and they present and discuss their experimental results in detail.
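The two-stage idea (detect bees first, then run an image-processing check for the reddish-brown mite on each detected bee) can be sketched as below; the HSV colour range and the pixel-fraction threshold are illustrative assumptions, not the paper's calibrated values.

```python
import cv2
import numpy as np

# Illustrative HSV range for the mite's reddish-brown colour (an assumption,
# not the paper's calibrated values).
MITE_LO = np.array([0, 80, 40])
MITE_HI = np.array([20, 255, 200])

def mite_fraction(bee_crop_bgr):
    """Fraction of a detected bee's pixels matching the mite colour range."""
    hsv = cv2.cvtColor(bee_crop_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, MITE_LO, MITE_HI)
    return float(np.count_nonzero(mask)) / mask.size

def flag_infested(frame_bgr, bee_boxes, threshold=0.02):
    """Second stage: image-processing check on each detected bee box."""
    flagged = []
    for (x1, y1, x2, y2) in bee_boxes:          # boxes from the bee detector
        crop = frame_bgr[y1:y2, x1:x2]
        if crop.size and mite_fraction(crop) > threshold:
            flagged.append((x1, y1, x2, y2))
    return flagged

# Toy usage with a synthetic frame and one fake detection box
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(flag_infested(frame, [(100, 100, 200, 200)]))   # -> []
```
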
9 pages, 2015 KiB  
Article
The Analysis and Verification of Unbiased Estimator for Multilateral Positioning
by Yang Yang, Shihao Sun, Ao Chen, Siyang You, Yuqi Shen, Zhijun Li and Dayang Sun
Signals 2022, 3(3), 497-505; https://doi.org/10.3390/signals3030029 - 12 Jul 2022
Viewed by 1361
Abstract
The ranging error model is generally very complicated in actual ranging technologies. This paper analyses the biased distance substitution and proposes an unbiased multilateral positioning method that revises the biased substitution, making it an unbiased estimate of the squared distance. An unbiased estimate of the multilateral positioning formula is derived to solve for the target node coordinates. Simulation experiments show that the algorithm can improve the positioning accuracy, and the improvement is more pronounced when the error variance is larger. Experiments using the SX1280 also show that the ranging conforms to the biased error model and that the accuracy can be improved by using the unbiased estimator. When the actual experimental error standard deviation is 0.16 m, the accuracy can be improved by 0.15 m.
(This article belongs to the Special Issue Intelligent Wireless Sensing and Positioning)
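The bias being corrected is easy to state: with ranging noise d_i = r_i + ε_i, ε_i ~ N(0, σ²), the naive substitution d_i² has expectation r_i² + σ², so subtracting σ² yields an unbiased estimate of the squared distance before the usual linearized least-squares solve. The sketch below follows this standard formulation, which may differ in detail from the authors' derivation.

```python
import numpy as np

def multilaterate(anchors, d_meas, sigma):
    """Linearized least-squares multilateration with bias-corrected d^2.

    E[d_i^2] = r_i^2 + sigma^2, so d_i^2 - sigma^2 is the unbiased
    estimate of the squared range. Subtracting the last anchor's equation
    removes the quadratic ||p||^2 term and leaves a linear system.
    """
    d2 = d_meas**2 - sigma**2                  # unbiased squared ranges
    a_ref, d2_ref = anchors[-1], d2[-1]
    A = 2.0 * (anchors[:-1] - a_ref)
    b = (d2_ref - d2[:-1]
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(a_ref**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

rng = np.random.default_rng(3)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])
sigma = 0.16
r = np.linalg.norm(anchors - target, axis=1)
d = r + rng.normal(scale=sigma, size=r.size)   # noisy range measurements
print(multilaterate(anchors, d, sigma))        # close to [3, 4]
```
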
14 pages, 1640 KiB  
Article
Saliency-Guided Local Full-Reference Image Quality Assessment
by Domonkos Varga
Signals 2022, 3(3), 483-496; https://doi.org/10.3390/signals3030028 - 11 Jul 2022
Cited by 7 | Viewed by 2524
Abstract
Research and development of image quality assessment (IQA) algorithms have been a focus of the computer vision and image processing community for decades. The intent of IQA methods is to estimate the perceptual quality of digital images in a way that correlates as highly as possible with human judgements. Full-reference image quality assessment algorithms, which have full access to the distortion-free images, usually consist of two phases: local image quality estimation and pooling. Previous works have utilized visual saliency in the final pooling stage, applying it as weights in the weighted averaging of local image quality scores and thereby emphasizing image regions that are salient to human observers. In contrast to this common practice, in this study visual saliency is applied in the computation of local image quality itself, based on the observation that local image quality is determined simultaneously by both local image degradation and visual saliency. Experimental results on KADID-10k, TID2013, TID2008, and CSIQ show that the proposed method was able to improve on the state of the art's performance at low computational cost.
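To illustrate the distinction the paper draws, the sketch below injects saliency into the local quality computation itself rather than only into pooling; the SSIM map and the simple modulation rule are illustrative stand-ins, not the paper's actual formula.

```python
import numpy as np
from skimage.metrics import structural_similarity

def saliency_guided_score(ref, dist, saliency, alpha=0.5):
    """Fuse local SSIM with saliency *inside* the local quality measure."""
    _, ssim_map = structural_similarity(ref, dist, full=True,
                                        data_range=1.0)
    # Local quality modulated by saliency before any pooling happens
    local_q = ssim_map * (alpha + (1.0 - alpha) * saliency)
    return local_q.mean()

rng = np.random.default_rng(4)
ref = rng.random((64, 64))
dist = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
saliency = np.ones_like(ref) * 0.5        # flat toy saliency map in [0, 1]
print(saliency_guided_score(ref, dist, saliency))
```
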
15 pages, 7329 KiB  
Article
Transmission Line Fault Classification of Multi-Dataset Using CatBoost Classifier
by Vincent Nsed Ogar, Sajjad Hussain and Kelum A. A. Gamage
Signals 2022, 3(3), 468-482; https://doi.org/10.3390/signals3030027 - 05 Jul 2022
Cited by 6 | Viewed by 2867
Abstract
Transmission line fault classification forms the basis of fault protection management in power systems. Because faults have adverse effects on transmission lines, adequate measures must be implemented to avoid power outages. This paper focuses on using the categorical boosting (CatBoost) algorithm to analyse and train on multiple voltage and current data from a simulated 330 kV, 500 km-long faulty transmission line model designed in Matlab/Simulink, from which 93,340 fault data samples were extracted. The CatBoost classifier was employed to classify the faults after different machine learning algorithms were used to train the same data with different parameters. The classifier achieved its best accuracy of 99.54%, with an error of 0.46%, at iteration 748 of 1000. The algorithm was selected for its high performance in classifying faults in terms of accuracy, precision, and speed; in addition, it is easy to use and handles multiple datasets. By contrast, a support vector machine and an artificial neural network each have a longer training time than the proposed method's 58.5 s. Proper fault classification techniques assist in effective fault management and in the planning of power system control, thereby preventing energy waste and providing high performance.
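A minimal sketch of the classification step with the CatBoost library follows; the synthetic features stand in for the three-phase voltage and current samples, and the label set is a placeholder, so the printed accuracy is meaningless here.

```python
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Placeholder data: 6 features standing in for three-phase V and I samples,
# labels standing in for fault classes (e.g. AG, BG, ..., no fault).
X = rng.normal(size=(5000, 6))
y = rng.integers(0, 11, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)

clf = CatBoostClassifier(iterations=1000, learning_rate=0.1,
                         loss_function="MultiClass", verbose=False)
clf.fit(X_tr, y_tr, eval_set=(X_te, y_te))
acc = (clf.predict(X_te).ravel().astype(int) == y_te).mean()
print("accuracy:", acc)
```
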
40 pages, 29963 KiB  
Article
Internet of Spacecraft for Multi-Planetary Defense and Prosperity
by Yiming Huo
Signals 2022, 3(3), 428-467; https://doi.org/10.3390/signals3030026 - 22 Jun 2022
Cited by 2 | Viewed by 7784
Abstract
Recent years have seen unprecedentedly fast-growing prosperity in the commercial space industry. Several privately funded aerospace manufacturers, such as Space Exploration Technologies Corporation (SpaceX) and Blue Origin, have transformed what we used to know about this capital-intensive industry and have gradually reshaped the future of human civilization. As private spaceflight and multi-planetary immigration gradually become realities rather than science fiction (sci-fi) and theory, both opportunities and challenges will be presented. In this article, we first review the progress in space exploration and the underlying space technologies. Next, we revisit the K-Pg extinction event and the Chelyabinsk event and predict developments in extra-terrestrialization, terraformation, and planetary defense, including the emerging near-Earth object (NEO) observation and NEO impact avoidance technologies and strategies. Furthermore, a framework for the Solar Communication and Defense Networks (SCADN), with advanced algorithms and high efficacy, is proposed to enable an Internet of distributed deep-space sensing, communications, and defense to cope with disastrous incidents such as asteroid/comet impacts. Finally, perspectives on the legislation, management, and supervision of founding the proposed SCADN are discussed in depth.
(This article belongs to the Special Issue Internet of Things for Smart Planet: Present and Future)