Special Issue "Applied Sciences Based on and Related to Computer and Control"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 April 2019).

Special Issue Editor

Prof. Dr. Kuei-Hsiang Chao
Guest Editor
Department of Electrical Engineering, National Chin-Yi University of Technology, Taiwan
Interests: power converters; application of microelectronics; power electronics; automotive electronics; micro-electro-mechanical systems and systems-on-chip

Special Issue Information

Dear Colleagues,

We would like to invite you to submit your valuable research to a Special Issue of Applied Sciences on the subject area of “Applied Sciences Based on and Related to Computer and Control”. The themes of this Special Issue cover advanced multimedia, computer, telecommunication, consumer electronics, renewable energy, systems and control, and digital signal processing. Original high-quality papers related to these themes are especially solicited, including theories, methodologies, and applications in computing, consumer electronics, and control. Topics to be covered in this Special Issue include, but are not limited to, the following areas:

  • Computer Networks, Mobile Computing, and Cloud Computing Technologies
  • Digital Content, Information Security, and Web Service
  • Software Engineering, Service-Oriented Architecture, and Databases
  • Artificial Intelligence, Knowledge Discovery, Heuristic Algorithms, and Fuzzy Systems
  • Digital Rights and Watermarking
  • Hardware and Software for Multimedia Systems
  • Virtual Reality, AR, MR, 3D Processing and Applications
  • Signal, Audio, Speech Analysis and Processing
  • Image Processing and Applications
  • Computer Vision, Motion, Tracking Algorithms and Applications
  • Wireless and Mobile Communication
  • Internet Applications
  • Systems on Chip
  • Application of Microelectronics
  • Device Modeling, Simulation and Design
  • Human-Machine Interfaces
  • Robots
  • Computer and Microprocessor-Based Control
  • Automotive Electronics
  • Display System Design and Implementation
  • Renewable Energy Technologies
  • Photovoltaic and Wind Energy Technologies
  • Power Conversions
  • Applications of Power Electronics in Power Systems
  • Smart Grid Systems
  • System Modeling and Simulation, Dynamics and Control
  • Intelligent and Learning Control
  • Robust and Nonlinear Control
  • Biomedical Systems and Control
  • Digital Signal Processing Theory and Methods
  • Statistical Signal Processing and Applications
  • Biomedical and Biological Signal Processing
  • Neural Networks, Fuzzy Systems, Expert Systems, Genetic Algorithms and Data Fusion for Signal Processing
  • Embedded Systems for Signal Processing

Prof. Kuei-Hsiang Chao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer Networks
  • Computing Technologies
  • Software Engineering
  • Artificial Intelligence
  • Signal Processing
  • Computer-Based Control
  • Renewable Energy Technologies
  • Smart Grid Systems
  • Intelligent Control
  • Robust and Nonlinear Control
  • Biomedical Systems and Control

Published Papers (36 papers)


Research

Open Access Article
A Multi-Factor Approach for Selection of Developers to Fix Bugs in a Program
Appl. Sci. 2019, 9(16), 3327; https://doi.org/10.3390/app9163327 - 13 Aug 2019
Abstract
In a software tracking system, the bug assignment problem refers to the activities that developers perform during software maintenance to fix bugs. As many bugs are submitted on a daily basis, the number of developers required is quite large, and it therefore becomes difficult to assign the right developers to resolve issues with specific bugs. Inappropriate dispatching results in delayed processing of bug reports. In this paper, we propose an algorithm called ABC-DR to solve the bug assignment problem. The ABC-DR algorithm is a two-part composite approach that includes analysis between bug reports (i.e., B-based analysis) and analysis between developers and bug reports (i.e., D-based analysis). For the analysis between bug reports, we use the multi-label k-nearest neighbor (ML-KNN) algorithm to find bug reports similar to the new bug report, and the developers associated with those similar reports are recommended for the new bug report. For the analysis between developers and bug reports, we rank developers for the new bug report by calculating the relevance scores between developers and similar bug reports. We use the artificial bee colony (ABC) algorithm to calculate the weight of each part. We evaluated the proposed algorithm on three datasets—GCC, Mozilla, and NetBeans—comparing ABC-DR with DevRec, DREX, and Bugzie. The experimental results show that the proposed ABC-DR algorithm achieves the highest improvement of 51.2% and 53.56% over DevRec for [email protected] and [email protected] in the NetBeans dataset.
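The final fusion step described above can be sketched as a weighted combination of the two analysis scores. This is an illustrative sketch only: the developer names, scores, and fixed weights are hypothetical, whereas in ABC-DR the weights of the two parts are tuned by the ABC algorithm.

```python
# Hypothetical sketch of ABC-DR's last step: fusing B-based and D-based scores.
def fuse_scores(b_scores, d_scores, w_b, w_d):
    """Combine the two analyses into one ranking per developer."""
    developers = set(b_scores) | set(d_scores)
    fused = {dev: w_b * b_scores.get(dev, 0.0) + w_d * d_scores.get(dev, 0.0)
             for dev in developers}
    # Return developers sorted by fused score, best first.
    return sorted(fused, key=fused.get, reverse=True)

# Example: two developers scored by each analysis (invented numbers).
ranking = fuse_scores({"alice": 0.9, "bob": 0.4},
                      {"alice": 0.2, "bob": 0.8},
                      w_b=0.6, w_d=0.4)
print(ranking)  # alice: 0.62, bob: 0.56 -> ['alice', 'bob']
```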
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
A Mobile-Oriented System for Integrity Preserving in Audio Forensics
Appl. Sci. 2019, 9(15), 3097; https://doi.org/10.3390/app9153097 - 31 Jul 2019
Cited by 1
Abstract
This paper addresses a problem in the field of audio forensics. With the aim of providing a solution that helps Chain of Custody (CoC) processes, we propose an integrity verification system that includes capture (mobile based), hash code calculation, and cloud storage. When the audio is recorded, a hash code is generated in situ by the capture module (an application) and sent immediately to the cloud. Later, the integrity of an audio recording given as evidence can be verified against the information stored in the cloud. To validate the properties of the proposed scheme, we conducted several tests to evaluate whether two different inputs could generate the same hash code (collision resistance) and how much the hash code changes when small changes occur in the input (sensitivity analysis). According to the results, all selected audio signals produce different hash codes, and these values are very sensitive to small changes in the recorded audio. In terms of computational cost, less than 2 s per minute of recording is required to calculate the hash code. With the above results, our system is useful for verifying the integrity of audio recordings that may be relied on as digital evidence.
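The capture-then-verify workflow can be sketched with a cryptographic hash. SHA-256 is an assumption here, since the abstract does not name the hash function used, and the byte string stands in for real recorded audio.

```python
import hashlib

def capture_hash(audio_bytes: bytes) -> str:
    """Compute the hash at capture time; this digest would be sent to the cloud."""
    return hashlib.sha256(audio_bytes).hexdigest()

def verify(evidence_bytes: bytes, stored_digest: str) -> bool:
    """Later, recompute the hash of the evidence and compare with the stored one."""
    return hashlib.sha256(evidence_bytes).hexdigest() == stored_digest

recording = bytes(range(64))  # stand-in for recorded PCM samples
digest = capture_hash(recording)
print(verify(recording, digest))            # True: untampered
print(verify(recording + b"\x7f", digest))  # False: a one-byte change flips the hash
```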
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
Smart Fault-Tolerant Control System Based on Chaos Theory and Extension Theory for Locating Faults in a Three-Level T-Type Inverter
Appl. Sci. 2019, 9(15), 3071; https://doi.org/10.3390/app9153071 - 30 Jul 2019
Cited by 2
Abstract
This study proposes a smart fault-tolerant control system based on Lorenz chaotic system theory and extension theory for locating faults and executing tolerant control in a three-level T-type inverter. First, the system constantly monitors the fault states of the 12 power transistor switches of the three-level T-type inverter; if a power transistor fails, the corresponding output phase voltage waveform is converted by a Lorenz chaotic system. Chaos eye coordinates are then extracted from a scatter diagram of chaotic dynamic states and treated as fault characteristics. The system then executes fault diagnosis based on extension theory. The fault characteristic value is used as the input signal for correlation analysis; thus, the faulty power transistor can be located and fault diagnosis can be achieved for the inverter. The fault-tolerant control system can maintain the three-phase balanced output of the three-level T-type inverter, thereby improving the reliability of the motor drive system. The feasibility of the proposed smart fault-tolerant control system was verified through simulations. After a fault occurred in the power switches, the balanced three-phase output line voltage remained unchanged, and the quality of the output voltage was not reduced, when the proposed fault diagnosis and fault-tolerant control systems were integrated in the three-level T-type inverter.
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
Heuristic Approaches to Attribute Reduction for Generalized Decision Preservation
Appl. Sci. 2019, 9(14), 2841; https://doi.org/10.3390/app9142841 - 16 Jul 2019
Cited by 2
Abstract
Attribute reduction is a challenging problem in rough set theory, which has been applied in many research fields, including knowledge representation, machine learning, and artificial intelligence. The main objective of attribute reduction is to obtain a minimal attribute subset that can retain the same classification or discernibility properties as the original information system. Recently, many attribute reduction algorithms, such as positive region preservation, generalized decision preservation, and distribution preservation, have been proposed. The existing attribute reduction algorithms for generalized decision preservation are mainly based on the discernibility matrix and are, thus, computationally very expensive and hard to use in large-scale and high-dimensional data sets. To overcome this problem, we introduce the similarity degree for generalized decision preservation. On this basis, the inner and outer significance measures are proposed. By using heuristic strategies, we develop two quick reduction algorithms for generalized decision preservation. Finally, theoretical and experimental results show that the proposed heuristic reduction algorithms are effective and efficient.
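For contrast with the paper's similarity-degree measure, the general shape of a significance-based heuristic reduction can be sketched with the classical positive-region dependency: greedily add the attribute with the highest significance until the reduct preserves the full dependency. The toy decision table and attribute names below are invented for illustration.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group objects by their values on attrs (indiscernibility classes)."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, attrs, decision):
    """gamma(B) = |POS_B(D)| / |U|: fraction of objects in decision-pure blocks."""
    pos = sum(len(block) for block in partition(rows, attrs)
              if len({rows[i][decision] for i in block}) == 1)
    return pos / len(rows)

def greedy_reduct(rows, cond_attrs, decision):
    """Add the attribute with the highest significance until gamma is preserved."""
    reduct = []
    target = dependency(rows, cond_attrs, decision)
    while dependency(rows, reduct, decision) < target:
        gains = {a: dependency(rows, reduct + [a], decision)
                 for a in cond_attrs if a not in reduct}
        reduct.append(max(gains, key=gains.get))
    return reduct

# Toy decision table: 'c' is constant, hence redundant and excluded.
rows = [
    {"a": 0, "b": 0, "c": 0, "d": "no"},
    {"a": 0, "b": 1, "c": 0, "d": "yes"},
    {"a": 1, "b": 0, "c": 0, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "yes"},
]
print(greedy_reduct(rows, ["a", "b", "c"], "d"))  # ['a', 'b']
```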
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
Faster-than-Nyquist Signal Processing Based on Unequal Error Probability for High-Throughput Wireless Communications
Appl. Sci. 2019, 9(12), 2413; https://doi.org/10.3390/app9122413 - 13 Jun 2019
Abstract
Faster-than-Nyquist (FTN) signal processing, which transmits signals faster than the Nyquist rate, is a representative method for improving throughput efficiency at the cost of performance degradation due to inter-symbol interference. To overcome this problem, this paper proposes FTN signal processing based on unequal error probability to improve performance. The unequal error probability method divides encoded bits into groups according to priority, and a different FTN interference ratio is applied to each group. A lower FTN interference ratio is allocated to the group of high-priority encoded bits and a higher FTN interference ratio to the group of low-priority encoded bits; thus, a performance improvement can be obtained compared to the conventional FTN method with a uniform interference ratio. In addition, we applied the proposed FTN signal processing, based on the unequal error probability method, to the orthogonal frequency division multiplexing (OFDM) system in multipath channel environments. In the simulations, the performance of the proposed method was better than that of the conventional FTN method by about 0.2 dB to 0.3 dB at interference ratios of 20%, 30%, and 40%. In addition, in multipath channels, we confirmed that applying the proposed unequal error probability improves the performance of the OFDM-FTN method to a larger extent than the conventional OFDM-FTN method.
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
An Experimental Analytics on Discovering Work Transference Networks from Workflow Enactment Event Logs
Appl. Sci. 2019, 9(11), 2368; https://doi.org/10.3390/app9112368 - 10 Jun 2019
Cited by 1
Abstract
A work transference network is a type of enterprise social network centered on the interactions among performers participating in workflow processes. The work transference networks hidden in workflow enactment histories not only denote the structure of the enterprise social network among performers but also imply the degrees of relevancy and intensity between them. The purpose of this paper is to devise a framework that can discover and analyze work transference networks from workflow enactment event logs. The framework includes a series of conceptual definitions to formally describe the overall procedure of the network discovery. To support this conceptual framework, we implement a system that provides functionalities for the discovery, analysis, and visualization steps. As a sanity check for the framework, we carry out a mining experiment on a dataset of real-life event logs using the implemented system. The experimental results show that the framework discovers transference networks correctly and provides primitive knowledge pertaining to the discovered networks. Finally, we expect that work transference network analytics will facilitate assessing workflow fidelity in human resource planning and its observed performance, and eventually enhance the workflow process from the organizational aspect.
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
Wasserstein Generative Adversarial Network Based De-Blurring Using Perceptual Similarity
Appl. Sci. 2019, 9(11), 2358; https://doi.org/10.3390/app9112358 - 08 Jun 2019
Abstract
The de-blurring of blurred images is one of the most important image processing methods, and it can be used as a preprocessing step in many multimedia and computer vision applications. Recently, de-blurring has been performed by neural network methods, such as the generative adversarial network (GAN), which is a powerful generative model. Among the many different types of GAN, the proposed method uses the Wasserstein generative adversarial network with gradient penalty (WGAN-GP). Since edge information is the most important factor in an image, a style loss function is applied to represent the perceptual information of the edge, in order to preserve small edge information and capture its perceptual similarity. As a result, the proposed method improves the similarity between sharp and blurred images by minimizing the Wasserstein distance, and it captures perceptual similarity well using the style loss function, which considers the correlation of features in the convolutional neural network (CNN). To confirm the performance of the proposed method, three experiments were conducted using two datasets: GOPRO Large and Köhler. The optimal solutions were found by changing the parameter values experimentally. The experiments show that the proposed method achieves a higher structural similarity (SSIM) of 0.98 and outperforms other de-blurring methods on both datasets.
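Style losses of the kind mentioned above are commonly computed from Gram matrices of CNN feature maps (channel-by-channel feature correlations). The following is a minimal sketch of that idea, with a random array standing in for real CNN features; the exact loss used in the paper may differ.

```python
import numpy as np

def gram(features):
    """Gram matrix of a (channels, height*width) feature map: channel correlations."""
    c, n = features.shape
    return features @ features.T / (c * n)

def style_loss(feat_sharp, feat_deblurred):
    """Mean squared difference between the two Gram matrices."""
    return float(np.mean((gram(feat_sharp) - gram(feat_deblurred)) ** 2))

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 64))     # hypothetical flattened CNN feature map
print(style_loss(f, f))              # 0.0 for identical features
print(style_loss(f, f + 0.1) > 0.0)  # True: perturbed features raise the loss
```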
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
Modeling the Power Consumption of Function-Level Code Relocation for Low-Power Embedded Systems
Appl. Sci. 2019, 9(11), 2354; https://doi.org/10.3390/app9112354 - 08 Jun 2019
Abstract
We address the battery-life problems of embedded systems by focusing on heterogeneous memory components that are known to meaningfully affect power consumption and have not been fully exploited thus far. Our study establishes a model that predicts and orders the efficiency of function-level code relocation. The model is based on extensive code profiling performed on an actual system, using function-level code relocation between the different types of memory, i.e., flash memory and static RAM, to reduce power consumption. This was accomplished by grouping the assembly instructions to evaluate the distinctive power reduction efficiency depending on function code placement. As a result of the profiling, the efficiency of function-level code relocation was lowest at 11.517% for the branch and control groups and highest at 12.623% for the data processing group. Further, we propose a prior relocation-scoring model to estimate the effective relocation order among the functions in a program. To demonstrate the effectiveness of the proposed model, benchmarks from the MiBench benchmark suite were selected as case studies. The experimental results are consistent in terms of the scored outputs produced by the proposed model and the measured power reduction efficiencies.
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Feature Paper Article
Reconstruction of PC Display from Common-Mode Noise Emitting on Electrical Power Line
Appl. Sci. 2019, 9(11), 2328; https://doi.org/10.3390/app9112328 - 06 Jun 2019
Abstract
This paper presents a method for reconstructing a personal computer (PC) display image from common-mode noise coupling with monitor signals on a PC power cable. While the signal cable, which connects the PC and the monitor, is usually near the user, the power cable is connected to the electrical network outside the office or building. Thus, power cables may become dominant gateways and/or antennas for the emission and conduction of common-mode noise, which may lead to a serious security issue. The measured common-mode noise was found to include both the monitor signal and undesired beats, which were caused by step responses of the signal and conceal the meaningful information. The original monitor signal was then calculated by excluding the beats, which could be measured using standard up-step and down-step responses, from the measured common-mode noise and using an inverse function of the noise current level. The experimental results show that the beats were removed almost completely from the noise waveform for a monochromatic image. Alphabetic character strings, each composed of at most 9 × 9 dots, were confirmed to be reconstructed clearly at monitor resolutions of both 800 × 600 pixels and 1280 × 1024 pixels from the common-mode noise.
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
A Novel Wavelet-Based Algorithm for Detection of QRS Complex
Appl. Sci. 2019, 9(10), 2142; https://doi.org/10.3390/app9102142 - 26 May 2019
Cited by 5
Abstract
Accurate QRS detection is an important first step for almost all automatic electrocardiogram (ECG) analysis systems. However, QRS detection is difficult, not only because of the wide variety of ECG waveforms but also because of the interference caused by various types of noise. This study proposes an improved QRS complex detection algorithm based on a four-level biorthogonal spline wavelet transform. A noise evaluation method is proposed to quantify the noise amount and to select a lower-noise wavelet detail signal, instead of removing high-frequency components, in the preprocessing stage. The QRS peaks can be detected from the extremum pairs in the selected wavelet detail signal using the proposed decision rules. The results show the high accuracy of the proposed algorithm, which achieves a 0.25% detection error rate, 99.84% sensitivity, and a 99.92% positive predictive value, evaluated on the MIT-BIH arrhythmia database. The proposed algorithm improves the accuracy of QRS detection in comparison with several wavelet-based and non-wavelet-based approaches.
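The paper's detector works on wavelet detail signals and extremum pairs; as a much-simplified stand-in, the underlying idea of thresholded local-maximum peak picking with a refractory period can be sketched as follows. The synthetic trace and all parameter values are invented for illustration.

```python
def detect_peaks(signal, threshold, refractory):
    """Mark a peak where the signal exceeds the threshold and is a local maximum,
    then skip 'refractory' samples so one QRS complex yields one detection."""
    peaks, i = [], 1
    while i < len(signal) - 1:
        if (signal[i] > threshold and
                signal[i] >= signal[i - 1] and signal[i] >= signal[i + 1]):
            peaks.append(i)
            i += refractory
        else:
            i += 1
    return peaks

# Synthetic trace: two sharp "R waves" on a flat baseline.
ecg = [0.0] * 50
ecg[10], ecg[40] = 1.2, 1.1
print(detect_peaks(ecg, threshold=0.5, refractory=20))  # [10, 40]
```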
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
Particle Swarm Optimization and Cuckoo Search-Based Approaches for Quadrotor Control and Trajectory Tracking
Appl. Sci. 2019, 9(8), 1719; https://doi.org/10.3390/app9081719 - 25 Apr 2019
Cited by 4
Abstract
This paper explores the full control of a quadrotor Unmanned Aerial Vehicle (UAV) by exploiting the nature-inspired algorithms of Particle Swarm Optimization (PSO), Cuckoo Search (CS), and the cooperative Particle Swarm Optimization-Cuckoo Search (PSO-CS). The proposed PSO-CS algorithm combines the social-thinking ability of PSO with the local search capability of CS, which helps to overcome the low convergence speed of CS. First, the quadrotor dynamic model is defined using the Newton-Euler formalism. Second, PID (Proportional, Integral, and Derivative) controllers are optimized by the proposed intelligent approaches and by the classical Reference Model (RM) method for full quadrotor control. Finally, simulation results prove that PSO and PSO-CS are more efficient in tuning optimal parameters for quadrotor control. Indeed, the ability of PSO and PSO-CS to track the imposed trajectories is clearly seen in 3D path-tracking simulations, even in the presence of wind disturbances.
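A minimal PSO loop for tuning PID gains can be sketched as follows. The surrogate cost, the "ideal" gains, and all hyperparameters are hypothetical; a real tuner would instead score each candidate (Kp, Ki, Kd) by simulating the quadrotor and measuring its tracking error.

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=20, iters=60, seed=1):
    """Minimal PSO: each particle tracks its personal best and is pulled toward
    the swarm's global best (the 'social thinking' the abstract refers to)."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest

# Surrogate cost: squared distance from hypothetical ideal gains (Kp, Ki, Kd).
ideal = (2.0, 0.5, 0.1)
gains = pso_minimize(lambda g: sum((a - b) ** 2 for a, b in zip(g, ideal)),
                     dim=3, bounds=(0.0, 5.0))
print([round(g, 2) for g in gains])  # close to [2.0, 0.5, 0.1]
```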
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
Model and Data-Driven System Portfolio Selection Based on Value and Risk
Appl. Sci. 2019, 9(8), 1657; https://doi.org/10.3390/app9081657 - 22 Apr 2019
Abstract
System portfolio selection is a kind of tradeoff analysis and decision-making over multiple systems as a whole to fulfill the overall performance from the perspective of a System of Systems (SoS). To avoid the subjectivity of traditional expert-experience-dependent models, a model- and data-driven approach is proposed to advance system portfolio selection. Two criteria, value and risk, are used to indicate the quality of system portfolios. A capability gap model is employed to determine the value of system portfolios, with the weight information determined by correlation analysis. The risk is represented by the remaining useful life (RUL), which is predicted by analyzing time series of system operational data. Next, based on value and risk, an optimization model is proposed. Finally, a case with 100 candidate systems is studied under an anti-missile scenario. Using the Non-dominated Sorting Differential Evolution (NSDE) algorithm, a Pareto set with 200 individuals is obtained. Some characteristics of the Pareto set are analyzed by discussing the frequency of being selected and the association rules. These results show that the proposed model- and data-driven approach is feasible and effective for system portfolio selection.
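The value-risk tradeoff behind the Pareto set can be illustrated with a simple non-dominated filter (maximize value, minimize risk); the candidate portfolios below are invented, and the NSDE algorithm itself involves much more than this filtering step.

```python
def pareto_front(portfolios):
    """Keep portfolios not dominated by any other: q dominates p if q has
    value >= and risk <= p's, with at least one strict inequality."""
    front = []
    for p in portfolios:
        dominated = any(q["value"] >= p["value"] and q["risk"] <= p["risk"]
                        and (q["value"] > p["value"] or q["risk"] < p["risk"])
                        for q in portfolios)
        if not dominated:
            front.append(p)
    return front

candidates = [
    {"name": "A", "value": 0.9, "risk": 0.7},
    {"name": "B", "value": 0.6, "risk": 0.2},
    {"name": "C", "value": 0.5, "risk": 0.4},  # dominated by B
]
print([p["name"] for p in pareto_front(candidates)])  # ['A', 'B']
```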
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
Analysis of the Weights of Service Quality Indicators for Drone Filming and Photography by the Fuzzy Analytic Network Process
Appl. Sci. 2019, 9(6), 1236; https://doi.org/10.3390/app9061236 - 24 Mar 2019
Abstract
The service of drone filming and photography has become more and more popular. However, service providers do not have enough information about service quality indicators and their weights. Analyzing the weights of service quality indicators by the Fuzzy Analytic Network Process (FANP) combined with the Similarity Aggregation Method (SAM) is an important research topic. Therefore, in order to solve this real-life problem, based on the SERVQUAL scale, this research analyzes the weights and rankings from a comprehensive consensus by FANP combined with the geometric mean and SAM, and then compares the differences between them. The results reveal that both comprehensive consensuses of experts’ opinions deemed the most important dimension and indicator to be reliability and “Employees are professional and get adequate support to do their jobs well.” The 2nd to 4th indicators from the comprehensive consensus of experts’ opinions are the same, but their order differs. They are: “Drone service team’s employees give customers personal attention,” “Drone service team has up-to-date equipment,” and “Drone service team provides service legally, safely, and reliably.” The findings reveal the weights of the dimensions and indicators and help maintain good service quality in drone filming and photography.
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)

Open Access Article
Research on Control of Intelligent Vehicle Human-Simulated Steering System Based on HSIC
Appl. Sci. 2019, 9(5), 905; https://doi.org/10.3390/app9050905 - 04 Mar 2019
Cited by 1
Abstract
In this article, experienced drivers with good driving skills are used as learning targets, and road steering test data from skilled drivers are collected. First, a nonlinear fit was made to the driving trajectory of a skilled driver in order to achieve human-simulated control. Segmental polynomial expressions were solved for two typical steering conditions, normal right-steering and U-turn, and the hp-adaptive pseudo-spectral method was used to solve the connection problem of the vehicle's segmental driving trajectory. Second, a new Electric Power Steering (EPS) system was proposed, and an intelligent-vehicle human-simulated steering control model based on human-simulated intelligent control (HSIC) was established in a joint Simulink/Carsim simulation environment for simulation and analysis. Finally, to further verify the effectiveness of the proposed algorithm, an intelligent-vehicle steering system test bench with a steering resistance torque simulation device was built, and a dSPACE rapid prototyping controller was used to realize the human-simulated intelligent control law. The results show that the human-simulated steering control algorithm is superior to traditional proportional-integral-derivative (PID) control in tracking the steering characteristic parameters and in passenger comfort. The steering wheel angle and torque closely track the angle and torque curves from real-vehicle steering experiments with the skilled driver, verifying the effectiveness of the proposed HSIC-based intelligent-vehicle human-simulated steering control algorithm.
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
On the n-Dimensional Phase Portraits
Appl. Sci. 2019, 9(5), 872; https://doi.org/10.3390/app9050872 - 28 Feb 2019
Abstract
The phase portrait of a dynamic system is a tool used to graphically determine the instantaneous behavior of its trajectories for a set of initial conditions. Classic phase portraits are limited to two dimensions, and occasionally snapshots of 3D phase portraits are presented; unfortunately, a single point of view of a third- or higher-order system usually implies information loss. To overcome that limitation, some authors have used an additional degree of freedom, for example color, to represent phase portraits in three dimensions. Other authors combine states empirically to represent higher dimensions, but the question remains whether two-dimensional phase portraits can be extended to higher orders on a sound mathematical basis. In this paper, it is shown that the combinations of states used to generate a set of phase portraits are enough to determine, without loss of information, the complete behavior of the immediate system dynamics for a set of initial conditions in an n-dimensional state space. Furthermore, new graphical tools are provided that are capable of methodically representing the phase portrait of higher-order systems. Full article
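The combination-of-states idea can be made concrete with a small sketch (plain Python; the helper names are hypothetical, not from the paper): enumerating all C(n,2) state pairs yields the set of planar portraits that, per the abstract, covers the immediate dynamics of an n-dimensional trajectory without a single-viewpoint information loss.

```python
from itertools import combinations

def phase_portrait_planes(n_states):
    """All 2D state pairs needed to cover an n-dimensional phase portrait."""
    return list(combinations(range(n_states), 2))

def project_trajectory(trajectory, plane):
    """Project an n-dimensional trajectory (sequence of state vectors) onto one plane."""
    i, j = plane
    return [(x[i], x[j]) for x in trajectory]

# A 4-state system needs C(4,2) = 6 planar portraits.
planes = phase_portrait_planes(4)
traj = [(0.0, 1.0, 2.0, 3.0), (0.1, 0.9, 2.1, 2.9)]
xy = project_trajectory(traj, planes[0])
```

Plotting each projected pair side by side then gives the methodical n-dimensional portrait the paper argues for.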
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
An Efficient Encoding Algorithm Using Local Path on Huffman Encoding Algorithm for Compression
Appl. Sci. 2019, 9(4), 782; https://doi.org/10.3390/app9040782 - 22 Feb 2019
Abstract
Huffman encoding and arithmetic coding algorithms have shown great potential in the field of image compression; they are the origin of current image compression techniques. Nevertheless, both algorithms, which use the frequencies of the characters in the data, have some deficiencies. They aim to represent the symbols used in the data with the shortest possible bit sequences, yet they represent rarely used symbols with very long bit sequences. The arithmetic coding algorithm was developed to address the shortcomings of the Huffman encoding algorithm. This paper proposes an efficient alternative encoding algorithm built on the Huffman encoding algorithm. Its main objective is to reduce the number of bits spent on symbols to which the Huffman encoding algorithm assigns long codewords. Initially, the Huffman encoding algorithm is applied to the data. The characters represented by short bit sequences from the Huffman encoding algorithm are ignored. Flag bits are then added according to whether successive symbols lie on the same leaf: if the next character is not on the same leaf, flag bit "0" is added; otherwise, flag bit "1" is added between the characters. In other words, the key significance of this algorithm is that it retains the effective aspects of Huffman encoding while offering a solution for the long bit sequences that cannot be represented efficiently. The validity of the algorithm is evaluated with three different groups of images: randomly selected images from the USC-SIPI and STARE databases, and randomly selected standard images from the internet. The algorithm compresses these images successfully. Some images with a balanced tree structure yielded results close to those of other algorithms; however, over the complete test set, the proposed encoding algorithm achieved excellent results. Full article
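For readers unfamiliar with the baseline the paper builds on, here is a minimal textbook Huffman coder (this is only the standard first stage, not the authors' flag-bit extension; function names are illustrative):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Prefix the two subtrees' codes with 0 and 1, then merge them.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
encoded = "".join(codes[s] for s in "aaaabbc")
```

Rare symbols ("c" here) receive the longest codewords; it is exactly these long codewords that the proposed flag-bit scheme targets.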
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
S-Box Based Image Encryption Application Using a Chaotic System without Equilibrium
Appl. Sci. 2019, 9(4), 781; https://doi.org/10.3390/app9040781 - 22 Feb 2019
Cited by 20
Abstract
Chaotic systems without equilibrium are of interest because they are systems with hidden attractors. A non-equilibrium system with chaos is introduced in this work. The chaotic behavior of the system is verified by phase portraits, Lyapunov exponents, and entropy. We have implemented a real electronic circuit of the system and report experimental results. Using this new chaotic system, we have constructed S-boxes, which are applied in a novel image encryption algorithm. In the designed encryption algorithm, three S-boxes with strong cryptographic properties are used for the sub-byte operation; in particular, the S-box for each sub-byte process is selected randomly. In addition, performance analyses of the S-boxes and security analyses of the encryption process are presented. Full article
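As a loose illustration only: the paper derives its S-boxes from its own non-equilibrium system, but the generic recipe of ranking chaotic iterates to obtain a bijective byte substitution can be sketched with a logistic-map stand-in (all names and constants below are assumptions, not the authors' construction):

```python
def chaotic_sbox(x0=0.7, r=3.99, size=256):
    """Derive a bijective S-box by ranking iterates of a chaotic map."""
    x, samples = x0, []
    # Discard transient iterates so the orbit settles onto the attractor.
    for _ in range(1000):
        x = r * x * (1.0 - x)
    for _ in range(size):
        x = r * x * (1.0 - x)
        samples.append(x)
    # The rank ordering of the chaotic samples defines a permutation of 0..255.
    order = sorted(range(size), key=lambda i: samples[i])
    sbox = [0] * size
    for rank, idx in enumerate(order):
        sbox[idx] = rank
    return sbox

sbox = chaotic_sbox()
```

Bijectivity holds by construction; the cryptographic quality (nonlinearity, SAC, etc.) is what the paper's performance analyses then measure.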
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
A Data-Driven Game Theoretic Strategy for Developers in Software Crowdsourcing: A Case Study
Appl. Sci. 2019, 9(4), 721; https://doi.org/10.3390/app9040721 - 19 Feb 2019
Cited by 3
Abstract
Crowdsourcing has the advantages of being cost-effective and saving time, and is a typical embodiment of collective wisdom and of collaborative development by community workers. However, this development paradigm of software crowdsourcing has not been widely used. One important reason is that requesters have limited knowledge about crowd workers' professional skills and qualities. Another is that crowd workers in a competition may not receive an appropriate reward, which affects their motivation. To solve this problem, this paper proposes a method of maximizing reward based on workers' crowdsourcing ability, so that workers can choose tasks according to their own abilities to obtain appropriate bonuses. Our method includes two steps. First, it puts forward a method to evaluate crowd workers' ability and then analyzes the intensity of competition for tasks on Topcoder.com, an open community crowdsourcing platform, on the basis of the workers' crowdsourcing ability. Second, following dynamic programming ideas, it builds game models under complete information for different cases, offering a reward-maximization strategy for workers by solving for a mixed-strategy Nash equilibrium. This paper uses crowdsourcing data from Topcoder.com to carry out experiments. The experimental results show that the distribution of workers' crowdsourcing ability is uneven and to some extent reflects the activity level of crowdsourcing tasks. Meanwhile, by following the reward-maximization strategy, a crowd worker can obtain the theoretical maximum reward. Full article
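The mixed-strategy Nash equilibrium step can be illustrated for the simplest 2x2 case via the standard indifference conditions (a generic textbook sketch, not the paper's task-selection game; the payoff matrices below are invented):

```python
def mixed_nash_2x2(A, B):
    """Interior mixed-strategy Nash equilibrium of a 2x2 bimatrix game.

    A[i][j] / B[i][j]: payoffs of the row / column player. Assumes a
    fully mixed equilibrium exists (denominators nonzero).
    """
    # Row player's probability p on row 0 makes the column player indifferent.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # Column player's probability q on column 0 makes the row player indifferent.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies: the unique equilibrium mixes 50/50.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
p, q = mixed_nash_2x2(A, B)
```

In the paper's setting the players are competing workers and the payoffs are expected bonuses conditioned on ability; the same indifference logic pins down the equilibrium strategy.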
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
Defining the Minimum Security Baseline in a Multiple Security Standards Environment by Graph Theory Techniques
Appl. Sci. 2019, 9(4), 681; https://doi.org/10.3390/app9040681 - 17 Feb 2019
Cited by 1
Abstract
One of the best ways to protect an organization's assets is to implement the security requirements defined by different standards or best practices. However, such an approach is complicated and requires specific skills and knowledge. When an organization applies multiple security standards, several problems can arise related to overlapping or conflicting security requirements, increased expenses on security requirement implementation, and the convenience of security requirement monitoring. To solve these issues, we propose using graph theory techniques. Graphs allow the security requirements of a standard to be presented as vertices, with edges between vertices showing the relations between different requirements. A vertex cover algorithm is proposed for minimum security requirement identification, while graph isomorphism is proposed for comparing an organization's existing controls against the set of minimum requirements identified in the previous step. Full article
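A minimal sketch of the vertex-cover step, using the classic 2-approximation that takes both endpoints of each uncovered edge (the paper's exact algorithm may differ, and the requirement IDs below are invented for illustration):

```python
def greedy_vertex_cover(edges):
    """2-approximate vertex cover: take both endpoints of every uncovered edge."""
    cover, remaining = set(), list(edges)
    while remaining:
        u, v = remaining.pop()
        if u in cover or v in cover:
            continue  # this relation is already covered by a chosen requirement
        cover.update((u, v))
    return cover

# Requirements as vertices; an edge marks overlapping/related requirements.
edges = [("A.12", "B.3"), ("A.12", "C.7"), ("B.3", "C.7"), ("D.1", "B.3")]
baseline = greedy_vertex_cover(edges)
```

The returned set touches every relation, so implementing just those requirements covers all modeled overlaps; an exact minimum cover would need the NP-hard optimization the approximation side-steps.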
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
Effect of Fiber Weave Structure in Printed Circuit Boards on Signal Transmission Characteristics
Appl. Sci. 2019, 9(2), 353; https://doi.org/10.3390/app9020353 - 21 Jan 2019
Cited by 1
Abstract
In this paper, we characterized and compared the signal transmission performance of traces over different fiber weave specifications. Measurements demonstrated that the dielectric constant, impedance fluctuation, and differential skew were all affected by the fiber weave style. For flattened fiber weaves, the dielectric constant fluctuation reached 0.18, the impedance fluctuation amplitude was 1.0 Ω, and the differential skew was 2 ps/inch; for conventional fiber weaves, the three parameters were 0.44, 2.5 Ω, and 4 ps/inch, respectively. The flattened fiber weave was thus more favorable for high-speed signal control. We also discuss other methods to mitigate the fiber weave effect. NE-glass (new electronic glass) fiber weave also performed better in reducing impedance fluctuation and differential skew. Furthermore, routing the signal traces at an angle to the fiber weave bundles, or designing long signal lines parallel to the weft direction, are both simple and effective ways to alleviate this problem. Full article
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
Spreadsheets as Interoperability Solution for Business Process Representation
Appl. Sci. 2019, 9(2), 345; https://doi.org/10.3390/app9020345 - 20 Jan 2019
Cited by 1
Abstract
Business process models help to visualize the processes of an organization. In enterprises, these processes are often specified in internal regulations, resolutions, or other corporate legal acts. Such descriptions, like task lists, mostly take the form of enumerated lists or spreadsheets. In this paper, we present a mapping of process model elements into a spreadsheet representation. As a process model can be represented in various notations, this can be seen as an interoperability solution for process knowledge interchange between different representations. In presenting the details of the solution, we focus on the popular BPMN representation, which is a de facto standard for business process modeling. We present a method to generate a BPMN process model from a spreadsheet-based representation. In contrast to other existing spreadsheet-based approaches, our method does not require explicit specification of gateways in the spreadsheet, but instead takes advantage of the nested list form. Such a spreadsheet can be created either manually or merged from task list specifications provided by users. Full article
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
Dynamic Parameter Identification of a Lower Extremity Exoskeleton Using RLS-PSO
Appl. Sci. 2019, 9(2), 324; https://doi.org/10.3390/app9020324 - 17 Jan 2019
Cited by 3
Abstract
The lower extremity exoskeleton is a device for the auxiliary assistance of human movement. The interaction performance between the exoskeleton and the human is determined by the exoskeleton's controller, and the performance of the controller in turn depends on the accuracy of the dynamic equation. It is therefore necessary to study dynamic parameter identification for lower extremity exoskeletons. Existing identification algorithms for lower extremity exoskeletons are generally based on Least Squares (LS) and have inherent drawbacks, such as complicated experimental procedures and low identification accuracy. A dynamic parameter identification algorithm based on Particle Swarm Optimization (PSO), with its search space defined by Recursive Least Squares (RLS), is developed in this investigation and named RLS-PSO. By defining the search space of the PSO, RLS-PSO not only keeps the identified parameters from converging to local minima, but also improves the identification accuracy of the exoskeleton dynamic parameters. Under the same experimental conditions, the identification accuracy of RLS-PSO, PSO, and LS was quantitatively compared and analyzed. The results demonstrate that the identification accuracy of RLS-PSO is higher than that of LS and PSO. Full article
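The core mechanism, PSO searching inside a bounded window (which in RLS-PSO would come from the RLS estimate), can be sketched on a toy identification problem (the bounds, constants, and model below are assumptions for illustration, not the exoskeleton dynamics):

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=1):
    """Minimal PSO confined to `bounds`, e.g. a window around an RLS estimate."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction weights (typical values)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clamp to window
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# Identify two parameters of a toy model y = a*x + b.
true_a, true_b = 2.0, -1.0
data = [(x, true_a * x + true_b) for x in range(10)]
err = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in data)
# These bounds play the role of the RLS-derived search window.
best, best_err = pso_minimize(err, [(0.0, 4.0), (-3.0, 1.0)])
```

Tightening the bounds around a coarse RLS solution is what keeps the swarm away from distant local minima, which is the paper's stated advantage over plain PSO.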
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
Background Knowledge Based Multi-Stream Neural Network for Text Classification
Appl. Sci. 2018, 8(12), 2472; https://doi.org/10.3390/app8122472 - 03 Dec 2018
Cited by 11
Abstract
As a foundational and typical task in natural language processing, text classification has been widely applied in many fields. However, most existing corpora used for text classification are imbalanced, which often biases a classifier's performance toward the categories with more texts. In this paper, we propose a background knowledge based multi-stream neural network to make up for the imbalance or insufficient information caused by the limitations of the training corpus. The multi-stream network mainly consists of a basal stream, which retains the original sequence information, and background knowledge based streams. Background knowledge is composed of keywords and co-occurring words extracted from an external corpus. The background knowledge based streams are devoted to supplying supplemental information and reinforcing the basal stream. To better fuse the features extracted from the different streams, an early-fusion strategy and two after-fusion strategies are employed. Results obtained on both a Chinese corpus and an English corpus demonstrate that the proposed background knowledge based multi-stream neural network performs well in classification tasks. Full article
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
An Efficient Routing Protocol Using the History of Delivery Predictability in Opportunistic Networks
Appl. Sci. 2018, 8(11), 2215; https://doi.org/10.3390/app8112215 - 10 Nov 2018
Cited by 1
Abstract
In opportunistic networks, such as delay-tolerant networks, a message is delivered to the final destination node using an opportunistic routing protocol, since there is no guaranteed routing path from a sending node to a receiving node and most connections between nodes are temporary. In opportunistic routing, a message is delivered using a 'store-carry-forward' strategy: a message is stored in the buffer of a node, the node carries the message while moving, and the message is forwarded to another node when a contact occurs. In this paper, we propose an efficient opportunistic routing protocol using the history of the delivery predictability of mobile nodes. In the proposed routing protocol, when a node receives a message from another node, the receiving node's delivery predictability to the message's destination node is recorded; this value is defined as the previous delivery predictability. Then, when two nodes come into contact, a message is forwarded only if the delivery predictability of the other node is higher than both the delivery predictability and the previous delivery predictability of the sending node. Performance analysis results show that the proposed protocol performs best in terms of delivery ratio, overhead ratio, and delivery latency for varying buffer sizes, message generation intervals, and numbers of nodes. Full article
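The delivery-predictability bookkeeping resembles PROPHET-style updates; the sketch below assumes that style (the constants `P_INIT` and `GAMMA` are typical PROPHET values, not taken from the paper) and adds the paper's extra "previous predictability" test in the forwarding decision:

```python
P_INIT, GAMMA = 0.75, 0.98  # assumed PROPHET-style constants

def on_contact(P, a, b):
    """Direct-contact update: a's predictability for destination b grows toward 1."""
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1.0 - old) * P_INIT

def age(P, elapsed_units):
    """Predictabilities decay over time when no contacts occur."""
    for k in P:
        P[k] *= GAMMA ** elapsed_units

def should_forward(p_self, p_other, prev_p_self):
    """Forward only if the peer beats both the current and the recorded
    previous delivery predictability of the carrying node."""
    return p_other > p_self and p_other > prev_p_self

P = {}
on_contact(P, "A", "D")  # node A meets destination D
age(P, 2)                # two time units pass with no contact
```

Requiring the peer to also beat the stored previous predictability is the history mechanism that filters out forwarding to nodes that only look momentarily better.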
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
A Chaotic System with Infinite Equilibria and Its S-Box Constructing Application
Appl. Sci. 2018, 8(11), 2132; https://doi.org/10.3390/app8112132 - 02 Nov 2018
Cited by 18
Abstract
Systems with many equilibrium points have attracted considerable interest recently. A chaotic system with a line equilibrium is studied in this work. The system has infinite equilibria and exhibits coexisting chaotic attractors. It has been realized by an electronic circuit, which confirms its feasibility. Based on this system, we have developed a new S-Box generation algorithm, with which two new S-Boxes are produced. Performance tests show that the proposed S-Boxes have good performance results. Full article
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
Research of Message Scheduling for In-Vehicle FlexRay Network Static Segment Based on Next Fit Decreasing (NFD) Algorithm
Appl. Sci. 2018, 8(11), 2071; https://doi.org/10.3390/app8112071 - 26 Oct 2018
Cited by 2
Abstract
FlexRay is becoming the next-generation in-vehicle communication network. This study focuses on the scheduling algorithm and optimization strategy for the FlexRay static segment, aiming to improve the scheduling efficiency of the in-vehicle network and optimize the performance of the communication network. The characteristics of the FlexRay static segment were first analyzed, and bandwidth utilization was selected as the performance metric for the scheduling problem. A signal packing method based on the Next Fit Decreasing (NFD) algorithm is proposed, and a Frame ID (FID) multiplexing method is then used to minimize the number of FIDs. Finally, experimental simulation with the CANoe.FlexRay software shows that the model can quickly obtain the message schedule of each node and effectively control the message payload size: it reduced the bus payload by 16.3% and the number of FIDs by 53.8%, while improving bandwidth utilization by 32.8%. Full article
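Next Fit Decreasing itself is easy to state; here is a sketch of packing signals into fixed-size static-segment frames (the frame size and signal lengths below are invented for illustration):

```python
def next_fit_decreasing(sizes, capacity):
    """Pack items (signal lengths) into fixed-capacity bins (static-segment frames).

    Next Fit keeps only the current bin open; Decreasing sorts items first.
    """
    bins, current, free = [], [], capacity
    for s in sorted(sizes, reverse=True):
        if s > free:              # current frame full: close it, open a new one
            bins.append(current)
            current, free = [], capacity
        current.append(s)
        free -= s
    if current:
        bins.append(current)
    return bins

# Signals (in bytes) packed into 16-byte static-segment frames.
frames = next_fit_decreasing([7, 9, 3, 6, 2, 5], 16)
```

Each resulting bin corresponds to one frame/FID, so fewer bins directly means fewer FIDs and higher static-segment bandwidth utilization.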
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
Co-Occurrence Network of High-Frequency Words in the Bioinformatics Literature: Structural Characteristics and Evolution
Appl. Sci. 2018, 8(10), 1994; https://doi.org/10.3390/app8101994 - 20 Oct 2018
Cited by 3
Abstract
The subjects of the literature are the direct expression of authors' research results, and mining valuable knowledge from them helps readers quickly grasp the content and direction of the literature. Therefore, the co-occurrence network of high-frequency words in the bioinformatics literature, together with its structural characteristics and evolution, is analysed in this paper. First, 242,891 articles from 47 top bioinformatics periodicals were chosen as the object of study. Second, the co-occurrence relationships among the high-frequency words of these articles were analysed via word segmentation and high-frequency word selection. Then, a co-occurrence network of high-frequency words in the bioinformatics literature was built. Finally, conclusions were drawn by analysing its structural characteristics and evolution. The results showed that the co-occurrence network of high-frequency words in the bioinformatics literature is a small-world network with a scale-free distribution, a rich-club phenomenon, and disassortative mixing characteristics. At the same time, the high-frequency words used by authors changed little over 2–3 years but varied greatly over four years, owing to the influence of state-of-the-art technology. Full article
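Building such a co-occurrence network from high-frequency words can be sketched with standard-library tools (toy documents below; the paper's word segmentation and frequency thresholds are not reproduced):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(documents, top_k=3):
    """Weighted edges between high-frequency words appearing in the same document."""
    # Document frequency of each word (count each word once per document).
    freq = Counter(w for doc in documents for w in set(doc))
    high = {w for w, _ in freq.most_common(top_k)}
    edges = Counter()
    for doc in documents:
        words = sorted(high & set(doc))
        for pair in combinations(words, 2):
            edges[pair] += 1  # co-occurrence weight
    return high, edges

docs = [["gene", "protein", "network"],
        ["gene", "network", "pathway"],
        ["protein", "gene"]]
high, edges = cooccurrence_network(docs)
```

The resulting weighted graph is what the structural analyses (small-world, scale-free, rich-club, assortativity) are computed on.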
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
A Method of Free-Space Point-of-Regard Estimation Based on 3D Eye Model and Stereo Vision
Appl. Sci. 2018, 8(10), 1769; https://doi.org/10.3390/app8101769 - 30 Sep 2018
Cited by 1
Abstract
This paper proposes a 3D point-of-regard estimation method based on a 3D eye model, together with a corresponding head-mounted gaze tracking device. First, a head-mounted gaze tracking system is presented. The device uses two pairs of stereo cameras to capture the left and right eye images, respectively, and a pair of scene cameras to capture the scene images. Second, a 3D eye model and its calibration process are established, with common eye features used to estimate the eye model parameters. Third, a 3D point-of-regard estimation algorithm is proposed. Its three main parts are summarized as follows: (1) the spatial coordinates of the eye features are calculated directly using the stereo cameras; (2) the pupil center normal is used as the initial value for the estimation of the optical axis; (3) a pair of scene cameras is used to solve for the actual positions of the objects being watched during the calibration process, so that calibration of the proposed eye model does not need the assistance of a light source. Experimental results show that the proposed method outputs the coordinates of the 3D point-of-regard more accurately. Full article
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
H∞ Robust Load Frequency Control for Multi-Area Interconnected Power System with Hybrid Energy Storage System
Appl. Sci. 2018, 8(10), 1748; https://doi.org/10.3390/app8101748 - 27 Sep 2018
Cited by 3
Abstract
To enhance the quality of the output power from a regional interconnected power grid and strengthen the stability of the overall system, a hybrid energy storage system (HESS) is applied to the traditional multi-area interconnected power system to improve the performance of load frequency control. A novel topology of the interconnected power system with the HESS is proposed. Considering the external disturbances of the system and the interconnection factors between the control areas, the dynamic mathematical model of each area in the new topology is established in state-space form. Combining state-feedback robust control theory with linear matrix inequality (LMI) theory, a controller is designed to calculate, in real time, how much power the HESS should provide to the power grid according to the load changes of the system. Taking a four-area interconnected power system as the study object, simulation results obtained with MATLAB show that the application of the HESS can markedly improve the frequency stability of the multi-area interconnected system and that the H∞ robust controller proposed in this paper is effective. Full article
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
Secure Multiple-Input Multiple-Output Communications Based on F–M Synchronization of Fractional-Order Chaotic Systems with Non-Identical Dimensions and Orders
Appl. Sci. 2018, 8(10), 1746; https://doi.org/10.3390/app8101746 - 27 Sep 2018
Cited by 4
Abstract
This paper investigates F–M synchronization between non-identical fractional-order systems characterized by different dimensions and different orders. F–M synchronization combines inverse generalized synchronization with matrix projective synchronization. In particular, the proposed approach enables F–M synchronization to be achieved between an n-dimensional master system and an m-dimensional slave system. The developed approach is applied to chaotic and hyperchaotic fractional systems with the aim of illustrating its applicability and suitability. A multiple-input multiple-output (MIMO) secure communication system is also developed by using F–M synchronization and verified through computer simulations. Full article
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
Memory-Enhanced Dynamic Multi-Objective Evolutionary Algorithm Based on Lp Decomposition
Appl. Sci. 2018, 8(9), 1673; https://doi.org/10.3390/app8091673 - 15 Sep 2018
Cited by 6
Abstract
Decomposition-based multi-objective evolutionary algorithms provide a good framework for static multi-objective optimization. Nevertheless, there are few studies on their use in dynamic optimization. To solve dynamic multi-objective optimization problems, this paper integrates the framework into dynamic multi-objective optimization and proposes a memory-enhanced dynamic multi-objective evolutionary algorithm based on Lp decomposition (denoted dMOEA/D-Lp). Specifically, dMOEA/D-Lp decomposes a dynamic multi-objective optimization problem into a number of dynamic scalar optimization subproblems and co-evolves them simultaneously, with the Lp decomposition method adopted for decomposition. Meanwhile, a subproblem-based bunchy memory scheme that stores good solutions from old environments and reuses them as necessary is designed to respond to environmental change. Experimental results verify the effectiveness of the Lp decomposition method in dynamic multi-objective optimization. Moreover, the proposed dMOEA/D-Lp achieves better performance than other popular memory-enhanced dynamic multi-objective optimization algorithms. Full article
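The decomposition step reduces each subproblem to a weighted scalar objective; a sketch of the standard weighted-Lp scalarizing function follows (the symbol `z_star` denotes the ideal point; the weights and p value are illustrative, not the paper's settings):

```python
def lp_scalarize(f_values, weights, z_star, p=2):
    """Weighted Lp scalarizing function: turns one multi-objective evaluation
    into the scalar objective of a single decomposed subproblem."""
    return sum(w * abs(f - z) ** p
               for f, w, z in zip(f_values, weights, z_star)) ** (1.0 / p)

# Two subproblems (weight vectors) evaluating the same candidate solution.
g1 = lp_scalarize([2.0, 1.0], [0.8, 0.2], [0.0, 0.0], p=2)
g2 = lp_scalarize([2.0, 1.0], [0.2, 0.8], [0.0, 0.0], p=2)
```

Each weight vector defines one scalar subproblem; co-evolving a population across many weight vectors is what spreads solutions along the Pareto front.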
(This article belongs to the Special Issue Applied Sciences Based on and Related to Computer and Control)
Open Access Article
A Low Cost Vision-Based Road-Following System for Mobile Robots
Appl. Sci. 2018, 8(9), 1635; https://doi.org/10.3390/app8091635 - 13 Sep 2018
Cited by 6
Abstract
Navigation is necessary for autonomous mobile robots that need to follow roads in outdoor environments. These functions can be achieved by fusing data from costly sensors such as GPS/IMU, lasers, and cameras. In this paper, we propose a novel method for road detection and road following without prior knowledge, which is better suited to small single-lane roads. The proposed system consists of a road detection system and a road tracking system. A color-based road detector and a texture line detector are designed separately and fused to track the target in the road detection system. The top middle area of the road detection result is taken as the road-following target and delivered to the road tracking system. The road tracking system maps the tracking position from camera coordinates to world coordinates, which are used to compute the control commands with traditional tracking controllers. The robustness of the system is enhanced by an Unscented Kalman Filter (UKF). The UKF estimates the best road borders from the measurements and provides a smooth road transition from frame to frame, especially in situations such as occlusions or discontinuous roads. In tests, the system achieved a recognition rate of about 98.7% under regular illumination conditions, with minimal road-following error across a variety of environments and lighting conditions. Full article
Open Access Article
Optimal Robust Control of Path Following and Rudder Roll Reduction for a Container Ship in Heavy Waves
Appl. Sci. 2018, 8(9), 1631; https://doi.org/10.3390/app8091631 - 12 Sep 2018
Cited by 5
Abstract
This paper presents an optimal approach to the multi-objective synthesis of path following and rudder roll reduction for a container ship in heavy waves. An improved line-of-sight principle with course-keeping in a track-belt is proposed to guide the ship in accordance with marine practice. Concise robust controllers for course and roll motion are developed based on backstepping and closed-loop gain shaping; their control parameters have clear physical significance. A method for determining these parameters is given, and much effort is made to guarantee the uniform asymptotic stability of the closed-loop systems by Lyapunov synthesis. Furthermore, the fast and elitist non-dominated sorting genetic algorithm (NSGA-II), a multi-objective optimization method, is used to handle the restrictions caused by model perturbation, external disturbance, and performance trade-offs. In contrast to the existing literature, the research strategy and control performance are more in line with marine engineering practice. Simulation results illustrate the performance and effectiveness of the proposed system.
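The core of NSGA-II, referenced in the abstract, is its fast non-dominated sorting of candidate solutions into Pareto fronts. A minimal sketch of that sorting step (assuming all objectives are minimized; the crowding-distance and genetic operators are omitted):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Fast non-dominated sort from NSGA-II: returns a list of fronts,
    each front being a list of indices into `points`."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)             # front 0: the Pareto-optimal set
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    fronts.pop()                            # drop the trailing empty front
    return fronts
```

In the paper's setting, each point would be a controller-parameter candidate scored on competing objectives such as path-following error and roll reduction.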
Open Access Article
Channel Estimation Based on Statistical Frames and Confidence Level in OFDM Systems
Appl. Sci. 2018, 8(9), 1607; https://doi.org/10.3390/app8091607 - 10 Sep 2018
Cited by 4
Abstract
Channel estimation is an important module for improving the performance of orthogonal frequency division multiplexing (OFDM) systems. The pilot-based least squares (LS) algorithm can improve channel estimation accuracy and the symbol error rate (SER) performance of the communication system. In pilot-based channel estimation, a certain number of pilots are inserted at fixed intervals into the OFDM symbols to estimate the initial channel information, and the full channel estimate is obtained by one-dimensional linear interpolation. The minimum mean square error (MMSE) and linear minimum mean square error (LMMSE) algorithms involve inverting the channel matrix; as the number of subcarriers increases, the matrix dimension grows and the inversion becomes more complex. To overcome these disadvantages of conventional channel estimation methods, this paper proposes a novel OFDM channel estimation method based on statistical frames and a confidence level. The noise variance in the estimated channel impulse response (CIR) is largely reduced under statistical frames and the confidence level, which reduces computational complexity and improves estimation accuracy. Simulation results verify the effectiveness of the proposed method in time-varying dynamic wireless channels.
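The baseline the paper builds on, LS estimation at pilot subcarriers followed by one-dimensional linear interpolation, can be sketched as follows. This is a generic textbook baseline, not the paper's proposed statistical-frame method; the comb-type pilot layout and symbol values are illustrative assumptions.

```python
def ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_sub):
    """Pilot-based LS channel estimate with linear interpolation.

    rx_pilots : received values at the pilot subcarriers (complex)
    tx_pilots : transmitted pilot symbols, aligned with rx_pilots
    pilot_idx : subcarrier indices of the pilots (sorted)
    n_sub     : total number of subcarriers
    Returns the interpolated channel estimate H[k] for k = 0..n_sub-1.
    """
    # LS estimate at each pilot subcarrier: H_p = Y_p / X_p
    h_p = [y / x for y, x in zip(rx_pilots, tx_pilots)]
    H = []
    for k in range(n_sub):
        # left pilot of the interval containing k, clamped to the last pair
        i = 0
        while i < len(pilot_idx) - 2 and pilot_idx[i + 1] <= k:
            i += 1
        k0, k1 = pilot_idx[i], pilot_idx[i + 1]
        t = (k - k0) / (k1 - k0)
        H.append((1 - t) * h_p[i] + t * h_p[i + 1])
    return H
```

For a channel whose frequency response varies linearly between pilots, this interpolation is exact; the residual noise in the per-pilot LS estimates is what the paper's statistical-frame averaging targets.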
Open Access Article
An Approach to Participatory Business Process Modeling: BPMN Model Generation Using Constraint Programming and Graph Composition
Appl. Sci. 2018, 8(9), 1428; https://doi.org/10.3390/app8091428 - 21 Aug 2018
Cited by 5
Abstract
Designing business process models plays a vital role in business process management, yet acquiring such models may consume up to 60% of project time. This time can be shortened using methods for the automatic or semi-automatic generation of process models. In this paper, we present a user-friendly method of business process composition. It uses a set of predefined constraints to generate a synthetic log of the process from a simplified, unordered specification describing the activities to be performed. Such a log can then be used to generate a correct BPMN model, either with one of the existing process discovery algorithms or with the activity graph-based composition algorithm, which generates the process model directly from the input log file. The proposed approach allows process participants to take part in process modeling; moreover, it can support business analysts or process designers in visualizing the workflow without having to design the model explicitly in a graphical editor. The BPMN diagram is generated as an interchangeable XML file, which allows further modification and adjustment. The included comparative analysis shows that our method generates process models characterized by high flow complexity and supports BPMN constructs sufficient for about 70% of business cases.
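The first step of the pipeline, turning an unordered activity specification plus precedence constraints into an ordered trace for a synthetic log, can be sketched with a topological sort. This is only an illustration of the idea, not the paper's constraint-programming engine; activity names and constraint pairs are hypothetical.

```python
from collections import deque

def trace_from_constraints(activities, before):
    """Produce one ordered trace from an unordered activity set under
    precedence constraints (pairs (a, b) meaning 'a must occur before b'),
    via Kahn's topological sort. Returns None if the constraints are cyclic
    (i.e., no valid trace exists)."""
    indeg = {a: 0 for a in activities}      # unsatisfied predecessors per activity
    succ = {a: [] for a in activities}      # activities unlocked by a
    for a, b in before:
        succ[a].append(b)
        indeg[b] += 1
    ready = deque(a for a in activities if indeg[a] == 0)
    trace = []
    while ready:
        a = ready.popleft()
        trace.append(a)
        for b in succ[a]:
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)
    return trace if len(trace) == len(activities) else None
```

Repeating this with different tie-breaking orders among the ready activities yields multiple valid traces, i.e., a synthetic log from which a process discovery algorithm can derive the BPMN model.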
Open Access Article
An Improved Opportunistic Routing Protocol Based on Context Information of Mobile Nodes
Appl. Sci. 2018, 8(8), 1344; https://doi.org/10.3390/app8081344 - 10 Aug 2018
Cited by 3
Abstract
The delay-tolerant network (DTN) protocol was proposed for networks where continuous connectivity is not available. In a DTN, a message is delivered to its destination node via a store-carry-forward approach using opportunistic contacts. The probabilistic routing protocol for intermittently connected networks (PRoPHET) is one of the most widely studied DTN protocols: a message is forwarded to a contact node if that node has a higher delivery predictability for the message's destination. In this paper, we propose an improved opportunistic routing protocol that uses two pieces of context information: the average distance travelled and the average time elapsed from the reception of a message until its delivery to the destination node. In the proposed protocol, the average distance and average time are updated whenever a message is delivered to a destination node; both values, together with the delivery predictability of the PRoPHET protocol, are then used to decide on message forwarding. The performance of the proposed protocol is analyzed and compared with that of PRoPHET and the reachable probability centrality (RPC) protocol, one of the latest protocols using the contact history of a mobile node. Simulation results show that the proposed protocol outperforms both PRoPHET and RPC in delivery ratio, overhead ratio, and delivery latency for varying buffer sizes, message generation intervals, and numbers of nodes.
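The PRoPHET delivery predictability that the proposed protocol builds on is maintained by three standard update rules (direct contact, aging, and transitivity). A minimal sketch with the commonly used constants; the paper's added distance and time metrics are not reproduced here.

```python
# Typical PRoPHET constants (from the original protocol description)
P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25

def on_encounter(p_ab):
    """Direct-contact update: meeting node b raises a's predictability for b.
    P(a,b) = P_old + (1 - P_old) * P_init"""
    return p_ab + (1 - p_ab) * P_INIT

def age(p_ab, k):
    """Aging: predictability decays over k elapsed time units.
    P(a,b) = P_old * gamma^k"""
    return p_ab * GAMMA ** k

def transitive(p_ac, p_ab, p_bc):
    """Transitive update: node a can reach c through b.
    P(a,c) = P_old + (1 - P_old) * P(a,b) * P(b,c) * beta"""
    return p_ac + (1 - p_ac) * p_ab * p_bc * BETA
```

Under PRoPHET's forwarding rule, node a hands a message for destination d to contact b only when b's predictability for d exceeds a's own; the proposed protocol additionally weighs the average-distance and average-time context before forwarding.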