Special Issue "Computational and Mathematical Methods in Engineering and Information Science"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (1 April 2021).

Special Issue Editor

Dr. Ahmed Farouk
Guest Editor
Department of Physics and Computer Science, Faculty of Science, Wilfrid Laurier University, Waterloo, Canada
Interests: quantum information and computation; information security and privacy

Special Issue Information

Dear Colleagues,

The scope of this Special Issue covers computational and mathematical methods in engineering, information science, and their applications. It also includes intelligent systems research and focuses on the safety, trust, privacy, and security of augmented human technologies. Computational methods are the interface between applied mathematics, statistics, information science, and engineering; they focus on analytically intractable problems of classical and quantum computation. Many of the most exciting phenomena involve specific quantum effects and entanglement, such as quantum phase transitions, spin fractionalization, and topological order. Researchers in these areas therefore share an interest in developing computational and mathematical techniques to quantify, analyze, and understand recent applications in more complicated systems. Over the past several years, very fruitful interactions between researchers in computational methods, information science, quantum information, and statistical mechanics have taken place, and we expect this activity to continue and intensify over the coming years.

This Special Issue considers original research articles, as well as review articles, on computational and mathematical methods in engineering, information science, and their applications. We aim to gather relevant contributions that introduce new techniques for studying complex systems driven by computational methods. Contributions are expected to provide new insights by combining new models with uncertainty and computational techniques. Interdisciplinary applications are particularly welcome. More precisely, we will select the best contributions dealing with significant cutting-edge advances in computational statistical methods in information science and engineering and their applications. We will place particular emphasis on information science problems that contain engineering insights and consider stochastic approaches in their treatment.

Dr. Ahmed Farouk
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computational statistics
  • Software and system engineering
  • Quantum information and computation
  • Parallel and distributed systems
  • Image and signal processing
  • Computer and mathematical modelling
  • Cyber security and strategies
  • Big data and data mining
  • Data science applications
  • Emerging computer applications
  • Information and knowledge engineering
  • Data mining techniques
  • Data security, privacy, and cryptology
  • Sensor networks
  • Remote sensing

Published Papers (15 papers)


Research

Open Access Article
Supervision of the Infection in an SI (SI-RC) Epidemic Model by Using a Test Loss Function to Update the Vaccination and Treatment Controls
Appl. Sci. 2020, 10(20), 7183; https://doi.org/10.3390/app10207183 - 15 Oct 2020
Viewed by 364
Abstract
This paper studies and proposes supervisory techniques to update the vaccination and control gains through time in a modified SI (susceptible–infectious) epidemic model involving the susceptible and infectious subpopulations. Since the presence of linear feedback controls is admitted, a compensatory recovered (or immune) extra subpopulation is added to the model under zero initial conditions to deal with the recovered subpopulations transferred from the vaccination and antiviral/antibiotic treatment of the susceptible and the infectious, respectively. Therefore, the modified model is referred to as an SI(RC) epidemic model, since it integrates the susceptible, infectious, and compensatory recovered subpopulations. The defined time-integral supervisory loss function can evaluate weighted losses involving, in general, both the susceptible and the infectious subpopulations. A loss function involving only the infectious or only the susceptible subpopulation is also admitted as valid; the concrete definition involving only the infectious is related to the Shannon information entropy. The supervision problem is based on the implementation of a parallel control structure with different potential control gains to be judiciously selected and updated through time. A higher decision level of the supervisory scheme updates the appropriate active controller (i.e., that with the control gain values to be used along the next time window), as well as the switching time instants. In this way, the active controller is the one that provides the best associated supervisory loss function along the next inter-switching time interval. A switching action from one active controller to another is decided whenever a better value of the supervisory loss function is detected for controller gain values distinct from the current ones.
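The supervisory idea above can be sketched in a few lines: simulate each candidate controller over the next time window, accumulate a loss, and switch to the gain with the smallest loss. The sketch below is a heavily simplified illustration under assumed dynamics (a toy controlled SI model with Euler integration, a loss of the integral of I over time involving only the infectious, and hypothetical parameter values); it is not the paper's SI(RC) model or its entropy-related loss function.

```python
def simulate(beta, gamma, v, days, dt=0.1):
    """Euler integration of a toy controlled SI model with vaccination gain v.
    Returns the accumulated supervisory loss, here simply the integral of I dt."""
    S, I, loss = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * S * I          # new infections
        dS = -new_inf - v * S           # vaccination removes susceptibles
        dI = new_inf - gamma * I        # recovery removes infectious
        S += dt * dS
        I += dt * dI
        loss += dt * I                  # loss involves only the infectious
    return loss

def supervisor(gains, beta=0.5, gamma=0.1, window=30):
    """Pick the candidate gain whose simulated loss over the window is smallest."""
    return min(gains, key=lambda v: simulate(beta, gamma, v, window))
```

With these hypothetical parameters, a larger vaccination gain yields fewer infections and hence a smaller loss, so the supervisor selects it for the next window.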

Open Access Article
Nonlinear Dynamics of a Cavity Containing a Two-Mode Coherent Field Interacting with Two-Level Atomic Systems
Appl. Sci. 2020, 10(20), 7150; https://doi.org/10.3390/app10207150 - 14 Oct 2020
Cited by 1 | Viewed by 380
Abstract
This study analytically explored two coupled two-level atomic systems (TLAS) as two qubits interacting with two modes of an electromagnetic field (EMF) cavity via two-photon transitions in the presence of dipole–dipole interactions between the atoms and intrinsic damping. Using special unitary su(1,1) Lie algebra, the general solution of an intrinsic noise model is obtained when an EMF is initially in a generalized coherent state. We investigated the population inversion of two TLAS and the generated quantum coherence of some partitions (including the EMF, two TLAS, and TLAS–EMF). It is possible to generate quantum coherence (mixedness and entanglement) from the initial pure state. The robustness of the quantum coherence produced and the sudden appearance and disappearance of coherence depended not only on dipole–dipole coupling but also on the intrinsic noise rate. The growth of mixedness and entanglement may be enhanced by increasing dipole–dipole coupling, leading to more robustness against intrinsic noise.

Open Access Article
Product Development Using Online Customer Reviews: A Case Study of the South Korean Subcompact Sport Utility Vehicles Market
Appl. Sci. 2020, 10(19), 6918; https://doi.org/10.3390/app10196918 - 02 Oct 2020
Viewed by 591
Abstract
This study focuses on improving multifunctional product development. Instead of face-to-face or other survey methods, we used text mining of online reviews to confirm which characteristics consumers prefer. The reference probability (importance) and the difference between positive and negative opinions (satisfaction) were indexed. By linking “importance” and “satisfaction” with a product’s quantitative performance, the correlation between satisfaction and quantitative performance was confirmed, and a method of setting a product’s design requirements was presented. To verify the validity of the method, we used the subcompact SUV (sport utility vehicle) market in South Korea as a case study. The average importance of and satisfaction with each performance aspect of the cars in the market were extracted, and the successful entry of new products into the market, which reflects these market characteristics, was confirmed. The proposed methodology is meaningful in that it reduces the risks (bias, inefficiency) of existing consumer survey methods by utilizing big data to identify consumer preferences. Companies can use these findings during the product development process to improve customer satisfaction. This study improves product development methods by combining them with the latest advances in big data-related technologies.
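The importance/satisfaction indexing described above can be illustrated with a toy counter: importance as the share of reviews mentioning an attribute, and satisfaction as the positive-minus-negative balance among those mentions. Everything here is an assumption for illustration (the attribute vocabularies, substring matching, and the score definitions); the paper's actual text-mining pipeline is more sophisticated.

```python
def review_indices(reviews, attribute_terms, pos_words, neg_words):
    """For each attribute: importance = fraction of reviews mentioning it;
    satisfaction = (positive - negative mentions) / mentioning reviews."""
    out = {}
    for attr, terms in attribute_terms.items():
        # reviews that mention any term for this attribute (case-insensitive)
        hits = [r.lower() for r in reviews if any(t in r.lower() for t in terms)]
        importance = len(hits) / len(reviews)
        pos = sum(any(w in r for w in pos_words) for r in hits)
        neg = sum(any(w in r for w in neg_words) for r in hits)
        satisfaction = (pos - neg) / len(hits) if hits else 0.0
        out[attr] = (importance, satisfaction)
    return out
```

A usage example: with three reviews of which two mention fuel (one positively, one negatively), "fuel" gets importance 2/3 and satisfaction 0.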

Open Access Article
An External Client-Based Approach for the Extract Class Refactoring: A Theoretical Model and an Empirical Approach
Appl. Sci. 2020, 10(17), 6038; https://doi.org/10.3390/app10176038 - 31 Aug 2020
Cited by 1 | Viewed by 450
Abstract
A commonly observed ambiguity of a class is simply a reflection of multiple methods being implemented within an individual class. The process of Extract Class refactoring is therefore used to separate the different responsibilities of a class into different classes. A major limitation of existing approaches to Extract Class refactoring is that they rely on factors internal to the class, i.e., structural and semantic relationships between methods, to identify and separate the responsibilities of the class, which is inadequate in many cases. Thus, we propose a novel approach that exploits the clients of the class to support Extract Class refactoring. The importance of this approach lies in its usefulness to complement existing approaches, since it involves factors external to the class, i.e., the clients. Moreover, an extensive empirical evaluation is presented to support the proposed method through the utilization of real classes selected from two open-source systems. The results show the potential and usefulness of our proposed approach, which leads to an improvement in the quality of the considered classes.

Open Access Article
Comparison on Search Failure between Hash Tables and a Functional Bloom Filter
Appl. Sci. 2020, 10(15), 5218; https://doi.org/10.3390/app10155218 - 29 Jul 2020
Cited by 1 | Viewed by 471
Abstract
Hash-based data structures have been widely used in many applications. An intrinsic problem of hashing is collision, in which two or more elements are hashed to the same value. If a hash table is heavily loaded, more collisions occur. Elements that cannot be stored in a hash table because of collisions cause search failures. Many variant structures have been studied to reduce the number of collisions, but none of them completely solves the collision problem. In this paper, we claim that a functional Bloom filter (FBF) provides a lower search failure rate than hash tables when a hash table is heavily loaded. In other words, a hash table can be replaced with an FBF, because the FBF achieves a lower search failure rate when storing a large amount of data in a limited amount of memory. While hash tables must store each input key in addition to its return value, a functional Bloom filter stores return values without input keys, because the distinct index combination derived from each input key can be used to identify it. We theoretically compare the search failure rates of the FBF and hash-based data structures such as the multi-hash table, cuckoo hash table, and d-left hash table. We also provide simulation results to validate the theoretical results. The simulations show that the search failure rates of hash tables are larger than that of the functional Bloom filter when the load factor is larger than 0.6.
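A minimal sketch of a functional Bloom filter may clarify why return values can be stored without keys: each key maps to k cells, a cell that receives conflicting values is marked unusable, and a query succeeds when the surviving cells agree. This is an illustrative simplification (SHA-256-derived indices, a conflict sentinel), not the paper's exact construction or parameterization.

```python
import hashlib

class FunctionalBloomFilter:
    """Minimal functional Bloom filter: stores return values, not keys."""
    EMPTY, CONFLICT = object(), object()

    def __init__(self, m=64, k=3):
        self.m, self.k = m, k
        self.cells = [self.EMPTY] * m

    def _indices(self, key):
        # derive k cell indices from the key (deterministic per key)
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def insert(self, key, value):
        for idx in self._indices(key):
            cell = self.cells[idx]
            if cell is self.EMPTY:
                self.cells[idx] = value
            elif cell != value:
                self.cells[idx] = self.CONFLICT  # collided cell is unusable

    def query(self, key):
        candidates = [c for c in (self.cells[i] for i in self._indices(key))
                      if c is not self.CONFLICT]
        # search fails if every cell collided or the survivors disagree
        if candidates and all(c == candidates[0] for c in candidates):
            return candidates[0] if candidates[0] is not self.EMPTY else None
        return None
```

A lookup reads only the k cells indexed by the key; because different keys almost always produce different index combinations, the stored value identifies the key implicitly.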

Open Access Article
Approximation of the Mechanical Response of Large Lattice Domains Using Homogenization and Design of Experiments
Appl. Sci. 2020, 10(11), 3858; https://doi.org/10.3390/app10113858 - 01 Jun 2020
Viewed by 624
Abstract
Lattice-based workpieces contain patterned repetition of individuals of a basic topology (Schwarz, ortho-walls, gyroid, etc.), with each individual having distinct geometric grading. In the context of the design, analysis, and manufacturing of lattice workpieces, the problem of rapidly assessing the mechanical behavior of large domains is relevant for the pre-evaluation of designs. In this realm, two approaches can be identified: (1) numerical simulations, which usually bring accuracy but limit the size of the domains that can be studied due to intractable data sizes, and (2) material homogenization strategies, which sacrifice precision to favor efficiency and allow for simulations of large domains. Material homogenization synthesizes diluted material properties in a lattice according to the volume occupancy factor of such a lattice. Preliminary publications show that material homogenization is reasonable in predicting displacements, but not in predicting stresses (which are highly sensitive to local geometry). As a response to such shortcomings, this paper presents a methodology that systematically uses design of experiments (DOE) to produce simple mathematical expressions (meta-models) that relate the stress–strain behavior of the lattice domain to the displacements of the homogeneous domain. The implementation in this paper estimates the von Mises stress in large Schwarz primitive lattice domains under compressive loads. The results of our experiments show that (1) material homogenization can efficiently and accurately approximate the displacement field, even in complex lattice domains, and (2) material homogenization and DOE can produce rough estimations of the von Mises stress in large domains (more than 100 cells). The errors in the von Mises stress estimations reach 42% for domains of up to 24 cells. This result means that coarse stress–strain estimations may be possible in lattice domains by combining DOE and homogenized material properties. This option is not suitable for precise stress prediction in sensitive contexts where high accuracy is needed. Future work is required to refine the meta-models and improve the accuracy of the estimations.
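Design of experiments enters the methodology as a way to turn a handful of simulations into a simple meta-model. As a minimal illustration of the DOE machinery (not the paper's actual meta-models), the snippet below computes main effects from a two-level full-factorial design with coded ±1 levels; the design matrix and responses are hypothetical.

```python
def main_effects(design, responses):
    """Main effects from a two-level full-factorial DOE (coded -1/+1 levels):
    effect of factor j = mean response at +1 minus mean response at -1."""
    k = len(design[0])
    effects = []
    for j in range(k):
        hi = [y for row, y in zip(design, responses) if row[j] == +1]
        lo = [y for row, y in zip(design, responses) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects
```

For a response generated by y = 10 + 3·x1 - 2·x2, the main effects come out as twice the coefficients (6 and -4), which is the standard factorial-design relationship.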

Open Access Feature Paper Article
Quantum Correlation via Skew Information and Bell Function Beyond Entanglement in a Two-Qubit Heisenberg XYZ Model: Effect of the Phase Damping
Appl. Sci. 2020, 10(11), 3782; https://doi.org/10.3390/app10113782 - 29 May 2020
Cited by 5 | Viewed by 582
Abstract
In this paper, we analyze the dynamics of non-local correlations (NLCs) in an anisotropic two-qubit Heisenberg XYZ model under the effect of phase damping. An analytical solution is obtained by applying a method based on the eigenstates and eigenvalues of the Hamiltonian. It is observed that the generated NLCs are controlled by the Dzyaloshinskii–Moriya (DM) interaction, the purity indicator, the interaction with the environment, and the anisotropy. Furthermore, it is found that the quantum correlations, as well as the sudden death and sudden birth phenomena, depend on the considered physical parameters. In particular, the system presents a special correlation: the skew-information correlation. The log-negativity and the uncertainty-induced non-locality exhibit sudden-change behavior. The purity of the initial states plays a crucial role in the generated non-local correlations. These correlations are sensitive to the DM interaction, anisotropy, and phase damping.

Open Access Article
XOR Multiplexing Technique for Nanocomputers
Appl. Sci. 2020, 10(8), 2825; https://doi.org/10.3390/app10082825 - 19 Apr 2020
Viewed by 617
Abstract
In emerging nanotechnologies, a significant percentage of components may be faulty due to the manufacturing process. In order to make systems based on unreliable nano-scale components reliable, it is necessary to design fault-tolerant architectures. This paper presents a novel fault-tolerant technique for nanocomputers, namely the XOR multiplexing technique. This hardware redundancy technique is based on massive duplication of potentially faulty components. We analyze the error distributions of the XOR multiplexing unit and of multiple stages of the XOR multiplexing system, then compare them to those of the NAND multiplexing unit and the multi-stage NAND multiplexing system, respectively. The simulation results show that XOR multiplexing is more reliable than NAND multiplexing. Bifurcation theory is used to analyze the fault-tolerant ability of the system, and the results show that the XOR multiplexing technique has a high fault-tolerant ability. Like NAND multiplexing, it is a potentially effective fault-tolerant technique for future nanoelectronics.
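For readers unfamiliar with multiplexing-based redundancy, the sketch below simulates one stage of the classic NAND multiplexing scheme that the paper uses as its baseline: a bundle of N redundant gates, a random permutation pairing the two input bundles, and a per-gate fault probability. The paper's XOR construction is not reproduced here; this shows only the generic flavor of the technique, with all parameter values hypothetical.

```python
import random

def multiplexed_nand(inputs_a, inputs_b, fault_p, rng):
    """One von Neumann multiplexing stage: N redundant NAND gates, with a
    random permutation pairing the two input bundles; each gate inverts
    its output with probability fault_p (a stuck/flip fault model)."""
    n = len(inputs_a)
    perm = rng.sample(range(n), n)  # random pairing of the bundles
    out = []
    for i in range(n):
        bit = 1 - (inputs_a[i] & inputs_b[perm[i]])  # NAND of the pair
        if rng.random() < fault_p:
            bit ^= 1  # faulty gate inverts its output
        out.append(bit)
    return out
```

With fault_p = 0 the whole bundle carries the correct NAND value; with small fault_p, only a minority of the bundle is wrong, and subsequent restorative stages can push the error fraction back down.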

Open Access Article
Enhancement of Computational Efficiency in Seeking Liveness-Enforcing Supervisors for Advanced Flexible Manufacturing Systems with Deadlock States
Appl. Sci. 2020, 10(7), 2620; https://doi.org/10.3390/app10072620 - 10 Apr 2020
Viewed by 591
Abstract
In Industry 4.0, all kinds of intelligent workstations are designed for use in manufacturing industries. Among them, flexible manufacturing systems (FMSs) use smart robots to achieve their production capacity under a high degree of resource sharing. As a result, deadlock states usually appear unexpectedly. To solve this damaging deadlock problem, many pioneers have proposed new policies. However, it is very difficult to make systems maximally permissive even if their policies can solve the deadlock problem of FMSs. According to our survey, the Maximal number of Forbidding First Bad Marking (FBM) Problems (MFFP) seems to be the best technique in the existing literature for obtaining systems' maximally permissive states. More importantly, its number of added control places (CPs) is the smallest among existing research works. However, when the complexity of a flexible manufacturing system increases, the computational burden rises rapidly. To reduce the computational cost, we define a new concept, named Pre Idle Places (PIP), to enhance the computational efficiency of seeking liveness-enforcing supervisors. We can bypass all PIPs once they are identified in a deadlocked system during the process of solving MFFP. According to the data shown in three classical examples, our proposed improved MFFP is better than the conventional MFFP in terms of computational efficiency with the same controllers.

Open Access Article
An Analog Circuit Fault Diagnosis Method Based on Circle Model and Extreme Learning Machine
Appl. Sci. 2020, 10(7), 2386; https://doi.org/10.3390/app10072386 - 31 Mar 2020
Viewed by 628
Abstract
The fault diagnosis of analog circuits faces problems such as inefficient feature extraction and fault identification. To solve these problems, this paper combines the circle model and the extreme learning machine (ELM) into a fault diagnosis method for linear analog circuits. Firstly, a circle model for the voltage features of fault elements was established in the complex domain, according to the relationship between the circuit response, element position, and circuit topology. To eliminate the impacts of tolerances and signal aliasing, a 3D feature was introduced to make the indistinguishable features in fuzzy groups distinguishable. Fault feature separability is very important for improving fault diagnosis accuracy. In addition, an effective classifier can improve precision and reduce the time taken. With less computational complexity and a simpler process, the ELM algorithm has a fast speed and good classification performance. The effectiveness of the proposed method is verified by simulation. The simulation results show that the ELM-based classifier with the circle model can enhance precision and reduce the time taken by about 80% in comparison with other methods for analog circuit fault diagnosis. To sum up, the proposed method reduces the complexity of generating fault features, improves the isolation probability of faults, speeds up fault classification, and simplifies fault testing.
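The appeal of the ELM is its closed-form training: the hidden-layer weights are random and fixed, and only the output weights are solved for by least squares, which is why it is fast. Below is a self-contained sketch (pure Python, tanh hidden units, ridge-regularized normal equations solved by Gaussian elimination); the network sizes and the toy XOR data are illustrative assumptions, not the paper's circuit features.

```python
import math
import random

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_train(X, y, hidden=20, seed=0, ridge=1e-6):
    """Extreme learning machine: random fixed hidden layer, output weights
    obtained in closed form from ridge-regularized normal equations."""
    rng = random.Random(seed)
    dim = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    def hidden_out(x):
        return [math.tanh(sum(w_d * x_d for w_d, x_d in zip(w, x)) + bi)
                for w, bi in zip(W, b)]
    H = [hidden_out(x) for x in X]
    # (H^T H + ridge * I) beta = H^T y
    A = [[sum(H[n][i] * H[n][j] for n in range(len(H))) + (ridge if i == j else 0.0)
          for j in range(hidden)] for i in range(hidden)]
    rhs = [sum(H[n][i] * y[n] for n in range(len(H))) for i in range(hidden)]
    beta = solve(A, rhs)
    return lambda x: sum(bi * hi for bi, hi in zip(beta, hidden_out(x)))
```

Because training is a single linear solve rather than iterative backpropagation, the classifier is quick to fit, which matches the speed argument made in the abstract.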

Open Access Article
One Computational Innovation Transition-Based Recovery Policy for Flexible Manufacturing Systems Using Petri nets
Appl. Sci. 2020, 10(7), 2332; https://doi.org/10.3390/app10072332 - 29 Mar 2020
Cited by 2 | Viewed by 631
Abstract
In the third and fourth industrial revolutions, smart or artificial-intelligence flexible manufacturing systems (FMSs) seem to be the key machine equipment for factory production capacity. However, deadlocks can appear due to resource competition between robots. Therefore, how to prevent deadlocks in FMSs is a very important and active issue. Based on Petri net (PN) theory, almost all research in the existing literature adopts control places as the means of deadlock prevention. However, under this strategy, the truly optimal set of reachable markings is not achieved, even when the control policy is claimed to be maximally permissive. Accordingly, in this paper, the author proposes a novel transition-based control policy to solve the deadlock problem of FMSs. The proposed control policy can also be viewed as deadlock recovery, since it can recover all initial deadlock and quasi-deadlock markings. Furthermore, control transitions can be calculated and obtained once the proposed three-dimensional matrix, called the generating and comparing aiding matrix (GCAM) in this paper, is built. Finally, an iteration method is used until all deadlock markings become live ones. Experimental results reveal that our control policy appears to be the best one among all existing methods in the literature, regardless of whether those methods are place-based or transition-based.

Open Access Article
Adaptive Dynamic Disturbance Strategy for Differential Evolution Algorithm
Appl. Sci. 2020, 10(6), 1972; https://doi.org/10.3390/app10061972 - 13 Mar 2020
Viewed by 699
Abstract
To overcome the problems of slow convergence speed, premature convergence leading to local optima, and parameter constraints when solving high-dimensional multi-modal optimization problems, an adaptive dynamic disturbance strategy for the differential evolution algorithm (ADDSDE) is proposed. Firstly, this entails using a chaos mapping strategy to initialize the population and increase population diversity; secondly, a new weighted mutation operator is designed to weigh and combine mutation strategies of standard differential evolution (DE). The scaling factor and crossover probability are adaptively adjusted to dynamically balance global search ability and local exploration ability. Finally, a Gaussian perturbation operator is introduced to generate random disturbance variations and to help premature individuals jump out of local optima. The algorithm was run independently on five benchmark functions 20 times, and the results show that the ADDSDE algorithm has better global optimization search ability, faster convergence speed, and higher accuracy and stability compared with other optimization algorithms, providing assistance in solving high-dimensional and complex problems in engineering and information science.
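ADDSDE builds on the standard DE/rand/1/bin loop, whose mutation, crossover, and greedy selection steps look roughly as follows. This is only the plain baseline with fixed F and CR, without the paper's chaos initialization, weighted mutation operator, adaptive parameter control, or Gaussian perturbation; all parameter values are conventional defaults.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=42):
    """Minimal DE/rand/1/bin loop (the baseline that ADDSDE extends)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: v = a + F * (b - c) with three distinct others
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # at least one mutated coordinate
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a smooth unimodal function such as the sphere function, this loop converges toward the origin; the paper's additions target exactly the cases where this baseline stagnates, i.e., high-dimensional multi-modal landscapes.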

Open Access Article
A Secure Architecture for Modular Division over a Prime Field against Fault Injection Attacks
Appl. Sci. 2020, 10(5), 1700; https://doi.org/10.3390/app10051700 - 02 Mar 2020
Viewed by 598
Abstract
Fault injection attacks pose a serious threat to many cryptographic devices. The security of most cryptographic devices hinges on a key block called modular division (MD) over a prime field. Although much research has been done to implement MD over a prime field efficiently in hardware, studies on architectures secure against fault injection attacks are very few, and the few that exist can only detect faults, not locate them. In this regard, this paper designs a novel secure architecture for MD over a prime field, which can not only detect faults but also locate the erroneous processing element. In order to seek the best performance, four word-oriented systolic structures of a main function module (MFM) were designed, and three error detection schemes were developed based on different linear arithmetic codes (LACs). The MFM structures can be combined flexibly with the error detection schemes. The time and area overheads of our architecture were analyzed through implementation in an application-specific integrated circuit (ASIC), while the error detection and location capabilities were demonstrated by C++ simulation, in comparison with two existing methods. The results show that our architecture can detect single-bit errors (SBEs) with 100% accuracy, locate the erroneous processing element (PE), and correctly identify most single-PE errors and almost all multi-PE errors (when there are more than three erroneous PEs). The only weakness of our architecture is its relatively high time and area overhead ratios.
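At the arithmetic level, modular division over a prime field computes a·b⁻¹ mod p. One standard software route, via Fermat's little theorem (b⁻¹ ≡ b^(p-2) mod p for prime p), is shown below; this is a plain reference computation, not the paper's word-oriented systolic hardware algorithm.

```python
def mod_div(a, b, p):
    """Modular division a / b mod p for prime p, via Fermat's little theorem:
    the inverse of b is b**(p-2) mod p, computed with fast modular exponentiation."""
    if b % p == 0:
        raise ZeroDivisionError("b has no inverse mod p")
    return (a * pow(b, p - 2, p)) % p
```

For example, with p = 13 the inverse of 4 is 10 (since 4·10 = 40 ≡ 1 mod 13), so 10/4 mod 13 evaluates to 9, and indeed 9·4 = 36 ≡ 10 mod 13.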

Open Access Article
Entanglement Control of Two-Level Atoms in Dissipative Cavities
Appl. Sci. 2020, 10(4), 1510; https://doi.org/10.3390/app10041510 - 23 Feb 2020
Cited by 2 | Viewed by 600
Abstract
An open quantum bipartite system consisting of two independent two-level atoms interacting nonlinearly with a two-mode electromagnetic cavity field is investigated by proposing a suitable non-Hermitian generalization of the Hamiltonian. The mathematical procedure for obtaining the corresponding wave function of the system is clearly given. The Pancharatnam phase is studied to give precise information about the required initial system state, which is related to artificial phase jumps, in order to control the degree of entanglement (DEM) and obtain the highest concurrence. We discuss the effects of time-varying coupling and of dissipation in both the atoms and the cavity. The effect of the time-variation function appears as a frequency modulation (FM) effect analogous to that in radio waves. Concurrence rapidly reaches the disentangled state (death of entanglement) as the effect of field decay increases. On the contrary, atomic decay has no effect.

Open Access Article
An Intelligent Classification Model for Surface Defects on Cement Concrete Bridges
Appl. Sci. 2020, 10(3), 972; https://doi.org/10.3390/app10030972 - 02 Feb 2020
Cited by 2 | Viewed by 860
Abstract
This paper improves the visual geometry group network-16 (VGG-16), a classic convolutional neural network (CNN), to accurately classify surface defects on cement concrete bridges. Specifically, the number of fully connected layers was reduced by one, and the Softmax classifier was replaced with a Softmax classification layer with seven defect tags. The weight parameters of the convolutional and pooling layers were shared from the pre-trained model, and the rectified linear unit (ReLU) function was taken as the activation function. The original images were collected by a road inspection vehicle driving across bridges on national and provincial highways in Jiangxi Province, China. The images of surface defects on cement concrete bridges were selected, divided into a training set and a test set, and preprocessed through morphology-based weight-adaptive denoising. To verify its performance, the improved VGG-16 was compared with traditional shallow neural networks (NNs), such as the backpropagation neural network (BPNN) and support vector machine (SVM), and with deep CNNs, such as AlexNet, GoogLeNet, and ResNet, on the same sample dataset of surface defects on cement concrete bridges. Judging by mean detection accuracy and top-5 accuracy, our model outperformed all the contrasted methods and accurately differentiated between images in the classes normal, cracks, fracturing, plate fracturing, corner rupturing, edge/corner exfoliation, skeleton exposure, and repairs. The results indicate that our model can effectively extract multi-layer features from surface defect images, highlighting the edges and textures. The research findings shed important new light on the detection of surface defects and the classification of defect images.
