
Appl. Sci., Volume 9, Issue 24 (December-2 2019) – 309 articles

Cover Story: A thermal tire model was proposed to describe the tread, carcass, and inflation-gas temperatures. The Magic Formula was modified to incorporate the effect of temperature on the force capability of a tire, achieving higher accuracy in the simulation environment. The proposed tire model was evaluated for various maneuvers of a Formula Student vehicle. The simulation results demonstrated a high correlation with the measurement data for steady-state maneuvers and limited accuracy in highly dynamic driving. In addition, the results showed that neglecting temperature leads to a higher root mean square error in lateral acceleration and yaw rate.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Open Access Article
Machine Learning for Quantitative Finance Applications: A Survey
Appl. Sci. 2019, 9(24), 5574; https://doi.org/10.3390/app9245574 - 17 Dec 2019
Cited by 6 | Viewed by 1571
Abstract
The analysis of financial data represents a challenge that researchers have long had to deal with. The rethinking of the basis of financial markets has led to an urgent demand for innovative models to understand financial assets. In the past few decades, researchers have proposed several systems based on traditional approaches, such as the autoregressive integrated moving average (ARIMA) and exponential smoothing models, in order to devise an accurate data representation. Despite their efficacy, these approaches perform poorly when managing large amounts of data with intrinsic complexity, high dimensionality, and causal dynamics, and they are not suited to uncovering hidden relationships (dependencies) between data. This paper reviews some of the most significant works, providing an exhaustive overview of recent machine learning (ML) techniques in the field of quantitative finance and showing that these methods outperform traditional approaches. Finally, the paper presents comparative studies of the effectiveness of several ML-based systems. Full article
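For context, one of the traditional baselines the survey contrasts with ML methods is simple exponential smoothing. A minimal sketch (illustrative only, not code from the paper; the smoothing factor `alpha` is an assumed value):

```python
def exponential_smoothing(series, alpha=0.5):
    """Simple exponential smoothing: s[0] = series[0],
    s[t] = alpha * x[t] + (1 - alpha) * s[t-1]."""
    smoothed = [series[0]]
    for x in series[1:]:
        # Blend the new observation with the running smoothed level.
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

print(exponential_smoothing([10.0, 12.0, 11.0, 13.0]))  # [10.0, 11.0, 11.0, 12.0]
```

ML methods surveyed in the paper aim to capture exactly the nonlinear, high-dimensional dependencies that such a fixed recursive formula cannot.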

Open Access Erratum
Erratum: An ECCD—Electronic Charge Compensation Device—As a Quantum Dissipative System. Applied Sciences 2019, 9, 4879
Appl. Sci. 2019, 9(24), 5573; https://doi.org/10.3390/app9245573 - 17 Dec 2019
Viewed by 449
Abstract
We wish to make the following correction to the published paper [...] Full article
Open Access Article
Development and Evaluation of Crack Band Model Implemented Progressive Failure Analysis Method for Notched Composite Laminate
Appl. Sci. 2019, 9(24), 5572; https://doi.org/10.3390/app9245572 - 17 Dec 2019
Viewed by 845
Abstract
Progressive failure analysis (PFA) is widely used to predict the failure behavior of composite materials. As a structure becomes more complex, with discontinuities, prediction of failure becomes more difficult and mesh dependence must be taken into account. In this study, a PFA model was developed using the Hashin failure criterion and the crack band model. Failure initiation was evaluated using the Hashin failure criterion. If failure initiation occurred, the damage variables for each failure mode (fiber tension and compression; matrix tension and compression) were calculated according to linear softening degradation and then used to derive the damaged stiffness matrix. This matrix reflected the degraded material, and the PFA continued until the damage variables reached “1”, implying complete material failure. This process was implemented in the finite element program ABAQUS with a user-defined material subroutine. To evaluate the proposed PFA model, experimental results of open-hole composite laminate tests were compared with the obtained numerical results, and the strain behaviors were compared using a digital image correlation system. The numerical results were in good agreement with the experimental ones. Full article
(This article belongs to the Special Issue Selected Papers from the ICMR 2019)
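To illustrate the kind of initiation check such a PFA model performs before degrading the stiffness matrix, here is a minimal sketch of the Hashin fiber-tension criterion (the stress and strength values below are illustrative placeholders, not material data from the paper):

```python
def hashin_fiber_tension(sigma11, tau12, X_t, S_12):
    """Return the fiber-tension failure index; initiation occurs at >= 1.
    sigma11: fiber-direction stress, tau12: in-plane shear stress,
    X_t: fiber tensile strength, S_12: in-plane shear strength."""
    if sigma11 < 0:
        return 0.0  # this mode applies only under tension
    return (sigma11 / X_t) ** 2 + (tau12 / S_12) ** 2

# Illustrative stresses/strengths (MPa); index < 1, so no initiation yet.
print(hashin_fiber_tension(1500.0, 40.0, X_t=2000.0, S_12=80.0))  # 0.8125
```

In the paper's model, once this index reaches 1 the corresponding damage variable grows toward 1 under linear softening, progressively degrading the stiffness matrix.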

Open Access Article
Obstacle Avoidance Drone by Deep Reinforcement Learning and Its Racing with Human Pilot
Appl. Sci. 2019, 9(24), 5571; https://doi.org/10.3390/app9245571 - 17 Dec 2019
Cited by 1 | Viewed by 1096
Abstract
Drones with obstacle avoidance capabilities have recently attracted much attention from researchers. They typically adopt either supervised learning or reinforcement learning (RL) for training their networks. The drawback of supervised learning is that labeling the massive dataset is laborious and time-consuming, whereas RL aims to overcome this problem by letting an agent learn from data gathered in its environment. The present study utilizes diverse RL methods within two categories: (1) discrete action space and (2) continuous action space. The former has an advantage in optimization for vision datasets, but its actions can lead to unnatural behavior. For the latter, we propose a U-net based segmentation model with an actor-critic network. Performance is compared among these RL algorithms in three different environments (the woodland, the block world, and the arena world), as well as in racing against human pilots. The results suggest that our best continuous algorithm easily outperformed the discrete ones and performed comparably to an expert pilot. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

Open Access Article
Small Infrared Target Detection via a Mexican-Hat Distribution
Appl. Sci. 2019, 9(24), 5570; https://doi.org/10.3390/app9245570 - 17 Dec 2019
Cited by 1 | Viewed by 640
Abstract
Although infrared small target detection is broadly used in airborne early warning, infrared guidance, surveillance, and tracking, it remains an open issue due to the low signal-to-noise ratio, scarce texture information, background clutter, and so on. Aiming to detect a small target in an infrared image with complex background clutter, this paper carefully studies the characteristics of a target in an IR image filtered by the difference-of-Gaussian filter, concluding that the intensity of the adjacent region around a small infrared target roughly follows a Mexican-hat distribution. Based on this conclusion, a raw infrared image is sequentially processed with a modified top-hat transformation and the difference-of-Gaussian filter. Then, the adjacent region around each pixel in the processed image is radially divided into three sub-regions. Next, the pixels whose intensities distribute as a Mexican hat are kept as target candidates. Finally, the real small target is segmented out by locating the pixel with the maximum intensity. Our experimental results on both real-world and synthetic infrared images show that the proposed method markedly enhances small targets, making detection substantially easier. Our method achieves true detection rates of 0.9900 and 0.9688, and false detection rates of 0.0100 and 0, for sequences 1 and 2, respectively, which is superior to both conventional and state-of-the-art detectors. Moreover, our method runs at 1.8527 and 0.8690 s per frame for sequences 1 and 2, respectively, which is faster than RLCM, LIG, Max–Median, and Max–Mean. Full article
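The candidate test described above can be caricatured as a simple check on the three radial sub-regions: after filtering, a small target shows a bright core, a negative ring, and near-zero outskirts. The sketch below is a simplified illustration with assumed thresholds, not the paper's implementation:

```python
def looks_like_mexican_hat(core, ring, outskirt, eps=0.1):
    """core, ring, outskirt: mean filtered intensities of the three
    radial sub-regions around a pixel, innermost to outermost.
    Returns True when they follow a Mexican-hat profile."""
    return core > 0 and ring < 0 and abs(outskirt) < eps

print(looks_like_mexican_hat(0.9, -0.3, 0.02))  # True: plausible small target
print(looks_like_mexican_hat(0.9, 0.4, 0.5))    # False: background clutter
```

Pixels passing such a test become candidates; the paper then segments the real target as the candidate with maximum intensity.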

Open Access Review
Systematic Literature Review of Predictive Analysis Tools in Higher Education
Appl. Sci. 2019, 9(24), 5569; https://doi.org/10.3390/app9245569 - 17 Dec 2019
Cited by 5 | Viewed by 828
Abstract
The topic of predictive algorithms is often regarded as among the most relevant fields of study within the data analytics discipline. Predictive algorithms have applications in multiple contexts, education being an important one of them. Focusing on higher education scenarios, most notably universities, predictive analysis techniques are present in studies that estimate academic outcomes using different kinds of student-related data. Furthermore, predictive algorithms are the basis of tools such as early warning systems (EWS): applications able to foresee future risks, such as the likelihood of students failing or dropping out of a course, and to raise alerts so that corrective measures can be taken. The purpose of this literature review is to provide an overview of the current state of research activity regarding predictive analytics in higher education, highlighting the most relevant instances of predictors and EWS that have been used in practice. The PRISMA guidelines for systematic literature reviews were followed in this study. The document search process yielded 1382 results, out of which 26 applications were selected as relevant examples of predictors and EWS, each of them defined by the context where it was applied and the data that it used. However, one common shortcoming is that they are usually applied in limited scenarios, such as a single course, evidencing that building a predictive application able to work well under different teaching and learning methodologies is an arduous task. Full article
(This article belongs to the Special Issue Smart Learning)

Open Access Article
Numerical Study on the Critical Frequency Response of Jet Engine Rotors for Blade-Off Conditions against Bird Strike
Appl. Sci. 2019, 9(24), 5568; https://doi.org/10.3390/app9245568 - 17 Dec 2019
Cited by 3 | Viewed by 653
Abstract
Vibrations are usually induced in aero engines under normal operating conditions, so the critical frequencies of the rotating components must be predicted carefully. Blade deformation of a jet engine in service, due to fatigue or a bird strike, is a realistic possibility, which makes a deformed blade one of the major safety concerns in commercial civil aviation. A bird strike introduces unbalanced forces and non-linearities into the engine rotor system; such dynamic behavior is a primary cause of catastrophic failures. The unbalanced forces arising from a blade deformed by a bird strike can change the critical frequency behavior of the engine rotor system. The simplified approach of one-dimensional and two-dimensional elements can predict critical frequencies and critical mode shapes in many cases, but three-dimensional elements are the best choice for achieving the goals of a modal analysis. This research explores the effect of a bird strike on the critical frequencies of an engine rotor, studying the changes in critical mode shapes and critical frequencies that result from a bird strike on an engine blade. The commercially available analysis software ANSYS version 18.2 is used in this study. To account for material nonlinearities, a Johnson–Cook material model is used for the fan blades and an isotropic–elastic–plastic–hydrodynamic material model for the bird. The bird strike event is analyzed using Eulerian and smoothed particle hydrodynamics (SPH) techniques; a difference of 0.1% is noted between the results of the two techniques. In the modal analysis of the engine rotor before and after the bird strike event, the critical failure modes remain the same. However, a change in the critical frequencies of the modes is observed: the critical frequencies and excitation RPMs (revolutions per minute) of each mode increase, and the higher the mode order, the greater the rise. A change in the whirl direction of the different modes is also noted. Full article
(This article belongs to the Special Issue Advances in Rotordynamics)

Open Access Article
Design and Simulation of a Capacity Management Model Using a Digital Twin Approach Based on the Viable System Model: Case Study of an Automotive Plant
Appl. Sci. 2019, 9(24), 5567; https://doi.org/10.3390/app9245567 - 17 Dec 2019
Cited by 4 | Viewed by 1799
Abstract
Matching supply capacity to customer demand is challenging for companies. Practitioners often fail due to a lack of information or delays in the decision-making process, and researchers often fail to consider demand patterns and their dynamics over time holistically. Thus, the aim of this study is to propose a holistic approach for manufacturing organizations to change or manage their capacity. The viable system model was applied in this study, with the research focused on manufacturing and assembly companies. The goal of the developed capacity management model is to react to all potential demand scenarios by making the correct labor and investment decisions at the right moment, based on the necessary information. To ensure this, demand data series are analyzed to enable autonomous decision-making. In conclusion, the proposed approach provides companies with internal mechanisms to increase their adaptability and reactivity to customer demands. To validate the conceptual model, a simulation of an automotive plant case study was performed and compared with classical approaches. Full article

Open Access Article
How Can Low-Frequency Noise Exposure Interact with the Well-Being of a Population? Some Results from a Portuguese Municipality
Appl. Sci. 2019, 9(24), 5566; https://doi.org/10.3390/app9245566 - 17 Dec 2019
Cited by 1 | Viewed by 782
Abstract
Noise pollution is the second most harmful environmental stressor in Europe. Portugal is the fourth most affected European country, with 23.0% of the population affected by noise pollution. This article aims to analyze the effects of exposure to low frequency noise pollution, emitted by power poles and power lines, on the population’s well-being, based on a study of “exposed” and “unexposed” individuals in two predominantly urban areas in north-western Portugal. To develop the research, we used sound level measurements (n = 62) and sound recordings, as well as adapted audiometric tests (n = 14) and surveys conducted with the resident population (n = 200). The sound levels were measured in the frequency range between 10 and 160 Hz and compared with a criterion curve developed by the Department for Environment, Food and Rural Affairs (DEFRA). The sound recording was performed 5 m away from the source (a 400 kV power pole). Surveys were carried out with the “exposed” and “unexposed” populations, and adapted audiometric tests were performed to complement the analysis and to determine the threshold of audibility of “exposed” and “unexposed” volunteers. The “exposed” area has higher sound levels and, consequently, the “exposed” population reports more well-being and health problems than the “unexposed” population. The audiometric tests also revealed that the “exposed” population appears to be less sensitive to low frequencies than the “unexposed” population. Full article
(This article belongs to the Special Issue New and Old Environmental Impacts on Population Well Being)

Open Access Article
Benchmarking Daily Line Loss Rates of Low Voltage Transformer Regions in Power Grid Based on Robust Neural Network
Appl. Sci. 2019, 9(24), 5565; https://doi.org/10.3390/app9245565 - 17 Dec 2019
Cited by 1 | Viewed by 586
Abstract
Line loss is inherent in the transmission and distribution stages and can affect the profits of power-supply corporations. Thus, it is an important indicator, and a benchmark value is needed to evaluate daily line loss rates in low voltage transformer regions. However, the number of regions is usually very large, and the dataset of line loss rates contains massive outliers. It is critical to develop a regression model with both great robustness and efficiency when training on big data samples. To this end, a novel method based on a robust neural network (RNN) is proposed. It is a multi-path network model with a denoising auto-encoder (DAE), which takes advantage of dropout, L2 regularization, and the Huber loss function. It can produce several different outputs, which are utilized to compute benchmark values and reasonable intervals. Based on the comparison results, the proposed RNN possesses both superb robustness and accuracy, outperforming the conventional regression models tested. According to the benchmark analysis, about 13% of the collected data are outliers, and about 45% of the regions hold outliers within a month. Hence, the quality of line loss rate data should still be further improved. Full article
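The Huber loss mentioned above is what gives the network its tolerance to outliers: it is quadratic for small residuals and linear for large ones, so extreme line-loss readings pull on the fit far less than under squared error. A minimal sketch (the threshold `delta` is an assumed value, not the one used in the paper):

```python
def huber_loss(residual, delta=1.0):
    """Quadratic below |residual| = delta, linear above it."""
    a = abs(residual)
    if a <= delta:
        return 0.5 * a ** 2
    # Linear branch: slope delta, continuous with the quadratic branch.
    return delta * (a - 0.5 * delta)

print(huber_loss(0.5))  # 0.125  (quadratic regime)
print(huber_loss(3.0))  # 2.5    (outlier: grows only linearly)
```

Under squared error the residual of 3.0 would contribute 4.5; Huber caps its influence at 2.5, which is the robustness property the RNN exploits.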

Open Access Article
Polysaccharides-Reinforced Bitumens: Specificities and Universality of Rheological Behavior
Appl. Sci. 2019, 9(24), 5564; https://doi.org/10.3390/app9245564 - 17 Dec 2019
Cited by 7 | Viewed by 607
Abstract
The rheological properties of bitumens can be modified by the addition of specific chemical additives. Taking into account their complex molecular aggregation pattern, we hypothesized that macromolecules with long, flexible, hydrophilic chains can establish soft bridges connecting the different polar aggregates of asphaltenes, strengthening their overall hierarchical supra-structures and consequently increasing rheological performance at higher temperatures. Here, we propose the use of low-cost, highly available polysaccharides as chemical additives to improve the rheological characteristics of a bitumen and to strengthen its thermal resistance. Fourteen different low-cost, highly available polysaccharides (flours, gums, and extracts from vegetable products) were tested. While Eucheuma algae proved to be the most effective additive, corn and 00 flours were the least effective. We attempt to explain these differences by considering the chemical interactions of the additives with the polar molecules of asphaltenes within the complex framework of their supramolecular hierarchical structures. Through Arrhenius analysis, a correlation between activation energy and pre-exponential factor was found, which can be useful for practical purposes, together with an unexpected consistency with the behavior of simple liquids despite the striking structural differences. Furthermore, a qualitative model is suggested. The added value of this work is its focus on polysaccharides: low-cost, highly available materials, sometimes even found as waste in industrial processes. These factors, together with the environmental considerations connected with their use, can be weighed for large-scale applications. Full article
(This article belongs to the Special Issue Self-Assembly in Chemistry and Supramolecular Chemistry)

Open Access Article
Micro Particle Sizing Using Hilbert Transform Time Domain Signal Analysis Method in Self-Mixing Interferometry
Appl. Sci. 2019, 9(24), 5563; https://doi.org/10.3390/app9245563 - 17 Dec 2019
Cited by 1 | Viewed by 462
Abstract
The present work envisages the development of a novel, low-cost self-mixing interferometry (SMI) based single-particle sensing system in a microchannel chip for real-time sizing of single micro-scale particles. We propose a novel theoretical framework to describe, in the time domain, the impulse SMI signal induced by a flowing particle. Using the Hilbert transform, the interferometric fringe number of the impulse SMI signal is retrieved precisely for particle size discrimination. For ease of particle sensing, a hydrodynamic-focusing microfluidic channel was employed: by varying the flow rate ratio between the sample stream and the sheath liquid, a particle stream of controllable width was readily formed. The experimental results show good agreement with the theoretical values, providing a 300 nm resolution for the particle sizing measurement. Full article
(This article belongs to the Section Applied Physics)
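The fringe-counting idea can be sketched as follows: the unwrapped phase of the analytic signal, obtained via the Hilbert transform, advances by 2π per interferometric fringe. The synthetic burst below is an illustrative stand-in for a real SMI impulse signal, not data from the paper:

```python
import numpy as np
from scipy.signal import hilbert

def count_fringes(signal):
    """Estimate the fringe count from the unwrapped phase of the
    analytic signal (one fringe = 2*pi of phase advance)."""
    phase = np.unwrap(np.angle(hilbert(signal)))
    return (phase[-1] - phase[0]) / (2 * np.pi)

t = np.arange(2000) / 2000.0
burst = np.cos(2 * np.pi * 5 * t)  # synthetic signal with 5 full fringes
print(round(count_fringes(burst)))  # 5
```

In the paper, this retrieved fringe number (from the impulse SMI signal of a single flowing particle) is what discriminates particle size.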

Open Access Article
LeSSA: A Unified Framework based on Lexicons and Semi-Supervised Learning Approaches for Textual Sentiment Classification
Appl. Sci. 2019, 9(24), 5562; https://doi.org/10.3390/app9245562 - 17 Dec 2019
Cited by 2 | Viewed by 707
Abstract
Sentiment Analysis (SA) is an active research area. SA aims to classify online unstructured user-generated content (UUGC) into positive and negative classes. Reliable training data are vital for learning a sentiment classifier for textual sentiment classification, but due to domain heterogeneity, the manual construction of reliably labeled sentiment corpora is a laborious and time-consuming task. In the absence of enough labeled data, the alternative use of sentiment lexicons and semi-supervised learning approaches for sentiment classification has substantially attracted the attention of the research community. However, state-of-the-art techniques for semi-supervised sentiment classification present research challenges expressed in questions like the following. How can the concealed significant information in the unstructured data be utilized effectively? How can the model be learned while considering the most effective sentiment features? How can noise and redundant features be removed? How can the initial training data be refined for initial model learning, given that random selection may lead to performance degradation? Besides, most existing lexicons have trouble with word coverage and may ignore key domain-specific sentiment words, so further research is required to improve sentiment lexicons for textual sentiment classification. To address these research issues, this paper proposes LeSSA, a novel unified sentiment analysis framework for textual sentiment classification. Our main contributions are fourfold: (a) lexicon construction, generating a high-quality, wide-coverage sentiment lexicon; (b) classification model training based on a high-quality training dataset generated using k-means clustering, active learning, self-learning, and co-training algorithms; (c) classification fusion, whereby the predictions of numerous learners are combined to determine the final sentiment polarity by majority voting; and (d) practicality, in that we validate our claims by applying the model to benchmark datasets. Empirical evaluation on multiple domain benchmark datasets demonstrates that the proposed framework outperforms existing semi-supervised learning techniques in classification accuracy. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
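The classification-fusion step can be sketched as a plain majority vote over learner outputs. A minimal illustration (the vote labels below are hypothetical, not outputs of the actual LeSSA learners):

```python
from collections import Counter

def fuse_predictions(predictions):
    """Return the polarity label predicted by the majority of learners."""
    return Counter(predictions).most_common(1)[0][0]

# e.g. votes from three co-trained learners on one document
votes = ["positive", "negative", "positive"]
print(fuse_predictions(votes))  # positive
```

With an odd number of learners the vote is never tied, which is one practical reason ensemble fusion schemes of this kind typically combine three or five base classifiers.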

Open Access Editorial
Special Issue on Laser Ultrasonics
Appl. Sci. 2019, 9(24), 5561; https://doi.org/10.3390/app9245561 - 17 Dec 2019
Viewed by 390
Abstract
The field of laser ultrasonics encompasses fundamental research on laser–matter interactions, as well as applications of opto-acoustic and acousto-optic phenomena in industry and biomedicine [...] Full article
(This article belongs to the Special Issue Laser Ultrasonics)
Open Access Article
The Role of Thermal Effects in Plasma Medical Applications: Biological and Calorimetric Analysis
Appl. Sci. 2019, 9(24), 5560; https://doi.org/10.3390/app9245560 - 17 Dec 2019
Cited by 4 | Viewed by 513
Abstract
Plasma Medicine tools exploit the therapeutic effects of exposing living matter to plasma produced at atmospheric pressure. Since these plasmas are usually characterized by non-thermal equilibrium (highly energetic electrons, low temperature ions), thermal effects on the substrate are usually considered negligible; instead, reactive oxygen and nitrogen species (RONS), UV radiation, and metastables are thought to play the major role. In this contribution, we compare the presence of thermal effects in different operational regimes (corresponding to different power levels) of the Plasma Coagulation Controller (PCC), a plasma source specifically designed to accelerate blood coagulation. In particular, we analyze the application of the PCC to human blood samples (in vitro) and male Wistar rat tissues (in vivo). Histological analysis reveals, for the highest applied power regime, the onset of detrimental thermal effects such as red cell lysis in blood samples and tissue damage in the in vivo experiments. Calorimetric bench tests performed on metallic targets show that the current coupled by the plasma into the substrate induces most of the measured thermal load through resistive coupling. Furthermore, the distance between the PCC nozzle and the target is found to strongly affect the total power. Full article
(This article belongs to the Special Issue Plasma Technology for Biomedical Applications) Printed Edition available

Open Access Article
Influence of Phosphogypsum Dump on the Soil Ecosystem in the Sumy region (Ukraine)
Appl. Sci. 2019, 9(24), 5559; https://doi.org/10.3390/app9245559 - 17 Dec 2019
Cited by 1 | Viewed by 417
Abstract
This paper investigates the influence of a phosphogypsum dump on the surrounding environment (soil ecosystem) in the Sumy region (Ukraine). Analysis of the surrounding soils was performed to study the possible presence of compounds leaching from the dump. For physicochemical analysis of the samples, X-ray fluorescence (XRF) analysis, X-ray diffraction (XRD) analysis, and Fourier transform infrared spectrophotometry (FTIR) were used. XRF analysis did not confirm contamination of the soils around the phosphogypsum dump, and the soil fluoride levels measured in this study were comparable to the average fluoride concentration of soils globally. The colonization of the surface of the phosphogypsum dump by living organisms after the reclamation process was also analyzed. Field research was carried out along routes that delimit the contours of plant communities, in order to describe the phytocenoses in the territory of the dump, where the pH value ranged from 2.5 to 5.3 depending on the age of the phosphogypsum terraces. Annual and perennial herbaceous plants are dominant on the reclaimed dump slopes. On the third and fourth phosphogypsum terraces (20 and 25 years old, respectively), tree forms have begun to settle, represented mainly by Populus tremula, Populus alba, Betula pendula, and Robinia pseudoacacia. The studied patterns of ecological groups of plants growing under natural species change can be used for the complete reclamation of the dump. Full article
(This article belongs to the Section Earth Sciences and Geography)

Open Access Article
Non-Programmers Composing Software Services: A Confirmatory Study of the Mental Models and Design Challenges
Appl. Sci. 2019, 9(24), 5558; https://doi.org/10.3390/app9245558 - 17 Dec 2019
Viewed by 453
Abstract
Ordinary web users can now create and publish online content. They even venture into “mashups,” integrating information from different sources into a composite information-providing web service. This is a non-trivial design task that falls into the area of end-user development when the ordinary users who perform it have no programming education. In this article, we investigate the service design strategies of 12 such ordinary users and compare them against a baseline of 12 programmers. In our think-aloud study, users completed two contrasting types of tasks involved in developing service-based applications, chosen to differ in the level of tool support provided: (a) manual service composition and (b) parametric design using templates with a high degree of software support (assisted composition). Our findings show that non-programmers liked the template-based parametric design more than programmers did and did not find the tool assistance as constraining as the programmers did. The difficulty of the design involved in manual service composition and the absence of user guidance hindered non-programmers in expressing and implementing accurate design solutions. The mental models and needs of non-programmers proved to be in stark contrast to those of programmers. We used the details of our findings to propose specialized design recommendations for service composition tools aligned with the profiles of their target users. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
Open AccessArticle
Effect of Corrosion Characteristics on Long-Term Aging of Austenitic 304 Steel
Appl. Sci. 2019, 9(24), 5557; https://doi.org/10.3390/app9245557 - 17 Dec 2019
Viewed by 459
Abstract
The objective of this study is to investigate the effect of long-term aging on the electrochemical corrosion characteristics of austenitic AISI 304 steel. AISI 304 steel was subjected to aging treatment at 700 °C for extended periods of up to 10,000 h. The variation in the microstructure of aged specimens was observed with an optical microscope (OM) and a scanning electron microscope (SEM). An electrochemical polarization test was conducted to obtain the corrosion current density (Icorr) and corrosion potential (Ecorr). Metastable intermetallic M23C6 carbides were generated in the vicinity of the γ/γ grain boundaries and coarsened with aging time. The δ-ferrite islands decomposed into σ-phase and M23C6 carbide as aging time increased. As the aging time increased, the corrosion current density increased; the austenitized specimen exhibited the minimum value of 0.04 μA/cm². The austenitized specimen also exhibited the highest pitting resistance due to the absence of carbides. Consequently, the corrosion resistance of austenitic AISI 304 steel decreases with long-term aging, a result well supported by the microstructural analyses. Full article
(This article belongs to the Special Issue Selected Papers from the ICMR 2019)
Open AccessArticle
Asymptotic Performances of a Signal-To-Noise Ratio Moment-Based Estimator for Real Sinusoids in Additive Noise
Appl. Sci. 2019, 9(24), 5556; https://doi.org/10.3390/app9245556 - 17 Dec 2019
Cited by 1 | Viewed by 446
Abstract
We considered the problem of estimating the signal-to-noise ratio (SNR) of a real deterministic sinusoid with unknown frequency, phase, and amplitude in additive Gaussian noise of unknown variance. A blind SNR estimator that does not require knowledge of the instantaneous frequency of the sinusoid was derived, through separate estimation of signal and noise power, using the method of moments, a general method for deriving estimators based on high-order moments. The statistical performance of the proposed estimators was studied theoretically through derivation of Cramér–Rao lower bounds (CRLBs) and asymptotic variances. Furthermore, results from Monte Carlo simulations that confirm the validity of the theoretical analysis are presented, along with some comments on the use of the proposed estimators in practical applications. Full article
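The moment-matching idea behind such a blind estimator can be sketched in a few lines. For a real sinusoid of amplitude A in white Gaussian noise of variance σ², the second and fourth sample moments satisfy M₂ = A²/2 + σ² and M₄ = 3A⁴/8 + 3A²σ² + 3σ⁴, which can be inverted for the signal and noise powers without knowing the frequency. The sketch below is an illustrative implementation of this general approach, not necessarily the exact estimator derived in the paper; the amplitude, frequency, and noise level are invented for the demo.

```python
import numpy as np

def snr_moments(x):
    """Blind SNR estimate for a real sinusoid in white Gaussian noise.

    For x[n] = A*cos(w*n + phi) + v[n], the sample moments obey
    M2 = A^2/2 + s2 and M4 = 3A^4/8 + 3A^2*s2 + 3*s2^2, so the signal
    power P = A^2/2 satisfies P^2 = 2*(3*M2^2 - M4)/3.
    """
    m2 = np.mean(x ** 2)
    m4 = np.mean(x ** 4)
    p = np.sqrt(max(2.0 * (3.0 * m2 ** 2 - m4) / 3.0, 0.0))  # signal power
    s2 = max(m2 - p, 1e-12)                                    # noise power
    return p / s2                                              # linear SNR

rng = np.random.default_rng(0)
n = np.arange(500_000)
a, sigma = 1.0, 0.5            # true SNR = (a^2/2) / sigma^2 = 2.0
x = a * np.cos(0.1 * n + 0.3) + rng.normal(0.0, sigma, n.size)
print(f"estimated SNR: {snr_moments(x):.2f} (true: 2.00)")
```

Note the estimate requires no frequency or phase information, only the sample moments, which is what makes the estimator blind.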
Open AccessReview
Therapeutic Potential of Allicin-Rich Garlic Preparations: Emphasis on Clinical Evidence toward Upcoming Drugs Formulation
Appl. Sci. 2019, 9(24), 5555; https://doi.org/10.3390/app9245555 - 17 Dec 2019
Cited by 2 | Viewed by 693
Abstract
Garlic has been used throughout human history. Allicin, the main active compound in freshly injured garlic, is enzymatically formed from alliin. Allicin has shown undoubted beneficial effects and may even be conceived of as a medicine. The present review highlights allicin-associated studies. Indeed, clinical studies on healthy subjects have shown that standardized garlic treatment (900 mg/day) significantly reduces total cholesterol (TC) and low-density lipoprotein cholesterol (c-LDL). Allicin has also led to a marked improvement in mouth opening, burning sensation, and oral health-related quality of life in patients with stage II oral submucous fibrosis. Interestingly, allicin has also been proposed for thrush prevention in children and as an alternative antibiotic therapy. Nonetheless, particular attention must be paid to allicin bioavailability, given its high instability. Although clinical evidence has demonstrated allicin release from garlic powder tablets under simulated gastrointestinal conditions, garlic tablets provide the least alliinase protection due to their fast disintegration, releasing only low amounts of allicin. Full article
Open AccessArticle
Effect of Scanning Strategy During Selective Laser Melting on Surface Topography, Porosity, and Microstructure of Additively Manufactured Ti-6Al-4V
Appl. Sci. 2019, 9(24), 5554; https://doi.org/10.3390/app9245554 - 17 Dec 2019
Cited by 3 | Viewed by 767
Abstract
The effect of the scanning strategy during selective laser melting (SLM) of Ti-6Al-4V was investigated. An optimized cellular scan strategy (island scan modeled) was compared to a simple cellular scan strategy (island scan stripes) and a simple antiparallel line scanning strategy (line scan). Surface texture was investigated by optical three-dimensional (3D) surface measurements, which, when combined with light optical microscopy (LOM), revealed deflections caused by thermal stresses during the build process. Elevated edges caused by the edge effect dominate the surface texture of all investigated specimens. The scanning strategy determines the surface texture, and the lowest surface roughness was obtained with the line scan strategy. Porosity was investigated with X-ray computed tomography imaging. Mainly spherical porosity was observed for the line scan and island scan modeled specimens, while the island scan stripes strategy showed more lack-of-fusion defects and a higher total porosity. Microstructure was investigated with LOM and scanning electron microscopy (SEM). The microstructure of the Ti-6Al-4V was largely martensitic α′ within prior β grains. The morphology differs among the scan strategies, and decomposition of α′ into lamellar α/β was observed in the bottom part of the island scan specimen. Accordingly, the hardness decreased in the decomposed part of the specimen. Full article
(This article belongs to the Section Mechanical Engineering)
Open AccessArticle
Coordinated Dispatch of Multi-Energy Microgrids and Distribution Network with a Flexible Structure
Appl. Sci. 2019, 9(24), 5553; https://doi.org/10.3390/app9245553 - 17 Dec 2019
Viewed by 515
Abstract
With the increasing scale of multi-energy microgrids (MGs) and their complicated operation modes, the coordinated operation of microgrids and the distribution network (DN) poses great challenges. In this paper, a bi-level optimal coordinated dispatch framework for the DN and multi-energy MGs based on CCHP (combined cooling, heating, and power) is proposed. The first level studies the optimal operation of the DN with power interaction on the tie lines between MGs, considering the coupling relationships and constraints of the equipment and network. Network reconfiguration with limited control actions is considered to increase the flexibility of the topology and further improve the operating state. At the second level, the MGs receive orders from the DN and determine the optimal strategies for their multi-energy devices to achieve optimized operation while satisfying the different types of load and the requirements of the DN. To solve the optimal dispatch problem of both the DN and the multi-energy MGs considering DN reconfiguration, a method combining the particle swarm optimization (PSO) algorithm with mixed-integer linear programming (MILP) is proposed. Case studies on an IEEE 33-node DN with renewable power sources and grid-connected MGs validate that the proposed method is very effective in reducing the power loss and voltage offset of the DN while ensuring the benefits of the MGs. Full article
(This article belongs to the Special Issue Microgrids II)
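The heuristic half of such a PSO–MILP combination can be illustrated with a minimal global-best particle swarm optimizer. The objective below is a toy stand-in (squared deviation from a setpoint), not the paper's dispatch model, and all coefficients are generic textbook values:

```python
import random

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal global-best particle swarm optimizer (minimization)."""
    random.seed(seed)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                   # per-particle best positions
    pbest_f = [f(p) for p in pos]
    g = pbest[min(range(n), key=lambda i: pbest_f[i])][:]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (g[d] - pos[i][d]))        # social
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < f(g):
                    g = pos[i][:]
    return g, f(g)

# Toy stand-in for an upper-level dispatch loss: distance from a setpoint of 1.
best, val = pso(lambda x: sum((xi - 1.0) ** 2 for xi in x), dim=3)
print(best, val)
```

In the paper's framework, PSO would handle the discrete reconfiguration decisions while MILP solves the remaining continuous dispatch; the swarm mechanics are the same.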
Open AccessArticle
Intelligent Microarray Data Analysis through Non-negative Matrix Factorization to Study Human Multiple Myeloma Cell Lines
Appl. Sci. 2019, 9(24), 5552; https://doi.org/10.3390/app9245552 - 17 Dec 2019
Cited by 3 | Viewed by 617
Abstract
Microarray data are numerical non-negative data used to collect gene expression profiles. Since the number of genes in DNA is huge, such data are usually high-dimensional; therefore, they require dimensionality reduction and clustering techniques to extract useful information. In this paper we use non-negative matrix factorization (NMF) to analyze microarray data and also develop an “intelligent” visualization of the results, with the aim of facilitating analysis by domain experts. For this purpose, a case study based on the analysis of gene expression profiles (GEPs) representative of human multiple myeloma was investigated in 40 human myeloma cell lines (HMCLs). The aim of the experiments was to study the genes involved in arachidonic acid metabolism in order to detect gene patterns possibly connected to the different gene expression profiles of multiple myeloma. The NMF results were verified by western blotting analysis, in six HMCLs, of proteins expressed by some of the most abundantly expressed genes. The experiments showed the effectiveness of NMF in intelligently analyzing microarray data. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
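The core factorization step can be sketched with the classic Lee–Seung multiplicative updates, which factor a non-negative matrix V into non-negative factors W (gene programs) and H (sample loadings). This is the textbook Frobenius-norm variant, not necessarily the authors' exact algorithm, and the tiny matrix below merely stands in for a real gene × sample expression matrix:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Factor non-negative V (genes x samples) as W @ H using Lee-Seung
    multiplicative updates that minimize the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-10                      # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "expression matrix": two additive gene programs across six samples.
V = np.array([[4.0, 4, 0, 0, 2, 2],
              [4.0, 4, 0, 0, 2, 2],
              [0.0, 0, 3, 3, 3, 3],
              [0.0, 0, 3, 3, 3, 3]])
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(W.shape, H.shape, err)
```

The rows of H can then be clustered or visualized to group cell lines by their dominant gene program, which is the kind of "intelligent" downstream view the abstract describes.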
Open AccessArticle
Interaction between Sound and Thermal Influences on Patient Comfort in the Hospitals of China’s Northern Heating Region
Appl. Sci. 2019, 9(24), 5551; https://doi.org/10.3390/app9245551 - 17 Dec 2019
Cited by 3 | Viewed by 520
Abstract
Previous studies have found that hospitals in the heating region of China are often inadequately ventilated, which increases the risk of negative impacts on patients. The complex interaction between thermal comfort and acoustics presents considerable challenges for designers. There is a wide range of literature covering sound–thermal, sound–odor, and acoustic–visual interactions, but focused research on the sound–thermal influence on comfort in hospitals has not yet been published. This paper describes a series of field measurements and subjective evaluations investigating the thermal comfort and acoustic performance of eighteen hospitals in China. The results showed that thermal comfort in the monitored wards was mostly acceptable, but in practice the temperatures tended to be much higher, and the humidity much lower, than designed for the heating season. The most significant conclusion is that a positive thermal stimulus can create a comfortable thermal environment, which can improve patients’ evaluation of the acoustics, while a negative stimulus has the opposite effect. A comfortable acoustic environment likewise led patients to evaluate thermal comfort positively. Moreover, thermal and sound effects carried almost equal weight in the overall evaluation. Full article
(This article belongs to the Special Issue Advances in Room Acoustics of Non-performing Public Spaces)
Open AccessArticle
The Relevance of Cataract as a Risk Factor for Age-Related Macular Degeneration: A Machine Learning Approach
Appl. Sci. 2019, 9(24), 5550; https://doi.org/10.3390/app9245550 - 17 Dec 2019
Viewed by 527
Abstract
Age-related macular degeneration (AMD) is the leading cause of visual dysfunction and irreversible blindness in developed countries and a rising cause in underdeveloped countries. There is an ongoing debate on whether cataracts are a significant risk factor for AMD development; research regarding this association has so far been inconclusive. For this reason, we employed a machine-learning approach to analyze the relevance and importance of cataracts as a risk factor for AMD in a large cohort of Hispanics from Mexico. We conducted a nested case–control study of 119 cataract cases and 137 healthy unmatched controls, focusing on clinical data from electronic medical records. Additionally, we studied two single-nucleotide polymorphisms in the CFH gene, previously associated with the disease in various populations, as a positive control for our method. We then determined the most relevant variables and assessed the bivariate association between cataracts and AMD. Next, we used supervised machine-learning methods to replicate these findings without bias. To improve interpretability, we identified the five most relevant features and displayed them using a bar graph and a rule-based tree. Our findings suggest that bilateral cataracts are not a significant risk factor for AMD development among Hispanics from Mexico. Full article
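The bivariate-association step mentioned above can be illustrated with an odds-ratio computation on a 2×2 exposure table. All counts below are invented for the demo and are not the study's data; a 95% confidence interval that straddles 1 is what "no significant association" looks like in this setting:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases,   b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: cataract exposure among 119 cases and 137 controls.
or_, ci = odds_ratio(40, 50, 79, 87)
print(f"OR = {or_:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

With these made-up counts the interval contains 1, i.e., no evidence of association; the supervised machine-learning step then checks whether the same conclusion holds multivariately.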
Open AccessArticle
The Influence of Thermo-Mechanical Activation of Bentonite on the Mechanical and Durability Performance of Concrete
Appl. Sci. 2019, 9(24), 5549; https://doi.org/10.3390/app9245549 - 17 Dec 2019
Cited by 2 | Viewed by 664
Abstract
Despite its very high global warming toll, Portland cement concrete is the most widely used construction material in the world. The eco-efficiency, economy, and overall mechanical and durability performance of concrete can be improved by incorporating supplementary cementitious materials (SCMs) as partial substitutes for ordinary Portland cement (OPC). Naturally occurring bentonite possesses pozzolanic properties and has a very low carbon footprint compared to OPC. By applying activation techniques, the reactivity of bentonite can be improved and its incorporation levels maximized. In this study, the influence of mechanical and thermo-mechanical activation of bentonite on the properties of concrete is investigated. Bentonite was used at 0%, 10%, 15%, 20%, 25%, 30%, and 35% mass replacement of OPC. Mechanical properties (compressive strength and split tensile strength) and durability properties (water absorption, sorptivity coefficient, and acid attack resistance) were studied. The experimental results revealed that concrete containing bentonite showed good mechanical performance, while durability was significantly improved relative to the control mix. Applying thermo-mechanical activation can raise the incorporation levels of bentonite in concrete. Bentonite produced optimum results at 15% replacement for mechanical activation and 25% for thermo-mechanical activation. Bentonite inclusion is more beneficial to the durability than to the mechanical strength of concrete. Full article
(This article belongs to the Special Issue Low Binder Concrete and Mortars)
Open AccessFeature PaperArticle
Large-Scale Grid Integration of Renewable Energy Resources with a Double Synchronous Controller
Appl. Sci. 2019, 9(24), 5548; https://doi.org/10.3390/app9245548 - 16 Dec 2019
Viewed by 633
Abstract
This paper proposes a virtual-inertia- and mechanical-power-based double synchronous controller (DSC) for power converters, built on the d- and q-components of the converter current, to ensure the stable operation of a grid with large-scale penetration of renewable energy resources (RERs). The DSC is designed by emulating both the inertia and the mechanical power variables of synchronous generators (SGs), and its performance is compared with that of a non-synchronous controller (NSC) lacking these emulations. The main contributions of the DSC are that it provides a large stability margin for the power grid over a wide range of low and high virtual inertia values, and that it significantly improves power grid stability (PGS) when the embedded virtual variables of inertia, mechanical power, and mechanical power error are properly adjusted. Further contributions over existing control methods are the decoupling features of the proposed DSC, in which both d- and q-components fully carry the characteristics of SGs, and the established relationship between the interfaced converter and the dynamic models of SGs. Embedding coefficients in the proposed DSC to demonstrate its robustness against unknown intrinsic parameter properties is another contribution of this paper. Moreover, several transfer functions are derived and analyzed that confirm the more stable performance of the emulated controller in comparison with the NSC with respect to power-sharing characteristics. Simulation results confirm the superiority of the proposed DSC over other existing control techniques, e.g., the NSC. Full article
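The inertia-emulation idea can be sketched with the per-unit swing equation that such controllers mimic, 2H·dω/dt = Pm − Pe − D(ω − ω₀). The code below is a generic illustration, not the paper's DSC; the inertia constant H, damping D, load step, and time grid are all invented for the demo:

```python
def swing_step(omega, p_m, p_e, h, d, dt, omega0=1.0):
    """One explicit-Euler step of the per-unit swing equation
    2H * d(omega)/dt = p_m - p_e - D * (omega - omega0)."""
    return omega + dt * (p_m - p_e - d * (omega - omega0)) / (2.0 * h)

# A 0.05 p.u. load step: a larger virtual inertia H slows the frequency
# excursion, leaving omega closer to nominal after the same 1 s window.
results = {}
for h in (2.0, 6.0):
    omega = 1.0
    for _ in range(1000):                       # 1 s at dt = 1 ms
        omega = swing_step(omega, p_m=1.0, p_e=1.05, h=h, d=20.0, dt=0.001)
    results[h] = omega
print(results)
```

Varying H here plays the role of the DSC's embedded virtual inertia variable: the grid-side converter can present arbitrary H to the network because the "rotor" exists only in software.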
Open AccessArticle
Simulation Study of Two Torque Optimization Methods for Direct Torque-Controlled Induction Motors
Appl. Sci. 2019, 9(24), 5547; https://doi.org/10.3390/app9245547 - 16 Dec 2019
Viewed by 706
Abstract
The simplicity and excellent dynamic performance of direct torque control (DTC) make induction motor (IM) drives attractive for many applications that require precise torque control. The traditional version of DTC uses hysteresis controllers. Unfortunately, the nature of these controllers prevents optimization of the inverter voltage vectors inside the flux hysteresis band, even though such optimization can produce a fast torque response in the IM drive. This research proposes two torque optimization methods for IM systems utilizing DTC. Analysis and MATLAB simulations of the proposed optimization methods prove that the torque and, consequently, the speed responses are greatly improved. The performance of the drive system controlled by the proposed optimization methods is compared with that of traditional DTC. Additionally, the effects of the motor parameters on the proposed optimization methods are examined. The proposed methods greatly improve the torque and speed dynamic performance over the traditional DTC technique; however, one of the proposed optimization methods is more sensitive to IM parameter variations than the other. Full article
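For context, the hysteresis-based baseline that such methods improve upon selects inverter voltage vectors from a lookup table. The sketch below shows the conventional Takahashi-style switching table for a two-level inverter (vector indices 0–7, stator-flux sectors 1–6); the paper's optimized vector selection inside the flux band is not reproduced here:

```python
# Conventional DTC switching table (sketch). Keys are (flux_up, torque_cmd):
# flux_up = 1 to increase stator flux, 0 to decrease it;
# torque_cmd = 1 to increase torque, -1 to decrease it, 0 to hold.
# Each value lists the inverter vector index for sectors 1..6.
TABLE = {
    (1, 1):  [2, 3, 4, 5, 6, 1],
    (1, 0):  [7, 0, 7, 0, 7, 0],   # zero vectors hold torque
    (1, -1): [6, 1, 2, 3, 4, 5],
    (0, 1):  [3, 4, 5, 6, 1, 2],
    (0, 0):  [0, 7, 0, 7, 0, 7],
    (0, -1): [5, 6, 1, 2, 3, 4],
}

def select_vector(flux_up, torque_cmd, sector):
    """Return the inverter vector index (0..7) for the given hysteresis
    outputs and stator-flux sector (1..6)."""
    return TABLE[(flux_up, torque_cmd)][sector - 1]

print(select_vector(1, 1, 1))   # sector 1, raise flux and torque
```

The hysteresis comparators only emit these coarse commands, which is exactly why the vector choice inside the flux band is left unoptimized in traditional DTC.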
Open AccessArticle
Design on the Winter Jujubes Harvesting and Sorting Device
Appl. Sci. 2019, 9(24), 5546; https://doi.org/10.3390/app9245546 - 16 Dec 2019
Cited by 1 | Viewed by 619
Abstract
To address the existing problems of winter jujube harvesting, such as the intensive labor of manual picking and damage to the surface of the fruit, a winter jujube harvesting and sorting device was developed. The device consists of a vibration mechanism, a collection mechanism, and a sorting mechanism. The eccentric vibration mechanism makes the winter jujubes fall; the umbrella-shaped collection mechanism catches them and prevents impact with the ground; the sorting mechanism removes jujube leaves and divides the jujubes into two grades; and an automatic leveling mechanism allows the device to run smoothly in the field. Finite element analysis and BP (back propagation) neural network analysis showed that the vibration displacement of the jujube tree is related to the trunk diameter and vibration position; the impact force of falling winter jujubes is related to the elastic modulus of the umbrella material; the collecting area can be increased four times for each additional step of the collection mechanism; and jujube leaves can be effectively removed when the blower wind speed reaches 45.64 m/s. According to the evaluation standard grades for jujube harvesting and sorting, the device performs well, with an excellent rate of up to 90%, and offers good practicability and economy. Full article
(This article belongs to the Section Mechanical Engineering)
Open AccessArticle
Preparation of Co-Ni Alloy Coating with Stable Composition by Jet-Electrodeposition
Appl. Sci. 2019, 9(24), 5545; https://doi.org/10.3390/app9245545 - 16 Dec 2019
Viewed by 468
Abstract
Cobalt-nickel (Co-Ni) alloy coatings were prepared by jet electrodeposition on brass substrates under different Co2+/Ni2+ ratios (M = 2:1, 1:1, 1:2, and 1:3) and solution flow rates (from 2.0 to 4.5 L/min). The surface morphology, elemental content, and phase structure of the coatings were characterized by scanning electron microscopy (SEM), energy-dispersive spectroscopy (EDS), and X-ray diffraction (XRD). The hardness and wear resistance of the coatings were also measured. The results showed that the Co content in the coatings was greater than 70% regardless of flow rate and concentration. With an increase in flow rate, the Co content in the coating increases and the grain size on the coating surface decreases, which leads to increased hardness of the coating. However, the flow rate of the plating solution has little effect on the grain growth orientation, and the phase structure is dominated by the elemental content of the coating. The coatings are single-phase hexagonal close-packed (HCP) when the Co content is more than 80%, while they are dual-phase HCP and face-centered cubic (FCC) when the Co content is less than 80%. With an increase in Co content, the grain size decreases and, consequently, the hardness and wear resistance of the alloy increase. A Co-Ni alloy coating with stable composition can be obtained when the Co2+/Ni2+ ratio is 2:1. Full article
(This article belongs to the Section Materials)