Computer Science & Mathematics
http://www.mdpi.com/journal/computer-math
Latest open access articles published in Computer Science & Mathematics at http://www.mdpi.com/journal/computer-math

Computation, Vol. 3, Pages 427-443: Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms
http://www.mdpi.com/2079-3197/3/3/427
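The abstract below characterizes an evolutionary algorithm as one that iteratively optimizes a computational model. As a minimal, hypothetical sketch of that optimization loop (single-objective for brevity; the paper applies a multiobjective EA to the STAC-M, and nothing here reproduces that model):

```python
import random

def evolve(fitness, n_params, pop_size=20, generations=100,
           mutation_scale=0.1, seed=0):
    """Minimal elitist evolutionary loop: mutate, evaluate, select."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        # Each parent produces one Gaussian-mutated offspring.
        offspring = [[x + rng.gauss(0, mutation_scale) for x in ind] for ind in pop]
        # Keep the best pop_size individuals from parents + offspring.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return pop[0]

# Toy fitness (not from the paper): maximize -sum(x^2), optimum at the origin.
best = evolve(lambda ind: -sum(x * x for x in ind), n_params=3)
```

The whole mechanism is just this selection pressure plus random variation; multiobjective variants differ mainly in how the selection step ranks candidates.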
Within the mind, there are a myriad of ideas that make sense within the bounds of everyday experience but are not reflective of how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an algorithm that iteratively optimizes machine-learning-based computational models. The research question is: does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a "virtual" student to solve a Piagetian task. Using the STAC-M, a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M with and without the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks after cognitive retraining with the MOEA. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.
Computation 2015, 3(3), 427-443; Article; doi: 10.3390/computation3030427; ISSN 2079-3197. Published 2015-09-02. Authors: Richard Lamb, Joshua Premo.

Symmetry, Vol. 7, Pages 1587-1594: Prevention of Exponential Equivalence in Simple Password Exponential Key Exchange (SPEKE)
http://www.mdpi.com/2073-8994/7/3/1587
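The exponential equivalence this paper counters can be seen with toy numbers. In SPEKE-style protocols the generator is derived from the password, so if one password guess's generator is a power of another's, a single transmitted value serves as a probe for both guesses. All values below are hypothetical illustrations, not the paper's construction:

```python
p = 2579            # small prime for readability; real SPEKE uses a large safe prime
x = 123             # attacker's secret exponent

g1 = 1234           # stand-in generator derived from password guess 1
k = 7
g2 = pow(g1, k, p)  # password guess 2, whose generator is a power of guess 1's

X1 = pow(g1, x, p)  # the one value the attacker actually transmits

# Without another protocol run, the attacker can relate guess 2's view to
# guess 1's, because (g1^k)^x = (g1^x)^k mod p:
assert pow(g2, x, p) == pow(X1, k, p)
```

This correlation is why a single active attempt can examine several exponentially related password candidates at once.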
Simple Password Exponential Key Exchange (SPEKE) and Dragonfly are simple password-based authenticated key exchange protocols that use a value derived from a shared password as the generator for modular exponentiation, as opposed to Diffie–Hellman key exchange, which uses a fixed value. However, it has been shown that in SPEKE an active attacker can examine multiple passwords in a single attempt because the passwords have an exponential correlation. We show that Dragonfly can also suffer from the same problem, and we propose a simple countermeasure to prevent the exponential equivalence in SPEKE.
Symmetry 2015, 7(3), 1587-1594; Article; doi: 10.3390/sym7031587; ISSN 2073-8994. Published 2015-09-02. Authors: Hanwook Lee, Dongho Won.

JSAN, Vol. 4, Pages 226-250: Multi-Hop-Enabled Energy-Efficient MAC Protocol for Underwater Acoustic Sensor Networks
http://www.mdpi.com/2224-2708/4/3/226
In multi-hop underwater acoustic sensor networks (UWASNs), packet collisions due to hidden and local nodes adversely affect throughput, energy efficiency and end-to-end delay. Existing medium access control (MAC) protocols try to solve the problem by utilizing a single-phase contention resolution mechanism, which causes a large number of control packet exchanges and energy overhead. In this paper, we introduce a MAC protocol that splits this single-phase contention resolution mechanism into two phases to provide efficient multi-hop networking. In the first phase, local nodes are eliminated from the contention, and in the later phase, the adverse effects of hidden nodes are mitigated. This two-phased contention resolution provides higher energy efficiency, better throughput and shorter end-to-end delay, and it also enables adaptability for different network architectures. A probabilistic model of the proposed protocol is also developed to analyse the performance. The proposed protocol has been evaluated through quantitative analysis and simulation. Results obtained through quantitative analysis and simulation reveal that the proposed protocol achieves significantly better energy efficiency, higher and more stable throughput and lower end-to-end delay compared to existing protocols, namely T-Lohi and slotted floor acquisition multiple access (S-FAMA).
Journal of Sensor and Actuator Networks 2015, 4(3), 226-250; Article; doi: 10.3390/jsan4030226; ISSN 2224-2708. Published 2015-09-02. Authors: Khaja Shazzad, Kemal Tepe, Esam Abdel-Raheem.

IJGI, Vol. 4, Pages 1672-1692: Towards Measuring and Visualizing Sustainable National Power—A Case Study of China and Neighboring Countries
http://www.mdpi.com/2220-9964/4/3/1672
This paper presents a new perspective on national power, sustainable national power (SNP), emphasizing both traditional comprehensive national power (CNP) and social and environmental sustainability. We propose a measurement to quantify SNP based on the measurement of comprehensive national power and a sustainability-adjusted index. In addition, density-equalizing maps are adopted to visualize the sustainable national power of countries in order to gain a better understanding of its current state and future development from a cartographic perspective. China and its neighboring countries are selected as a case study area. The results show that China outperforms other countries in most of the CNP dimensions but performs poorly in various SNP-adjusted dimensions within the study area. The composite score shows that China has the highest regional SNP, followed by Japan, Russia, South Korea and India. Furthermore, time series of cartograms reveal evidence of power transitions among countries. In addition, the effectiveness of cartograms for cartographic communication is discussed.
ISPRS International Journal of Geo-Information 2015, 4(3), 1672-1692; Article; doi: 10.3390/ijgi4031672; ISSN 2220-9964. Published 2015-09-02. Authors: Hua Liao, Weihua Dong, Huiping Liu, Yuejing Ge.

IJGI, Vol. 4, Pages 1657-1671: Quality Evaluation of VGI Using Authoritative Data—A Comparison with Land Use Data in Southern Germany
http://www.mdpi.com/2220-9964/4/3/1657
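The agreement statistic referenced in the abstract below is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. Its standard computation from a class confusion matrix can be sketched as follows; the land use matrix here is made up for illustration and is not the paper's data:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: one dataset's classes, columns: the other's)."""
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of cells classified identically.
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Chance agreement: product of the marginal class proportions.
    p_e = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2-class comparison (e.g., forest vs. non-forest cells).
kappa = cohens_kappa([[90, 10],
                      [5, 95]])
```

On the commonly used Landis–Koch scale, values above roughly 0.6 are read as "substantial" agreement, which is the wording the abstract uses.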
Volunteered Geographic Information (VGI), such as data derived from the OpenStreetMap (OSM) project, is a popular source of freely available geographic data. Normally, untrained contributors gather these data, a fact that is frequently a cause of concern regarding the quality and usability of such data. In this study, the quality of OSM land use and land cover (LULC) data is investigated for an area in southern Germany. Two spatial data quality elements, thematic accuracy and completeness, are addressed by comparing the OSM data with an authoritative German reference dataset. The results show that the kappa value indicates a substantial agreement between the OSM and the authoritative dataset. Nonetheless, for our study region, there are clear variations between the LULC classes. Forest covers a large area and shows both a high OSM completeness (97.6%) and correctness (95.1%). In contrast, farmland also covers a large area, but for this class OSM shows a low completeness value (45.9%) due to unmapped areas. Additionally, the results indicate that a high population density, as present in urbanized areas, seems to denote a higher strength of agreement between OSM and the DLM (Digital Landscape Model). However, a low population density does not necessarily imply a low strength of agreement.
ISPRS International Journal of Geo-Information 2015, 4(3), 1657-1671; Article; doi: 10.3390/ijgi4031657; ISSN 2220-9964. Published 2015-09-02. Authors: Helen Dorn, Tobias Törnros, Alexander Zipf.

IJGI, Vol. 4, Pages 1627-1656: Walk This Way: Improving Pedestrian Agent-Based Models through Scene Activity Analysis
http://www.mdpi.com/2220-9964/4/3/1627
Pedestrian movement is woven into the fabric of urban regions. With more people living in cities than ever before, there is an increased need to understand and model how pedestrians utilize and move through space for a variety of applications, ranging from urban planning and architecture to security. Pedestrian modeling has traditionally faced the challenge of collecting data to calibrate and validate models of pedestrian movement. With the increased availability of mobility datasets from video surveillance and enhanced geolocation capabilities in consumer mobile devices, we are now presented with the opportunity to change the way we build pedestrian models. Within this paper we explore the potential that such information offers for the improvement of agent-based pedestrian models. We introduce a Scene- and Activity-Aware Agent-Based Model (SA2-ABM), a method for harvesting scene activity information in the form of spatiotemporal trajectories, and incorporate this information into our models. In order to assess and evaluate the improvement offered by such information, we carry out a range of experiments using real-world datasets. We demonstrate that the use of real scene information allows us to better inform our model and enhance its predictive capabilities.
ISPRS International Journal of Geo-Information 2015, 4(3), 1627-1656; Article; doi: 10.3390/ijgi4031627; ISSN 2220-9964. Published 2015-09-02. Authors: Andrew Crooks, Arie Croitoru, Xu Lu, Sarah Wise, John Irvine, Anthony Stefanidis.

IJGI, Vol. 4, Pages 1605-1626: Movement Pattern Analysis Based on Sequence Signatures
http://www.mdpi.com/2220-9964/4/3/1605
Increased affordability and deployment of advanced tracking technologies have led researchers from various domains to analyze the resulting spatio-temporal movement data sets for the purpose of knowledge discovery. Two different approaches can be considered in the analysis of moving objects: quantitative analysis and qualitative analysis. This research focuses on the latter and uses the qualitative trajectory calculus (QTC), a type of calculus that represents qualitative data on moving point objects (MPOs), and establishes a framework to analyze the relative movement of multiple MPOs. A visualization technique called sequence signature (SESI) is used, which makes it possible to map QTC patterns in a 2D indexed rasterized space in order to evaluate the similarity of relative movement patterns of multiple MPOs. The applicability of the proposed methodology is illustrated by means of two practical examples of interacting MPOs: cars on a highway and body parts of a samba dancer. The results show that the proposed method can be effectively used to analyze interactions of multiple MPOs in different domains.
ISPRS International Journal of Geo-Information 2015, 4(3), 1605-1626; Article; doi: 10.3390/ijgi4031605; ISSN 2220-9964. Published 2015-09-02. Authors: Seyed Chavoshi, Bernard De Baets, Tijs Neutens, Matthias Delafontaine, Guy De Tré, Nico de Weghe.

Information, Vol. 6, Pages 564-575: Toward E-Content Adaptation: Units' Sequence and Adapted Ant Colony Algorithm
http://www.mdpi.com/2078-2489/6/3/564
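For orientation, here is a generic ant colony step of the kind the abstract below adapts: next-unit selection weighted by edge pheromone, followed by evaporation and deposit along a traversed path. The course graph, names, and constants are hypothetical, and the paper's adapted pheromone (which encodes pedagogical-sequence correctness) is not modeled here:

```python
import random

def choose_next_unit(current, graph, tau, alpha=1.0, rng=random):
    """Standard ant-colony step: pick the next unit with probability
    proportional to the pheromone on the outgoing edge."""
    neighbors = graph[current]
    weights = [tau[(current, n)] ** alpha for n in neighbors]
    return rng.choices(neighbors, weights=weights)[0]

def reinforce(path, tau, rho=0.1, deposit=1.0):
    """Evaporate pheromone on every edge, then deposit on the traversed edges."""
    for edge in tau:
        tau[edge] *= (1 - rho)
    for edge in zip(path, path[1:]):
        tau[edge] += deposit

# Hypothetical course graph: unit names and admissible successor units.
graph = {"intro": ["basics", "history"], "basics": ["exercise"], "history": ["exercise"]}
tau = {(u, v): 1.0 for u, vs in graph.items() for v in vs}
reinforce(["intro", "basics", "exercise"], tau)  # a learner's successful path
```

Repeated reinforcement of successful paths is what gradually biases the selection step toward suitable unit sequences.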
An adapted ant colony algorithm is proposed to adapt e-content to a learner's profile. The pertinence of the proposed units keeps learners motivated. A model of categorization of course units is presented, and two learning paths are discussed based on a predefined graph. In addition, the ant algorithm is simulated on the proposed model. The adapted algorithm requires the definition of a new pheromone, a parameter responsible for defining whether a unit is in the right pedagogical sequence or in the wrong one; it also influences the calculation of the quantity of pheromone deposited on each arc. Accordingly, the results show positive differences in learners' paths, with suitable units proposed depending on the sequence and the number of successes. The proposed units do not depend on changes in the number of units (from around 10 to 30) in the algorithm process.
Information 2015, 6(3), 564-575; Article; doi: 10.3390/info6030564; ISSN 2078-2489. Published 2015-09-01. Authors: Naoual Benabdellah, Mourad Gharbi, Mostafa Bellafkih.

IJGI, Vol. 4, Pages 1584-1604: Spatio-Temporal Analysis of Spatial Accessibility to Primary Health Care in Bhutan
http://www.mdpi.com/2220-9964/4/3/1584
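The floating catchment area family the abstract below builds on can be sketched in its classic two-step form (2SFCA): each facility gets a supply-to-demand ratio over its catchment, then each population location sums the ratios of its reachable facilities. The clinics, villages, and travel costs are hypothetical, and the paper's nearest-neighbour modification (NN-M2SFCA) is not reproduced:

```python
def two_step_fca(supply, demand, dist, d0):
    """Classic two-step floating catchment area (2SFCA) accessibility index.
    supply: {facility: capacity}; demand: {location: population};
    dist: {(location, facility): travel cost}; d0: catchment threshold."""
    # Step 1: supply-to-demand ratio of each facility within its catchment.
    ratio = {}
    for j, s in supply.items():
        pop = sum(p for i, p in demand.items() if dist[(i, j)] <= d0)
        ratio[j] = s / pop if pop else 0.0
    # Step 2: each location sums the ratios of facilities it can reach.
    return {i: sum(r for j, r in ratio.items() if dist[(i, j)] <= d0)
            for i in demand}

# Hypothetical toy data: two villages, two clinics, travel times in minutes.
supply = {"clinic_a": 2.0, "clinic_b": 1.0}
demand = {"village_1": 100, "village_2": 300}
dist = {("village_1", "clinic_a"): 10, ("village_1", "clinic_b"): 50,
        ("village_2", "clinic_a"): 20, ("village_2", "clinic_b"): 15}
access = two_step_fca(supply, demand, dist, d0=30)
```

The three variables the abstract names map directly onto `supply` (attractiveness), `dist` (travel time or distance), and `demand` (population).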
Geographic information systems (GIS) can be effectively utilized to carry out spatio-temporal analysis of spatial accessibility to primary healthcare services. Spatial accessibility to primary healthcare services is commonly measured using floating catchment area models, which are generally defined by three variables: an attractiveness component of the service centre, travel time or distance between the locations of the service centre and the population, and population demand for healthcare services. The nearest-neighbour modified two-step floating catchment area (NN-M2SFCA) model is proposed for computing spatial accessibility indices for the entire country. Accessibility values from 2010 to 2013 for Bhutan were analysed both spatially and temporally by producing accessibility ranking maps, plotting Lorenz curves, and conducting spatial clustering analysis. The spatial accessibility indices of the 205 sub-districts show great disparities in healthcare accessibility in the country. The mean- and median-based classification results indicate that, in 2013, 24 percent of Bhutan's population had poor access to primary healthcare services, 66 percent had medium-level access, and 10 percent had good access.
ISPRS International Journal of Geo-Information 2015, 4(3), 1584-1604; Article; doi: 10.3390/ijgi4031584; ISSN 2220-9964. Published 2015-09-01. Authors: Sonam Jamtsho, Robert Corner, Ashraf Dewan.

IJGI, Vol. 4, Pages 1569-1583: Space for Climate
http://www.mdpi.com/2220-9964/4/3/1569
This paper describes how Earth Observation (EO) data, in particular from satellites, can support climate science, monitoring, and services by delivering global, repetitive, consistent, and timely information on the state of the environment and its evolution. Some examples are presented of EO demonstration pilot projects performed in partnership with scientists, industry, and development practitioners to support climate science, adaptation, mitigation, and disaster risk management. In particular, the paper highlights the challenge of gathering observations and generating long-term climate data records, which provide the foundation of risk management. The paper calls for a science-based integrated approach to climate risk management supported by data and knowledge, providing decision-makers with a unique analytical lens to develop a safety net against risk and maximize opportunities related to climate change and variability.
ISPRS International Journal of Geo-Information 2015, 4(3), 1569-1583; Article; doi: 10.3390/ijgi4031569; ISSN 2220-9964. Published 2015-09-01. Author: Pierre-Philippe Mathieu.

Axioms, Vol. 4, Pages 400-411: POVMs and the Two Theorems of Naimark and Sz.-Nagy
http://www.mdpi.com/2075-1680/4/3/400
In 1940, Naimark showed that if a set of quantum observables is positive semi-definite and sums to the identity, then, on a larger space, the observables have a joint resolution as commuting projectors. In 1955, Sz.-Nagy showed that any set of observables could be so resolved, with the resolution respecting all linear sums. Crucially, both resolutions return the correct Born probabilities for the original observables. Here, an alternative proof of the Sz.-Nagy result is given using elementary inner product spaces. A version of the resolution is then shown to respect all products of observables on the base space. Practical and theoretical consequences are indicated. For example, quantum statistical inference problems that involve any algebraic functionals can now be studied using classical statistical methods over commuting observables. The estimation of quantum states is a problem of this type. Further, as theoretical objects, classical and quantum systems are now distinguished only by a greater or lesser number of degrees of freedom.
Axioms 2015, 4(3), 400-411; Article; doi: 10.3390/axioms4030400; ISSN 2075-1680. Published 2015-09-01. Authors: James Malley, Anthony Fletcher.

Future Internet, Vol. 7, Pages 307-328: A Low-Jitter Wireless Transmission Based on Buffer Management in Coding-Aware Routing
http://www.mdpi.com/1999-5903/7/3/307
Reducing packet jitter is important for real-time applications in a wireless network. Existing coding-aware routing algorithms use the opportunistic network coding (ONC) scheme in the packet coding algorithm. The ONC scheme never delays packets to wait for the arrival of a future coding opportunity, and the loss of some potential coding opportunities may limit the contribution of network coding to jitter performance. In addition, most existing coding-aware routing algorithms assume that all flows participating in the network have equal rates. This is unrealistic, since multi-rate environments are common. To overcome these problems and extend coding-aware routing to multi-rate scenarios, we present, from the viewpoint of data transmission, a low-jitter wireless transmission algorithm based on buffer management (BLJCAR), which schedules packets at the coding node according to a queue-length-based threshold policy instead of the regular ONC policy used in existing coding-aware routing algorithms. BLJCAR is a unified framework that merges the single-rate and multi-rate cases. Simulation results show that the BLJCAR algorithm embedded in coding-aware routing outperforms the traditional ONC policy in terms of jitter, packet delivery delay, packet loss ratio and network throughput under network congestion at any traffic rate.
Future Internet 2015, 7(3), 307-328; Article; doi: 10.3390/fi7030307; ISSN 1999-5903. Published 2015-08-31. Authors: Cunbo Lu, Liangtian Wan.

Econometrics, Vol. 3, Pages 633-653: A New Family of Consistent and Asymptotically-Normal Estimators for the Extremal Index
http://www.mdpi.com/2225-1146/3/3/633
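Among the baselines the abstract below mentions is the classical runs estimator of the extremal index, which can be sketched as follows. The paper's own two-Poisson-process estimator is not reproduced here, and the threshold, run length, and data are hypothetical:

```python
def runs_estimator(x, u, r):
    """Classical runs estimator of the extremal index: the fraction of
    exceedances of threshold u that terminate a cluster, i.e. that are
    followed by at least r consecutive non-exceedances (series end counts
    as terminating a cluster in this simple version)."""
    exceed = [v > u for v in x]
    n_exc = sum(exceed)
    n_clusters = sum(
        1
        for i, e in enumerate(exceed)
        if e and not any(exceed[i + 1:i + 1 + r])
    )
    return n_clusters / n_exc if n_exc else float("nan")

# Toy series: one cluster of two exceedances plus one isolated exceedance,
# so the estimate is 2 clusters / 3 exceedances.
theta = runs_estimator([0, 5, 6, 0, 0, 0, 7, 0, 0, 0], u=4, r=2)
```

An estimate near 1 indicates nearly independent extremes; smaller values indicate stronger clustering, consistent with the inverse-of-theta interpretation given in the abstract.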
The extremal index (θ) is the key parameter for extending extreme value theory results from i.i.d. to stationary sequences. One important property of this parameter is that its inverse determines the degree of clustering in the extremes. This article introduces a novel interpretation of the extremal index as a limiting probability characterized by two Poisson processes, along with a simple family of estimators derived from this new characterization. Unlike most estimators for θ in the literature, this estimator is consistent, asymptotically normal and very stable across partitions of the sample. Further, we show in an extensive simulation study that this estimator outperforms the logs, blocks and runs estimation methods in finite samples. Finally, we apply this new estimator to test for clustering of extremes in monthly time series of unemployment growth and inflation rates and conclude that runs of large unemployment rates are more prolonged than periods of high inflation.
Econometrics 2015, 3(3), 633-653; Article; doi: 10.3390/econometrics3030633; ISSN 2225-1146. Published 2015-08-28. Author: Jose Olmo.

Computation, Vol. 3, Pages 386-426: Numerical Simulations of Wave-Induced Flow Fields around Large-Diameter Surface-Piercing Vertical Circular Cylinder
http://www.mdpi.com/2079-3197/3/3/386
A computational analysis is performed on the diffraction of water waves induced by a large-diameter, surface-piercing, vertical circular cylinder. With reference to linear-wave cases, the phenomenon is preliminarily considered in terms of velocity potential, a simplified theoretical framework in which both hypotheses of inviscid fluid and irrotational flow are incorporated. Then, as a first-approximation analysis, the Euler equations in primitive variables are considered (a framework in which the fluid is still handled as inviscid, but the field can be rotational). Finally, the real-fluid behavior is analyzed by numerically integrating the full Navier-Stokes equations (viscous fluid and rotational field) in their velocity-pressure formulation, following the approach of Direct Numerical Simulation (DNS; no models are used for the fluctuating portion of the velocity field). For further investigation of the flow fields, the swirling-strength criterion for flow-structure extraction and the Karhunen-Loève (KL) decomposition technique for the extraction of the most energetic flow modes are applied to the computed fields. It is found that remarkable differences exist between the wave-induced fields as derived within the different computing frameworks tested.
Computation 2015, 3(3), 386-426; Article; doi: 10.3390/computation3030386; ISSN 2079-3197. Published 2015-08-28. Author: Giancarlo Alfonsi.

Robotics, Vol. 4, Pages 316-340: A Spatial Queuing-Based Algorithm for Multi-Robot Task Allocation
http://www.mdpi.com/2218-6581/4/3/316
Multi-robot task allocation (MRTA) is an important area of research in autonomous multi-robot systems. The main problem in MRTA is to allocate a set of tasks to a set of robots so that the tasks can be completed while a certain metric, such as the time required to complete all tasks, the distance traveled, or the energy expended by the robots, is reduced. We consider a scenario where tasks can appear dynamically and a task needs to be performed by multiple robots to be completed. We propose a new algorithm called SQ-MRTA (Spatial Queueing-MRTA) that uses a spatial queue-based model to allocate tasks between robots in a distributed manner. We have implemented the SQ-MRTA algorithm on accurately simulated models of Corobot robots within the Webots simulator for different numbers of robots and tasks and compared its performance with other state-of-the-art MRTA algorithms. Our results show that the SQ-MRTA algorithm is able to scale up with the number of tasks and robots in the environment, and it either outperforms or performs comparably with respect to other distributed MRTA algorithms.
Robotics 2015, 4(3), 316-340; Article; doi: 10.3390/robotics4030316; ISSN 2218-6581. Published 2015-08-28. Authors: William Lenagh, Prithviraj Dasgupta, Angelica Munoz-Melendez.

Algorithms, Vol. 8, Pages 723-742: A Comparative Study of Modern Heuristics on the School Timetabling Problem
http://www.mdpi.com/1999-4893/8/3/723
In this contribution, a comparative study of modern heuristics on the school timetabling problem is presented. More precisely, we investigate the application of two population-based algorithms, namely Particle Swarm Optimization (PSO) and an Artificial Fish Swarm (AFS), to the high school timetabling problem. In order to demonstrate their efficiency and performance, experiments with real-world input data have been performed. Both proposed algorithms manage to create feasible and efficient high school timetables, thus adequately fulfilling the timetabling needs of the respective high schools. Computational results demonstrate that both algorithms reach efficient solutions, most of the time better than existing approaches applied to the same school timetabling input instances using the same evaluation criteria.
Algorithms 2015, 8(3), 723-742; Article; doi: 10.3390/a8030723; ISSN 1999-4893. Published 2015-08-28. Authors: Iosif Katsaragakis, Ioannis Tassopoulos, Grigorios Beligiannis.

IJFS, Vol. 3, Pages 393-410: Fiscal Deficits and Stock Prices in India: Empirical Evidence
http://www.mdpi.com/2227-7072/3/3/393
The study aims at examining how fiscal deficits affect the performance of the stock market in India by using annual data from 1988–2012. The study makes use of Ng–Perron unit root tests to check the non-stationarity property of the series, and the Auto Regressive Distributed Lag (ARDL) bounds test and a Vector Error Correction Model (VECM) for testing both short- and long-run dynamic relationships. Variance decomposition (VDC) is used to predict the exogenous shocks of the variables. The findings of the bounds test reveal that the estimated equation and the series are co-integrated. The ARDL results suggest that a long-run negative relationship exists between the budget deficit and stock prices, with no significant relationship in the short run. The VECM result shows that fiscal deficits influence the stock price only in the short run. The variance decomposition results show that stock price movement in the long run is mostly explained by shocks of fiscal deficits. The study implies that the government must adopt appropriate macroeconomic policies to reduce the budget deficit, which will result in stock market growth and in turn lead to the financial development of the country.
International Journal of Financial Studies 2015, 3(3), 393-410; Article; doi: 10.3390/ijfs3030393; ISSN 2227-7072. Published 2015-08-27. Authors: Pooja Joshi, Arun Giri.

Symmetry, Vol. 7, Pages 1567-1586: Design of IP Camera Access Control Protocol by Utilizing Hierarchical Group Key
http://www.mdpi.com/2073-8994/7/3/1567
Unlike the CCTV security video surveillance devices we have generally known, IP cameras are connected to a network, with or without wires, and provide monitoring services through a built-in web server. Because IP cameras can use a network such as the Internet, multiple cameras can be installed over long distances, and each camera can serve its own web interface. Despite these advantages, IP cameras present difficulties in access control management and weaknesses in user authentication. In particular, because the IP camera market is still young, systems designed from a security perspective have not yet been built, and severe weaknesses remain in terms of access authority to the IP camera web server, authentication of users, and authentication of IP cameras newly installed within a network. This research groups IP cameras hierarchically to manage them systematically, and provides access control and data confidentiality between groups by utilizing group keys. In addition, IP cameras and users are authenticated by using PKI-based certificates, and weak points of security such as confidentiality and integrity are improved by encrypting passwords. The paper presents the protocols for the entire process and proves through experiments that this method can actually be applied.
Symmetry 2015, 7(3), 1567-1586; Article; doi: 10.3390/sym7031567; ISSN 2073-8994. Published 2015-08-27. Authors: Jungho Kang, Jaekyung Han, Jong Park.

Information, Vol. 6, Pages 550-563: Analyzing Trends in Software Product Lines Evolution Using a Cladistics Based Approach
http://www.mdpi.com/2078-2489/6/3/550
A software product line is a complex system whose aim is to provide a platform dedicated to large-scale reuse. It requires a great investment; thus, its ability to cope with customers' ever-changing requirements is among its key success factors. Great effort has been made to deal with software product line evolution. In our previous works, we carried out a classification of these works to provide an overview of the techniques used. We also identified the following key challenges of software product line evolution: the ability to predict future changes, the ability to determine the impact of a change easily, and improved understanding of change. We have already tackled the second and third challenges; the objective of this paper is to deal with the first. We use cladistics classification, which is used in biology to understand the evolution of organisms sharing a common ancestor and their process of descent, with the aim of predicting their future changes. By analogy, we consider a population of applications for media management on mobile devices derived from the same platform and use cladistics to construct their evolutionary tree. We conducted an analysis showing how to identify the evolution trends of the case study products and predict future changes.
Information 2015, 6(3), 550-563; Article; doi: 10.3390/info6030550; ISSN 2078-2489. Published 2015-08-27. Authors: Anissa Benlarabi, Amal Khtira, Bouchra Asri.

Symmetry, Vol. 7, Pages 1536-1566: Lie Symmetry Analysis of the Hopf Functional-Differential Equation
http://www.mdpi.com/2073-8994/7/3/1536
In this paper, we extend the classical Lie symmetry analysis from partial differential equations to integro-differential equations with functional derivatives. We continue the work of Oberlack and Wacławczyk (2006, Arch. Mech. 58, 597), (2013, J. Math. Phys. 54, 072901), where the extended Lie symmetry analysis is performed in the Fourier space. Here, we introduce a method to perform the extended Lie symmetry analysis in the physical space where we have to deal with the transformation of the integration variable in the appearing integral terms. The method is based on the transformation of the product y(x)dx appearing in the integral terms and applied to the functional formulation of the viscous Burgers equation. The extended Lie symmetry analysis furnishes all known symmetries of the viscous Burgers equation and is able to provide new symmetries associated with the Hopf formulation of the viscous Burgers equation. Hence, it can be employed as an important tool for applications in continuum mechanics.
Symmetry 2015, 7(3), 1536-1566; Article; doi: 10.3390/sym7031536; ISSN 2073-8994. Published 2015-08-27. Authors: Daniel Janocha, Marta Wacławczyk, Martin Oberlack.

Algorithms, Vol. 8, Pages 712-722: Gradient-Based Iterative Identification for Wiener Nonlinear Dynamic Systems with Moving Average Noises
http://www.mdpi.com/1999-4893/8/3/712
This paper focuses on the parameter identification problem for Wiener nonlinear dynamic systems with moving average noises. In order to improve the convergence rate, a gradient-based iterative algorithm is presented that replaces the unmeasurable variables with their corresponding iterative estimates and iteratively computes the noise estimates from the obtained parameter estimates. The simulation results show that the proposed algorithm can effectively estimate the parameters of Wiener systems with moving average noises.
Algorithms 2015, 8(3), 712-722; Article; doi: 10.3390/a8030712; ISSN 1999-4893. Published 2015-08-26. Authors: Lincheng Zhou, Xiangli Li, Huigang Xu, Peiyi Zhu.

Symmetry, Vol. 7, Pages 1519-1535: Enantioselective Organocatalyzed Synthesis of 2-Amino-3-cyano-4H-chromene Derivatives
http://www.mdpi.com/2073-8994/7/3/1519
The structural motif that results from the fusion of a benzene ring to a heterocyclic pyran ring, known as chromene, is broadly found in nature and has been reported to be associated with a wide range of biological activity. Moreover, asymmetric organocatalysis is an expanding discipline that is already recognized as a well-established tool for obtaining enantiomerically enriched compounds. This review covers the particular case of the asymmetric synthesis of 2-amino-3-cyano-4H-chromenes using organocatalysis. Herein, we show the most illustrative examples of the methods developed by diverse research groups, following a classification based on five different approaches: (1) addition of naphthol compounds to substituted α,α-dicyanoolefins; (2) addition of malononitrile to substituted o-vinylphenols; (3) addition of malononitrile to N-protected o-iminophenols; (4) Michael addition of nucleophiles to 2-iminochromene derivatives; and (5) organocatalyzed formal [4+2] cycloaddition reactions. In most cases, chiral thioureas have been found to be effective catalysts to promote the synthetic processes, and generally a bifunctional mode of action has been envisioned for them. In addition, squaramides and cinchona derivatives have occasionally been used as suitable catalysts for substrate activation.
Symmetry 2015, 7(3), 1519-1535; Review; doi: 10.3390/sym7031519; ISSN 2073-8994. Published 2015-08-26. Authors: Isaac Sonsona, Eugenia Marqués-López, Raquel Herrera.

Axioms, Vol. 4, Pages 385-399: Limiting Approach to Generalized Gamma Bessel Model via Fractional Calculus and Its Applications in Various Disciplines
http://www.mdpi.com/2075-1680/4/3/385
The essentials of fractional calculus, according to different approaches that can be useful for our applications in the theory of probability and stochastic processes, are established. In addition, from this fractional integral one can derive almost all of the extended densities for the pathway parameter q &lt; 1 and q → 1. Here, we bring out the idea of thicker- or thinner-tailed models associated with a gamma-type distribution as a limiting case of the pathway operator. Applications of this extended gamma model in statistical mechanics, input-output models, solar spectral irradiance modeling, etc., are established. [Axioms, Vol. 4, Pages 385-399. Article. Published 2015-08-26. doi: 10.3390/axioms4030385. ISSN 2075-1680. Author: Nicy Sebastian.]<![CDATA[Axioms, Vol. 4, Pages 365-384: An Overview of Generalized Gamma Mittag–Leffler Model and Its Applications]]>
http://www.mdpi.com/2075-1680/4/3/365
Recently, probability models with thicker or thinner tails have gained more importance among statisticians and physicists because of their vast applications in random walks, Lévy flights, financial modeling, etc. In this connection, we introduce here a new family of generalized probability distributions associated with the Mittag–Leffler function. This family gives an extension to the generalized gamma family, opens up a vast area of potential applications and establishes connections to the topics of fractional calculus, nonextensive statistical mechanics, Tsallis statistics, superstatistics, the Mittag–Leffler stochastic process, the Lévy process and time series. Apart from examining the properties, the matrix-variate analogue and the connection to fractional calculus are also explained. By using the pathway model of Mathai, the model is further generalized. Connections to Mittag–Leffler distributions and corresponding autoregressive processes are also discussed. [Axioms, Vol. 4, Pages 365-384. Article. Published 2015-08-26. doi: 10.3390/axioms4030365. ISSN 2075-1680. Author: Seema Nair.]<![CDATA[Symmetry, Vol. 7, Pages 1475-1518: Performance Enhancement of Face Recognition in Smart TV Using Symmetrical Fuzzy-Based Quality Assessment]]>
http://www.mdpi.com/2073-8994/7/3/1475
With the rapid growth of smart TV, the necessity of recognizing a viewer has increased for various applications that deploy face recognition to provide intelligent services and high convenience to viewers. However, viewers can have various postures, illumination conditions, and expression variations on their faces while watching TV, and thereby the performance of face recognition inevitably degrades. In order to handle these problems, video-based face recognition has been proposed instead of single image-based recognition. However, video-based processing of multiple images is prohibitive in smart TVs, as the processing power is limited. Therefore, a quality measure-based (QM-based) image selection is required that considers both the processing speed and accuracy of face recognition. To this end, we propose a performance enhancement method for face recognition through symmetrical fuzzy-based quality assessment. Our research is novel in the following three ways as compared to previous works. First, QMs are adaptively selected by comparing variance values obtained from candidate QMs within a video sequence, where the higher the variance value of a QM, the more meaningful the QM is in terms of distinguishing between images. Therefore, we can adaptively select meaningful QMs that reflect the primary factors influencing the performance of face recognition. Second, a quality score of an image is calculated using a fuzzy method based on the inputs of the selected QMs, symmetrical membership functions, and a rule table considering the characteristics of symmetry. A fuzzy-based combination method of image quality has the advantage of being less affected by the type of face database because it does not require an additional training procedure. Third, the accuracy of face recognition is enhanced by fusing the matching scores of the high-quality face images, which are selected based on the quality scores among successive face images.
Experimental results showed that the performance of face recognition using the proposed method was better than that of conventional methods in terms of accuracy. [Symmetry, Vol. 7, Pages 1475-1518. Article. Published 2015-08-25. doi: 10.3390/sym7031475. ISSN 2073-8994. Authors: Yeong Kim, Won Lee, Ki Kim, Hyung Hong, Kang Park.]<![CDATA[Information, Vol. 6, Pages 536-549: Influences of Removable Devices on the Anti-Threat Model: Dynamic Analysis and Control Strategies]]>
http://www.mdpi.com/2078-2489/6/3/536
With the rapid development of M2M wireless networks, the damage caused by malicious worms is getting more and more serious. The main goal of this paper is to explore the influences of removable devices on the interaction dynamics between malicious worms and benign worms by using a mathematical model. The model takes two important network environment factors into consideration: benign worms and the influences of removable devices. Moreover, the model's basic reproduction number is obtained, along with the control conditions for the local and global asymptotic stability of the worm-free equilibrium. Simulation results show the effectiveness of our proposed model in reflecting the influences of removable devices on the interaction dynamics of an anti-threat model. Based on numerical analyses and simulations, effective methods are proposed to contain the propagation of malicious worms by using anti-worms. [Information, Vol. 6, Pages 536-549. Article. Published 2015-08-24. doi: 10.3390/info6030536. ISSN 2078-2489. Authors: Jinhua Ma, Zhide Chen, Wei Wu, Rongjun Zheng, Jianghua Liu.]<![CDATA[Axioms, Vol. 4, Pages 345-364: Almost Periodic Solutions of Nonlinear Volterra Difference Equations with Unbounded Delay]]>
http://www.mdpi.com/2075-1680/4/3/345
In order to obtain the conditions for the existence of periodic and almost periodic solutions of Volterra difference equations, \( x(n+1)=f(n,x(n))+\sum_{s=-\infty}^{n}F(n,s, {x(n+s)},x(n)) \), we consider certain stability properties, which are referred to as (K, \( \rho \))-weakly uniformly-asymptotic stability and (K, \( \rho \))-uniformly asymptotic stability. Moreover, we discuss the relationship between the \( \rho \)-separation condition and the uniformly-asymptotic stability property in the \( \rho \) sense. [Axioms, Vol. 4, Pages 345-364. Article. Published 2015-08-24. doi: 10.3390/axioms4030345. ISSN 2075-1680. Authors: Yoshihiro Hamaya, Tomomi Itokazu, Kaori Saito.]<![CDATA[Symmetry, Vol. 7, Pages 1463-1474: A (1+2)-Dimensional Simplified Keller–Segel Model: Lie Symmetry and Exact Solutions]]>
http://www.mdpi.com/2073-8994/7/3/1463
This research is a natural continuation of the recent paper “Exact solutions of the simplified Keller–Segel model” (Commun Nonlinear Sci Numer Simulat 2013, 18, 2960–2971). It is shown that a (1+2)-dimensional Keller–Segel type system is invariant with respect to an infinite-dimensional Lie algebra. All possible maximal algebras of invariance of the Neumann boundary value problems based on the Keller–Segel system in question were found. Lie symmetry operators are used for constructing exact solutions of some boundary value problems. Moreover, it is proved that the boundary value problem for the (1+1)-dimensional Keller–Segel system with specific boundary conditions can be linearized and solved in an explicit form. [Symmetry, Vol. 7, Pages 1463-1474. Article. Published 2015-08-24. doi: 10.3390/sym7031463. ISSN 2073-8994. Author: Maksym Didovych.]<![CDATA[IJGI, Vol. 4, Pages 1549-1568: Geographic Situational Awareness: Mining Tweets for Disaster Preparedness, Emergency Response, Impact, and Recovery]]>
http://www.mdpi.com/2220-9964/4/3/1549
Social media data have emerged as a new source for detecting and monitoring disaster events. A number of recent studies have suggested that social media data streams can be used to mine actionable data for emergency response and relief operations. However, no effort has been made to classify social media data into stages of disaster management (mitigation, preparedness, emergency response, and recovery), which has been used as a common reference for disaster researchers and emergency managers for decades to organize information and streamline priorities and activities during the course of a disaster. This paper makes an initial effort in coding social media messages into different themes within different disaster phases during a time-critical crisis by manually examining more than 10,000 tweets generated during a natural disaster and referencing the findings from the relevant literature and official government procedures involving different disaster stages. Moreover, a classifier based on logistic regression is trained and used for automatically mining and classifying the social media messages into various topic categories during various disaster phases. The classification results are necessary and useful for emergency managers to identify the transition between phases of disaster management, the timing of which is usually unknown and varies across disaster events, so that they can take action quickly and efficiently in the impacted communities. Information generated from the classification can also be used by the social science research communities to study various aspects of preparedness, response, impact and recovery. [ISPRS International Journal of Geo-Information, Vol. 4, Pages 1549-1568. Article. Published 2015-08-24. doi: 10.3390/ijgi4031549. ISSN 2220-9964. Authors: Qunying Huang, Yu Xiao.]<![CDATA[JRFM, Vol. 8, Pages 337-354: An Empirical Analysis for the Prediction of a Financial Crisis in Turkey through the Use of Forecast Error Measures]]>
http://www.mdpi.com/1911-8074/8/3/337
In this study, we examine whether the forecast errors obtained by Artificial Neural Network (ANN) models affect the outbreak of financial crises. Additionally, we investigate how much asymmetric information and forecast errors are reflected in the output values. We used the exchange rate of USD/TRY (USD), the Borsa Istanbul 100 Index (BIST), and the gold price (GP) as the output variables of our ANN models. We observe that the fitted ANN model has a strong explanatory capability for the 2001 and 2008 crises. Our calculations of symmetry measures such as the mean absolute percentage error (MAPE), symmetric mean absolute percentage error (sMAPE), and Shannon entropy (SE) clearly demonstrate the degree of asymmetric information and the deterioration of the financial system prior to, during, and after a financial crisis. We found that asymmetric information prior to a crisis is larger than in other periods, which can be interpreted as an early warning signal of potential crises. This evidence seems to favor an asymmetric information view of financial crises. [Journal of Risk and Financial Management, Vol. 8, Pages 337-354. Article. Published 2015-08-24. doi: 10.3390/jrfm8030337. ISSN 1911-8074. Authors: Seyma Cavdar, Alev Aydin.]<![CDATA[IJGI, Vol. 4, Pages 1530-1548: A Geoweb-Based Tagging System for Borderlands Data Acquisition]]>
http://www.mdpi.com/2220-9964/4/3/1530
Borderlands modeling and understanding depend on both spatial and non-spatial data, which were difficult to obtain in the past. This has limited the progress of borderland-related research. In recent years, data collection technologies have developed greatly, especially geospatial Web 2.0 technologies including blogs, publish/subscribe, mashups, and GeoRSS, which provide opportunities for data acquisition in borderland areas. This paper introduces the design and development of a Geoweb-based tagging system that enables users to tag and edit geographical information. We first establish the GeoBlog model, which consists of a set of geospatial components, posts, indicators, and comments, as the foundation of the tagging system. GeoBlog is implemented such that blogs are mashed up with OpenStreetMap. Moreover, we present an improvement to existing publish/subscribe systems with support for spatio-temporal events and subscriptions, called Spatial Publish/Subscribe, as well as the event agency network for routing messages from the publishers to the subscribers. A prototype system based on this approach is implemented in experiments. The results of this study provide an approach for asynchronous interaction and message-ordered transfer in the tagging system. [ISPRS International Journal of Geo-Information, Vol. 4, Pages 1530-1548. Article. Published 2015-08-21. doi: 10.3390/ijgi4031530. ISSN 2220-9964. Authors: Hanfa Xing, Jun Chen, Xiaoguang Zhou.]<![CDATA[Information, Vol. 6, Pages 522-535: Travel Mode Detection Based on Neural Networks and Particle Swarm Optimization]]>
http://www.mdpi.com/2078-2489/6/3/522
The collection of massive Global Positioning System (GPS) data from travel surveys has increased exponentially worldwide since the 1990s. A number of methods, which range from rule-based to advanced classification approaches, have been applied to detect travel modes from GPS positioning data collected in travel surveys based on GPS-enabled smartphones or dedicated GPS devices. Among these approaches, neural networks (NNs) are widely adopted because they can extract subtle information from training data that cannot be directly obtained by humans or other analysis techniques. However, traditional NNs, which are generally trained by back-propagation algorithms, are likely to be trapped in a local optimum. Therefore, particle swarm optimization (PSO) is introduced to train the NNs. The resulting PSO-NNs are employed to distinguish among four travel modes (walk, bike, bus, and car) with GPS positioning data collected through a smartphone-based travel survey. As a result, 95.81% of samples are correctly flagged for the training set, while 94.44% are correctly identified for the test set. Results from this study indicate that smartphone-based travel surveys provide an opportunity to supplement traditional travel surveys. [Information, Vol. 6, Pages 522-535. Article. Published 2015-08-21. doi: 10.3390/info6030522. ISSN 2078-2489. Authors: Guangnian Xiao, Zhicai Juan, Jingxin Gao.]<![CDATA[Electronics, Vol. 4, Pages 565-581: Homogeneous Crystallization of Micro-Dispensed TIPS-Pentacene Using a Two-Solvent System to Enable Printed Inverters on Foil Substrates]]>
http://www.mdpi.com/2079-9292/4/3/565
We report on a micro-dispensing system for 6,13-Bis(triisopropylsilylethynyl)pentacene (TIPS-pentacene) to enable homogeneous crystallization and uniform film morphology of the dispensed droplets using a two-solvent mixture along with the use of an insulating binder. This solution composition results in a controlled evaporation of the droplet in ambient air such that the Marangoni flow counteracts the outward convective flow to enable uniform radial crystal growth from the edge towards the center of the drops. The consequence of this process is the high degree of uniformity in the crystallization of the drops, which results in a reduction in the performance spread of the organic field effect transistors (OFET) created using this process. The addition of the insulating binder further improves the reduction in the spread of the results as a trade-off to the reduction in mobility of the transistors. The transfer curves of the OFETs show a tight grouping due to the controlled self-alignment of the TIPS-pentacene crystals; this repeatability was further highlighted by fabricating p-type inverters with driver to load ratios of 8:1, wherein the output inverter curves were also grouped tightly while exhibiting a gain of greater than 4 in the switching region. Therefore, the reliability and repeatability of this process justifies its use to enable large area solution-processed printed circuits at the cost of reduced mobility. [Electronics, Vol. 4, Pages 565-581. Article. Published 2015-08-21. doi: 10.3390/electronics4030565. ISSN 2079-9292. Authors: Indranil Bose, Kornelius Tetzner, Kathrin Borner, Karlheinz Bock.]<![CDATA[Risks, Vol. 3, Pages 318-337: Delivering Left-Skewed Portfolio Payoff Distributions in the Presence of Transaction Costs]]>
http://www.mdpi.com/2227-9091/3/3/318
For pension-savers, a low payoff is a financial disaster. Such investors will most likely prefer left-skewed payoff distributions over right-skewed payoff distributions. We explore how such distributions can be delivered. Cautious-relaxed utility measures are cautious in ensuring that payoffs do not fall much below a reference value, but relaxed about exceeding it. We find that the payoff distribution delivered by a cautious-relaxed utility measure has appealing features that the payoff distributions delivered by traditional utility functions do not. In particular, cautious-relaxed distributions can have the mass concentrated on the left, hence be left-skewed. However, cautious-relaxed strategies prescribe frequent portfolio adjustments, which may be expensive if transaction costs are charged. In contrast, more traditional strategies can be time-invariant. Thus, we investigate the impact of transaction costs on the appeal of cautious-relaxed strategies. We find that relatively high transaction fees are required for the cautious-relaxed strategy to lose its appeal. This paper contributes to the literature which compares utility measures by the payoff distributions they produce and finds that a cautious-relaxed utility measure will deliver payoffs that many investors will prefer. [Risks, Vol. 3, Pages 318-337. Article. Published 2015-08-21. doi: 10.3390/risks3030318. ISSN 2227-9091. Author: Jacek Krawczyk.]<![CDATA[Systems, Vol. 3, Pages 133-151: A Management Framework for Municipal Solid Waste Systems and Its Application to Food Waste Prevention]]>
http://www.mdpi.com/2079-8954/3/3/133
Waste management is a complex task involving numerous waste fractions, a range of technological treatment options, and many outputs that are circulated back into society. A systematic, interdisciplinary systems management framework was developed to facilitate the planning, implementation, and maintenance of sustainable waste systems. It aims not to replace existing decision-making approaches, but rather to enable their integration to allow for inclusion of overall sustainability concerns and address the complexity of solid waste management. The framework defines key considerations for system design, steps for performance monitoring, and approaches for facilitating continual system improvements. It was developed by critically examining the literature to determine what aspects of a management framework would be most effective at improving systems management for complex waste systems. The framework was applied to food waste management as a theoretical case study to exemplify how it can serve as a systems management tool for complex waste systems, as well as address obstacles typically faced in the field. Its benefits include the integration of existing waste system assessment models; the inclusion of environmental, economic, and social priorities; efficient performance monitoring; and a structure to continually define, review, and improve systems. This framework may have broader implications for addressing sustainability in other disciplines. [Systems, Vol. 3, Pages 133-151. Article. Published 2015-08-21. doi: 10.3390/systems3030133. ISSN 2079-8954. Authors: Krista Thyberg, David Tonjes.]<![CDATA[Algorithms, Vol. 8, Pages 697-711: Comparative Study of DE, PSO and GA for Position Domain PID Controller Tuning]]>
http://www.mdpi.com/1999-4893/8/3/697
Gain tuning is very important in order to obtain good performance from a given controller. Contour tracking performance is mainly determined by the selected control gains of a position domain PID controller. In this paper, three popular evolutionary algorithms are utilized to optimize the gains of a position domain PID controller for performance improvement of contour tracking of robotic manipulators. Differential Evolution (DE), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO) are used to determine the optimal gains of the position domain PID controller, and three distinct fitness functions are also used to quantify the contour tracking performance of each solution set. Simulation results show that DE features the highest performance indexes for both linear and nonlinear contour tracking, while PSO is quite efficient for linear contour tracking. Both algorithms consistently performed better than GA, which exhibited premature convergence in all cases. [Algorithms, Vol. 8, Pages 697-711. Article. Published 2015-08-21. doi: 10.3390/a8030697. ISSN 1999-4893. Authors: Puren Ouyang, Vangjel Pano.]<![CDATA[Algorithms, Vol. 8, Pages 680-696: Network Community Detection on Metric Space]]>
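To illustrate how a differential evolution search like the one in the abstract above tunes controller gains, the sketch below implements a minimal canonical DE/rand/1/bin loop. The paper's robot model and fitness functions are not available here, so a simple quadratic error surface stands in for the contour-tracking fitness, and the three bounded decision variables stand in for PID-style gains; all names and parameter values are illustrative assumptions, not the authors' implementation.

```python
import random

def differential_evolution(fitness, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=1):
    """Minimal DE/rand/1/bin minimizer. `bounds` is a list of (low, high)
    pairs, one per decision variable (e.g., three PID-style gains)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [fitness(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals, all different from i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the search bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            t_score = fitness(trial)
            if t_score <= scores[i]:  # greedy one-to-one selection
                pop[i], scores[i] = trial, t_score
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# Stand-in fitness: a quadratic surrogate whose minimum is at gains (2, 2, 2).
gains, err = differential_evolution(
    lambda g: sum((gi - 2.0) ** 2 for gi in g),
    bounds=[(0.0, 10.0)] * 3)
```

In the paper's setting the lambda would be replaced by a simulation of the closed-loop system returning a tracking-error index; the DE loop itself is unchanged.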
http://www.mdpi.com/1999-4893/8/3/680
Community detection in a complex network is an important problem of much interest in recent years. In general, a community detection algorithm chooses an objective function and captures the communities of the network by optimizing the objective function, and then various heuristics are used to solve the optimization problem and extract the interesting communities for the user. In this article, we demonstrate the procedure to transform a graph into points of a metric space and develop methods of community detection with the help of a metric defined for a pair of points. We have also studied and analyzed the community structure of the network therein. The results obtained with our approach are very competitive with most of the well-known algorithms in the literature, as demonstrated over a large collection of datasets. On the other hand, the time taken by our algorithm is considerably less than that of other methods, which is consistent with the theoretical findings. [Algorithms, Vol. 8, Pages 680-696. Article. Published 2015-08-21. doi: 10.3390/a8030680. ISSN 1999-4893. Authors: Suman Saha, Satya Ghrera.]<![CDATA[Algorithms, Vol. 8, Pages 669-679: Expanding the Applicability of a Third Order Newton-Type Method Free of Bilinear Operators]]>
http://www.mdpi.com/1999-4893/8/3/669
This paper is devoted to the semilocal convergence, using centered hypotheses, of a third order Newton-type method in a Banach space setting. The method is free of bilinear operators and is therefore of interest for the solution of systems of equations. A variant using only divided differences, which does not impose any type of Fréchet differentiability on the operator, is also analyzed. [Algorithms, Vol. 8, Pages 669-679. Article. Published 2015-08-21. doi: 10.3390/a8030669. ISSN 1999-4893. Authors: Sergio Amat, Sonia Busquier, Concepción Bermúdez, Ángel Magreñán.]<![CDATA[Mathematics, Vol. 3, Pages 880-890: A Note on Necessary Optimality Conditions for a Model with Differential Infectivity in a Closed Population]]>
http://www.mdpi.com/2227-7390/3/3/880
The aim of this note is to present the necessary optimality conditions for a model (in a closed population) of an immunizing disease similar to hepatitis B. We study the impact of medical tests and controls involved in curing this kind of immunizing disease and deduce a well-posed adjoint system when an optimal control exists. [Mathematics, Vol. 3, Pages 880-890. Article. Published 2015-08-21. doi: 10.3390/math3030880. ISSN 2227-7390. Author: Yannick Kouakep.]<![CDATA[Mathematics, Vol. 3, Pages 843-879: Chern-Simons Path Integrals in S2 × S1]]>
http://www.mdpi.com/2227-7390/3/3/843
Using torus gauge fixing, Hahn in 2008 wrote down an expression for a Chern-Simons path integral to compute the Wilson Loop observable, using the Chern-Simons action \(S_{CS}^\kappa\), where \(\kappa\) is a parameter. Instead of making sense of the path integral over the space of \(\mathfrak{g}\)-valued smooth 1-forms on \(S^2 \times S^1\), we use the Segal Bargmann transform to define the path integral over \(B_i\), the space of \(\mathfrak{g}\)-valued holomorphic functions over \(\mathbb{C}^2 \times \mathbb{C}^{i-1}\). This approach was first used by us in 2011. The main tools used are the abstract Wiener measure and analytic continuation of the Wiener integral. Using the above approach, we will show that the Chern-Simons path integral can be written as a linear functional defined on \(C(B_1^{\times^4} \times B_2^{\times^2}, \mathbb{C})\), and this linear functional is similar to the Chern-Simons linear functional defined by us in 2011 for the Chern-Simons path integral in the case of \(\mathbb{R}^3\). We will define the Wilson Loop observable using this linear functional and explicitly compute it; the expression is dependent on the parameter \(\kappa\). The second half of the article concentrates on taking \(\kappa\) to infinity for the Wilson Loop observable, to obtain link invariants. As an application, we will compute the Wilson Loop observable in the case of \(SU(N)\) and \(SO(N)\). In these cases, the Wilson Loop observable reduces to a state model. We will show that the state models satisfy a Jones type skein relation in the case of \(SU(N)\) and a Conway type skein relation in the case of \(SO(N)\). By imposing a quantization condition on the charge of the link \(L\), we will show that the state models are invariant under the Reidemeister Moves and hence the Wilson Loop observables indeed define a framed link invariant.
This approach follows the one used in an article written by us in 2012 for the case of \(\mathbb{R}^3\). [Mathematics, Vol. 3, Pages 843-879. Article. Published 2015-08-21. doi: 10.3390/math3030843. ISSN 2227-7390. Author: Adrian Lim.]<![CDATA[Information, Vol. 6, Pages 505-521: News Schemes for Activity Recognition Systems Using PCA-WSVM, ICA-WSVM, and LDA-WSVM]]>
http://www.mdpi.com/2078-2489/6/3/505
Feature extraction and classification are two key steps for activity recognition in a smart home environment. In this work, we used three methods for feature extraction: Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA). The new features selected by each method are then used as the inputs for a Weighted Support Vector Machines (WSVM) classifier. This classifier is used to handle the problem of imbalanced activity data from the sensor readings. The experiments, implemented on multiple real-world datasets with Conditional Random Fields (CRF), standard Support Vector Machines (SVM), Weighted SVM, and the combined methods PCA+WSVM, ICA+WSVM, and LDA+WSVM, showed that LDA+WSVM achieved a higher recognition rate than the other methods for activity recognition. [Information, Vol. 6, Pages 505-521. Article. Published 2015-08-20. doi: 10.3390/info6030505. ISSN 2078-2489. Authors: M’hamed Abidine, Belkacem Fergani.]<![CDATA[IJGI, Vol. 4, Pages 1512-1529: Investigation of Travel and Activity Patterns Using Location-based Social Network Data: A Case Study of Active Mobile Social Media Users]]>
http://www.mdpi.com/2220-9964/4/3/1512
Due to their relatively high availability and low cost, location-based social network (LBSN) data (e.g., from Foursquare), a popular type of volunteered geographic information, seem to be an alternative or complement to survey data in the study of travel behavior and activity analysis. Indeed, a number of recent studies have attempted to use LBSN data (e.g., Foursquare check-ins) to investigate patterns of human travel and activity. Of particular note is that, compared to other individual-level characteristics of users, such as age, profession, education, income and so forth, gender is relatively highly available in the profiles of Foursquare users. Moreover, considering gender differences in travel and activity analysis is a popular research topic and is helpful in better understanding the changes in women’s roles in family, labor force participation, society and so forth. Therefore, this paper empirically investigates how gender influences the travel and activity patterns of active local Foursquare users in New York City. Empirical investigations of gender differences in travel and activity patterns are conducted at both the individual and aggregate level. The empirical results reveal that there are gender differences in the travel and activity patterns of active local users in New York City at both the individual and aggregate level. Finally, the results of the empirical study and the extent to which LBSN data can be exploited to produce travel diary data are discussed. [ISPRS International Journal of Geo-Information, Vol. 4, Pages 1512-1529. Article. Published 2015-08-20. doi: 10.3390/ijgi4031512. ISSN 2220-9964. Authors: Yeran Sun, Ming Li.]<![CDATA[Mathematics, Vol. 3, Pages 781-842: Algebra of Complex Vectors and Applications in Electromagnetic Theory and Quantum Mechanics]]>
http://www.mdpi.com/2227-7390/3/3/781
A complex vector is a sum of a vector and a bivector and forms a natural extension of a vector. Complex vectors have certain special geometric properties and are treated as algebraic entities. They represent rotations along with specified orientation and direction in space. It has been shown that the association of a complex vector with its conjugate generates a complex vector space, and the corresponding basis elements defined from the complex vector and its conjugate form a closed complex four-dimensional linear space. The complexification process in complex vector space allows the generation of a higher n-dimensional geometric algebra from the (n - 1)-dimensional algebra by identifying the unit pseudoscalar with the square root of minus one. The spacetime algebra can be generated from the geometric algebra by considering a vector equal to the square root of plus one. The applications of complex vector algebra are discussed mainly in electromagnetic theory and in the dynamics of an elementary particle with extended structure. The complex vector formalism simplifies the expressions and elucidates geometrical understanding of the basic concepts. The analysis shows that the existence of spin transforms a classical oscillator into a quantum oscillator. In conclusion, classical mechanics combined with the zero-point field leads to quantum mechanics. [Mathematics, Vol. 3, Pages 781-842. Article. Published 2015-08-20. doi: 10.3390/math3030781. ISSN 2227-7390. Author: Kundeti Muralidhar.]<![CDATA[Algorithms, Vol. 8, Pages 656-668: Fifth-Order Iterative Method for Solving Multiple Roots of the Highest Multiplicity of Nonlinear Equation]]>
http://www.mdpi.com/1999-4893/8/3/656
A three-step iterative method with fifth-order convergence is presented as a new modification of Newton’s method. The method finds multiple roots of a nonlinear equation with unknown multiplicity m, where m is the highest multiplicity. Its order of convergence is analyzed and proved. Results for some numerical examples show the efficiency of the new method. [Algorithms, Vol. 8, Pages 656-668. Article. Published 2015-08-20. doi: 10.3390/a8030656. ISSN 1999-4893. Authors: Juan Liang, Xiaowu Li, Zhinan Wu, Mingsheng Zhang, Lin Wang, Feng Pan.]<![CDATA[Algorithms, Vol. 8, Pages 645-655: Local Convergence of an Optimal Eighth Order Method under Weak Conditions]]>
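The paper's specific fifth-order three-step scheme is not reproduced here. As a hedged baseline illustrating the multiple-root difficulty such methods address, the sketch below applies Newton's method to the classical transformation u(x) = f(x)/f'(x): a root of f of any multiplicity m is a simple root of u, so fast convergence is restored without knowing m in advance. All function names are illustrative assumptions.

```python
def newton_multiple_root(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Newton's method applied to u(x) = f(x)/f'(x). Since u has a simple
    zero wherever f has a zero of any multiplicity m, the iteration
    x_{k+1} = x_k - u(x_k)/u'(x_k) converges quadratically even at
    multiple roots, where plain Newton degrades to linear convergence."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0 or dfx == 0.0:
            break  # landed exactly on the root (or f' vanished)
        u = fx / dfx
        du = 1.0 - fx * d2f(x) / dfx ** 2  # u'(x) by the quotient rule
        step = u / du
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 1)^3 has a root of multiplicity 3 at x = 1.
root = newton_multiple_root(lambda x: (x - 1) ** 3,
                            lambda x: 3 * (x - 1) ** 2,
                            lambda x: 6 * (x - 1),
                            x0=2.0)
```

This baseline needs the second derivative, which is one reason higher-order derivative-light schemes such as the one in the abstract are of practical interest.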
http://www.mdpi.com/1999-4893/8/3/645
We study the local convergence of an eighth order Newton-like method to approximate a locally-unique solution of a nonlinear equation. Earlier studies, such as Chen et al. (2015), show convergence under hypotheses on the seventh derivative or even higher, although only the first derivative and the divided difference appear in these methods. The convergence in this study is shown under hypotheses only on the first derivative. Hence, the applicability of the method is expanded. Finally, numerical examples are also provided to show that our results apply to solve equations in cases where earlier studies cannot apply. [Algorithms, Vol. 8, Pages 645-655. Article. Published 2015-08-19. doi: 10.3390/a8030645. ISSN 1999-4893. Authors: Ioannis Argyros, Ramandeep Behl, S.S. Motsa.]<![CDATA[IJGI, Vol. 4, Pages 1500-1511: Historical Urban Land Use Transformation in Virtual Geo-Library]]>
http://www.mdpi.com/2220-9964/4/3/1500
As countries become increasingly urbanized, understanding how urban areas are changing within the landscape becomes increasingly important. Urbanized areas are often the strongest indicators of human interaction with the environment, and understanding how urban areas develop through remotely sensed data allows for more sustainable practices. The Landsat satellite sensor, a remote sensing platform with the ability to capture global data, has rapidly proven itself an invaluable tool for studying the growth of urban areas. In this study, we present a virtual geo-library as a geovisualization tool providing analytical studies of the urbanization process in Malang City, East Java, Indonesia, using images derived from the Landsat sensor family (1989 to 2014). We provide dynamic geovisualization through the virtual geo-library, through which users can obtain valuable scientific information (e.g., urban area changes and land use transformation in higher land). This system is also equipped with tools that enable users to create automatic cartographic maps and export the results as a digital PDF file. [ISPRS International Journal of Geo-Information, Vol. 4, Pages 1500-1511. Article. Published 2015-08-19. doi: 10.3390/ijgi4031500. ISSN 2220-9964. Authors: Fatwa Ramdani, Alfian Putra, Bayu Utomo.]<![CDATA[Symmetry, Vol. 7, Pages 1455-1462: Estrada and L-Estrada Indices of Edge-Independent Random Graphs]]>
http://www.mdpi.com/2073-8994/7/3/1455
Let \(G\) be a simple graph of order \(n\) with eigenvalues \(\lambda_1,\lambda_2,\cdots,\lambda_n\) and normalized Laplacian eigenvalues \(\mu_1,\mu_2,\cdots,\mu_n\). The Estrada index and normalized Laplacian Estrada index are defined as \(EE(G)=\sum_{k=1}^ne^{\lambda_k}\) and \(\mathcal{L}EE(G)=\sum_{k=1}^ne^{\mu_k-1}\), respectively. We establish upper and lower bounds on \(EE\) and \(\mathcal{L}EE\) for edge-independent random graphs, which contain the classical Erdős–Rényi graphs as special cases.Symmetry2015-08-1973Technical Note10.3390/sym7031455145514622073-89942015-08-19doi: 10.3390/sym7031455Yilun Shang<![CDATA[Systems, Vol. 3, Pages 109-132: Statistical Model Selection for Better Prediction and Discovering Science Mechanisms That Affect Reliability]]>
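The two indices defined in the abstract above can be checked numerically. The following NumPy sketch (not from the paper, purely illustrative) computes \(EE(G)\) and \(\mathcal{L}EE(G)\) from an adjacency matrix, using the path graph \(P_3\), whose adjacency eigenvalues are \(-\sqrt{2},0,\sqrt{2}\), as a toy check:

```python
import numpy as np

def estrada_index(A):
    """EE(G) = sum_k exp(lambda_k), from the adjacency spectrum."""
    eigvals = np.linalg.eigvalsh(A)          # eigenvalues of a symmetric matrix
    return np.exp(eigvals).sum()

def normalized_laplacian_estrada_index(A):
    """LEE(G) = sum_k exp(mu_k - 1), from the normalized Laplacian spectrum."""
    d = A.sum(axis=1)                        # vertex degrees
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    # L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    mu = np.linalg.eigvalsh(L)
    return np.exp(mu - 1.0).sum()

# Path graph P3: adjacency eigenvalues -sqrt(2), 0, sqrt(2);
# normalized Laplacian eigenvalues 0, 1, 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(estrada_index(A))                       # 2*cosh(sqrt(2)) + 1 ≈ 5.356
print(normalized_laplacian_estrada_index(A))  # e + 1 + 1/e ≈ 4.086
```

For a random-graph experiment in the spirit of the paper, `A` would be sampled with independent edges and the indices compared against the bounds.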
http://www.mdpi.com/2079-8954/3/3/109
Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. Finally, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.Systems2015-08-1933Article10.3390/systems30301091091322079-89542015-08-19doi: 10.3390/systems3030109Christine Anderson-CookJerome MorzinskiKenneth Blecker<![CDATA[Electronics, Vol. 4, Pages 541-564: Beyond the Interconnections: Split Manufacturing in RF Designs]]>
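Steps three and four of the process described above, fitting models over combinations of the candidate inputs and comparing them with a metric, can be sketched as follows. The inputs, synthetic data, and the choice of ordinary least squares with a Gaussian AIC are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from itertools import combinations

def ols_aic(X, y):
    """Gaussian AIC for an OLS fit: one flexible metric for comparing models."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])    # add an intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    return n * np.log(rss / n) + 2 * Xd.shape[1]

# Hypothetical candidate inputs (step 1) organized as columns (step 2)
rng = np.random.default_rng(5)
inputs = {"age": rng.normal(size=60), "usage": rng.normal(size=60),
          "exposure": rng.normal(size=60)}
names = list(inputs)
Xall = np.column_stack([inputs[k] for k in names])
y = 2.0 * inputs["age"] - 1.0 * inputs["usage"] + rng.normal(0, 0.5, 60)

# Steps 3-4: estimate every combination of inputs and rank by the metric,
# producing a prioritized list for subject matter experts to review (step 5)
scores = []
for r in range(1, len(names) + 1):
    for combo in combinations(range(len(names)), r):
        scores.append((ols_aic(Xall[:, list(combo)], y),
                       [names[i] for i in combo]))
for aic, combo in sorted(scores)[:3]:
    print(round(aic, 1), combo)
```

Model complexity is penalized by the AIC term `2 * k`, echoing the paper's point that prediction quality must be weighed against complexity.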
http://www.mdpi.com/2079-9292/4/3/541
With the globalization of the integrated circuit (IC) design and fabrication flow, intellectual property (IP) piracy is becoming the main security threat. While most protection methods are dedicated to digital circuits, we aim to protect radio-frequency (RF) designs. For the first time, we applied the split manufacturing method to RF circuit protection. Three different implementation cases are introduced for security and design overhead tradeoffs, i.e., the removal of the top metal layer, the removal of the top two metal layers and the design obfuscation dedicated to RF circuits. We also developed a quantitative security evaluation method to measure the protection level of RF designs under split manufacturing. Finally, a simple Class AB power amplifier and a more sophisticated Class E power amplifier are used for the demonstration, through which we prove that: (1) the removal of the top metal layer or the top two metal layers can provide high-level protection for RF circuits while placing lower demands on domestic foundries; (2) the design obfuscation method provides the highest level of circuit protection, though at the cost of design overhead; and (3) split manufacturing may be more suitable for RF designs than for digital circuits, and it can effectively reduce IP piracy in untrusted off-shore foundries.Electronics2015-08-1843Article10.3390/electronics40305415415642079-92922015-08-18doi: 10.3390/electronics4030541Yu BiJiann YuanYier Jin<![CDATA[Mathematics, Vol. 3, Pages 758-780: The Segal–Bargmann Transform for Odd-Dimensional Hyperbolic Spaces]]>
http://www.mdpi.com/2227-7390/3/3/758
We develop isometry and inversion formulas for the Segal–Bargmann transform on odd-dimensional hyperbolic spaces that are as parallel as possible to the dual case of odd-dimensional spheres.Mathematics2015-08-1833Article10.3390/math30307587587802227-73902015-08-18doi: 10.3390/math3030758Brian HallJeffrey Mitchell<![CDATA[IJGI, Vol. 4, Pages 1480-1499: Point Cluster Analysis Using a 3D Voronoi Diagram with Applications in Point Cloud Segmentation]]>
http://www.mdpi.com/2220-9964/4/3/1480
Three-dimensional (3D) point analysis and visualization is one of the most effective methods of point cluster detection and segmentation in geospatial datasets. However, serious scattering and clotting characteristics interfere with the visual detection of 3D point clusters. To overcome this problem, this study proposes the use of 3D Voronoi diagrams to analyze and visualize 3D points instead of the original data items. The proposed algorithm computes clusters of 3D points by applying a set of 3D Voronoi cells to describe and quantify the points. The decomposition of the point clouds of 3D models is guided by the 3D Voronoi cell parameters. The parameter values are mapped from the Voronoi cells to 3D points to show the spatial pattern and relationships; thus, a 3D point cluster pattern can be highlighted and easily recognized. To capture different cluster patterns, continuous progressive clusters and segmentations are tested. The 3D spatial relationship is shown to facilitate cluster detection. Furthermore, the generated segmentations of real 3D data cases are exploited to demonstrate the feasibility of our approach in detecting different spatial clusters for continuous point cloud segmentation.ISPRS International Journal of Geo-Information2015-08-1843Article10.3390/ijgi4031480148014992220-99642015-08-18doi: 10.3390/ijgi4031480Shen YingGuang XuChengpeng LiZhengyuan Mao<![CDATA[IJGI, Vol. 4, Pages 1442-1479: Integrating Legal and Physical Dimensions of Urban Environments]]>
http://www.mdpi.com/2220-9964/4/3/1442
Building Information Models (e.g., IFC) and virtual 3D city models (e.g., CityGML) are revolutionising the way we manage information about our cities. However, the main focus of these models is on the physical and functional characteristics of urban properties and facilities, which neglects the legal and ownership aspects. In contrast, cadastral data models, such as the Land Administration Domain Model (LADM), have been developed for legal information management purposes and model legal objects such as ownership boundaries without providing correspondence to the object’s physical attributes. Integration of legal and physical objects in the virtual 3D city and cadastral models would maximise their utility and flexibility to support different applications that require an integrated resource of both legal and physical information, such as urban space management and land development processes. The aim of this paper is to propose a data model that supports both legal and physical information of urban environments. The methodology to develop this data model is to extend the core cadastral data model and integrate urban features into it. The outcome of the research can be utilised to extend the current data models to increase their usability for different applications that require both legal and physical information.ISPRS International Journal of Geo-Information2015-08-1743Article10.3390/ijgi4031442144214792220-99642015-08-17doi: 10.3390/ijgi4031442Ali AienAbbas RajabifardMohsen KalantariDavood Shojaei<![CDATA[IJGI, Vol. 4, Pages 1423-1441: Comparative Analysis on Two Schemes for Synthesizing the High Temporal Landsat-like NDVI Dataset Based on the STARFM Algorithm]]>
http://www.mdpi.com/2220-9964/4/3/1423
The NDVI dataset with high temporal and spatial resolution (HTSN) is significant for extracting information about the phenological change of vegetation in regions with a complex earth surface. The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) has been successfully applied to synthesize the HTSN by fusing data with different characteristics. Based on the model, there are two different schemes for synthesizing the HTSN. One scheme is that red reflectance and near-infrared (NIR) reflectance are synthesized, respectively, and the HTSN is then obtained through algebraic operation (Scheme 1); the other scheme is that the red and NIR reflectance are used to calculate NDVI, which is directly taken as input data to synthesize the HTSN (Scheme 2). In this paper, taking the hilly areas of eastern Sichuan, China, as a case, the two schemes were compared with each other. Seven Landsat images and time-series MOD13Q1 datasets spanning from October 2001 to February 2003 were used as the test data. The results showed that the prediction accuracies of the HTSNs derived by the two different schemes were generally in good agreement, and Scheme 2 was slightly superior to Scheme 1 (R2: 0.14 &lt; Scheme 1 &lt; 0.53; 0.15 &lt; Scheme 2 &lt; 0.53). Although the two HTSNs showed high temporal and spatial consistency, the small spatiotemporal difference between them had a different influence on different applications. The coincidence rate of cropping intensity extracted from the two derived HTSNs was fairly high, reaching up to 93.86%, while the coincidence rate of crop peak dates (i.e., the emerging dates of peaks in an annual time-series NDVI curve) was only 70.95%. Therefore, it is deemed that Scheme 2 can replace Scheme 1 in the application of extracting cropping intensity, so that more calculation time and memory space can be saved. 
For extracting more quantitative crop phenological information like crop peak dates, more tests are still needed in order to compare the absolute accuracy for both schemes.ISPRS International Journal of Geo-Information2015-08-1743Article10.3390/ijgi4031423142314412220-99642015-08-17doi: 10.3390/ijgi4031423Ainong LiWei ZhangGuangbin LeiJinhu Bian<![CDATA[Symmetry, Vol. 7, Pages 1436-1454: Asymmetry Assessment Using Surface Topography in Healthy Adolescents]]>
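The difference between the two schemes stems from the nonlinearity of NDVI = (NIR − red)/(NIR + red): fusing the bands and then computing NDVI (Scheme 1) is not algebraically identical to computing NDVI first and fusing the NDVI images (Scheme 2). The toy sketch below substitutes a simple weighted blend for the actual STARFM fusion step (an assumption made purely to expose the discrepancy; STARFM itself weights spectrally similar neighbouring pixels):

```python
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red)

def toy_fuse(fine_base, coarse_base, coarse_pred, w=0.6):
    """Hypothetical stand-in for STARFM: shift the fine-scale base image by a
    weighted fraction of the change observed at coarse scale."""
    return fine_base + w * (coarse_pred - coarse_base)

rng = np.random.default_rng(0)
red_f, nir_f = rng.uniform(0.05, 0.2, 100), rng.uniform(0.3, 0.6, 100)
red_c, nir_c = red_f + 0.01, nir_f - 0.01          # coarse images at base date
red_cp, nir_cp = red_c * 1.1, nir_c * 0.95         # coarse images at target date

# Scheme 1: fuse red and NIR separately, then compute NDVI
ndvi_s1 = ndvi(toy_fuse(red_f, red_c, red_cp), toy_fuse(nir_f, nir_c, nir_cp))
# Scheme 2: compute NDVI first, then fuse the NDVI images directly
ndvi_s2 = toy_fuse(ndvi(red_f, nir_f), ndvi(red_c, nir_c), ndvi(red_cp, nir_cp))

print(np.abs(ndvi_s1 - ndvi_s2).mean())  # small but non-zero: the schemes differ
```

This mirrors the paper's finding: the two outputs agree closely but not exactly, so applications sensitive to small NDVI differences (e.g., peak dates) can diverge.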
http://www.mdpi.com/2073-8994/7/3/1436
The ability to assess geometric asymmetry in the torsos of individuals is important for detecting Adolescent Idiopathic Scoliosis (AIS). A markerless technique using Surface Topography (ST) has been introduced as a non-invasive alternative to standard diagnostic radiographs. The technique has been used to identify asymmetry patterns associated with AIS. However, the presence and nature of asymmetries in the healthy population has not been properly studied. The purpose of this study is therefore to identify asymmetries and potential relationships to developmental factors such as age, gender, hand dominance and unilateral physical activity in healthy adolescents. Full torso scans of 83 participants were analyzed. Using Geomagic, deviation contour maps (DCMs) were created by reflecting the torso along the best plane of sagittal symmetry, with each spectrum normalized. Two classes of asymmetry were observed, twist and thickness, each with subgroupings. Averaged interobserver and intraobserver Kappas for twist subgroupings were 0.84 and 0.84, respectively, and for thickness subgroupings were 0.53 and 0.63, respectively. Furthermore, significant relationships were observed between specific types of asymmetry and gender, with females displaying predominantly twist asymmetry and males thickness asymmetry. However, no relationships were found between type of asymmetry and age, hand dominance or unilateral physical activity. Understanding asymmetries in healthy subjects will continue to enhance the assessment ability of the markerless ST technique.Symmetry2015-08-1773Article10.3390/sym7031436143614542073-89942015-08-17doi: 10.3390/sym7031436Connie HoEric ParentElise WatkinsMarc MoreauDouglas HeddenMarwan El-RichSamer Adeeb<![CDATA[Symmetry, Vol. 7, Pages 1410-1435: Lie and Conditional Symmetries of a Class of Nonlinear (1 + 2)-Dimensional Boundary Value Problems]]>
http://www.mdpi.com/2073-8994/7/3/1410
A new definition of conditional invariance for boundary value problems involving a wide range of boundary conditions (including initial value problems as a special case) is proposed. It is shown that other definitions, worked out in order to find Lie symmetries of boundary value problems with standard boundary conditions, follow as particular cases from our definition. Simple examples of direct applicability to nonlinear problems arising in applications are demonstrated. Moreover, the successful application of the definition to the Lie and conditional symmetry classification of a class of (1 + 2)-dimensional nonlinear boundary value problems governed by the nonlinear diffusion equation in a semi-infinite domain is realised. In particular, it is proven that there is a special exponent, k = −2, for the power diffusivity u^k, for which the problem in question with non-vanishing flux on the boundary admits additional Lie symmetry operators compared to the case k ≠ −2. In order to demonstrate the applicability of the symmetries derived, they are used for reducing the nonlinear problems with power diffusivity u^k and a constant non-zero flux on the boundary (such problems are common in applications and describe a wide range of phenomena) to (1 + 1)-dimensional problems. The structure and properties of the problems obtained are briefly analysed. Finally, some results demonstrating how Lie invariance of the boundary value problem in question depends on the geometry of the domain are presented.Symmetry2015-08-1773Article10.3390/sym7031410141014352073-89942015-08-17doi: 10.3390/sym7031410Roman ChernihaJohn King<![CDATA[Information, Vol. 6, Pages 494-504: Applying the Upper Integral to the Biometric Score Fusion Problem in the Identification Model]]>
http://www.mdpi.com/2078-2489/6/3/494
This paper presents a new biometric score fusion approach in an identification system using the upper integral with respect to Sugeno’s fuzzy measure. First, the proposed method considers each individual matcher as a fuzzy set in order to handle uncertainty and imperfection in matching scores. Then, the corresponding fuzzy entropy estimates the reliability of the information provided by each biometric matcher. Next, the fuzzy densities are generated based on rank information and training accuracy. Finally, the results are aggregated using the upper fuzzy integral. Experimental results compared with other fusion methods demonstrate the good performance of the proposed approach.Information2015-08-1463Article10.3390/info60304944945042078-24892015-08-14doi: 10.3390/info6030494Khalid FakharMohamed AroussiMohamed SaidiDriss Aboutajdine<![CDATA[Information, Vol. 6, Pages 481-493: Black Box Traceable Ciphertext Policy Attribute-Based Encryption Scheme]]>
http://www.mdpi.com/2078-2489/6/3/481
In the existing attribute-based encryption (ABE) scheme, the authority (i.e., the private key generator (PKG)) is able to calculate and issue any user’s private key, so it must be completely trusted; this severely limits the applications of the ABE scheme. To mitigate this problem, we propose the black box traceable ciphertext policy attribute-based encryption (T-CP-ABE) scheme, in which if the PKG re-distributes users’ private keys for malicious uses, it might be caught and sued. We provide a construction to realize the T-CP-ABE scheme in a black box model. Our scheme is based on the decisional bilinear Diffie-Hellman (DBDH) assumption in the standard model. In our scheme, we employ a pair (ID, S) to identify a user, where ID denotes the identity of a user and S denotes the attribute set associated with her.Information2015-08-1463Article10.3390/info60304814814932078-24892015-08-14doi: 10.3390/info6030481Xingbing FuXuyun NieFagen Li<![CDATA[Mathematics, Vol. 3, Pages 746-757: A Moonshine Dialogue in Mathematical Physics]]>
http://www.mdpi.com/2227-7390/3/3/746
Phys and Math are two colleagues at the University of Saçenbon (Crefan Kingdom), dialoguing about the remarkable efficiency of mathematics for physics. They talk about the notches on the Ishango bone and the various uses of psi in maths and physics; they arrive at dessins d’enfants, moonshine concepts, Rademacher sums and their significance in the quantum world. You should not miss their eccentric proposal of relating Bell’s theorem to the Baby Monster group. Their hyperbolic polygons show a considerable singularity/cusp structure that our modern age of computers is able to capture. Henri Poincaré would have been happy to see it.Mathematics2015-08-1433Essay10.3390/math30307467467572227-73902015-08-14doi: 10.3390/math3030746Michel Planat<![CDATA[Mathematics, Vol. 3, Pages 727-745: From Classical to Discrete Gravity through Exponential Non-Standard Lagrangians in General Relativity]]>
http://www.mdpi.com/2227-7390/3/3/727
Recently, non-standard Lagrangians have gained a growing importance in theoretical physics and in the theory of non-linear differential equations. However, their formulations and implications in general relativity are still in their infancy despite some advances in contemporary cosmology. The main aim of this paper is to fill this gap. Though non-standard Lagrangians may be defined in a multitude of forms, in this paper we consider the exponential type. One basic feature of exponential non-standard Lagrangians concerns the modified Euler-Lagrange equation obtained from the standard variational analysis. Accordingly, when applied to spacetime geometries, one unsurprisingly expects modified geodesic equations. However, when taking into account the time-like path parameterization constraint, remarkably, it was observed that mutually discrete gravity and discrete spacetime emerge in the theory. Two different independent cases were obtained: a geometrical manifold with new spacetime coordinates augmented by a metric signature change, and a geometrical manifold characterized by a discretized spacetime metric. Both cases give rise to Einstein’s field equations, yet the gravity is discretized and originates from “spacetime discreteness”. A number of mathematical and physical implications of these results are discussed throughout this paper and perspectives are given accordingly.Mathematics2015-08-1433Article10.3390/math30307277277452227-73902015-08-14doi: 10.3390/math3030727Rami El-Nabulsi<![CDATA[Administrative Sciences, Vol. 5, Pages 148-164: Learning from the Co-operative Institutional Model: How to Enhance Organizational Robustness of Third Sector Organizations with More Pluralistic Forms of Governance]]>
http://www.mdpi.com/2076-3387/5/3/148
Third sector organizations are oftentimes seen as contributing to a robust civil society. Yet the dominant modes of third sector organizational governance often adhere to a unitary orientation. The over-reliance on unitary modes of governance introduces two challenges: first, organizational stakeholders are kept from utilizing participatory mechanisms that would enable them to act as societal intermediaries, and; second, these organizations may underperform due to the artificial separation of stakeholders from participating in governance. This paper addresses calls to widen our knowledge by translating theory into practice through a discussion about the efficacy of pluralistic governance. The co-operative enterprise is introduced to focus analyses on pluralist modes of stakeholder governance. A specific co-operative’s governance structure and practice is introduced—Choctaw Electric Co-operative—through an archival analysis of secondary media accounts of a stakeholder-led reform initiative in rural Oklahoma. The Ostrom Design Principles—a diagnostic used to assess institutional robustness—are applied to demonstrate the shortsightedness of unitary governance, and highlight the potential benefits of pluralistic stakeholder engagement. Knowledge is widened in two ways: first, empirical analyses of co-operative enterprise may provide significant insights and innovations in third sector governance, and; second, proper systems of pluralistic governance exhibit enormous capacity to better orient the firm toward better serving the stakeholder base, improving performance and institutional robustness, while empowering stakeholders as societal intermediaries.Administrative Sciences2015-08-1453Article10.3390/admsci50301481481642076-33872015-08-14doi: 10.3390/admsci5030148Keith Taylor<![CDATA[IJFS, Vol. 3, Pages 381-392: A Probit Model for the State of the Greek GDP Growth]]>
http://www.mdpi.com/2227-7072/3/3/381
The paper provides probability estimates of the state of GDP growth. A regime-switching model defines the probability of the Greek GDP being in boom or recession. Then probit models extract the predictive information of a set of explanatory (economic and financial) variables regarding the state of GDP growth. Both contemporaneous and lagged relationships between the explanatory variables and the state of GDP growth are examined. The mean absolute distance (MAD) between the probability of not being in recession and the probability estimated by the probit model is the function that evaluates the performance of the models. The probit model with the industrial production index and the realized volatility as the explanatory variables has the lowest MAD value of 6.43% (7.94%) in the contemporaneous (lagged) relationship.International Journal of Financial Studies2015-08-1333Article10.3390/ijfs30303813813922227-70722015-08-13doi: 10.3390/ijfs3030381Stavros Degiannakis<![CDATA[IJGI, Vol. 4, Pages 1389-1422: A Volunteered Geographic Information Framework to Enable Bottom-Up Disaster Management Platforms]]>
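The two ingredients named in the abstract above, the probit link and the MAD evaluation function, can be sketched in a few lines. The design matrix, coefficients, and regime probabilities below are made-up numbers for illustration (the paper's explanatory variables are the industrial production index and realized volatility):

```python
import numpy as np
from scipy.stats import norm

def probit_prob(X, beta):
    """Probit link: P(boom_t = 1 | x_t) = Phi(x_t' beta)."""
    return norm.cdf(X @ beta)

def mad(p_regime, p_probit):
    """Mean absolute distance between the regime-switching probability of not
    being in recession and the probit model's estimated probability."""
    return np.mean(np.abs(np.asarray(p_regime) - np.asarray(p_probit)))

# Hypothetical data: intercept plus one standardized explanatory variable
X = np.column_stack([np.ones(4), [0.5, -1.0, 1.2, -0.3]])
beta = np.array([0.2, 1.5])                  # assumed fitted coefficients
p_hat = probit_prob(X, beta)

# Regime-switching probabilities of "not in recession" (assumed values)
p_regime = [0.9, 0.1, 0.95, 0.4]
print(mad(p_regime, p_hat))                  # lower MAD = better model
```

In the paper this MAD criterion is what singles out the industrial-production/realized-volatility specification as the best performer.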
http://www.mdpi.com/2220-9964/4/3/1389
Recent disasters, such as the 2010 Haiti earthquake, have drawn attention to the potential role of citizens as active information producers. By using location-aware devices such as smartphones to collect geographic information in the form of geo-tagged text, photos, or videos, and sharing this information through online social media, such as Twitter, citizens create Volunteered Geographic Information (VGI). To effectively use this information for disaster management, we developed a VGI framework for the discovery of VGI. This framework consists of four components: (i) a VGI brokering module to provide a standard service interface to retrieve VGI from multiple resources based on spatial, temporal, and semantic parameters; (ii) a VGI quality control component, which employs semantic filtering and cross-referencing techniques to evaluate VGI; (iii) a VGI publisher module, which uses a service-based delivery mechanism to disseminate VGI, and (iv) a VGI discovery component to locate, browse, and query metadata about available VGI datasets. In a case study we employed a FOSS (Free and Open Source Software) strategy, open standards/specifications, and free/open data to show the utility of the framework. We demonstrate that the framework can facilitate data discovery for disaster management. The addition of quality metrics and a single aggregated source of relevant crisis VGI will allow users to make informed policy choices that could save lives, meet basic humanitarian needs earlier, and perhaps limit environmental and economic damage.ISPRS International Journal of Geo-Information2015-08-1343Article10.3390/ijgi4031389138914222220-99642015-08-13doi: 10.3390/ijgi4031389Mohammad PooraziziAndrew HunterStefan Steiniger<![CDATA[Mathematics, Vol. 3, Pages 690-726: Root Operators and “Evolution” Equations]]>
http://www.mdpi.com/2227-7390/3/3/690
Root-operator factorization à la Dirac provides an effective tool to deal with equations which are not of evolution type, or are ruled by fractional differential operators, thus eventually yielding evolution-like equations, although for a multicomponent vector. We will review the method along with its extension to root operators of degree higher than two. Also, we will show the results obtained by the Dirac method as well as results from other methods, specifically in connection with evolution-like equations ruled by square-root operators, which we will refer to as relativistic evolution equations.Mathematics2015-08-1333Article10.3390/math30306906907262227-73902015-08-13doi: 10.3390/math3030690Giuseppe DattoliAmalia Torre<![CDATA[Algorithms, Vol. 8, Pages 632-644: Data Fusion Modeling for an RT3102 and Dewetron System Application in Hybrid Vehicle Stability Testing]]>
http://www.mdpi.com/1999-4893/8/3/632
More and more hybrid electric vehicles are being driven, since they offer advantages such as energy savings and better active safety performance. Hybrid vehicles have two or more power driving systems and frequently switch working conditions, so stability control is very important. In this work, a two-stage Kalman algorithm is used to fuse data in hybrid vehicle stability testing. First, the RT3102 navigation system and the Dewetron system are introduced. Second, a data fusion model based on the Kalman filter is proposed. Then, this model is simulated and tested on a sample vehicle using Carsim and Simulink software. The results show the merits of this model.Algorithms2015-08-1283Article10.3390/a80306326326441999-48932015-08-12doi: 10.3390/a8030632Zhibin MiaoHongtian Zhang<![CDATA[IJFS, Vol. 3, Pages 351-380: The Swiss Black Swan Bad Scenario: Is Switzerland Another Casualty of the Eurozone Crisis?]]>
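The abstract does not reproduce the paper's two-stage algorithm; as a hedged illustration of the predict/correct recursion underlying any Kalman-filter data fusion, here is a minimal scalar filter on a synthetic signal (all noise settings are assumptions, not the paper's vehicle data):

```python
import numpy as np

def kalman_filter(z, x0=0.0, p0=1.0, q=1e-3, r=0.04):
    """Minimal scalar Kalman filter with a random-walk state model.
    q: process noise variance, r: measurement noise variance."""
    x, p, out = x0, p0, []
    for zk in z:
        p = p + q                  # predict: uncertainty grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (zk - x)       # correct with the measurement residual
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
truth = np.linspace(0, 1, 200)                     # slowly drifting true signal
noisy = truth + rng.normal(0, 0.2, 200)            # sensor reading; r = 0.2**2
est = kalman_filter(noisy)
print(np.abs(est - truth).mean(), np.abs(noisy - truth).mean())
```

The filtered error is substantially below the raw measurement error; the paper's two-stage variant applies this idea to fuse the RT3102 and Dewetron measurement streams.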
http://www.mdpi.com/2227-7072/3/3/351
Financial disasters for hedge funds, bank trading departments and individual speculative traders and investors seem always to occur because of non-diversification across all possible scenarios, overbetting, and being hit by a bad scenario. Black swans are the worst type of bad scenario: unexpected and extreme. The Swiss National Bank decision on 15 January 2015 to abandon the 1.20 peg against the Euro was a tremendous blow for many Swiss exporters, but also for Swiss and international investors, hedge funds, global macro funds and banks, as well as for the Swiss central bank itself. In this paper, we discuss the causes of this action, the money losers and the few winners, what it means for Switzerland, Europe and the rest of the world, what kinds of trades lost money and how the losses could have been prevented.International Journal of Financial Studies2015-08-1233Article10.3390/ijfs30303513513802227-70722015-08-12doi: 10.3390/ijfs3030351Sebastien LleoWilliam Ziemba<![CDATA[IJGI, Vol. 4, Pages 1366-1388: An Investigation into the Completeness of, and the Updates to, OpenStreetMap Data in a Heterogeneous Area in Brazil]]>
http://www.mdpi.com/2220-9964/4/3/1366
The integration of user-generated content made in a collaborative environment is being increasingly considered a valuable input to reference maps, even by official map agencies such as USGS and Ordnance Survey. In Brazil, decades of underinvestment have resulted in a topographic map coverage that is both outdated and unequally distributed throughout the territory. This paper aims to analyze the spatial distribution of updates of OpenStreetMap in rural and urban areas in the country to understand the patterns of user updates and their correlation with other economic and developmental variables. This analysis will contribute to generating the knowledge needed in order to consider the use of this data as part of a reference layer of the National Spatial Database Infrastructure, as well as to design strategies to encourage user action in specific areas.ISPRS International Journal of Geo-Information2015-08-1243Article10.3390/ijgi4031366136613882220-99642015-08-12doi: 10.3390/ijgi4031366Silvana CamboimJoão BravoClaudia Sluter<![CDATA[Information, Vol. 6, Pages 467-480: Optimization of China Crude Oil Transportation Network with Genetic Ant Colony Algorithm]]>
http://www.mdpi.com/2078-2489/6/3/467
Taking into consideration both shipping and pipeline transport, this paper first analysed the risk factors for different modes of crude oil import transportation. Then, based on the minimum of both transportation cost and overall risk, a multi-objective programming model was established to optimize the transportation network of crude oil import, and the genetic algorithm and ant colony algorithm were employed to solve the problem. The optimized result shows that VLCC (Very Large Crude Carrier) is superior in long distance sea transportation, whereas pipeline transport is more secure than sea transport. Finally, this paper provides related safeguard suggestions on crude oil import transportation.Information2015-08-1263Article10.3390/info60304674674802078-24892015-08-12doi: 10.3390/info6030467Yao WangJing Lu<![CDATA[Future Internet, Vol. 7, Pages 294-306: Enhancing Educational Opportunities with Computer-Mediated Assessment Feedback]]>
http://www.mdpi.com/1999-5903/7/3/294
As internet technologies make their way into developing areas, so too does the possibility of education and training being delivered to the people living in those previously unserved areas. The growing catalogue of free, high quality courseware, when combined with the newly acquired means of delivery, creates the potential for millions of people in the developing world to acquire a good education. Yet a good education obviously requires more than simply delivering information; students must also receive high quality feedback on their assessments. They must be told how their performance compares with the ideal, and be shown how to close the gap between the two. However, delivering high quality feedback is labor-intensive, and therefore expensive, and has long been recognized as a problematic issue by educators. This paper outlines a case study that uses a Learning Management System (LMS) to efficiently deliver detailed feedback that is informed by the principles of best practice. We make the case that the efficiencies of this method allow for large-scale courses with thousands of enrolments that are accessible to developing and developed areas alike. We explore the question: is computer-mediated feedback delivery efficient and effective, and might it be applied to large-scale courses at low cost?Future Internet2015-08-1173Article10.3390/fi70302942943061999-59032015-08-11doi: 10.3390/fi7030294David TuffleyAmy Antonio<![CDATA[Algorithms, Vol. 8, Pages 621-631: One-Bit Quantization and Distributed Detection with an Unknown Scale Parameter]]>
http://www.mdpi.com/1999-4893/8/3/621
We examine a distributed detection problem in a wireless sensor network, where sensor nodes collaborate to detect a Gaussian signal with an unknown change of power, i.e., a scale parameter. Due to power/bandwidth constraints, we consider the case where each sensor quantizes its observation into a binary digit. The binary data are then transmitted through error-prone wireless links to a fusion center, where a generalized likelihood ratio test (GLRT) detector is employed to make a global decision. We study the design of a binary quantizer based on an asymptotic analysis of the GLRT. Interestingly, the quantization threshold of the quantizer is independent of the unknown scale parameter. Numerical results are included to illustrate the performance of the proposed quantizer and GLRT in binary symmetric channels (BSCs).Algorithms2015-08-1183Article10.3390/a80306216216311999-48932015-08-11doi: 10.3390/a8030621Fei GaoLili GuoHongbin LiJun Fang<![CDATA[IJGI, Vol. 4, Pages 1346-1365: Q-SOS—A Sensor Observation Service for Accessing Quality Descriptions of Environmental Data]]>
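The one-bit quantization step described above can be illustrated with a toy simulation. The threshold value and noise settings below are hypothetical, but the qualitative point matches the abstract: the distribution of the transmitted bits reacts to the signal power, while the threshold itself need not depend on the unknown scale parameter:

```python
import numpy as np

def one_bit(x, tau):
    """Each sensor reports a single bit: is |observation| above the threshold?"""
    return (np.abs(x) > tau).astype(np.uint8)

rng = np.random.default_rng(2)
tau = 1.0          # hypothetical quantization threshold; the paper's result is
                   # that the GLRT-motivated threshold does not require knowing
                   # the unknown scale parameter

# H0: nominal signal power; H1: power scaled up by an unknown factor
bits_h0 = one_bit(rng.normal(0, 1.0, 10_000), tau)
bits_h1 = one_bit(rng.normal(0, 2.0, 10_000), tau)

# The fraction of 1-bits carries the power information the fusion center uses
print(bits_h0.mean(), bits_h1.mean())
```

A fusion center receiving these bits (possibly flipped by a binary symmetric channel) can then run a GLRT on the aggregate bit counts, as in the paper.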
http://www.mdpi.com/2220-9964/4/3/1346
The worldwide Sensor Web comprises observation data from diverse sources. Each data provider may process and assess datasets differently before making them available online. This information is often invisible to end users. Therefore, publishing observation data with quality descriptions is vital, as it helps users to assess the suitability of data for their applications. It is also important to capture contextual information concerning data quality, such as provenance, to trace incorrect data back to its origins. In the Open Geospatial Consortium (OGC)’s Sensor Web Enablement (SWE) framework, there is no practically applicable approach for systematically representing these aspects and making them accessible. This paper presents Q-SOS—an extension of the OGC’s Sensor Observation Service (SOS) that supports retrieval of observation data together with quality descriptions. These descriptions are represented in an observation data model covering various aspects of data quality assessment. The service and the data model have been developed based on open standards and open source tools, and are productively being used to share observation data from the TERENO observatory infrastructure. We discuss the advantages of deploying the presented solutions from data provider and consumer viewpoints. Enhancements applied to the related open-source developments are also introduced.ISPRS International Journal of Geo-Information2015-08-1043Article10.3390/ijgi4031346134613652220-99642015-08-10doi: 10.3390/ijgi4031346Anusuriya DevarajuSimon JirkaRalf KunkelJuergen Sorg<![CDATA[Econometrics, Vol. 3, Pages 610-632: Right on Target, or Is it? The Role of Distributional Shape in Variance Targeting]]>
http://www.mdpi.com/2225-1146/3/3/610
Estimation of GARCH models can be simplified by augmenting quasi-maximum likelihood (QML) estimation with variance targeting, which reduces the degree of parameterization and facilitates estimation. We compare the two approaches and investigate, via simulations, how non-normality features of the return distribution affect the quality of estimation of the volatility equation and corresponding value-at-risk predictions. We find that most GARCH coefficients and associated predictions are more precisely estimated when no variance targeting is employed. Bias properties are exacerbated for a heavier-tailed distribution of standardized returns, while distributional asymmetry has little to moderate impact, with these phenomena tending to be more pronounced under variance targeting. Some effects further intensify if one uses ML based on a leptokurtic distribution in place of normal QML. The sample size also has a more favorable effect on estimation precision when no variance targeting is used. Thus, if computational costs are not prohibitive, variance targeting should probably be avoided.Econometrics2015-08-1033Article10.3390/econometrics30306106106322225-11462015-08-10doi: 10.3390/econometrics3030610Stanislav AnatolyevStanislav Khrapov<![CDATA[Mathematics, Vol. 3, Pages 666-689: Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series]]>
http://www.mdpi.com/2227-7390/3/3/666
This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: first, mathematical interpolation methods, and second, statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address fitting of such data. We conclude that mathematical interpolation techniques and statistical curve fitting techniques complement each other and can add value to the study of one-dimensional time series seismographic data: they can be used to add more data to the system when the data set is not large enough to perform standard statistical tests.Mathematics2015-08-0733Article10.3390/math30306666666892227-73902015-08-07doi: 10.3390/math3030666Kanadpriya BasuMaria MarianiLaura SerpaRitwik Sinha<![CDATA[IJGI, Vol. 4, Pages 1336-1345: Large Scale Landform Mapping Using Lidar DEM]]>
http://www.mdpi.com/2220-9964/4/3/1336
In this study, LIDAR DEM data was used to obtain a primary landform map in accordance with a well-known methodology. This primary landform map was generalized using the Focal Statistics tool (Majority), considering the minimum area condition in cartographic generalization in order to obtain landform maps at 1:1000 and 1:5000 scales. Both the primary and the generalized landform maps were verified visually with hillshaded DEM and an orthophoto. As a result, these maps provide satisfactory visuals of the landforms. In order to show the effect of generalization, the area of each landform in both the primary and the generalized maps was computed. Consequently, landform maps at large scales could be obtained with the proposed methodology, including generalization using LIDAR DEM.ISPRS International Journal of Geo-Information2015-08-0743Article10.3390/ijgi4031336133613452220-99642015-08-07doi: 10.3390/ijgi4031336Türkay GökgözMoustafa Baker<![CDATA[Future Internet, Vol. 7, Pages 276-293: Social Media-Related Geographic Information in the Context of Strategic Environmental Assessment of Municipal Masterplans: A Case Study Concerning Sardinia (Italy)]]>
http://www.mdpi.com/1999-5903/7/3/276
This paper discusses the use of social media-related geographic information in the context of the strategic environmental assessment (SEA) of Sardinian Municipal masterplans (MMPs). We show that this kind of information substantially improves the SEA process since it provides planners, evaluators, and the local communities with information retrieved from social media that would not have been available otherwise. This information complements authoritative data collection from official sources, and sheds light on the tastes and preferences of the users of services and infrastructure, and on their expectations concerning their spatial organization. A methodological approach to the collection of social media-related geographic information is implemented and discussed with reference to the urban context of the city of Cagliari (Sardinia, Italy). The results are very effective in terms of provision of information, which may increase the spatial knowledge available for planning policy definition and implementation. In this perspective, this kind of information discloses opportunities for building analytical scenarios related to urban and regional planning and offers useful suggestions for sustainable development based on tourism strategies.Future Internet2015-08-0773Article10.3390/fi70302762762931999-59032015-08-07doi: 10.3390/fi7030276Roberta FlorisCorrado Zoppi<![CDATA[Information, Vol. 6, Pages 454-466: Personal Identification and the Assessment of the Psychophysiological State While Writing a Signature]]>
http://www.mdpi.com/2078-2489/6/3/454
This article discusses the problem of user identification and psychophysiological state assessment while writing a signature using a graphics tablet. The solution of the problem involves creating templates that contain handwritten signature features, captured simultaneously with the hidden registration of physiological parameters of the person being tested. Heart rate variability, described at different time points, is used as the physiological parameter. As a result, a signature template is automatically generated for the psychophysiological states of an identified person. The problem of user identification and psychophysiological state assessment is solved depending on the registered value of the physiological parameter.Information2015-08-0763Article10.3390/info60304544544662078-24892015-08-07doi: 10.3390/info6030454Pavel LozhnikovAlexey SulavkoAlexander Samotuga<![CDATA[Algorithms, Vol. 8, Pages 590-620: An Overview of a Class of Clock Synchronization Algorithms for Wireless Sensor Networks: A Statistical Signal Processing Perspective]]>
http://www.mdpi.com/1999-4893/8/3/590
Recently, wireless sensor networks (WSNs) have drawn great interest due to their outstanding monitoring and management potential in medical, environmental and industrial applications. Most of the applications that employ WSNs demand all of the sensor nodes to run on a common time scale, a requirement that highlights the importance of clock synchronization. The clock synchronization problem in WSNs is inherently related to parameter estimation. The accuracy of clock synchronization algorithms depends essentially on the statistical properties of the parameter estimation algorithms. Recently, studies dedicated to the estimation of synchronization parameters, such as clock offset and skew, have begun to emerge in the literature. The aim of this article is to provide an overview of the state-of-the-art clock synchronization algorithms for WSNs from a statistical signal processing point of view. This article focuses on describing the key features of the class of clock synchronization algorithms that exploit the traditional two-way message (signal) exchange mechanism. Upon introducing the two-way message exchange mechanism, the main clock offset estimation algorithms for pairwise synchronization of sensor nodes are first reviewed, and their performance is compared. The class of fully-distributed clock offset estimation algorithms for network-wide synchronization is then surveyed. The paper concludes with a list of open research problems pertaining to clock synchronization of WSNs.Algorithms2015-08-0683Review10.3390/a80305905906201999-48932015-08-06doi: 10.3390/a8030590Xu WangDaniel JeskeErchin Serpedin<![CDATA[IJGI, Vol. 4, Pages 1317-1335: GPS-Aided Video Tracking]]>
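As a concrete illustration of the two-way message exchange underlying the clock offset estimators surveyed in the overview above: a minimal sketch, assuming symmetric link delays and using the standard four timestamps of one sender/receiver round trip (the function name is mine; this is the classical closed-form estimate, not any single algorithm from the survey).

```python
def two_way_offset_delay(t1, t2, t3, t4):
    """Estimate clock offset and propagation delay from one two-way
    exchange: node A sends at t1 (A's clock), node B receives at t2
    and replies at t3 (B's clock), and A receives the reply at t4.
    Assumes the uplink and downlink delays are symmetric."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # B's clock ahead of A's
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way propagation delay
    return offset, delay
```

With several exchanges, the per-round offsets are typically averaged; the statistical treatment of the random delay components is exactly what distinguishes the estimators compared in the survey.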
http://www.mdpi.com/2220-9964/4/3/1317
Tracking moving objects is both challenging and important for a large variety of applications. Different technologies based on the global positioning system (GPS) and video or radio data are used to obtain the trajectories of the observed objects. However, in some use cases, they fail to provide sufficiently accurate, complete and correct data at the same time. In this work we present an approach for fusing GPS- and video-based tracking in order to exploit their individual advantages. In this way we aim to combine the reliability of GPS tracking with the high geometric accuracy of camera detection. For the fusion of the movement data provided by the different devices we use a hidden Markov model (HMM) formulation and the Viterbi algorithm to extract the most probable trajectories. In three experiments, we show that our approach is able to deal with challenging situations like occlusions or objects which are temporarily outside the monitored area. The results show the desired increase in terms of accuracy, completeness and correctness.ISPRS International Journal of Geo-Information2015-08-0643Article10.3390/ijgi4031317131713352220-99642015-08-06doi: 10.3390/ijgi4031317Udo FeuerhakeClaus BrennerMonika Sester<![CDATA[JSAN, Vol. 4, Pages 208-225: Dense Clustered Multi-Channel Wireless Sensor Cloud]]>
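The trajectory extraction in the GPS/video fusion abstract above relies on the Viterbi algorithm over a hidden Markov model. A generic, minimal log-space Viterbi sketch (toy discrete states and emissions for illustration, not the paper's actual movement model):

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most probable state sequence for observation indices `obs`.
    log_pi: (S,) initial log-probs; log_A: (S,S) transition log-probs;
    log_B: (S,O) emission log-probs."""
    S, T = len(log_pi), len(obs)
    delta = np.empty((T, S))            # best log-prob ending in each state
    psi = np.zeros((T, S), dtype=int)   # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A       # (prev, cur)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(S)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):                    # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

In the fusion setting, states would be candidate object positions and emissions the GPS and camera detections; the same recursion applies.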
http://www.mdpi.com/2224-2708/4/3/208
Dense Wireless Sensor Network Clouds have an inherent issue of latency and packet drops with regards to data collection. Though there is extensive literature that tries to address these issues through either scheduling, channel contention or a combination of the two, the problem still largely exists. In this paper, a Clustered Multi-Channel Scheduling Protocol (CMSP) is designed that creates a Voronoi partition of a dense network. Each partition is assigned a channel, and a scheduling scheme is adopted to collect data within the Voronoi partitions. This scheme collects data from the partitions concurrently and then passes it to the base station. CMSP is compared using simulation with other multi-channel protocols like Tree-based Multi-Channel, Multi-Channel MAC and Multi-frequency Media Access Control for wireless sensor networks. Results indicate CMSP has higher throughput and data delivery ratio at a lower power consumption due to network partitioning and hierarchical scheduling that minimizes load on the network.Journal of Sensor and Actuator Networks2015-08-0643Article10.3390/jsan40302082082252224-27082015-08-06doi: 10.3390/jsan4030208Sivaramakrishnan SivakumarAdnan Al-Anbuky<![CDATA[Electronics, Vol. 4, Pages 538-540: Connected Vehicles, V2V Communications, and VANET]]>
http://www.mdpi.com/2079-9292/4/3/538
Communications between vehicles are seen as a solution for road transport problems, such as accidents, inefficiencies, traffic congestion, fuel consumption, and exhaust emissions. However, before implementing such a solution, some preliminary analysis is needed. First, the most convenient communications technologies should be selected for each application, and a specific communications architecture should be deployed to support such services. Standardization is essential for successful deployment.[...]Electronics2015-08-0643Editorial10.3390/electronics40305385385402079-92922015-08-06doi: 10.3390/electronics4030538Felipe Jiménez<![CDATA[Algorithms, Vol. 8, Pages 573-589: Robust Rank Reduction Algorithm with Iterative Parameter Optimization and Vector Perturbation]]>
http://www.mdpi.com/1999-4893/8/3/573
In dynamic propagation environments, beamforming algorithms may suffer from strong interference, steering vector mismatches, a low convergence speed and a high computational complexity. Reduced-rank signal processing techniques provide a way to address the problems mentioned above. This paper presents a low-complexity robust data-dependent dimensionality reduction based on an iterative optimization with steering vector perturbation (IOVP) algorithm for reduced-rank beamforming and steering vector estimation. The proposed robust optimization procedure jointly adjusts the parameters of a rank reduction matrix and an adaptive beamformer. The optimized rank reduction matrix projects the received signal vector onto a subspace with lower dimension. The beamformer/steering vector optimization is then performed in a reduced dimension subspace. We devise efficient stochastic gradient and recursive least-squares algorithms for implementing the proposed robust IOVP design. The proposed robust IOVP beamforming algorithms result in a faster convergence speed and an improved performance. Simulation results show that the proposed IOVP algorithms outperform some existing full-rank and reduced-rank algorithms with a comparable complexity.Algorithms2015-08-0583Article10.3390/a80305735735891999-48932015-08-05doi: 10.3390/a8030573Peng LiJiao FengRodrigo de Lamare<![CDATA[JRFM, Vol. 8, Pages 311-336: Volatility Forecast in Crises and Expansions]]>
http://www.mdpi.com/1911-8074/8/3/311
We build a discrete-time non-linear model for volatility forecasting purposes. This model belongs to the class of threshold-autoregressive models, where changes in regimes are governed by past returns. The ability to capture changes in volatility regimes, together with the use of more accurate volatility measures, allows our model to outperform other benchmark models, such as the linear heterogeneous autoregressive model and GARCH specifications. Finally, we show how to derive a closed-form expression for multiple-step-ahead forecasting by exploiting information about the conditional distribution of returns.Journal of Risk and Financial Management2015-08-0583Article10.3390/jrfm80303113113361911-80742015-08-05doi: 10.3390/jrfm8030311Sergii Pypko<![CDATA[Information, Vol. 6, Pages 443-453: Recommender System for E-Learning Based on Semantic Relatedness of Concepts]]>
http://www.mdpi.com/2078-2489/6/3/443
Digital publishing resources contain a lot of useful and authoritative knowledge. It may be necessary to reorganize the resources by concepts and recommend the related concepts for e-learning. A recommender system is presented in this paper based on the semantic relatedness of concepts computed by texts from digital publishing resources. Firstly, concepts are extracted from encyclopedias. Information in digital publishing resources is then reorganized by concepts. Secondly, concept vectors are generated by a skip-gram model and semantic relatedness between concepts is measured according to the concept vectors. As a result, the related concepts and associated information can be recommended to users by the semantic relatedness for learning or reading. Historical data or user preference data are not needed for recommendation in a specific domain. The technique may not be language-specific. The method shows potential usability for e-learning in a specific domain.Information2015-08-0463Article10.3390/info60304434434532078-24892015-08-04doi: 10.3390/info6030443Mao YeZhi TangJianbo XuLifeng Jin<![CDATA[Electronics, Vol. 4, Pages 526-537: Redundancy Determination of HVDC MMC Modules]]>
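Semantic relatedness between skip-gram concept vectors, as used in the e-learning recommender abstract above, is conventionally measured by cosine similarity. A minimal sketch with made-up two-dimensional vectors and concept names (the paper's vectors are not reproduced here):

```python
import numpy as np

def cosine_relatedness(u, v):
    """Cosine similarity between two concept vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(query_vec, concepts, k=3):
    """Return the k concept names most related to `query_vec`,
    ranked by cosine similarity of their vectors."""
    ranked = sorted(concepts.items(),
                    key=lambda kv: cosine_relatedness(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

Because the ranking depends only on the concept vectors, no user history is required, which matches the abstract's claim.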
http://www.mdpi.com/2079-9292/4/3/526
Availability and reliability predictions have been made for a high-voltage direct-current (HVDC) voltage source converter (VSC) module containing a DC/DC converter, gate driver, capacitor and insulated gate bipolar transistors (IGBTs). This prediction was made using published failure rates for the electronic equipment. The purpose of this prediction is to determine the additional module redundancy of the VSC, and the method used is the “binomial failure method”.Electronics2015-08-0443Concept Paper10.3390/electronics40305265265372079-92922015-08-04doi: 10.3390/electronics4030526Chanki KimSeongdoo Lee<![CDATA[Econometrics, Vol. 3, Pages 590-609: A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts]]>
http://www.mdpi.com/2225-1146/3/3/590
This paper introduces a complementary statistical test for distinguishing between the predictive accuracy of two sets of forecasts. We propose a non-parametric test founded upon the principles of the Kolmogorov-Smirnov (KS) test, referred to as the KS Predictive Accuracy (KSPA) test. The KSPA test is able to serve two distinct purposes. Initially, the test seeks to determine whether there exists a statistically significant difference between the distributions of forecast errors, and secondly it exploits the principles of stochastic dominance to determine whether the forecasts with the lower error also report a stochastically smaller error than forecasts from a competing model, and thereby enables distinguishing between the predictive accuracy of forecasts. We perform a simulation study for the size and power of the proposed test and report the results for different noise distributions, sample sizes and forecasting horizons. The simulation results indicate that the KSPA test is correctly sized and robust to varying forecasting horizons and sample sizes, with significant accuracy gains reported especially in the case of small sample sizes. Real world applications are also considered to illustrate the applicability of the proposed KSPA test in practice.Econometrics2015-08-0433Article10.3390/econometrics30305905906092225-11462015-08-04doi: 10.3390/econometrics3030590Hossein HassaniEmmanuel Silva<![CDATA[Symmetry, Vol. 7, Pages 1395-1409: Enantioselective Organocatalysis in Microreactors: Continuous Flow Synthesis of a (S)-Pregabalin Precursor and (S)-Warfarin]]>
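The KSPA test described above builds on the two-sample Kolmogorov-Smirnov test applied to forecast errors. A sketch of that underlying idea with `scipy.stats.ks_2samp` on synthetic absolute errors; this illustrates only the distribution-comparison step, not the full KSPA procedure with its stochastic-dominance check:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
errors_a = np.abs(rng.normal(0.0, 1.0, 500))  # model A: absolute forecast errors
errors_b = np.abs(rng.normal(0.0, 2.0, 500))  # model B: twice the error scale

# Two-sample KS: maximum distance between the empirical CDFs of the errors.
stat, p_value = ks_2samp(errors_a, errors_b)
# A small p-value indicates the two error distributions differ significantly,
# i.e., the two models' predictive accuracy can be told apart.
```

In this synthetic setup the error scales differ by a factor of two, so the test rejects decisively even at moderate sample sizes.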
http://www.mdpi.com/2073-8994/7/3/1395
Continuous flow processes have recently emerged as a powerful technology for performing chemical transformations since they ensure some advantages over traditional batch procedures. In this work, the use of commercially available and affordable PEEK (Polyetheretherketone) and PTFE (Polytetrafluoroethylene) HPLC (High Performance Liquid Chromatography) tubing as microreactors was exploited to perform organic reactions under continuous flow conditions, as an alternative to traditional commercial glass microreactors. The wide availability of tubing with different sizes made it possible to quickly run small-scale preliminary screenings in order to optimize the reaction parameters, and then to carry out a preparative-scale reaction under the best experimental conditions. Gram-scale production of Active Pharmaceutical Ingredients (APIs) such as (S)-Pregabalin and (S)-Warfarin was accomplished in a short reaction time and with high enantioselectivity, using an experimentally very simple procedure.Symmetry2015-08-0473Article10.3390/sym7031395139514092073-89942015-08-04doi: 10.3390/sym7031395Riccardo PortaMaurizio BenagliaFrancesca CocciaSergio RossiAlessandra Puglisi<![CDATA[Axioms, Vol. 4, Pages 321-344: On the Fractional Poisson Process and the Discretized Stable Subordinator]]>
http://www.mdpi.com/2075-1680/4/3/321
We consider the renewal counting number process N = N(t) as a forward march over the non-negative integers with independent identically distributed waiting times. We embed the values of the counting numbers N in a “pseudo-spatial” non-negative half-line x ≥ 0 and observe that for physical time likewise we have t ≥ 0. Thus we apply the Laplace transform with respect to both variables x and t. Applying then a modification of the Montroll-Weiss-Cox formalism of continuous time random walk we obtain the essential characteristics of a renewal process in the transform domain and, if we are lucky, also in the physical domain. The process t = t(N) of accumulation of waiting times is inverse to the counting number process, in honour of the Danish mathematician and telecommunication engineer A.K. Erlang we call it the Erlang process. It yields the probability of exactly n renewal events in the interval (0; t]. We apply our Laplace-Laplace formalism to the fractional Poisson process whose waiting times are of Mittag-Leffler type and to a renewal process whose waiting times are of Wright type. The process of Mittag-Leffler type includes as a limiting case the classical Poisson process, the process of Wright type represents the discretized stable subordinator and a re-scaled version of it was used in our method of parametric subordination of time-space fractional diffusion processes. Properly rescaling the counting number process N(t) and the Erlang process t(N) yields as diffusion limits the inverse stable and the stable subordinator, respectively.Axioms2015-08-0443Article10.3390/axioms40303213213442075-16802015-08-04doi: 10.3390/axioms4030321Rudolf GorenfloFrancesco Mainardi<![CDATA[Algorithms, Vol. 8, Pages 562-572: Modeling Documents with Event Model]]>
http://www.mdpi.com/1999-4893/8/3/562
Deep learning has recently made great breakthroughs in visual and speech processing, mainly because it draws on the hierarchical way the brain deals with images and speech. In the field of NLP, a topic model is one of the important ways for modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose Event Model, which is unsupervised and based on the language processing mechanism of neurolinguistics, to model documents. In Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people and words are objects in the events. Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words based on deep learning. Dimensionality reduction represents a document as a low-dimensional vector via a linear model that is completely different from topic models. Event Model achieves state-of-the-art results on document retrieval tasks.Algorithms2015-08-0483Article10.3390/a80305625625721999-48932015-08-04doi: 10.3390/a8030562Longhui WangGuoguang ZhaoDonghong Sun<![CDATA[JSAN, Vol. 4, Pages 189-207: A Long-Range Directional Wake-Up Radio for Wireless Mobile Networks]]>
http://www.mdpi.com/2224-2708/4/3/189
This paper describes a long-range directional wake-up radio (LDWuR) for wireless mobile networks. In contrast to most wake-up radios (WuR) to date, which are short range, ours is applicable to long-range deployments. Existing studies achieve long distance by using modulation and coding schemes or by directional antennas, though the latter require exploring the direction of the transmitter. To address this issue, our LDWuR adopts both static and dynamic antennas, where the static ones are directional, while the dynamic ones are omnidirectional for beamforming. We present our LDWuR prototype and design principle. Simulation results show that our LDWuR and event-driven MAC protocol suppress the idle-listening of Wi-Fi stations in a wireless network, thereby enhancing the Wi-Fi power savings.Journal of Sensor and Actuator Networks2015-08-0343Article10.3390/jsan40301891892072224-27082015-08-03doi: 10.3390/jsan4030189Wen-Chan ShihRaja JurdakDavid AbbottPai ChouWen-Tsuen Chen<![CDATA[Symmetry, Vol. 7, Pages 1376-1394: Bäcklund Transformations for Integrable Geometric Curve Flows]]>
http://www.mdpi.com/2073-8994/7/3/1376
We study the Bäcklund transformations of integrable geometric curve flows in certain geometries. These curve flows include the KdV and Camassa-Holm flows in the two-dimensional centro-equiaffine geometry, the mKdV and modified Camassa-Holm flows in the two-dimensional Euclidean geometry, the Schrödinger and extended Harry-Dym flows in the three-dimensional Euclidean geometry and the Sawada-Kotera flow in the affine geometry, etc. Using the fact that two different curves in a given geometry are governed by the same integrable equation, we obtain Bäcklund transformations relating to these two integrable geometric flows. Some special solutions of the integrable systems are used to obtain the explicit Bäcklund transformations.Symmetry2015-08-0373Article10.3390/sym7031376137613942073-89942015-08-03doi: 10.3390/sym7031376Changzheng QuJingwei HanJing Kang<![CDATA[Symmetry, Vol. 7, Pages 1352-1375: Integrable (2 + 1)-Dimensional Spin Models with Self-Consistent Potentials]]>
http://www.mdpi.com/2073-8994/7/3/1352
Integrable spin systems possess interesting geometrical and gauge invariance properties and have important applications in applied magnetism and nanophysics. They are also intimately connected to the nonlinear Schrödinger family of equations. In this paper, we identify three different integrable spin systems in (2 + 1) dimensions by introducing the interaction of the spin field with more than one scalar potential, or vector potential, or both. We also obtain the associated Lax pairs. We discuss various interesting reductions in (2 + 1) and (1 + 1) dimensions. We also deduce the equivalent nonlinear Schrödinger family of equations, including the (2 + 1)-dimensional version of nonlinear Schrödinger–Hirota–Maxwell–Bloch equations, along with their Lax pairs.Symmetry2015-08-0373Article10.3390/sym7031352135213752073-89942015-08-03doi: 10.3390/sym7031352Ratbay MyrzakulovGalya MamyrbekovaGulgassyl NugmanovaMuthusamy Lakshmanan<![CDATA[IJGI, Vol. 4, Pages 1301-1316: Tracking 3D Moving Objects Based on GPS/IMU Navigation Solution, Laser Scanner Point Cloud and GIS Data]]>
http://www.mdpi.com/2220-9964/4/3/1301
Monitoring vehicular road traffic is a key component of any autonomous driving platform. Detecting moving objects, and tracking them, is crucial to navigating around objects and predicting their locations and trajectories. Laser sensors provide an excellent observation of the area around vehicles, but the point cloud of objects may be noisy, occluded, and prone to different errors. Consequently, object tracking is an open problem, especially for low-quality point clouds. This paper describes a pipeline to integrate various sensor data and prior information, such as a Geospatial Information System (GIS) map, to segment and track moving objects in a scene. We show that even a low-quality GIS map, such as OpenStreetMap (OSM), can improve the tracking accuracy, as well as decrease processing time. A bank of Kalman filters is used to track moving objects in a scene. In addition, we apply non-holonomic constraint to provide a better orientation estimation of moving objects. The results show that moving objects can be correctly detected, and accurately tracked, over time, based on modest quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.ISPRS International Journal of Geo-Information2015-07-3143Article10.3390/ijgi4031301130113162220-99642015-07-31doi: 10.3390/ijgi4031301Siavash HosseinyalamdaryYashar BalazadeganCharles Toth<![CDATA[IJFS, Vol. 3, Pages 342-350: Determinants of the Government Bond Yield in Spain: A Loanable Funds Model]]>
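A bank of Kalman filters is the core tracking machinery in the moving-object pipeline above; a minimal one-dimensional constant-velocity filter sketch (all noise covariances here are assumed illustrative values, not the paper's tuning, and the non-holonomic constraint is omitted):

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
H = np.array([[1.0, 0.0]])             # we observe position only
Q = 1e-3 * np.eye(2)                   # process noise covariance (assumed)
R = np.array([[0.25]])                 # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict + update cycle for state x = (position, velocity)
    with covariance P, given scalar position measurement z."""
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # update state
    P = (np.eye(2) - K @ H) @ P        # update covariance
    return x, P
```

In the paper's setting each tracked object gets its own filter with a higher-dimensional state; the predict/update cycle is the same.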
http://www.mdpi.com/2227-7072/3/3/342
This paper applies demand and supply analysis to examine the government bond yield in Spain. The sample ranges from 1999.Q1 to 2014.Q2. The EGARCH model is employed in empirical work. The Spanish government bond yield is positively associated with the government debt/GDP ratio, the short-term Treasury bill rate, the expected inflation rate, the U.S. 10 year government bond yield and a dummy variable representing the debt crisis and negatively affected by the GDP growth rate and the expected nominal effective exchange rate.International Journal of Financial Studies2015-07-3033Article10.3390/ijfs30303423423502227-70722015-07-30doi: 10.3390/ijfs3030342Yu Hsing<![CDATA[IJGI, Vol. 4, Pages 1290-1300: Enhancing Disaster Management: Development of a Spatial Database of Day Care Centers in the USA]]>
http://www.mdpi.com/2220-9964/4/3/1290
Children under the age of five constitute around 7% of the total U.S. population, and represent a segment of the population that is totally dependent on others for day-to-day activities. A significant proportion of this population spends time in some form of day care arrangement while their parents are away from home. Accounting for those children during emergencies is of high priority, which requires a broad understanding of the locations of such day care centers. As concentrations of at risk population, the spatial location of day care centers is critical for any type of emergency preparedness and response (EPR). However, until recently, the U.S. emergency preparedness and response community did not have access to a comprehensive spatial database of day care centers at the national scale. This paper describes an approach for the development of the first comprehensive spatial database of day care center locations throughout the U.S. utilizing a variety of data harvesting techniques to integrate information from widely disparate data sources followed by geolocating for spatial precision. In the context of disaster management, such spatially refined demographic databases hold tremendous potential for improving high-resolution population distribution and dynamics models and databases.ISPRS International Journal of Geo-Information2015-07-3043Article10.3390/ijgi4031290129013002220-99642015-07-30doi: 10.3390/ijgi4031290Nagendra SinghMark TuttleBudhendra Bhaduri<![CDATA[Robotics, Vol. 4, Pages 284-315: Intent Understanding Using an Activation Spreading Architecture]]>
http://www.mdpi.com/2218-6581/4/3/284
In this paper, we propose a new approach for recognizing intentions of humans by observing their activities with a color plus depth (RGB-D) camera. Activities and goals are modeled as a distributed network of inter-connected nodes in an Activation Spreading Network (ASN). Inspired by a formalism in hierarchical task networks, the structure of the network captures the hierarchical relationship between high-level goals and low-level activities that realize these goals. Our approach can detect intentions before they are realized and it can work in real-time. We also extend the formalism of ASNs to incorporate contextual information into intent recognition. We further augment the ASN formalism with special nodes and synaptic connections to model ordering constraints between actions, in order to represent and handle partial-order plans in our ASN. A fully functioning system is developed for experimental evaluation. We implemented a robotic system that uses our intent recognition to naturally interact with the user. Our ASN based intent recognizer is tested against three different scenarios involving everyday activities performed by a subject, and our results show that the proposed approach is able to detect low-level activities and recognize high-level intentions effectively in real-time. Further analysis shows that contextual and partial-order ASNs are able to discriminate between otherwise ambiguous goals.Robotics2015-07-3043Article10.3390/robotics40302842843152218-65812015-07-30doi: 10.3390/robotics4030284Mohammad SaffarMircea NicolescuMonica NicolescuBanafsheh Rekabdar<![CDATA[Econometrics, Vol. 3, Pages 577-589: A Spectral Model of Turnover Reduction]]>
http://www.mdpi.com/2225-1146/3/3/577
We give a simple explicit formula for turnover reduction when a large number of alphas are traded on the same execution platform and trades are crossed internally. We model turnover reduction via alpha correlations. Then, for a large number of alphas, turnover reduction is related to the largest eigenvalue and the corresponding eigenvector of the alpha correlation matrix.Econometrics2015-07-2933Article10.3390/econometrics30305775775892225-11462015-07-29doi: 10.3390/econometrics3030577Zura Kakushadze<![CDATA[Electronics, Vol. 4, Pages 507-525: A FPGA-Based Broadband EIT System for Complex Bioimpedance Measurements—Design and Performance Estimation]]>
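The quantities entering the turnover-reduction result above, the largest eigenvalue of the alpha correlation matrix and its eigenvector, can be computed directly. A sketch on a toy uniform-correlation matrix, for which the largest eigenvalue has the closed form 1 + (N - 1)ρ (the paper's actual turnover formula is not reproduced here):

```python
import numpy as np

N, rho = 50, 0.2
# Uniform pairwise alpha correlations: ones off-diagonal scaled by rho.
C = rho * np.ones((N, N)) + (1.0 - rho) * np.eye(N)

eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
lam_max = eigvals[-1]                 # largest eigenvalue
v_max = eigvecs[:, -1]                # corresponding eigenvector
# For uniform correlation rho, lam_max = 1 + (N - 1) * rho exactly,
# and v_max is proportional to the all-ones vector.
```

For a realistic (non-uniform) correlation matrix the same `eigh` call applies; only the closed-form check is special to this toy case.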
http://www.mdpi.com/2079-9292/4/3/507
Electrical impedance tomography (EIT) is an imaging method that is able to estimate the electrical conductivity distribution of living tissue. This work presents a field programmable gate array (FPGA)-based multi-frequency EIT system for complex, time-resolved bioimpedance measurements. The system has the capability to work with measurement setups with up to 16 current electrodes and 16 voltage electrodes. The excitation current has a range of about 10 µA to 5 mA, whereas the sinusoidal signal used for excitation can have a frequency of up to 500 kHz. Additionally, the usage of a chirp or rectangular signal excitation is possible. Furthermore, the described system has a sample rate of up to 3480 impedance spectra per second (ISPS). The performance of the EIT system is demonstrated with a resistor-based phantom and tank phantoms. Additionally, first measurements taken from the human thorax during a breathing cycle are presented.Electronics2015-07-2943Article10.3390/electronics40305075075252079-92922015-07-29doi: 10.3390/electronics4030507Roman KuscheAnkit MalhotraMartin RyschkaGunther ArdeltPaula KlimachSteffen Kaufmann<![CDATA[Algorithms, Vol. 8, Pages 552-561: Some Improvements to a Third Order Variant of Newton’s Method from Simpson’s Rule]]>
http://www.mdpi.com/1999-4893/8/3/552
In this paper, we present three improvements to a three-point third order variant of Newton’s method derived from the Simpson rule. The first is a fifth order method using the same number of functional evaluations as the third order method, the second is a four-point 10th order method, and the last is a five-point 20th order method. From a computational point of view, our methods require four evaluations (one function and three first derivatives) to reach fifth order, five evaluations (two functions and three derivatives) to reach 10th order, and six evaluations (three functions and three derivatives) to reach 20th order. Hence, these methods have efficiency indices of 1.495, 1.585 and 1.648, respectively, which are better than the efficiency index of 1.316 of the third order method. We test the methods through numerical experiments, which show that the 20th order method is very efficient.
Algorithms 2015-07-29, Vol. 8, No. 3, Article, Pages 552-561, ISSN 1999-4893, doi: 10.3390/a8030552. Author: Diyashvir Babajee.
<![CDATA[Algorithms, Vol. 8, Pages 541-551: Target Detection Algorithm Based on Two Layers Human Visual System]]>
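The efficiency indices quoted above follow Ostrowski's definition p^(1/n), where p is the convergence order and n is the number of functional evaluations per iteration; a quick check reproduces the stated values:

```python
# Efficiency index of an iterative root-finding method: p ** (1/n),
# with p the convergence order and n the functional evaluations
# per iteration (Ostrowski's definition).
def efficiency_index(order, evaluations):
    return order ** (1.0 / evaluations)

methods = {
    "third order (Simpson variant)": (3, 4),   # 1 function + 3 derivatives
    "fifth order":                   (5, 4),   # same evaluation count
    "10th order":                    (10, 5),  # 2 functions + 3 derivatives
    "20th order":                    (20, 6),  # 3 functions + 3 derivatives
}

for name, (p, n) in methods.items():
    print(f"{name}: efficiency index {efficiency_index(p, n):.3f}")
```

This makes the comparison in the abstract concrete: raising the order from 3 to 5 at the same evaluation count is a free improvement, and each further order gain outpaces the extra evaluation it costs.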
http://www.mdpi.com/1999-4893/8/3/541
Robust detection of small targets with a low signal-to-noise ratio (SNR) is very important in infrared search and track applications for self-defense or attack. Due to complex backgrounds, current algorithms suffer from unresolved false alarm rate issues. To reduce the false alarm rate, an infrared small target detection algorithm based on saliency detection and a support vector machine is proposed. Firstly, salient regions that may contain targets are detected with the phase spectrum of Fourier transform (PFT) approach. Then, target recognition is performed within the salient regions. Experimental results show that the proposed algorithm has the robustness and efficiency required for real infrared small target detection applications.
Algorithms 2015-07-29, Vol. 8, No. 3, Article, Pages 541-551, ISSN 1999-4893, doi: 10.3390/a8030541. Authors: Zheng Cui, Jingli Yang, Shouda Jiang, Changan Wei.
<![CDATA[IJGI, Vol. 4, Pages 1265-1289: Application of Geo-Information Techniques in Land Use and Land Cover Change Analysis in a Peri-Urban District of Ghana]]>
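The PFT saliency step can be sketched in a few lines: keep only the phase of the 2-D Fourier transform, invert it, and take the squared magnitude, so that small regions deviating from a smooth background light up. This is a generic reconstruction of the PFT idea, not the authors' full pipeline (the SVM recognition stage is omitted), and the test frame is synthetic:

```python
import numpy as np

def pft_saliency(image):
    # Keep only the phase of the 2-D spectrum; the phase-only
    # reconstruction suppresses the smooth background and leaves
    # energy concentrated at small, spectrally irregular regions.
    spectrum = np.fft.fft2(image)
    phase_only = np.exp(1j * np.angle(spectrum))
    recon = np.fft.ifft2(phase_only)
    saliency = np.abs(recon) ** 2
    return saliency / saliency.max()

# Toy infrared-like frame: flat background plus one 2x2 bright target.
frame = np.full((64, 64), 0.2)
frame[30:32, 40:42] = 1.0

sal = pft_saliency(frame)
target_peak = np.unravel_index(np.argmax(sal), sal.shape)  # (row, col)
```

On this toy frame the saliency peak lands on the small bright block, which is exactly the property the first stage of the detector relies on.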
http://www.mdpi.com/2220-9964/4/3/1265
Using satellite remote sensing and a geographic information system, this paper analyzes land use and land cover change dynamics in the Bosomtwe District of Ghana from 1986 and 2010 Thematic Mapper and Enhanced Thematic Mapper Plus (TM/ETM+) images and a 2014 Landsat 8 Operational Land Imager and Thermal Infrared Sensor (OLI/TIRS) image. The three images were geo-referenced and classified using the maximum likelihood classifier algorithm. A Jeffries-Matusita separability check was used to confirm that the spectral separation of the bands used for each land use and land cover class was acceptable. The best Kappa hat statistic of classification accuracy was 83%. Land Use and Land Cover (LULC) transition analysis was performed in Environmental Systems Research Institute (ESRI) ArcMap. The classification results over the three periods showed that built-up, bare land and concrete surfaces increased from 1201 ha in 1986 to 5454 ha in 2010. Dense forest decreased by 2253 ha over the same period and increased by 873 ha by 2014. Low forest also decreased by 1043 ha in 2010; however, it increased by 13% in 2014. Our findings reveal important changes in the land use and land cover patterns of the District. Urbanization coupled with farmland abandonment between 1986 and 2010, together with substantial increases in urban land and clear increases in farmland coverage between 1986 and 2014, were found to be the reasons for the decrease in vegetation cover. This suggests that major changes in the socio-ecological driving forces affecting landscape dynamics have occurred in the last few decades.
ISPRS International Journal of Geo-Information 2015-07-28, Vol. 4, No. 3, Article, Pages 1265-1289, ISSN 2220-9964, doi: 10.3390/ijgi4031265. Authors: Divine Appiah, Dietrich Schröder, Eric Forkuo, John Bugri.
<![CDATA[Information, Vol. 6, Pages 432-442: Sliding-Mode Speed Control of PMSM with Fuzzy-Logic Chattering Minimization—Design and Implementation]]>
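The maximum likelihood classifier used for the Landsat scenes is the standard per-class multivariate Gaussian rule: assign each pixel to the class whose estimated mean and covariance give it the highest log-likelihood. A minimal two-band, two-class sketch (class statistics and pixel values are made up; in practice they are estimated from training areas over the image bands):

```python
import numpy as np

def ml_classify(pixels, class_means, class_covs):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    scores = []
    for mu, cov in zip(class_means, class_covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # Mahalanobis distance of each pixel to the class mean.
        mahal = np.einsum("ij,jk,ik->i", d, inv, d)
        scores.append(-0.5 * (logdet + mahal))  # constant term dropped
    return np.argmax(np.stack(scores), axis=0)

# Two toy classes in a 2-band feature space.
means = [np.array([0.2, 0.3]), np.array([0.7, 0.6])]
covs = [0.01 * np.eye(2), 0.01 * np.eye(2)]
pixels = np.array([[0.22, 0.28], [0.68, 0.62], [0.71, 0.58]])
labels = ml_classify(pixels, means, covs)
```

The Jeffries-Matusita check mentioned above guards exactly the failure mode of this rule: when two class distributions overlap heavily, the likelihoods are close and pixels flip between classes.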
http://www.mdpi.com/2078-2489/6/3/432
In this paper, a sliding mode control (SMC) scheme applied to permanent magnet synchronous motor (PMSM) speed control is designed and improved. A fuzzy logic algorithm is added to mitigate the chattering caused by the discontinuous switching term in steady state, and to ensure good controller performance in transient states. The performance of the proposed Fuzzy-SMC is tested in simulation, and experimental results are obtained using an eZdsp F28335 board.
Information 2015-07-28, Vol. 6, No. 3, Article, Pages 432-442, ISSN 2078-2489, doi: 10.3390/info6030432. Authors: Fadil Hicham, Driss Yousfi, Aite Youness, Elhafyani Larbi, Nasrudin Rahim.
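The chattering problem and its mitigation can be illustrated on a toy first-order plant. The sketch below replaces the discontinuous sign() switching term with a smooth tanh() boundary layer; this is a simpler stand-in for the paper's fuzzy-logic adjustment of the switching action, and the plant constants and gains are made up:

```python
import numpy as np

def simulate(switch, k=5.0, dt=1e-3, steps=2000, ref=1.0):
    # First-order plant x' = -a*x + b*u tracking a constant reference.
    x, a, b = 0.0, 1.0, 2.0
    u_hist = []
    for _ in range(steps):
        s = ref - x                          # sliding variable (tracking error)
        u = (a * ref + k * switch(s)) / b    # equivalent control + switching term
        x += dt * (-a * x + b * u)           # explicit Euler step
        u_hist.append(u)
    return x, np.array(u_hist)

# Discontinuous switching: converges, but u chatters around s = 0.
x_sign, u_sign = simulate(np.sign)
# Smoothed switching (tanh boundary layer): converges without chattering.
x_tanh, u_tanh = simulate(lambda s: np.tanh(s / 0.05))

# Chattering shows up as large step-to-step swings of u near steady state.
chatter_sign = np.abs(np.diff(u_sign[-500:])).max()
chatter_tanh = np.abs(np.diff(u_tanh[-500:])).max()
```

Both controllers reach the reference, but the sign() version keeps switching the control between its extreme values at every crossing of s = 0, while the smoothed version settles to a constant control effort, which is the behavior the fuzzy layer is designed to recover without sacrificing transient robustness.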