Computer Science & Mathematics
http://www.mdpi.com/journal/computer-math
Latest open access articles published in Computer Science & Mathematics at http://www.mdpi.com/journal/computer-math

Algorithms, Vol. 9, Pages 58: LR Parsing for LCFRS
http://www.mdpi.com/1999-4893/9/3/58
LR parsing is a popular parsing strategy for variants of Context-Free Grammar (CFG). It has also been used for mildly context-sensitive formalisms, such as Tree-Adjoining Grammar. In this paper, we present the first LR-style parsing algorithm for Linear Context-Free Rewriting Systems (LCFRS), a mildly context-sensitive extension of CFG which has received considerable attention in recent years in the context of natural language processing.
Algorithms, Vol. 9, Issue 3, Article 58; ISSN 1999-4893; published 2016-08-27; doi: 10.3390/a9030058. Laura Kallmeyer, Wolfgang Maier.

IJGI, Vol. 5, Pages 152: Algebraic and Geometric Characterizations of Double-Cross Matrices of Polylines
http://www.mdpi.com/2220-9964/5/9/152
We study the double-cross matrix descriptions of polylines in the two-dimensional plane. The double-cross matrix is a qualitative description of polylines in which exact, quantitative information is given up in favour of directional information. First, we give an algebraic characterization of the double-cross matrix of a polyline and derive some properties of double-cross matrices from this characterization. Next, we give a geometric characterization of double-cross similarity of two polylines, using the technique of local carrier orders of polylines. We also identify the transformations of the plane that leave the double-cross matrix of all polylines in the two-dimensional plane invariant.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 9, Article 152; ISSN 2220-9964; published 2016-08-27; doi: 10.3390/ijgi5090152. Bart Kuijpers, Bart Moelans.

IJGI, Vol. 5, Pages 153: A New Simplification Approach Based on the Oblique-Dividing-Curve Method for Contour Lines
http://www.mdpi.com/2220-9964/5/9/153
As one of the key operators of automated map generalization, algorithms for line simplification have been widely researched in the past decades. Although many of the currently available algorithms have shown satisfactory simplification performance on certain data types and selected test areas, it remains a challenging task to solve the problems of (a) how to properly divide a cartographic line when it is too long to be dealt with directly; and (b) how to make adaptable parameterizations for various geo-data in different areas. In order to solve these two problems, a new line-simplification approach based on the Oblique-Dividing-Curve (ODC) method is proposed in this paper. In the proposed model, a cartographic line is divided into a series of monotonic curves by the ODC method. Then, the curves are categorized into different groups according to their shapes, sizes and other geometric characteristics. The curves in different groups trigger different strategies, as well as the associated criteria, for line simplification. Whenever a curve is simplified, the whole simplified cartographic line is re-divided and the simplification process restarts, i.e., the proposed simplification approach operates iteratively until the final simplification result is achieved. Experimental evidence demonstrates that the proposed approach is able to handle the holistic bend-trend of the whole cartographic line during the simplification process and thereby provides considerably improved simplification performance with respect to maintaining the essential shape/salient characteristics and keeping the topological consistency. Moreover, the produced simplification results are not sensitive to the parameterization of the proposed approach.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 9, Article 153; ISSN 2220-9964; published 2016-08-27; doi: 10.3390/ijgi5090153. Haizhong Qian, Meng Zhang, Fang Wu.

Information, Vol. 7, Pages 52: Optimal Threshold Determination for Discriminating Driving Anger Intensity Based on EEG Wavelet Features and ROC Curve Analysis
http://www.mdpi.com/2078-2489/7/3/52
Driving anger, commonly called “road rage”, has become increasingly common, affecting road safety. A few studies have focused on how to identify driving anger, but there is still a gap in grading driving anger, especially in real traffic environments; such grading would make it possible to apply intervention measures matched to the anger intensity. This study proposes a method for discriminating driving anger states of different intensity based on Electroencephalogram (EEG) spectral features. First, thirty drivers were recruited to conduct on-road experiments on a busy route in Wuhan, China, where anger could be induced by various road events, e.g., vehicles weaving/cutting in line, jaywalking/cyclists crossing, traffic congestion and waiting at red lights; drivers were offered extra pay if they completed the experiments ahead of a baseline time. Subsequently, significance analysis was used to select the relative energy spectrum of the β band (β%) and of the θ band (θ%) for discriminating the different driving anger states. Finally, according to receiver operating characteristic (ROC) curve analysis, the optimal thresholds (best cut-off points) of β% and θ% were determined as follows: no-anger state (i.e., neutral), 0.2183 ≤ θ% < 1 and 0 < β% < 0.2586; low anger state, 0.1539 ≤ θ% < 0.2183 and 0.2586 ≤ β% < 0.3269; moderate anger state, 0.1216 ≤ θ% < 0.1539 and 0.3269 ≤ β% < 0.3674; high anger state, 0 < θ% < 0.1216 and 0.3674 ≤ β% < 1. Moreover, the verification results indicate that the overall accuracy (Acc) of the optimal β% thresholds for discriminating the four driving anger states is 80.21%, versus 75.20% for the θ% thresholds. The results can provide a theoretical foundation for developing driving anger detection or warning devices based on the relevant optimal thresholds.
Information, Vol. 7, Issue 3, Article 52; ISSN 2078-2489; published 2016-08-26; doi: 10.3390/info7030052. Ping Wan, Chaozhong Wu, Yingzi Lin, Xiaofeng Ma.

IJGI, Vol. 5, Pages 154: A Spectral Signature Shape-Based Algorithm for Landsat Image Classification
http://www.mdpi.com/2220-9964/5/9/154
Land-cover datasets are crucial for earth system modeling and human-nature interaction research at local, regional and global scales. They can be obtained from remotely sensed data using image classification methods. However, in image classification, spectral values have received considerable attention in most classification methods, while the spectral curve shape has seldom been used because it is difficult to quantify. This study presents a classification method based on the observation that the spectral curve is composed of segments and certain extreme values. The presented classification method quantifies the spectral curve shape and makes full use of the spectral shape differences among land covers to classify remotely sensed images. Using this method, classification maps from TM (Thematic Mapper) data were obtained with an overall accuracy of 0.834 and 0.854 for two respective test areas. The approach presented in this paper, which differs from previous image classification methods that were mostly concerned with spectral “value” similarity characteristics, emphasizes the “shape” similarity characteristics of the spectral curve. Moreover, this study will be helpful for classification research on hyperspectral and multi-temporal images.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 9, Article 154; ISSN 2220-9964; published 2016-08-26; doi: 10.3390/ijgi5090154. Yuanyuan Chen, Quanfang Wang, Yanlong Wang, Si-Bo Duan, Miaozhong Xu, Zhao-Liang Li.

Symmetry, Vol. 8, Pages 85: A Survey of Public Key Infrastructure-Based Security for Mobile Communication Systems
http://www.mdpi.com/2073-8994/8/9/85
Mobile communication security techniques are employed to guard the communication between network entities. Mobile cellular communication systems have become one of the most important communication systems in recent times and are used by millions of people around the world. Since the 1990s, considerable efforts have been made to improve both the communication and security features of mobile communication systems. These improvements divide the mobile communications field into different generations according to the communication and security techniques employed, such as the A3, A5 and A8 algorithms for the 2G-GSM cellular system, and the 3G authentication and key agreement (AKA), evolved packet system authentication and key agreement (EPS-AKA), and long term evolution authentication and key agreement (LTE-AKA) algorithms for 3rd generation partnership project (3GPP) systems. These generations still have many vulnerabilities, and substantial security work has gone into solving such problems. Some of this work lies in the field of public key cryptography (PKC), which requires a high computational cost and greater network flexibility. As such, public key infrastructure (PKI) is more compatible with the modern generations due to their superior communication features. This paper surveys the latest proposed works on the security of GSM, CDMA, and LTE cellular systems using PKI. Firstly, we present the security issues for each generation of mobile communication systems; then we study and analyze the latest proposed schemes and give some comparisons. Finally, we introduce some new directions for future work. This paper classifies the mobile communication security schemes according to the techniques used for each cellular system and covers some of the PKI-based security techniques, such as authentication, key agreement, and privacy preservation.
Symmetry, Vol. 8, Issue 9, Review 85; ISSN 2073-8994; published 2016-08-26; doi: 10.3390/sym8090085. Mohammed Ramadan, Guohong Du, Fagen Li, Chunxiang Xu.

Technologies, Vol. 4, Pages 26: Neural Operant Conditioning as a Core Mechanism of Brain-Machine Interface Control
http://www.mdpi.com/2227-7080/4/3/26
Changing the neuronal activity of the brain to acquire rewards in a broad sense is essential for utilizing brain-machine interfaces (BMIs), and it is essentially operant conditioning of neuronal activity. Currently, this is also known as neural biofeedback, and it is often referred to as neurofeedback when human brain activity is targeted. In this review, we first describe biofeedback and operant conditioning, which form the methodological background of neural operant conditioning. Then, we introduce research models of neural operant conditioning in animal experiments and demonstrate that it is possible to change the firing frequency and synchronous firing of local neuronal populations in a short time period. We also discuss possible applications of neural operant conditioning and its contribution to BMIs.
Technologies, Vol. 4, Issue 3, Review 26; ISSN 2227-7080; published 2016-08-26; doi: 10.3390/technologies4030026. Yoshio Sakurai, Kichan Song.

Administrative Sciences, Vol. 6, Pages 11: Value of Uncertainty: The Lost Opportunities in Large Projects
http://www.mdpi.com/2076-3387/6/3/11
Uncertainty management theory has become well established over the last 20–30 years. However, the authors suggest that it does not fully address why opportunities often remain unexploited. Empirical studies show a stronger focus on mitigating risks than on exploiting opportunities. This paper therefore addresses why so few opportunities are explored in large projects. The theory holds that risks and opportunities should be managed equally within the same process. In two surveys, conducted in six (private and public) companies over a four-year period, project managers stated that uncertainty management is about managing both risks and opportunities. However, case studies of 12 projects from the same companies revealed that all of them focused mainly on risks, and most opportunities were left unexploited. We have developed a theoretical explanation model to shed light on this phenomenon. The model is a reflection on findings from our empirical data against the current project management, uncertainty, risk and stakeholder literature. Our model shows that the threshold for pursuing a potential opportunity is high. For a potential opportunity to be considered, it must be extremely attractive, since it may require contract changes, and the project must abandon an earlier-accepted best solution.
Administrative Sciences, Vol. 6, Issue 3, Article 11; ISSN 2076-3387; published 2016-08-26; doi: 10.3390/admsci6030011. Agnar Johansen, Petter Eik-Andresen, Andreas Dypvik Landmark, Anandasivakumar Ekambaram, Asbjørn Rolstadås.

Symmetry, Vol. 8, Pages 84: The Algorithm of Continuous Optimization Based on the Modified Cellular Automaton
http://www.mdpi.com/2073-8994/8/9/84
This article is devoted to the application of the cellular automata mathematical apparatus to the problem of continuous optimization. A cellular automaton with an objective function is introduced as a new modification of the classic cellular automaton. An algorithm of continuous optimization is obtained, based on the dynamics of a cellular automaton with the property of geometric symmetry. Results of simulation experiments with the obtained algorithm on standard test functions are provided, and a comparison with analogous algorithms is given.
Symmetry, Vol. 8, Issue 9, Article 84; ISSN 2073-8994; published 2016-08-25; doi: 10.3390/sym8090084. Oleg Evsutin, Alexander Shelupanov, Roman Meshcheryakov, Dmitry Bondarenko, Angelika Rashchupkina.

Fluids, Vol. 1, Pages 27: Scalar Flux Kinematics
http://www.mdpi.com/2311-5521/1/3/27
The first portion of this paper contains an overview of recent progress in the development of dynamical-systems-based methods for the computation of Lagrangian transport processes in physical oceanography. We review the considerable progress made in the computation and interpretation of key material features such as eddy boundaries, and stable and unstable manifolds (or their finite-time approximations). Modern challenges to the Lagrangian approach include the need to deal with the complexity of the ocean submesoscale and the difficulty in computing fluxes of properties other than volume. We suggest a new approach that reduces complexity through time filtering and that directly addresses non-material, residual scalar fluxes. The approach is “semi-Lagrangian” insofar as it contemplates trajectories of a velocity field related to a residual scalar flux, usually not the fluid velocity. Two examples are explored, the first coming from a canonical example of viscous adjustment along a flat plate and the second from a numerical simulation of a turbulent Antarctic Circumpolar Current in an idealized geometry. Each example concentrates on the transport of dynamically relevant scalars, and the second illustrates how substantial material exchange across a baroclinically unstable jet coexists with zero residual buoyancy flux.
Fluids, Vol. 1, Issue 3, Article 27; ISSN 2311-5521; published 2016-08-25; doi: 10.3390/fluids1030027. Larry Pratt, Roy Barkan, Irina Rypina.

IJGI, Vol. 5, Pages 148: Evaluating Temporal Analysis Methods Using Residential Burglary Data
http://www.mdpi.com/2220-9964/5/9/148
Law enforcement agencies, as well as researchers, rely on temporal analysis methods in many crime analyses, e.g., spatio-temporal analyses. A number of temporal analysis methods are in use, but a structured comparison in different configurations has yet to be done. This study aims to fill this research gap by comparing the accuracy of five existing, and one novel, temporal analysis methods in approximating offense times for residential burglaries, which often lack precise time information. The temporal analysis methods are evaluated in eight different configurations with varying temporal resolution, as well as the amount of data (number of crimes) available during analysis. A dataset of all Swedish residential burglaries reported between 2010 and 2014 is used (N = 103,029). From that dataset, a subset of burglaries with known precise offense times is used for evaluation. The accuracy of the temporal analysis methods in approximating the distribution of burglaries with known precise offense times is investigated. The aoristic and the novel aoristic_ext method perform significantly better than three of the traditional methods. Experiments show that the novel aoristic_ext method was most suitable for estimating crime frequencies at the day-of-the-year temporal resolution when reduced numbers of crimes were available during analysis. In the other configurations investigated, the aoristic method showed the best results. The results also show the potential of temporal analysis methods in approximating the temporal distributions of residential burglaries in situations where limited data are available.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 9, Article 148; ISSN 2220-9964; published 2016-08-25; doi: 10.3390/ijgi5090148. Martin Boldt, Anton Borg.

Fluids, Vol. 1, Pages 25: Diapycnal Velocity in the Double-Diffusive Thermocline
http://www.mdpi.com/2311-5521/1/3/25
A series of large-scale numerical simulations is presented, which incorporate parameterizations of vertical mixing of temperature and salinity by double-diffusion and by small-scale turbulence. These simulations reveal the tendency of double-diffusion to constrain diapycnal volume transport, both upward and downward. For comparable values of mixing coefficients, the average diapycnal velocity in the double-diffusive thermocline is much less than in the corresponding turbulent regime. The insulating effect of double-diffusion is rationalized using two theoretical models. The first argument is based on the assumed vertical advective-diffusive balance. The second theory uses the Rhines and Young technique to evaluate the net diapycnal transport across regions bounded by closed streamlines at a given density surface. The numerical simulations and associated analytical arguments in this study underscore fundamental differences between double-diffusive mixing and mechanically generated small-scale turbulence. When both double-diffusion and turbulence are taken into account, we find that the constraints on diapycnal velocity loosen (tighten) with the increase (decrease) of the fraction of the overall mixing attributed to turbulence. The range of diapycnal velocities that could be realized in doubly-diffusive fluids is determined by the variation in the heat/salt flux ratio. We hypothesize that the unique ability of double-diffusive mixing to actively control diapycnal volume transport may have significant ramifications for the structure and dynamics of thermohaline circulation in the ocean.
Fluids, Vol. 1, Issue 3, Article 25; ISSN 2311-5521; published 2016-08-25; doi: 10.3390/fluids1030025. Timour Radko, Erick Edwards.

IJGI, Vol. 5, Pages 151: Method for Determining Appropriate Clustering Criteria of Location-Sensing Data
http://www.mdpi.com/2220-9964/5/9/151
Large quantities of location-sensing data are generated from location-based social network services. These data are provided as point properties with location coordinates acquired from a global positioning system or Wi-Fi signal. To show the point data on multi-scale map services, the data should be represented by clusters following a grid-based clustering method, in which an appropriate grid size should be determined. Currently, there are no criteria for determining the proper grid size; this issue has been formulated as the modifiable areal unit problem. The method proposed in this paper applies a hexagonal grid to geotagged Twitter point data, considering the grid size in terms of both quantity and quality to minimize the limitations associated with the modifiable areal unit problem. Quantitatively, we reduced the original Twitter point data by an appropriate amount using Töpfer’s radical law. Qualitatively, we maintained the original distribution characteristics using Moran’s I. Finally, we determined the appropriate sizes of clusters for zoom levels 9–13 by analyzing the distribution of data on the graphs. Based on the visualized clustering results, we confirm that the original distribution pattern is effectively maintained using the proposed method.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 9, Article 151; ISSN 2220-9964; published 2016-08-25; doi: 10.3390/ijgi5090151. Youngmin Lee, Pil Kwon, Kiyun Yu, Woojin Park.

Future Internet, Vol. 8, Pages 42: Supporting Elderly People by Ad Hoc Generated Mobile Applications Based on Vocal Interaction
http://www.mdpi.com/1999-5903/8/3/42
Mobile devices can be exploited to enable people to interact with Internet of Things (IoT) services. The MicroApp Generator [1] is a service-composition tool for supporting the generation of mobile applications directly on the mobile device. The user interacts with the generated app by using traditional touch-based interaction. This kind of interaction is often not suitable for elderly people and people with special needs who cannot see or touch the screen. In this paper, we extend the MicroApp Generator with an interaction approach that enables a user to interact with the generated app using only his or her voice, which can be very useful in helping people with special needs live at home. To this aim, once the mobile app has been generated and executed, the system analyses and describes the user interface, listens to the user's speech and performs the associated actions. A preliminary analysis has been conducted to assess the user experience of the proposed approach on a sample of elderly users, using a questionnaire as the research instrument.
Future Internet, Vol. 8, Issue 3, Article 42; ISSN 1999-5903; published 2016-08-25; doi: 10.3390/fi8030042. Rita Francese, Michele Risi.

IJGI, Vol. 5, Pages 150: The Socio-Spatial Distribution of Leisure Venues: A Case Study of Karaoke Bars in Nanjing, China
http://www.mdpi.com/2220-9964/5/9/150
With the development of the service and cultural industries, urban leisure and entertainment services have become an important symbol of the city and a driving force of economic and social development. Karaoke, a typical form of urban entertainment, is immensely popular throughout China, and the number of karaoke bars is expected to keep growing in the future. However, little is known about their spatial distribution in urban space and their association with other location-specific factors. Based on geospatial entity data and business statistics data, we demonstrate a clustered pattern of 530 karaoke bars in Nanjing by means of point pattern analysis and cluster analysis in GIS. Furthermore, we identify the distribution of population, the transportation network, and commercial centers as the three determinants underlying the formation of this pattern.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 9, Article 150; ISSN 2220-9964; published 2016-08-25; doi: 10.3390/ijgi5090150. Can Cui, Jiechen Wang, Zhongjie Wu, Jianhua Ni, Tianlu Qian.

IJGI, Vol. 5, Pages 149: Defining Fitness-for-Use for Crowdsourced Points of Interest (POI)
http://www.mdpi.com/2220-9964/5/9/149
(1) Background: Due to the advent of Volunteered Geographic Information (VGI), large datasets of user-generated Points of Interest (POI) are now available. As with all VGI, however, there is uncertainty concerning data quality and fitness-for-use. Currently, the task of evaluating fitness-for-use of POI is left to the data user, with no guidance framework available. This research therefore proposes a generic approach for choosing appropriate measures for assessing the fitness-for-use of crowdsourced POI for different tasks. (2) Methods: POI are related to the higher-level concept of geo-atoms in order to identify and distinguish their two basic functions, geo-referencing and object-referencing. Then, for each of these functions, suitable measures of positional and thematic quality are developed based on existing quality indicators. (3) Results: Typical use cases of POI are evaluated with regard to their use of the two basic functions of POI and allocated appropriate measures of fitness-for-use. The general procedure is illustrated with a brief practical example. (4) Conclusion: This research addresses the issue of fitness-for-use of POI at a higher conceptual level by relating it to more fundamental notions of geographical information representation. The results are expected to assist users of crowdsourced POI datasets in determining an appropriate method to evaluate fitness-for-use.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 9, Communication 149; ISSN 2220-9964; published 2016-08-24; doi: 10.3390/ijgi5090149. David Jonietz, Alexander Zipf.

Fluids, Vol. 1, Pages 26: Stabilization of Isolated Vortices in a Rotating Stratified Fluid
http://www.mdpi.com/2311-5521/1/3/26
The key element of Geophysical Fluid Dynamics—reorganization of potential vorticity (PV) by nonlinear processes—is studied numerically for isolated vortices in a uniform environment. Many theoretical studies and laboratory experiments suggest that axisymmetric vortices with a Gaussian shape are not able to remain circular owing to the growth of small perturbations in the typical parameter range of abundant long-lived vortices. An example of vortex destabilization and the eventual formation of more intense self-propagating structures is presented using a 3D rotating stratified Boussinesq numerical model. The peak vorticity growth found during the stages of strong elongation and fragmentation is related to the transfer of available potential energy into kinetic energy of vortices. In order to develop a theoretical model of a stable circular vortex with a small Burger number compatible with observations, we suggest a simple stabilizing procedure involving the modification of peripheral PV gradients. The results have important implications for a better understanding of real-ocean eddies.
Fluids, Vol. 1, Issue 3, Article 26; ISSN 2311-5521; published 2016-08-24; doi: 10.3390/fluids1030026. Georgi Sutyrin, Timour Radko.

Economies, Vol. 4, Pages 18: Going Forward from B to A? Proposals for the Eurozone Crisis
http://www.mdpi.com/2227-7099/4/3/18
After reviewing the main determinants of the current Eurozone crisis, this paper discusses the feasibility of introducing fiscal currencies as a way to restore fiscal space in peripheral countries, such as Greece, which have so far adopted austerity measures in order to abide by their commitments to Eurozone institutions and the IMF. We show that the introduction of fiscal currencies would speed up the recovery without violating the rules of the Eurozone Treaties. At the same time, these processes could help the transition of the euro from its current status of single currency to a status of “common clearing currency”, along the lines proposed by Keynes at Bretton Woods as a system of international settlements. Eurozone countries could therefore move from a “Plan B”, aimed at addressing member state domestic problems, to a “Plan A” of a better European monetary system.
Economies, Vol. 4, Issue 3, Article 18; ISSN 2227-7099; published 2016-08-24; doi: 10.3390/economies4030018. Massimo Amato, Luca Fantacci, Dimitri Papadimitriou, Gennaro Zezza.

Symmetry, Vol. 8, Pages 83: Revisiting the Optical
http://www.mdpi.com/2073-8994/8/9/83
Optics has proved a fertile ground for the experimental simulation of quantum mechanics. Most recently, optical realizations of PT-symmetric quantum mechanics have been shown, both theoretically and experimentally, opening the door to international efforts aiming at the design of practical optical devices exploiting this symmetry. Here, we focus on the optical PT-symmetric dimer, a two-waveguide coupler where the materials show symmetric effective gain and loss, and provide a review of the linear and nonlinear optical realizations from a symmetry-based point of view. We go beyond a simple review of the literature and show that the dimer is just the smallest of a class of planar N-waveguide couplers that are the optical realization of the Lorentz group in 2 + 1 dimensions. Furthermore, we provide a formulation to describe light propagation through waveguide couplers described by non-Hermitian mode coupling matrices, based on a non-Hermitian generalization of the Ehrenfest theorem.
Symmetry, Vol. 8, Issue 9, Review 83; ISSN 2073-8994; published 2016-08-24; doi: 10.3390/sym8090083. José Huerta Morales, Julio Guerrero, Servando López-Aguayo, Blas Rodríguez-Lara.

Technologies, Vol. 4, Pages 25: Interfering Heralded Single Photons from Two Separate Silicon Nanowires Pumped at Different Wavelengths
http://www.mdpi.com/2227-7080/4/3/25
Practical quantum photonic applications require on-demand single photon sources. As one possible solution, active temporal and wavelength multiplexing has been proposed to build an on-demand single photon source. In this scheme, heralded single photons are generated from different pump wavelengths in many temporal modes. However, the indistinguishability of these heralded single photons has not yet been experimentally confirmed. In this work, we achieve 88% ± 8% Hong–Ou–Mandel quantum interference visibility from heralded single photons generated from two separate silicon nanowires pumped at different wavelengths. This demonstrates that active temporal and wavelength multiplexing could generate indistinguishable heralded single photons.
Technologies, Vol. 4, Issue 3, Article 25; ISSN 2227-7080; published 2016-08-24; doi: 10.3390/technologies4030025. Xiang Zhang, Runyu Jiang, Bryn Bell, Duk-Yong Choi, Change Chae, Chunle Xiong.

MCA, Vol. 21, Pages 37: Fuzzy Grey Prediction-Based Particle Filter for Object Tracking
http://www.mdpi.com/2297-8747/21/3/37
A particle filter is a powerful tool for object tracking based on sequential Monte Carlo methods under a Bayesian estimation framework. A major challenge for a particle filter in object tracking is how to allocate particles to a high-probability-density area. A standard particle filter does not take historical prior information into account when generating the proposal distribution and thus cannot approximate the posterior density well. Therefore, a new fuzzy grey prediction-based particle filter (called FuzzyGP-PF) for object tracking is proposed in this paper. First, a new prediction model based on fuzzy mathematics theory and grey system theory is established, termed the Fuzzy-Grey-Prediction (FGP) model. Then, the historical state sequence is utilized as prior information to predict and sample a portion of the particles for generating the proposal distribution in the particle filter. Simulations are conducted in the context of two typical maneuvering motion scenarios, and the results indicate that the proposed FuzzyGP-PF algorithm exhibits better overall performance in object tracking.
Mathematical and Computational Applications, Vol. 21, Issue 3, Article 37; ISSN 2297-8747; published 2016-08-23; doi: 10.3390/mca21030037. Lian Yang, Zhangping Lu.

Fluids, Vol. 1, Pages 24: Nonlinear Convection in a Partitioned Porous Layer
http://www.mdpi.com/2311-5521/1/3/24
Convection in a partitioned porous layer is considered, where a thin partition mechanically isolates the two identical sublayers from one another, but heat may nevertheless conduct freely. An unsteady solver that employs the multigrid method is used to determine steady-state strongly nonlinear convection for values of the Darcy–Rayleigh number up to eight times its critical value. The predictions of linear stability theory are confirmed, and the accuracy of the computations is carefully monitored and controlled. It is found that the wavenumber at which the maximum rate of heat transfer is attained at any chosen value of the Darcy–Rayleigh number, Ra, increases quite strongly, from roughly 2.33 at onset to 6.25 when Ra = 200. It is also found that convection generally cannot take place at wavenumbers close to the left-hand branch of the neutral stability curve, because nonlinear interactions favour modes selected from higher harmonics.
Fluids, Vol. 1, Issue 3, Article 24; ISSN 2311-5521; published 2016-08-23; doi: 10.3390/fluids1030024. D. Rees.

Algorithms, Vol. 9, Pages 57: Uniform Page Migration Problem in Euclidean Space
http://www.mdpi.com/1999-4893/9/3/57
The page migration problem in Euclidean space is revisited. In this problem, online requests occur at any location to access a single page located at a server. Every request must be served, and the server has the choice to migrate from its current location to a new location in space. Each service costs the Euclidean distance between the server and the request. A migration costs the distance between the former and the new server location, multiplied by the page size. We study the problem in the uniform model, in which the page has size D = 1. Request locations are not known in advance; they are presented sequentially in an online fashion. We design a 2.75-competitive online algorithm that improves the current best upper bound for the problem with unit page size. We also provide a lower bound of 2.732 for our algorithm. It was already known that 2.5 is a lower bound for this problem.Algorithms2016-08-2393Article10.3390/a9030057571999-48932016-08-23doi: 10.3390/a9030057Amanj KhorramianAkira Matsubayashi<![CDATA[IJGI, Vol. 5, Pages 147: Unmanned Aerial Vehicles in Geomatics]]>
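The cost model described above (service cost plus migration cost with unit page size D = 1) can be illustrated with a toy online strategy that migrates a fixed fraction of the way toward each request. The fraction `alpha` and the move-toward-request rule are assumptions for exposition only; they are not the paper's 2.75-competitive algorithm.

```python
import math

def serve_requests(requests, start=(0.0, 0.0), alpha=1/3):
    """Toy online page-migration strategy in the plane (page size D = 1).
    For each request: pay the service cost (Euclidean distance from the
    server to the request), then migrate a fraction alpha of the way
    toward the request, paying D times the migration distance.
    Returns (total_cost, final_server_position)."""
    sx, sy = start
    total = 0.0
    for rx, ry in requests:
        d = math.hypot(rx - sx, ry - sy)
        total += d                # service cost
        sx += alpha * (rx - sx)   # migrate alpha of the way toward the request
        sy += alpha * (ry - sy)
        total += alpha * d        # migration cost = D * distance moved, D = 1
    return total, (sx, sy)
```

For a single request at (3, 4) from the origin, the service cost is 5 and the migration cost is 5/3, so the total is 20/3; a competitive analysis would compare such totals against the optimal offline schedule.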
http://www.mdpi.com/2220-9964/5/8/147
Geomatics as a geospatial science, including technologies and processes, has experienced a boost in recent years with the development of Unmanned Aerial Vehicles (UAVs) equipped with sensing instruments [1].[...]ISPRS International Journal of Geo-Information2016-08-2258Editorial10.3390/ijgi50801471472220-99642016-08-22doi: 10.3390/ijgi5080147Gonzalo Martinsanz<![CDATA[J. Imaging, Vol. 2, Pages 23: Cross-Characterization for Imaging Parasitic Resistive Losses in Thin-Film Photovoltaic Modules]]>
http://www.mdpi.com/2313-433X/2/3/23
Thin-film photovoltaic (PV) modules often suffer from a variety of parasitic resistive losses in transparent conductive oxide (TCO) and absorber layers that significantly affect the module electrical performance. This paper presents a holistic investigation of resistive effects due to TCO lateral sheet resistance and shunts in amorphous-silicon (a-Si) thin-film PV modules by simultaneous use of three different imaging techniques, electroluminescence (EL), lock-in thermography (LIT) and light beam induced current (LBIC), under different operating conditions. Results from the individual techniques have been compared and analyzed for each particular type of loss channel, and a combination of these techniques has been used to obtain more detailed information for the identification and classification of these loss channels. The EL and LIT techniques imaged the TCO lateral resistive effects with different spatial sensitivity across the cell width. For quantification purposes, a distributed diode modeling and simulation approach has been exploited to estimate the TCO sheet resistance from the EL intensity pattern and the effect of cell width on module efficiency. For shunt investigation, LIT provided better localization of severe shunts, while EL and LBIC gave good localization of weak shunts formed by scratches. The impact of shunts on the photocurrent generation capability of individual cells has been assessed by the li-LBIC technique. Results show that cross-characterization by different imaging techniques provides additional information, which aids in identifying the nature and severity of loss channels with more certainty, along with their relative advantages and limitations in particular cases.Journal of Imaging2016-08-2223Article10.3390/jimaging2030023232313-433X2016-08-22doi: 10.3390/jimaging2030023Archana SinhaMartin BlissXiaofeng WuSubinoy RoyRalph GottschalgRajesh Gupta<![CDATA[Algorithms, Vol. 9, Pages 56: Multiple Artificial Neural Networks with Interaction Noise for Estimation of Spatial Categorical Variables]]>
http://www.mdpi.com/1999-4893/9/3/56
This paper presents a multiple artificial neural networks (MANN) method with interaction noise for estimating the occurrence probabilities of different classes at any site in space. The MANN consists of several independent artificial neural networks, the number of which is determined by the neighbors around the target location. In the proposed algorithm, the conditional or pre-posterior (multi-point) probabilities are viewed as output nodes, which can be estimated by weighted combinations of input nodes: two-point transition probabilities. The occurrence probability of a certain class at a certain location can be easily computed by the product of output probabilities using Bayes’ theorem. Spatial interaction or redundancy information can be measured in the form of interaction noises. Prediction results show that the method of MANN with interaction noise has a higher classification accuracy than the traditional Markov chain random fields (MCRF) model and can successfully preserve small-scale features.Algorithms2016-08-2093Article10.3390/a9030056561999-48932016-08-20doi: 10.3390/a9030056Xiang HuangZhizhong Wang<![CDATA[Computation, Vol. 4, Pages 32: Calculation of the Acoustic Spectrum of a Cylindrical Vortex in Viscous Heat-Conducting Gas Based on the Navier–Stokes Equations]]>
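The step in which per-neighbor output probabilities are combined into an occurrence probability via Bayes' theorem can be sketched as a naive-Bayes-style product rule. This is a minimal illustration, assuming conditional independence: the function name, the prior handling, and the omission of the interaction-noise weighting that the MANN model actually uses are all assumptions for exposition.

```python
def fuse_class_probabilities(neighbor_probs, prior):
    """Combine per-neighbor conditional class probabilities into one
    occurrence probability by a product rule: each network outputs
    P(class | neighbor), the products of likelihood ratios against the
    prior are formed, and the scores are renormalized over classes."""
    classes = list(prior)
    scores = {}
    for c in classes:
        s = prior[c]
        for probs in neighbor_probs:
            # Conditional independence assumption across neighbors.
            s *= probs[c] / prior[c]
        scores[c] = s
    total = sum(scores.values())
    return {c: scores[c] / total for c in classes}
```

With two neighbors both favoring class A (0.8 and 0.6 under a uniform prior), the fused probability of A rises to 6/7, showing how agreement between neighbors sharpens the estimate.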
http://www.mdpi.com/2079-3197/4/3/32
An extremely interesting problem in aero-hydrodynamics is the sound radiation of a single vortical structure. Currently, this type of problem is mainly considered for an incompressible medium. In this paper, a method is developed that takes into account the viscosity and thermal conductivity of the gas. The acoustic radiation frequency of a cylindrical vortex on a flat wall in viscous heat-conducting gas (air) has been investigated. The problem is solved on the basis of the Navier–Stokes equations using the small initial vorticity approach. A power-series expansion of the unknown functions in a small parameter (the vorticity) is used. It is shown that there are high-frequency oscillations modulated by a low-frequency signal. The value of the high frequency remains constant for a long period of time. Thus, the high frequency can be considered a natural frequency of the vortex radiation. The value of the natural frequency depends only on the initial radius of the cylindrical vortex, and does not depend on the intensity of the initial vorticity. As expected from physical considerations, the natural frequency decreases exponentially as the initial radius of the cylinder increases. Furthermore, the natural frequency differs from that of the oscillations inside the initial cylinder and in the outer domain. The results of the paper may be of interest for aeroacoustics and tornado modeling.Computation2016-08-2043Article10.3390/computation4030032322079-31972016-08-20doi: 10.3390/computation4030032Tatiana PetrovaFedor Shugaev<![CDATA[Symmetry, Vol. 8, Pages 82: Decoration of the Truncated Tetrahedron—An Archimedean Polyhedron—To Produce a New Class of Convex Equilateral Polyhedra with Tetrahedral Symmetry]]>
http://www.mdpi.com/2073-8994/8/8/82
The Goldberg construction of symmetric cages involves pasting a patch cut out of a regular tiling onto the faces of a Platonic host polyhedron, resulting in a cage with the same symmetry as the host. For example, cutting equilateral triangular patches from a 6.6.6 tiling of hexagons and pasting them onto the full triangular faces of an icosahedron produces icosahedral fullerene cages. Here we show that pasting cutouts from a 6.6.6 tiling onto the full hexagonal and triangular faces of an Archimedean host polyhedron, the truncated tetrahedron, produces two series of tetrahedral (Td) fullerene cages. Cages in the first series have 28n^2 vertices (n ≥ 1). Cages in the second (leapfrog) series have 3 × 28n^2 vertices. We can transform all of the cages of the first series and the smallest cage of the second series into geometrically convex equilateral polyhedra. With tetrahedral (Td) symmetry, these new polyhedra constitute a new class of “convex equilateral polyhedra with polyhedral symmetry”. We also show that none of the other Archimedean polyhedra, six with octahedral symmetry and six with icosahedral symmetry, can host full-face cutouts from regular tilings to produce cages with the host’s polyhedral symmetry.Symmetry2016-08-2088Article10.3390/sym8080082822073-89942016-08-20doi: 10.3390/sym8080082Stan ScheinAlexander YehKris CoolsaetJames Gayed<![CDATA[Symmetry, Vol. 8, Pages 81: Cosmological Reflection of Particle Symmetry]]>
http://www.mdpi.com/2073-8994/8/8/81
The standard model involves particle symmetry and the mechanism of its breaking. Modern cosmology is based on inflationary models with baryosynthesis and dark matter/energy, which involves physics beyond the standard model. Studies of the physical basis of modern cosmology combine direct searches for new physics at accelerators with its indirect non-accelerator probes, in which cosmological consequences of particle models play an important role. The cosmological reflection of particle symmetry and the mechanisms of its breaking are the subject of the present review.Symmetry2016-08-2088Review10.3390/sym8080081812073-89942016-08-20doi: 10.3390/sym8080081Maxim Khlopov<![CDATA[Technologies, Vol. 4, Pages 24: Electrically Injected Twin Photon Emitting Lasers at Room Temperature]]>
http://www.mdpi.com/2227-7080/4/3/24
On-chip generation, manipulation and detection of nonclassical states of light are some of the major issues for quantum information technologies. In this context, the maturity and versatility of semiconductor platforms are important assets towards the realization of ultra-compact devices. In this paper we present our work on the design and study of an electrically injected AlGaAs photon pair source working at room temperature. The device is characterized through its performances as a function of temperature and injected current. Finally we discuss the impact of the device’s properties on the generated quantum state. These results are very promising for the demonstration of electrically injected entangled photon sources at room temperature and let us envision the use of III-V semiconductors for a widespread diffusion of quantum communication technologies.Technologies2016-08-1843Article10.3390/technologies4030024242227-70802016-08-18doi: 10.3390/technologies4030024Claire AutebertGiorgio MalteseYacine HaliouaFabien BoitierAristide LemaîtreMaria AmantiCarlo SirtoriSara Ducci<![CDATA[Robotics, Vol. 5, Pages 18: Bio-Inspired Vision-Based Leader-Follower Formation Flying in the Presence of Delays]]>
http://www.mdpi.com/2218-6581/5/3/18
Flocking starlings at dusk are known for the mesmerizing and intricate shapes they generate, as well as for how fluidly these shapes change. They seem to do this effortlessly. Real-life vision-based flocking has not been achieved in micro-UAVs (micro Unmanned Aerial Vehicles) to date. Towards this goal, we make three contributions in this paper: (i) we use a computational approach to develop a bio-inspired architecture for vision-based Leader-Follower formation flying on two micro-UAVs. We believe that the minimal computational cost of the resulting algorithm makes it suitable for object detection and tracking during high-speed flocking; (ii) we show that, provided delays in the control loop of a micro-UAV are below a critical value, Kalman filter-based estimation algorithms are not required to achieve Leader-Follower formation flying; (iii) unlike previous approaches, we do not use external observers, such as GPS signals or synchronized communication with flock members. These three contributions could be useful in achieving vision-based flocking in GPS-denied environments on computationally-limited agents.Robotics2016-08-1853Article10.3390/robotics5030018182218-65812016-08-18doi: 10.3390/robotics5030018John Oyekan<![CDATA[Computation, Vol. 4, Pages 31: Computational Analysis of Natural Ventilation Flows in Geodesic Dome Building in Hot Climates]]>
http://www.mdpi.com/2079-3197/4/3/31
For centuries, dome roofs were used in traditional houses in hot regions such as the Middle East and the Mediterranean basin due to their thermal advantages, structural benefits and the availability of construction materials. This article presents the computational modelling of wind- and buoyancy-induced ventilation in a geodesic dome building in a hot climate. The airflow and temperature distributions and ventilation flow rates were predicted using Computational Fluid Dynamics (CFD). The three-dimensional Reynolds-Averaged Navier-Stokes (RANS) equations were solved using the CFD tool ANSYS FLUENT 15. The standard k-epsilon model was used as the turbulence model. The modelling was verified using grid sensitivity and flux balance analysis. In order to validate the modelling method used in the current study, an additional simulation of a similar domed-roof building was conducted for comparison. For wind-induced ventilation, the dome building was modelled with upper roof vents. For buoyancy-induced ventilation, the geometry was modelled with roof vents and also with two windows open at the lower level. The results showed that using the upper roof openings as a natural ventilation strategy during winter periods is advantageous and could reduce the indoor temperature and also introduce fresh air. The results also revealed that natural ventilation using roof vents cannot satisfy thermal requirements during hot summer periods, and complementary cooling solutions should be considered. The analysis showed that the buoyancy-induced ventilation model can still generate air movement inside the building during periods with no or very low wind.Computation2016-08-1743Article10.3390/computation4030031312079-31972016-08-17doi: 10.3390/computation4030031Zohreh SoleimaniJohn CalautitBen Hughes<![CDATA[JSAN, Vol. 5, Pages 13: An Experimental Comparison of Radio Transceiver and Transceiver-Free Localization Methods]]>
http://www.mdpi.com/2224-2708/5/3/13
This paper presents an experimental performance assessment for localization systems using received signal strength (RSS) measurements from a wireless sensor network. In this experimental study, we compare two types of model-based localization methods: transceiver-based localization, which locates objects using RSS from transmitters to receivers at known locations; and transceiver-free localization, which estimates location by using RSS changes on known-location nodes caused by objects. We evaluate their performance using three sets of experiments with different environmental conditions. Our performance analysis shows that transceiver-free localization methods are generally more accurate than transceiver-based localization methods for a wireless sensor network with high node density.Journal of Sensor and Actuator Networks2016-08-1753Article10.3390/jsan5030013132224-27082016-08-17doi: 10.3390/jsan5030013Yang ZhaoNeal Patwari<![CDATA[Algorithms, Vol. 9, Pages 55: A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis]]>
http://www.mdpi.com/1999-4893/9/3/55
With the rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost-effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though a variety of algorithms have been proposed to monitor wheelchair propulsion in court sports, they all have limitations. Through experimental testing, we have shown the Attitude and Heading Reference System (AHRS)-based algorithm to be a suitable and reliable candidate algorithm for estimating velocity, distance, and approximating trajectory. The proposed algorithm is computationally inexpensive, agnostic of wheel camber, not sensitive to sensor placement, and can be embedded for real-time implementations. The research was conducted under Griffith University Ethics approval (GU Ref No: 2016/294).Algorithms2016-08-1793Article10.3390/a9030055551999-48932016-08-17doi: 10.3390/a9030055Jonathan ShepherdTomohito WadaDavid RowlandsDaniel James<![CDATA[Symmetry, Vol. 8, Pages 80: Superconducting Gap Symmetry of LaFeP(O,F) Observed by Impurity Doping Effect]]>
http://www.mdpi.com/2073-8994/8/8/80
We have investigated Mn, Co and Ni substitution effects on polycrystalline samples of LaFePO0.95F0.05 by resistivity and magnetoresistance measurements. In LaFe1-xMxPO0.95F0.05 (M = Mn, Co and Ni), the superconducting transition temperature (Tc) decreases monotonically with increasing impurity doping level x. There is a clear difference in the Tc suppression rates among the Mn, Co and Ni doping cases, and the rate of decrease of Tc by Mn doping as a magnetic impurity is larger than those by the nonmagnetic impurities (Co/Ni). This result indicates that in LaFePO0.95F0.05, Tc is rapidly suppressed by the pair-breaking effect of magnetic impurities, and the pairing symmetry is a full-gapped s-wave. In the nonmagnetic impurity-doped systems, the residual resistivity in the normal state has nearly the same value when Tc becomes zero. The residual resistivity value is almost consistent with the universal value of sheet resistance for two-dimensional superconductors, suggesting that Tc is suppressed by electron localization in Co/Ni-doped LaFePO0.95F0.05.Symmetry2016-08-1788Article10.3390/sym8080080802073-89942016-08-17doi: 10.3390/sym8080080Shigeki MiyasakaSinnosuke SuzukiSetsuko Tajima<![CDATA[Econometrics, Vol. 4, Pages 35: Special Issues of Econometrics: Celebrated Econometricians]]>
http://www.mdpi.com/2225-1146/4/3/35
Econometrics is pleased to announce the commissioning of a new series of Special Issues dedicated to celebrated econometricians of our time.[...]Econometrics2016-08-1743Editorial10.3390/econometrics4030035352225-11462016-08-17doi: 10.3390/econometrics4030035 Editorial Office<![CDATA[Economies, Vol. 4, Pages 17: Why Migrate: For Study or for Work?]]>
http://www.mdpi.com/2227-7099/4/3/17
Over the past decades, globalization has led to a huge increase in the migration of workers, as well as students. This paper develops a simple two-step model that describes an individual’s decisions vis-à-vis education and migration, and presents a unified model wherein the two migration decisions are combined. This paper shows that, under the plausible assumption that the costs of migration differ over the human life cycle, the usual brain drain strategy is sub-optimal. With an increase in globalization, the brain drain strategy will be replaced by the strategy of migration of students.Economies2016-08-1743Article10.3390/economies4030017172227-70992016-08-17doi: 10.3390/economies4030017Elise Brezis<![CDATA[Axioms, Vol. 5, Pages 21: Is Kazimierz Ajdukiewicz’s Concept of a Real Definition Still Important?]]>
http://www.mdpi.com/2075-1680/5/3/21
The concept of a real definition worked out by Kazimierz Ajdukiewicz is still important in the theory of definition and can be developed by applying Hilary Putnam’s theory of reference of natural kind terms and Karl Popper’s fallibilism. On the one hand, the definiendum of a real definition refers to a natural kind of things and, on the other hand, the definiens of such a definition expresses actual, empirical, fallible knowledge which can be revised and changed.Axioms2016-08-1753Article10.3390/axioms5030021212075-16802016-08-17doi: 10.3390/axioms5030021Robert Kublikowski<![CDATA[Systems, Vol. 4, Pages 29: Model of the Russian Federation Construction Innovation System: An Integrated Participatory Systems Approach]]>
http://www.mdpi.com/2079-8954/4/3/29
This research integrates systemic and participatory techniques to model the Russian Federation construction innovation system. Understanding this complex construction innovation system and determining the best levers for enhancing it require the dynamic modelling of a number of factors, such as flows of resources and activities, policies, uncertainty and time. To build the foundations for such a dynamic model, the study employed an integrated stakeholder-based participatory approach coupled with structural analysis (MICMAC: Matrice d'Impacts Croisés Multiplication Appliquée à un Classement, i.e., Cross-Impact Matrix Multiplication Applied to Classification). This method identified the key factors of the Russian Federation construction innovation system, their causal relationships (i.e., an influence/dependence map) and, ultimately, a causal loop diagram. The generated model reveals pathways to improving construction innovation in the Russian Federation and underpins the future development of an operationalised system dynamics model.Systems2016-08-1643Article10.3390/systems4030029292079-89542016-08-16doi: 10.3390/systems4030029Emiliya SuprunOz SahinRodney StewartKriengsak Panuwatwanich<![CDATA[Robotics, Vol. 5, Pages 17: Estimation of Physical Human-Robot Interaction Using Cost-Effective Pneumatic Padding]]>
http://www.mdpi.com/2218-6581/5/3/17
The idea of using cost-effective pneumatic padding for sensing physical interaction between a user and wearable rehabilitation robots is not new, but until now there has not been any practically relevant realization. In this paper, we present a novel method to estimate physical human-robot interaction using pneumatic padding based on artificial neural networks (ANNs). This estimation can serve as a rough indicator of the forces/torques applied by the user and can be used for visual feedback about the user’s participation or as additional information for interaction controllers. Unlike common, mostly very expensive, 6-axis force/torque sensors (FTS), the proposed sensor system can be easily integrated into the design of physical human-robot interfaces of rehabilitation robots and adapts itself to the shape of the individual patient’s extremity by changing the pressure in pneumatic chambers, in order to provide safe physical interaction with high user comfort. This paper describes a concept of using ANNs for the estimation of interaction forces/torques based on pressure variations of eight customized air-pad chambers. The ANNs were trained once offline using signals from a high-precision FTS, which is also used as the reference sensor for experimental validation. Experiments with three different subjects confirm the functionality of the concept and the estimation algorithm.Robotics2016-08-1653Article10.3390/robotics5030017172218-65812016-08-16doi: 10.3390/robotics5030017André WilkeningNikolina PulevaOleg Ivlev<![CDATA[Econometrics, Vol. 4, Pages 34: Jump Variation Estimation with Noisy High Frequency Financial Data via Wavelets]]>
http://www.mdpi.com/2225-1146/4/3/34
This paper develops a method to improve the estimation of jump variation using high frequency data in the presence of market microstructure noise. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing and risk management. The method is a two-step procedure of detection and estimation. In Step 1, we detect the jump locations by performing a wavelet transformation on the observed noisy price processes. Since wavelet coefficients are significantly larger at the jump locations than elsewhere, we compare the wavelet coefficients against a threshold and declare jump points where the absolute wavelet coefficients exceed it. In Step 2, we estimate the jump variation by averaging the noisy price processes on each side of a declared jump point and taking the difference between the two averages. Specifically, for each jump location detected in Step 1, we compute two averages from the observed noisy price processes, one before the detected jump location and one after it, and then take their difference to estimate the jump variation. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to O_P(n^{-4/9}), which is better than the convergence rate O_P(n^{-1/4}) for the procedure based on the original noisy process, where n is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results using the original price process and the average realized volatility processes.Econometrics2016-08-1643Article10.3390/econometrics4030034342225-11462016-08-16doi: 10.3390/econometrics4030034Xin ZhangDonggyu KimYazhen Wang<![CDATA[Risks, Vol. 4, Pages 30: On the Capital Allocation Problem for a New Coherent Risk Measure in Collective Risk Theory]]>
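The detect-then-average procedure of the jump variation abstract above can be sketched in miniature. This is an illustrative assumption-laden version: a simple Haar-type moving-average statistic stands in for the paper's full wavelet transform, and the `window` and `threshold` values are made up for exposition.

```python
def detect_jumps(prices, window=5, threshold=0.5):
    """Two-step jump detection and estimation on a noisy price series:
    (1) at each point, compute the difference between the means of
    `window` prices on each side (a Haar-type statistic) and flag a
    jump where the gap exceeds `threshold` and is a local maximum;
    (2) estimate each jump's size as that difference of side averages.
    Returns a list of (index, estimated_jump_size) pairs."""
    n = len(prices)
    diffs = [0.0] * n
    for i in range(window, n - window + 1):
        left = sum(prices[i - window:i]) / window
        right = sum(prices[i:i + window]) / window
        diffs[i] = right - left        # side-average difference at i
    jumps = []
    for i in range(window, n - window + 1):
        d = diffs[i]
        # Keep only threshold exceedances that dominate their neighborhood,
        # so one jump is not reported several times.
        nbhd = range(max(0, i - window), min(n, i + window + 1))
        if abs(d) > threshold and abs(d) == max(abs(diffs[j]) for j in nbhd):
            jumps.append((i, d))
    return jumps
```

On a series that steps from 0 to 2 at index 10, the detector reports a single jump of size 2 at that index; with real noisy data the threshold would be calibrated to the noise level, as the paper does for its wavelet coefficients.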
http://www.mdpi.com/2227-9091/4/3/30
In this paper, we introduce a new coherent cumulative risk measure on a subclass of the space of càdlàg processes. This new coherent risk measure turns out to be tractable enough within a class of models where the aggregate claims process is driven by a spectrally positive Lévy process. We focus our motivation and discussion on the problem of capital allocation. Indeed, this risk measure is well-suited to address the problem of capital allocation in an insurance context. We show that the capital allocation problem for this risk measure has a unique solution determined by the Euler allocation method. Some examples and connections with existing results, as well as practical implications, are also discussed.Risks2016-08-1643Article10.3390/risks4030030302227-90912016-08-16doi: 10.3390/risks4030030Hirbod AssaManuel MoralesHassan Omidi Firouzi<![CDATA[Computation, Vol. 4, Pages 30: Electron Correlations in Local Effective Potential Theory]]>
http://www.mdpi.com/2079-3197/4/3/30
Local effective potential theory, both stationary-state and time-dependent, constitutes the mapping from a system of electrons in an external field to one of noninteracting fermions possessing the same basic variable, such as the density, thereby enabling the determination of the energy and other properties of the electronic system. This paper is a description, via Quantal Density Functional Theory (QDFT), of the electron correlations that must be accounted for in such a mapping. It is proved through QDFT that, independent of the form of the external field, (a) it is possible to map to a model system possessing all the basic variables; and that (b) with the requirement that the model fermions are subject to the same external fields, the only correlations that must be considered are those due to the Pauli exclusion principle, Coulomb repulsion, and Correlation–Kinetic effects. The cases of both a static and a time-dependent electromagnetic field, for which the basic variables are the density and the physical current density, are considered. The examples of solely an external electrostatic or time-dependent electric field constitute special cases. An efficacious unification in terms of electron correlations, independent of the type of external field, is thereby achieved. The mapping is explicated for the example of a quantum dot in a magnetostatic field, and for a quantum dot in a magnetostatic and time-dependent electric field.Computation2016-08-1643Article10.3390/computation4030030302079-31972016-08-16doi: 10.3390/computation4030030Viraht SahniXiao-Yin PanTao Yang<![CDATA[Technologies, Vol. 4, Pages 23: Handcrafted Electrocorticography Electrodes for a Rodent Behavioral Model]]>
http://www.mdpi.com/2227-7080/4/3/23
Electrocorticography (ECoG) is a minimally invasive neural recording method that has been extensively used for neuroscience applications. It has the potential to ease the establishment of proper links for neural interfaces that can offer disabled patients an alternative solution for their lost sensory and motor functions through the use of brain-computer interface (BCI) technology. Although many neural recording methods exist, ECoG provides a combination of stability and high spatial and temporal resolution with chronic and mobile capabilities that could make BCI systems accessible for daily applications. However, many ECoG electrodes require MEMS fabrication techniques, which entail expenses that are obstacles for research projects. For this reason, this paper presents an animal study using a low-cost and simple handcrafted ECoG electrode made of commercially accessible materials. The study was performed on a Lewis rat implanted with a handcrafted 32-channel non-penetrative ECoG electrode covering an area of 3 × 3 mm2 on the cortical surface. The ECoG electrodes were placed on the motor and somatosensory cortex to record the signal patterns while the animal was active on a treadmill. Using a Tucker-Davis Technologies acquisition system and the software Synapse to monitor and analyze the electrophysiological signals, the electrodes obtained signals within the amplitude range of 200 µV for local field potentials with reliable spatiotemporal profiles. It was also confirmed that the handcrafted ECoG electrode has the stability and chronic features found in other commercial electrodes.Technologies2016-08-1643Article10.3390/technologies4030023232227-70802016-08-16doi: 10.3390/technologies4030023Nishat TasnimAli AjamRaul RamosMukhesh KoripalliManisankar ChennamsettiYoonsu Choi<![CDATA[IJGI, Vol. 5, Pages 146: Can Hawaii Meet Its Renewable Fuel Target? Case Study of Banagrass-Based Cellulosic Ethanol]]>
http://www.mdpi.com/2220-9964/5/8/146
Banagrass is a biomass crop candidate for ethanol production in the State of Hawaii. This study examines: (i) whether enough banagrass can be produced to meet Hawaii’s renewable fuel target of 20% of highway fuel demand produced from renewable sources by 2020 and (ii) at what cost. This study proposes to locate suitable land areas for banagrass production and ethanol processing, focusing on the two largest islands in the state of Hawaii: Hawaii and Maui. The results suggest that the 20% target is not achievable using all suitable land resources for banagrass production on both Hawaii and Maui. A total of about 74,224,160 gallons, accounting for 16.04% of the state’s highway fuel demand, can potentially be produced at a cost of $6.28/gallon. A lower ethanol cost is found when using a smaller production scale. The lowest cost of $3.31/gallon is found at a production processing capacity of about 9 million gallons per year (MGY), which meets about 2% of state demand. This cost is still higher than the average imported ethanol price of $3/gallon. Sensitivity analysis finds that it is possible to produce banagrass-based ethanol on Hawaii Island at a cost below the average imported ethanol price if banagrass yield increases by at least 35.56%.ISPRS International Journal of Geo-Information2016-08-1658Article10.3390/ijgi50801461462220-99642016-08-16doi: 10.3390/ijgi5080146Chinh TranJohn Yanagida<![CDATA[Economies, Vol. 4, Pages 16: Convergence and Heterogeneity in Euro Based Economies: Stability and Dynamics]]>
http://www.mdpi.com/2227-7099/4/3/16
Cluster analysis is used to explore the performance of key macroeconomic variables in European countries that share the euro, from the inception of the currency in 2002 through to 2013. An original applied statistical approach searches for a pattern synthesis across a matrix of macroeconomic data to examine whether there is evidence for country clusters and whether the cluster patterns converge over time. A number of different clusters appear, and these change over time as the economies of the member states dynamically interact. This includes some new countries joining the currency during the period of examination. As found in previous research, Southern European countries tend to remain separate from other countries. The new methods used, however, add to an understanding of some differences between Southern European countries, in addition to replicating their broad similarities. Hypotheses are formed about the country clusters existing in 2002, 2006 and 2013, at key points in time of the euro integration process. These hypotheses are tested using the rigour of a bivariate analysis and the multivariate method of Qualitative Comparative Analysis (QCA). The results confirm the hypotheses of cluster memberships in all three periods. The confirmation analysis provides evidence about which variables most influence cluster memberships at each time point. In 2002 and 2006, differences between countries are influenced by their different Harmonised Index of Consumer Prices (HICP) and labour productivity scores. In 2013, after the crisis, there is a noticeable change. Long term interest rates and gross government debt become key determinants of differences, in addition to the continuing influence of labour productivity. The paper concludes that in the last decade the convergence of countries sharing the euro has been limited by the joining of new countries and the circumstances of the global economic crisis. 
The financial crisis has driven divergences from pre-existing integration. Country convergence needs to be understood as a dynamic and multivariate concept. This is a significant development of convergence theory and is an addition to how the concept has been understood previously.Economies2016-08-1643Article10.3390/economies4030016162227-70992016-08-16doi: 10.3390/economies4030016Philip HaynesJonathan Haynes<![CDATA[IJGI, Vol. 5, Pages 144: Methodology for Evaluating the Quality of Ecosystem Maps: A Case Study in the Andes]]>
http://www.mdpi.com/2220-9964/5/8/144
Uncertainty in thematic maps has been tested mainly in maps with discrete or fuzzy classifications based on spectral data. However, many ecosystem maps in tropical countries consist of discrete polygons containing information on various ecosystem properties such as vegetation cover, soil, climate, geomorphology and biodiversity. The combination of these properties into one class leads to error. We propose a probability-based sampling design with two domains, multiple stages, and stratification with selection of primary sampling units (PSUs) proportional to the richness of strata present. Validation is undertaken through field visits and fine resolution remote sensing data. A pilot site in the center of the Colombian Andes was chosen to validate an official government ecosystem map. Twenty primary sampling units (PSUs) of 10 × 15 km were selected, and the numbers of final sampling units (FSUs) were 76 for the terrestrial domain and 46 for the aquatic domain. At a 95% confidence level, the accuracy varied between 51.8% and 64.3% in the terrestrial domain and between 75% and 92% in the aquatic domain. Governments need to account for uncertainty since they rely on the quality of these maps to make decisions and guide policies.ISPRS International Journal of Geo-Information2016-08-1558Article10.3390/ijgi50801441442220-99642016-08-15doi: 10.3390/ijgi5080144Dolors ArmenterasTania GonzálezFrancisco LuqueDenis LópezNelly Rodríguez<![CDATA[Mathematics, Vol. 4, Pages 52: Role of Measurement Incompatibility and Uncertainty in Determining Nonlocality]]>
http://www.mdpi.com/2227-7390/4/3/52
It has been recently shown that measurement incompatibility and fine grained uncertainty—a particular form of preparation uncertainty relation—are deeply related to the nonlocal feature of quantum mechanics. In particular, the degree of measurement incompatibility in a no-signaling theory determines the bound on the violation of Bell-CHSH inequality, and a similar role is also played by (fine-grained) uncertainty along with steering, a subtle non-local phenomenon. We review these connections, along with comments on the difference in the roles played by measurement incompatibility and uncertainty. We also discuss why the toy model of Spekkens (Phys. Rev. A 75, 032110 (2007)) shows no nonlocal feature even though steering is present in this theory.Mathematics2016-08-1543Article10.3390/math4030052522227-73902016-08-15doi: 10.3390/math4030052Guruprasad KarSibasish GhoshSujit ChoudharyManik Banik<![CDATA[IJFS, Vol. 4, Pages 16: Capital Regulation and Bank Risk-Taking Behavior: Evidence from Pakistan]]>
http://www.mdpi.com/2227-7072/4/3/16
In response to the global financial crisis of 2007–2009, risk-based capital requirements have been reinforced in the new Basel III Accord to counter excessive bank risk-taking behavior. However, prior theoretical as well as empirical literature that studies the impact of risk-based capital requirements on bank risk-taking behavior is inconclusive. The primary purpose of this paper is to examine the impact of risk-based capital requirements on bank risk-taking behavior, using a panel dataset of 21 listed commercial banks of Pakistan over the period 2005–2012. Purely regulatory measures of bank capital (the capital adequacy ratio) and of bank asset portfolio risk (the risk-weighted assets to total assets ratio) are used for the main analysis. Recently developed small N panel methods (bias corrected least squares dummy variable (LSDVC) method and system GMM method with instruments collapse option) are used to control for panel fixed effects, dynamic dependent variables, and endogenous independent variables. Overall, the results suggest that commercial banks have reduced assets portfolio risk in response to stringent risk-based capital requirements. Results also confirm that banks with risk-based capital ratios either lower or higher than the required regulatory limits have decreased portfolio risk in response to stringent risk-based capital requirements. The results are robust to alternative proxies of bank risk-taking, alternative estimation methods, and alternative samples.International Journal of Financial Studies2016-08-1543Article10.3390/ijfs4030016162227-70722016-08-15doi: 10.3390/ijfs4030016Badar AshrafSidra ArshadYuancheng Hu<![CDATA[Informatics, Vol. 3, Pages 14: Advancing the Direction of Health Information Management in Greek Public Hospitals: Theoretical Directions and Methodological Implications for Sharing Information in order to Obtain Decision-Making]]>
http://www.mdpi.com/2227-9709/3/3/14
Although consultants have long placed the use of research information at the centre of their activity, the extent to which physicians use this information varies widely. Despite such studies and their recommendations, there is still a gap between the functions of a manager and the use of the associated information, while decision-making procedures vary according to the organization in which managers work. The cost of IT remains the largest barrier, while some current IT solutions are not user friendly or are out-of-date, particularly for public hospitals in Greece. Knowledge management is concerned not only with the facts and figures of production, but also with the know-how of staff. An information-needs protocol should cover not only formal computer-based information systems, but also informal information and its flow within the organization. In a field such as medicine, where out-of-date information may be positively dangerous, doctors make heavy use of journals and of texts from the web. The decision-making process is complex, particularly in human diagnostic and therapeutic applications. Therefore, it is very important to set priorities in the sector of health information management and to promote education and training on information and communication technology (ICT).Informatics2016-08-1533Article10.3390/informatics3030014142227-97092016-08-15doi: 10.3390/informatics3030014Evagelia Lappa<![CDATA[Symmetry, Vol. 8, Pages 79: Modeling Bottom-Up Visual Attention Using Dihedral Group D4]]>
http://www.mdpi.com/2073-8994/8/8/79
In this paper, first, we briefly describe the dihedral group D4 that serves as the basis for calculating saliency in our proposed model. Second, our saliency model makes two major changes to a recent state-of-the-art model known as group-based asymmetry. First, based on the properties of the dihedral group D4, we simplify the asymmetry calculations associated with the measurement of saliency. This results in an algorithm that reduces the number of calculations by at least half, which makes it the fastest among the six best algorithms used in this research article. Second, in order to maximize the information across different chromatic and multi-resolution features, the color image space is de-correlated. We evaluate our algorithm against 10 state-of-the-art saliency models. Our results show that by using optimal parameters for a given dataset, our proposed model can outperform the best saliency algorithm in the literature. However, as the differences among the (few) best saliency models are small, we would like to suggest that our proposed model is among the best and the fastest among the best. Finally, as a part of future work, we suggest that our proposed approach to saliency can be extended to include three-dimensional image data.Symmetry2016-08-1588Article10.3390/sym8080079792073-89942016-08-15doi: 10.3390/sym8080079Puneet Sharma<![CDATA[IJGI, Vol. 5, Pages 145: Continuous Road Network Generalization throughout All Scales]]>
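As a rough illustration of the group structure the saliency model above builds on, the following sketch enumerates the eight D4 symmetries of a square image patch with NumPy; the patch and function names are our own illustrative choices, not from the paper:

```python
import numpy as np

def d4_orbit(patch):
    """Apply all eight symmetries of the dihedral group D4 to a patch.

    The group consists of rotations by 0, 90, 180 and 270 degrees and
    their mirrored counterparts; group-based asymmetry measures compare
    a patch against such transformed versions of itself.
    """
    rotations = [np.rot90(patch, k) for k in range(4)]
    return rotations + [np.fliplr(r) for r in rotations]

patch = np.arange(9).reshape(3, 3)   # a patch with no self-symmetry
orbit = d4_orbit(patch)
```

For a patch with no self-symmetry, the eight transformed copies are all distinct, and the identity element returns the patch itself.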
http://www.mdpi.com/2220-9964/5/8/145
Until now, road network generalization has mainly been applied to the task of generalizing from one fixed source scale to another fixed target scale. These actions result in large differences in content and representation, e.g., a sudden change of the representation of road segments from areas to lines, which may confuse users. Therefore, we aim at the continuous generalization of a road network for the whole range, from the large scale, where roads are represented as areas, to mid- and small scales, where roads are represented progressively more frequently as lines. As a consequence of this process, there is an intermediate scale range where at the same time some roads will be represented as areas, while others will be represented as lines. We propose a new data model together with a specific data structure where for all map objects, a range of valid map scales is stored. This model is based on the integrated and explicit representation of: (1) a planar area partition; and (2) a linear road network. This enables the generalization process to include the knowledge and understanding of a linear network. This paper further discusses the actual generalization options and algorithms for populating this data structure with high quality vario-scale cartographic content.ISPRS International Journal of Geo-Information2016-08-1358Article10.3390/ijgi50801451452220-99642016-08-13doi: 10.3390/ijgi5080145Radan ŠubaMartijn MeijersPeter Oosterom<![CDATA[Algorithms, Vol. 9, Pages 54: Sign Function Based Sparse Adaptive Filtering Algorithms for Robust Channel Estimation under Non-Gaussian Noise Environments]]>
http://www.mdpi.com/1999-4893/9/3/54
Robust channel estimation is required for coherent demodulation in multipath fading wireless communication systems, which are often degraded by non-Gaussian noise. Our research is motivated by the fact that classical sparse least mean square error (LMS) algorithms are very sensitive to impulsive noise, while the standard sign-function-based LMS (SLMS) algorithm does not take into account the inherent sparsity of wireless channels. This paper proposes sign function based sparse adaptive filtering algorithms for developing robust channel estimation techniques. Specifically, SLMS algorithms are adopted to remove the non-Gaussian noise, which is described by a symmetric α-stable noise model. By exploiting channel sparsity, sparse SLMS algorithms are proposed by introducing several effective sparsity-promoting functions into the standard SLMS algorithm. The convergence analysis of the proposed sparse SLMS algorithms indicates that they outperform the standard SLMS algorithm for robust sparse channel estimation, which is also verified by simulation results.Algorithms2016-08-1293Article10.3390/a9030054541999-48932016-08-12doi: 10.3390/a9030054Tingping ZhangGuan Gui<![CDATA[Axioms, Vol. 5, Pages 20: Approach of Complexity in Nature: Entropic Nonuniqueness]]>
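The flavor of a sign-function-based sparse update can be sketched in a few lines. The sketch below is a generic zero-attracting sign-error LMS illustration with a made-up channel, step size, and heavy-tailed noise, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse channel: 16 taps, only 3 of them nonzero.
h_true = np.zeros(16)
h_true[[2, 7, 11]] = [1.0, -0.6, 0.3]

def sign_lms_step(w, x, d, mu=0.01, rho=1e-4):
    """One zero-attracting sign-error LMS update (illustrative only).

    Using sign(e) instead of e makes the update robust to impulsive
    (non-Gaussian) noise; the rho * sign(w) term attracts small taps
    toward zero, promoting sparsity.
    """
    e = d - w @ x                  # a priori estimation error
    w = w + mu * np.sign(e) * x    # sign-error LMS update
    w = w - rho * np.sign(w)       # zero-attracting (sparsity) term
    return w, e

w = np.zeros_like(h_true)
for _ in range(20000):
    x = rng.standard_normal(16)             # input regressor
    noise = 0.01 * rng.standard_t(df=1.5)   # heavy-tailed noise sample
    d = h_true @ x + noise                  # observed output
    w, _ = sign_lms_step(w, x, d)

mse = float(np.mean((w - h_true) ** 2))
```

Even though the Student-t noise here has infinite variance, the sign of the error bounds each update, so the estimate still converges to the sparse channel.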
http://www.mdpi.com/2075-1680/5/3/20
Boltzmann introduced in the 1870s a logarithmic measure for the connection between the thermodynamical entropy and the probabilities of the microscopic configurations of the system. His celebrated entropic functional for classical systems was then extended by Gibbs to the entire phase space of a many-body system and by von Neumann in order to cover quantum systems, as well. Finally, it was used by Shannon within the theory of information. The simplest expression of this functional corresponds to a discrete set of W microscopic possibilities and is given by $S_{BG} = -k \sum_{i=1}^{W} p_i \ln p_i$ (k is a positive universal constant; BG stands for Boltzmann–Gibbs). This relation enables the construction of BG statistical mechanics, which, together with the Maxwell equations and classical, quantum and relativistic mechanics, constitutes one of the pillars of contemporary physics. The BG theory has provided countless important applications in physics, chemistry, computational sciences, economics, biology, networks and others. As argued in the textbooks, its application in physical systems is legitimate whenever the hypothesis of ergodicity is satisfied, i.e., when ensemble and time averages coincide. However, what can we do when ergodicity and similar simple hypotheses are violated, which indeed happens in very many natural, artificial and social complex systems? The possibility of generalizing BG statistical mechanics through a family of non-additive entropies was advanced in 1988, namely $S_q = k \frac{1 - \sum_{i=1}^{W} p_i^q}{q - 1}$, which recovers the additive $S_{BG}$ entropy in the q → 1 limit. The index q is to be determined from mechanical first principles, corresponding to complexity universality classes. Over three decades, this idea has evolved intensively world-wide (see the Bibliography in http://tsallis.cat.cbpf.br/biblio.htm) and led to a plethora of predictions, verifications and applications in physical systems and elsewhere. 
As expected, whenever a paradigm shift is explored, some controversy naturally emerges in the community as well. The present status of the general picture is described here, starting from its dynamical and thermodynamical foundations and ending with its most recent physical applications.Axioms2016-08-1253Review10.3390/axioms5030020202075-16802016-08-12doi: 10.3390/axioms5030020Constantino Tsallis<![CDATA[Computation, Vol. 4, Pages 29: DiamondTorre Algorithm for High-Performance Wave Modeling]]>
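The two entropic functionals in the abstract above are easy to state in code. The sketch below (our own helper names) checks that the equiprobable case gives S_BG = k ln W and that S_q approaches S_BG as q → 1:

```python
import numpy as np

def entropy_bg(p, k=1.0):
    """Boltzmann–Gibbs–Shannon entropy S_BG = -k * sum_i p_i ln p_i."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 ln 0 is taken as 0
    return -k * np.sum(p * np.log(p))

def entropy_q(p, q, k=1.0):
    """Non-additive entropy S_q = k * (1 - sum_i p_i**q) / (q - 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return k * (1.0 - np.sum(p ** q)) / (q - 1.0)

W = 8
p_equal = np.full(W, 1.0 / W)        # W equiprobable microstates
```

For equal probabilities, entropy_bg returns k ln W, and entropy_q evaluated just above q = 1 agrees with it to within the limit's first-order term.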
http://www.mdpi.com/2079-3197/4/3/29
Effective algorithms for the numerical modeling of physical media are discussed. When implemented with traditional algorithms, the computation rate of such problems is limited by memory bandwidth. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU’s (general-purpose graphics processing unit) memory hierarchy and parallelism. The advantages of this algorithm are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.Computation2016-08-1243Article10.3390/computation4030029292079-31972016-08-12doi: 10.3390/computation4030029Vadim LevchenkoAnastasia PerepelkinaAndrey Zakirov<![CDATA[MCA, Vol. 21, Pages 36: On Generalized Double Statistical Convergence of Order α in Intuitionistic Fuzzy Normed Spaces]]>
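For reference, a minimal traditional (stepwise-synchronized) cross-stencil update for the 1D wave equation looks as follows; the DiamondTorre algorithm reorganizes the traversal of exactly this kind of scheme, which we do not attempt to reproduce here. The grid size and Courant number are our own choices; Courant number 1 makes the second-order scheme exact on the grid, which gives a clean correctness check:

```python
import numpy as np

# 1D wave equation u_tt = c^2 u_xx with a cross (leapfrog-in-time) stencil.
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
c, dt = 1.0, dx                              # Courant number C = c*dt/dx = 1
C2 = (c * dt / dx) ** 2

u0 = np.sin(np.pi * x)                       # initial displacement
u_prev = np.sin(np.pi * x) * np.cos(np.pi * (-dt))  # exact solution at t = -dt
u = u0.copy()

nsteps = round(2.0 / dt)                     # one full period (T = 2 here)
for _ in range(nsteps):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + C2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0             # fixed (Dirichlet) ends
    u_prev, u = u, u_next
```

After one full period the standing wave returns to its initial shape, confirming the stencil; in an actual GPGPU implementation it is the memory traversal of this loop, not the arithmetic, that becomes the bottleneck.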
http://www.mdpi.com/2297-8747/21/3/36
Our goal in this work is to introduce the notion of $[V, \lambda](I)_2$-summability and ideal λ-double statistical convergence of order α with respect to the intuitionistic fuzzy norm $(\mu, v)$. We also make some observations about these spaces and prove some inclusion relations.Mathematical and Computational Applications2016-08-10213Article10.3390/mca21030036362297-87472016-08-10doi: 10.3390/mca21030036Ekrem Savaş<![CDATA[Future Internet, Vol. 8, Pages 41: Coproduction as an Approach to Technology-Mediated Citizen Participation in Emergency Management]]>
http://www.mdpi.com/1999-5903/8/3/41
Social and mobile computing open up new possibilities for integrating citizens’ information, knowledge, and social capital in emergency management (EM). This participation can improve the capacity of local agencies to respond to unexpected events by involving citizens not only as first line informants, but also as first responders. This participation could contribute to building resilient communities that are aware of the risks that threaten them and able to mobilize their social capital to cope with them and, in turn, decrease the impact of threats and hazards. However, for this participation to be possible, organizations in charge of EM need to realize that involving citizens does not interfere with their protocols and that citizens are a valuable asset that can contribute to the EM process with specific skills and capabilities. In this paper, we discuss the design challenges of using social and mobile computing to move to a more participatory EM process that starts by empowering both citizens and organizations in a coproduction service envisioned as a partnership effort. As an example, we describe a case study of a participatory design approach that involved professional EM workers and decision makers in an effort to understand the challenges of using technology-based solutions to integrate citizen skills and capabilities in their operation protocols. The case study made it possible to identify specific roles that citizens might play in a crisis or disaster and to envision scenarios where technologies could be used to integrate their skills into the EM process. In this way, the paper contributes roles and scenarios to theory-building about coproduction in EM services.Future Internet2016-08-1083Article10.3390/fi8030041411999-59032016-08-10doi: 10.3390/fi8030041Paloma DíazJohn CarrollIgnacio Aedo<![CDATA[Algorithms, Vol. 9, Pages 52: Control for Ship Course-Keeping Using Optimized Support Vector Machines]]>
http://www.mdpi.com/1999-4893/9/3/52
Support vector machines (SVM) are proposed in order to obtain a robust controller for ship course-keeping. A cascaded system is constructed by combining the dynamics of the rudder actuator with the dynamics of ship motion. Modeling errors and disturbances are taken into account in the plant. A controller with a simple structure is produced by applying an SVM and L2-gain design. The SVM is used to identify the complicated nonlinear functions and the modeling errors in the plant. The Lagrangian factors in the SVM are obtained using on-line tuning algorithms. L2-gain design is applied to suppress the disturbances. To obtain the optimal parameters in the SVM, the particle swarm optimization (PSO) method is incorporated. The stability and robustness of the closed-loop system are confirmed by Lyapunov stability analysis. Numerical simulation is performed to demonstrate the validity of the proposed hybrid controller and its superior performance over a conventional PD controller.Algorithms2016-08-1093Article10.3390/a9030052521999-48932016-08-10doi: 10.3390/a9030052Weilin LuoHongchao Cong<![CDATA[Electronics, Vol. 5, Pages 48: A Comparative Review of Footwear-Based Wearable Systems]]>
http://www.mdpi.com/2079-9292/5/3/48
Footwear is an integral part of daily life. Embedding sensors and electronics in footwear for a variety of applications started more than two decades ago. This review article summarizes the developments in the field of footwear-based wearable sensors and systems. The electronics, sensing technologies, data transmission, and data processing methodologies of such wearable systems are all principally dependent on the target application. Hence, the article describes key application scenarios utilizing footwear-based systems with critical discussion on their merits. The reviewed application scenarios include gait monitoring, plantar pressure measurement, posture and activity classification, body weight and energy expenditure estimation, biofeedback, navigation, and fall risk applications. In addition, energy harvesting from the footwear is also considered for review. The article also attempts to shed light on some of the most recent developments in the field along with the future work required to advance it.Electronics2016-08-1053Review10.3390/electronics5030048482079-92922016-08-10doi: 10.3390/electronics5030048Nagaraj HegdeMatthew BriesEdward Sazonov<![CDATA[IJGI, Vol. 5, Pages 141: Hypergraph+: An Improved Hypergraph-Based Task-Scheduling Algorithm for Massive Spatial Data Processing on Master-Slave Platforms]]>
http://www.mdpi.com/2220-9964/5/8/141
Spatial data processing often requires massive datasets, and the task/data scheduling efficiency of these applications has an impact on the overall processing performance. Among the existing scheduling strategies, hypergraph-based algorithms capture the data sharing pattern in a global way and significantly reduce total communication volume. On heterogeneous processing platforms, however, a single hypergraph partitioning for later scheduling may not be optimal. Moreover, these scheduling algorithms neglect the overlap between task execution and data transfer that could further decrease execution time. In order to address these problems, an extended hypergraph-based task-scheduling algorithm, named Hypergraph+, is proposed for massive spatial data processing. Hypergraph+ improves upon current hypergraph scheduling algorithms in two ways: (1) It takes platform heterogeneity into consideration offering a metric function to evaluate the partitioning quality in order to derive the best task/file schedule; and (2) It can maximize the overlap between communication and computation. The GridSim toolkit was used to evaluate Hypergraph+ in an IDW spatial interpolation application on heterogeneous master-slave platforms. Experiments illustrate that the proposed Hypergraph+ algorithm achieves on average a 43% smaller makespan than the original hypergraph scheduling algorithm but still preserves high scheduling efficiency.ISPRS International Journal of Geo-Information2016-08-1058Article10.3390/ijgi50801411412220-99642016-08-10doi: 10.3390/ijgi5080141Bo ChengXuefeng GuanHuayi WuRui Li<![CDATA[Symmetry, Vol. 8, Pages 78: Automatic Frequency Identification under Sample Loss in Sinusoidal Pulse Width Modulation Signals Using an Iterative Autocorrelation Algorithm]]>
http://www.mdpi.com/2073-8994/8/8/78
In this work, we present a simple algorithm to automatically calculate the Fourier spectrum of a Sinusoidal Pulse Width Modulation Signal (SPWM). Modulated voltage signals of this kind are used in industry by speed drives to vary the speed of alternating current motors while maintaining a smooth torque. Nevertheless, the SPWM technique produces undesired harmonics, which yield stator heating and power losses. By monitoring these signals without human interaction, it is possible to identify the harmonic content of SPWM signals in a fast and continuous manner. The algorithm is based on the autocorrelation function, commonly used in radar and voice signal processing. Taking advantage of the symmetry properties of the autocorrelation, the algorithm is capable of estimating half of the period of the fundamental frequency, thus allowing one to estimate the necessary number of samples to produce an accurate Fourier spectrum. To deal with the loss of samples, i.e., the scan backlog, the algorithm iteratively acquires and trims the discrete sequence of samples until the required number of samples reaches a stable value. The simulation shows that the algorithm is not affected by either the magnitude of the switching pulses or the acquisition noise.Symmetry2016-08-1088Article10.3390/sym8080078782073-89942016-08-10doi: 10.3390/sym8080078Alejandro SaidYasser DavizónPiero Espino-RománRoberto Rodríguez-SaidCarlos Hernández-Santos<![CDATA[MCA, Vol. 21, Pages 33: The Cubic α-Catmull-Rom Spline]]>
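The half-period idea can be sketched in a few lines: for a zero-mean periodic signal, the autocorrelation is most negative near half a period, and its even symmetry lets one double that lag to recover the full period. The signal below is a plain square carrier standing in for an SPWM waveform, and the sampling parameters are illustrative, not from the paper:

```python
import numpy as np

fs = 10_000.0        # sampling rate in Hz (illustrative choice)
f0 = 50.0            # fundamental frequency in Hz (illustrative choice)
n = np.arange(4096)
# The small phase offset keeps samples away from exact zero crossings.
sig = np.sign(np.sin(2 * np.pi * f0 * n / fs + 0.1))

def estimate_period(x, fs):
    """Estimate the fundamental period via the autocorrelation minimum.

    For a zero-mean periodic signal the autocorrelation dips to its most
    negative value near half a period; by its even symmetry, doubling
    that lag yields the full period.
    """
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    half_period = int(np.argmin(r))                   # ~ T/2 in samples
    return 2 * half_period / fs

T_est = estimate_period(sig, fs)
```

With these parameters the estimate lands on the true period 1/f0 = 0.02 s; in the paper's setting the estimated period is then used to choose the number of samples fed to the Fourier transform.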
http://www.mdpi.com/2297-8747/21/3/33
By extending the definition interval of the standard cubic Catmull-Rom spline basis functions from [0,1] to [0,α], a class of cubic Catmull-Rom spline basis functions with a shape parameter α, named cubic α-Catmull-Rom spline basis functions, is constructed. Then, the corresponding cubic α-Catmull-Rom spline curves are generated based on the introduced basis functions. The cubic α-Catmull-Rom spline curves not only have the same properties as the standard cubic Catmull-Rom spline curves, but also can be adjusted by altering the value of the shape parameter α even if the control points are fixed. Furthermore, the cubic α-Catmull-Rom spline interpolation function is discussed, and a method for determining the optimal interpolation function is presented.Mathematical and Computational Applications2016-08-09213Article10.3390/mca21030033332297-87472016-08-09doi: 10.3390/mca21030033Juncheng LiSheng Chen<![CDATA[Future Internet, Vol. 8, Pages 40: Sensor Observation Service API for Providing Gridded Climate Data to Agricultural Applications]]>
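For orientation, the standard cubic Catmull-Rom segment, which is the α = 1 case of the construction above (the generalized basis rescales the parameter interval from [0, 1] to [0, α]), can be evaluated as follows with illustrative control points:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Standard cubic Catmull-Rom segment between p1 and p2, t in [0, 1].

    This is the alpha = 1 case; the paper's alpha-basis introduces a
    shape parameter by rescaling the parameter interval to [0, alpha].
    """
    t = np.asarray(t, dtype=float)[..., None]
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Four illustrative 2D control points; the segment interpolates p1 and p2.
pts = [np.array(p, dtype=float) for p in [(0, 0), (1, 2), (3, 3), (4, 0)]]
```

The defining interpolation property holds at the segment ends: t = 0 returns p1 and t = 1 returns p2, which is what the α-generalization preserves while adding adjustability.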
http://www.mdpi.com/1999-5903/8/3/40
We developed a mechanism for seamlessly providing weather data and long-term historical climate data from a gridded data source through an international standard web API, namely the Sensor Observation Service (SOS) defined by the Open Geospatial Consortium (OGC). The National Agriculture and Food Research Organization (NARO) Japan has been providing gridded climate data consisting of nine daily meteorological variables (average, minimum and maximum air temperature, relative humidity, sunshine duration, solar radiant exposure, downward longwave radiation, precipitation and wind speed) for 35 years covering Japan. The gridded data structure is quite useful for spatial analysis, such as developing crop suitability maps and monitoring regional crop development. Individual farmers, however, make decisions using historical climate information and forecasts for the incoming cropping season of their farms. In this regard, climate data in a point-based structure are convenient for application development to support farmers’ decisions. Through the mechanism proposed in this paper, agricultural applications and analyses can request point-based climate data from a gridded data source through the standard API with no need to deal with the complicated hierarchical data structure of the gridded climate data source. Clients can easily obtain data and metadata by only accessing the service endpoint. The mechanism also provides several web bindings and data encodings for the clients’ convenience. A caching mechanism, including pre-caching, was developed and evaluated to secure an effective response time. The mechanism enhances the accessibility and usability of the gridded weather data source, as well as the SOS API for agricultural applications.Future Internet2016-08-0983Article10.3390/fi8030040401999-59032016-08-09doi: 10.3390/fi8030040Rassarin ChinnachodteeranunKiyoshi Honda<![CDATA[IJGI, Vol. 
5, Pages 143: A Novel Simplified Algorithm for Bare Surface Soil Moisture Retrieval Using L-Band Radiometer]]>
http://www.mdpi.com/2220-9964/5/8/143
Soil moisture plays an important role in understanding climate change and hydrology, and L-band passive microwave radiometers have been verified as effective tools for monitoring soil moisture. This paper proposes a novel, simplified algorithm for bare surface soil moisture retrieval using an L-band radiometer. The algorithm consists of two sub-algorithms: a surface emission model and a soil moisture retrieval model. In analyses of the advanced integral equation model (AIEM) simulated database, the surface emission model was developed to diminish the effects of surface roughness using dual-polarization surface reflectivity. The soil moisture retrieval model, which was calibrated using the Dobson simulated database, is based on the relationship between the adjusted real refractive index N_r and the volumetric soil moisture. Soil moisture can be determined via a numerical solution that uses several freely available input parameters: dual-polarization microwave brightness temperature, surface temperature, and the contents of sand and clay. The results showed good agreement with the input soil moisture values simulated by the AIEM model, with root mean square errors (RMSEs) lower than 3% at all incidence angles. The algorithm was then verified based on data from the four-year L-band experiments conducted at Beltsville Agricultural Research Center (BARC) test sites, achieving RMSEs of 4.3% and 3.4% at 40° and 50°, respectively. These results indicate that the simplified algorithm proposed in this paper achieves very good accuracy in soil moisture retrieval. Additionally, the algorithm performs better for large incidence angle L-band radiometers such as the one carried by the Soil Moisture Active Passive (SMAP) mission.ISPRS International Journal of Geo-Information2016-08-0958Article10.3390/ijgi50801431432220-99642016-08-09doi: 10.3390/ijgi5080143Bin ZhuXiaoning SongPei LengChuan SunRuixin WangXiaoguang Jiang<![CDATA[IJGI, Vol. 
5, Pages 135: A Novel Absolute Orientation Method Using Local Similarities Representation]]>
http://www.mdpi.com/2220-9964/5/8/135
Absolute orientation is an important method in the field of photogrammetry. The technique is used to transform points between a local coordinate reference system and a global (geodetic) reference system. The classical transformation method uses a single set of similarity transformation parameters. However, the root mean square error (RMSE) of the classical method is large, especially for large-scale aerial photogrammetry analyses in which the points used are triangulated through free-net bundle adjustment. To improve the transformation accuracy, this study proposes a novel absolute orientation method in which the transformation uses various sets of local similarities. A Triangular Irregular Network (TIN) model is applied to divide the Ground Control Points (GCPs) into numerous triangles. Local similarities can then be computed using the three vertices of each triangle. These local similarities are combined to formulate the new transformation based on a weighting function. Both simulated and real data sets were used to assess the accuracy of the proposed method. The proposed method yields significantly improved plane and z-direction transformed point accuracies compared with the classical method. On a real data set with a mapping scale of 1:30,000 for a 53 km × 35 km study area, the plane and z RMSEs can be reduced from 1.2 m and 12.4 m to 0.4 m and 3.2 m, respectively.ISPRS International Journal of Geo-Information2016-08-0958Article10.3390/ijgi50801351352220-99642016-08-09doi: 10.3390/ijgi5080135Lei YanJie WanYanbiao SunShiyue FanYizhen YanRui Chen<![CDATA[IJGI, Vol. 5, Pages 142: A Workflow for Automatic Quantification of Structure and Dynamic of the German Building Stock Using Official Spatial Data]]>
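The classical single-similarity baseline that the TIN-based method above improves upon can be sketched with an SVD-based (Umeyama/Horn style) least-squares fit; the synthetic 2D data and function name below are illustrative, not from the paper:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform: dst_i ~ s * R @ src_i + t.

    Classical single-transformation absolute orientation (Umeyama/Horn
    style); the paper refines this baseline by blending many local
    similarities derived from a TIN over the ground control points.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [float(d)])
    R = U @ D @ Vt                             # optimal rotation
    s = float(np.trace(np.diag(S) @ D) / np.sum(A ** 2))  # optimal scale
    t = mu_d - s * R @ mu_s                    # optimal translation
    return s, R, t

# Synthetic check: recover a known rotation, scale, and translation.
rng = np.random.default_rng(1)
src = rng.random((10, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 2.5 * src @ R_true.T + np.array([4.0, -1.0])
s_hat, R_hat, t_hat = fit_similarity(src, dst)
```

On noise-free correspondences the fit recovers the transform exactly; the paper's contribution is precisely that one global fit of this kind leaves large residuals on real aerial blocks, motivating the locally blended similarities.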
http://www.mdpi.com/2220-9964/5/8/142
Knowledge of the German building stock is largely based on census data and annual construction statistics. Despite the wide range of statistical data, these data are constrained in terms of temporal, thematic and spatial resolution, and hence do not satisfy all requirements of spatial planning and research. In this paper, we describe a new workflow for data integration that allows the quantification of the structure and dynamic of national building stocks by analyzing authoritative geodata. The proposed workflow has been developed, tested and demonstrated exemplarily for the whole country of Germany. We use nationwide and commonly available authoritative geodata products such as building footprint and address data derived from the real estate cadaster and land use information from the digital landscape model. The processing steps are (1) data preprocessing; (2) the calculation of building attributes; (3) semantic enrichment of the buildings using a classification tree; (4) the intersection with spatial units; and finally (5) the quantification and cartographic visualization of the building structure and dynamic. Applying the workflow to German authoritative geodata, it was possible to describe the entire building stock by 48 million polygons at different scale levels. Approximately one third of the total building stock consists of outbuildings. The methodological approach reveals that 62% of residential buildings are detached, 80% semi-detached and 20% terraced houses. The approach and the novel database will be very valuable for urban and energy modeling, material flow analysis, risk assessment and facility management.ISPRS International Journal of Geo-Information2016-08-0958Article10.3390/ijgi50801421422220-99642016-08-09doi: 10.3390/ijgi5080142André HartmannGotthard MeinelRobert HechtMartin Behnisch<![CDATA[IJGI, Vol. 5, Pages 140: Design and Implementation of a Robust Decision Support System for Marine Space Resource Utilization]]>
http://www.mdpi.com/2220-9964/5/8/140
Increasing coastal space resource utilization (CSRU) activities and their impact on coastal environments have been recognized as a critical coastal zone stressor. Consequently, the need for sustainable and valid CSRU management has been highlighted. In this study, a highly-intelligent prototype decision-aided system for CSRU was developed. In contrast with existing coastal decision-aided systems, this system is aimed at the management of CSRU, providing reliable and dynamic numerical simulation, analysis, and aided decision making for real coastal engineering based on a self-developed fully automatic numerical program. It was established on a multi-tier distributed architecture based on Java EE. The most efficient strategies for spatial data organization, automatic coastal numerical programs, and impact assessment modules are demonstrated. In addition, its integrated construction involving the addition of a new coastal project on the webpage, its one-click numerical prediction of coastal environmental impacts, assessments based on numerical results, and its aided decision-making capabilities are addressed. The system was applied to the Ningbo Sea, China, establishing the Ningbo CSRU Decision Support System. Two projects were demonstrated: one reclamation project and one land-based outlet planning case. Results indicated that these projects had detrimental effects on local coastal environments. Therefore, approval of these projects was not recommended.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 8, Article 140, ISSN 2220-9964, published 2016-08-08, doi: 10.3390/ijgi5080140. Authors: Jing Xie, Shuxiu Liang, Zhaochen Sun, Jiang Chang, Jianwen Sun.
<![CDATA[Information, Vol. 7, Pages 50: Smart Homes and Sensors for Surveillance and Preventive Education at Home: Example of Obesity]]>
http://www.mdpi.com/2078-2489/7/3/50
(1) Background: The aim of this paper is to show that e-health tools like smart homes allow the personalization of the surveillance and preventive education of chronic patients, such as obese persons, in order to maintain a comfortable and preventive lifestyle at home. (2) Technologies and methods: Several types of sensors allow the patient to be coached at home, e.g., sensors that record activity and monitor the person's physiology. All of this information serves to personalize serious games dedicated to preventive education, for example in nutrition and vision. (3) Results: We built a system of personalized preventive education at home based on serious games, driven by the feedback information provided through a monitoring system. It is therefore possible to define, after clustering and personalized calibration, comfort zones from the at-home surveillance of chronic patients in which their behavior can be estimated as normal or abnormal, and then to adapt both the alarm levels for surveillance and the education programs for prevention; the chosen example of application is obesity.
Information, Vol. 7, Issue 3, Article 50, ISSN 2078-2489, published 2016-08-08, doi: 10.3390/info7030050. Authors: Jacques Demongeot, Adrien Elena, Mariem Jelassi, Slimane Ben Miled, Narjès Bellamine Ben Saoud, Carla Taramasco.
<![CDATA[Mathematics, Vol. 4, Pages 50: Complete Classification of Cylindrically Symmetric Static Spacetimes and the Corresponding Conservation Laws]]>
http://www.mdpi.com/2227-7390/4/3/50
In this paper, we find the Noether symmetries of the Lagrangian of cylindrically symmetric static spacetimes. Using this approach, we recover all cylindrically symmetric static spacetimes that appeared in the classification by isometries and homotheties. We give different classes of cylindrically symmetric static spacetimes along with the Noether symmetries of the corresponding Lagrangians and conservation laws.
Mathematics, Vol. 4, Issue 3, Article 50, ISSN 2227-7390, published 2016-08-08, doi: 10.3390/math4030050. Authors: Farhad Ali, Tooba Feroze.
<![CDATA[IJGI, Vol. 5, Pages 139: Spatiotemporal Modeling of Urban Growth Predictions Based on Driving Force Factors in Five Saudi Arabian Cities]]>
http://www.mdpi.com/2220-9964/5/8/139
This paper investigates the effect of four driving forces, including elevation, slope, distance to drainage and distance to major roads, on urban expansion in five Saudi Arabian cities: Riyadh, Jeddah, Makkah, Al-Taif and Eastern Area. The prediction of urban probabilities in the selected cities based on the four driving forces is generated using a logistic regression model for two time periods of urban change in 1985 and 2014. The model was validated using two approaches. The first approach was a quantitative analysis using the Relative Operating Characteristic (ROC) method. The second approach was a qualitative analysis in which the probable urban growth maps based on urban changes in 1985 are used to test the performance of the model in predicting probable urban growth after 2014, by comparing the probable maps of 1985 with the actual urban growth of 2014. The results indicate that the prediction model of 2014 provides a reliable and consistent prediction based on the performance of 1985. The analysis of driving forces shows variable effects over time. Variables such as elevation, slope and road distance had significant effects on the selected cities. However, distance to major roads was the factor with the most impact in determining the urban form in all five cities in both 1985 and 2014.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 8, Article 139, ISSN 2220-9964, published 2016-08-08, doi: 10.3390/ijgi5080139. Authors: Abdullah Alqurashi, Lalit Kumar, Khalid Al-Ghamdi.
<![CDATA[IJGI, Vol. 5, Pages 138: Occlusion-Free Visualization of Important Geographic Features in 3D Urban Environments]]>
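A minimal sketch of fitting urban-change probabilities with logistic regression is shown below; the single feature, the synthetic data, and the plain gradient-descent fit are illustrative assumptions, not the authors' model setup or data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Batch gradient descent for logistic regression; X rows are feature vectors."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the linear score
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / len(X) for wj, gwj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict_prob(w, b, xi):
    """Probability that a cell converts to urban, given its driving-force features."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
```

In the paper's setting the feature vector would hold the four driving forces (elevation, slope, distance to drainage, distance to major roads) per cell, and the fitted probabilities would form the probable urban growth maps.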
http://www.mdpi.com/2220-9964/5/8/138
Modern cities are dense with very tall buildings, which often leads to features of interest (FOIs, e.g., relevant roads and associated landmarks) being occluded by clusters of buildings. Thus, from any given point of view, users can see only a small area of the city. However, maintaining the visibility of FOIs while preserving urban shapes and the spatial relationships between features remains an important technical problem. In this paper, we present a novel automatic visualization method to generate occlusion-free views for FOIs in real time. Our method integrates three effective cartographic schemes, route broadening, building displacement, and building scaling, using an optimization framework. A series of distortion energies is presented to preserve the urban resemblance, considering the view position and the urban features based on spatial cognition to maintain spatial and temporal coherence. Our approach can be used to visualize large urban environments at interactive frame rates, in which the visibility of the occluded FOIs is maximized while the deformation of the landscape's shape is minimized. Using this approach, the visual readability of such 3D urban maps can be much improved.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 8, Article 138, ISSN 2220-9964, published 2016-08-08, doi: 10.3390/ijgi5080138. Authors: Liang Zhang, Liqiang Zhang, Xiang Xu.
<![CDATA[Mathematics, Vol. 4, Pages 51: A New Approach to Study Fixed Point of Multivalued Mappings in Modular Metric Spaces and Applications]]>
http://www.mdpi.com/2227-7390/4/3/51
The purpose of this paper is to present a new approach to the study of the existence of fixed points for multivalued F-contractions in the setting of modular metric spaces. In establishing this connection, we introduce the notion of a multivalued F-contraction and prove corresponding fixed point theorems in complete modular metric spaces under some specific assumptions on the modular. We then apply our results to establish the existence of solutions for a certain type of non-linear integral equation.
Mathematics, Vol. 4, Issue 3, Article 51, ISSN 2227-7390, published 2016-08-08, doi: 10.3390/math4030051. Authors: Dilip Jain, Anantachai Padcharoen, Poom Kumam, Dhananjay Gopal.
<![CDATA[IJGI, Vol. 5, Pages 137: A Supervised Approach to Delineate Built-Up Areas for Monitoring and Analysis of Settlements]]>
http://www.mdpi.com/2220-9964/5/8/137
Monitoring urban growth and measuring urban sprawl is essential for improving urban planning and development. In this paper, we introduce a supervised approach for the delineation of urban areas using commonly available topographic data and commercial GIS software. The method uses a supervised parameter optimization approach along with a buffer-based quality measure. The approach was developed, tested and evaluated in terms of possible usage in monitoring built-up areas in spatial science at a very fine-grained level. Results show that built-up area boundaries can be delineated automatically with higher quality compared to the settlement boundaries actually used. The approach has been applied to 166 settlement bodies in Germany. The study shows a very efficient way of extracting settlement boundaries from topographic data and maps and contributes to the quantification and monitoring of urban sprawl. Moreover, the findings from this study can potentially guide policy makers and urban planners from other countries.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 8, Article 137, ISSN 2220-9964, published 2016-08-06, doi: 10.3390/ijgi5080137. Authors: Oliver Harig, Dirk Burghardt, Robert Hecht.
<![CDATA[IJGI, Vol. 5, Pages 136: GeoWeb Crawler: An Extensible and Scalable Web Crawling Framework for Discovering Geospatial Web Resources]]>
http://www.mdpi.com/2220-9964/5/8/136
With the advance of World-Wide Web (WWW) technology, people can easily share content on the Web, including geospatial data and web services. Thus, "big geospatial data management" issues have started to attract attention. Among the big geospatial data issues, this research focuses on discovering distributed geospatial resources. As resources are scattered on the WWW, users cannot find resources of interest efficiently. While the WWW has Web search engines addressing web resource discovery issues, we envision that the geospatial Web (i.e., GeoWeb) also requires GeoWeb search engines. To realize a GeoWeb search engine, one of the first steps is to proactively discover GeoWeb resources on the WWW. Hence, in this study, we propose the GeoWeb Crawler, an extensible Web crawling framework that can find various types of GeoWeb resources, such as Open Geospatial Consortium (OGC) web services, Keyhole Markup Language (KML) and Environmental Systems Research Institute, Inc. (ESRI) Shapefiles. In addition, we apply the distributed computing concept to promote the performance of the GeoWeb Crawler. The result shows that for 10 targeted resource types, the GeoWeb Crawler discovered 7351 geospatial services and 194,003 datasets. As a result, the proposed GeoWeb Crawler framework is proven to be extensible and scalable to provide a comprehensive index of the GeoWeb.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 8, Article 136, ISSN 2220-9964, published 2016-08-05, doi: 10.3390/ijgi5080136. Authors: Chih-Yuan Huang, Hao Chang.
<![CDATA[Symmetry, Vol. 8, Pages 77: The Role of Orthogonal Polynomials in Tailoring Spherical Distributions to Kurtosis Requirements]]>
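A crawler of this kind needs a rule for recognizing resource types from discovered URLs. The sketch below shows one plausible pattern-matching approach; the patterns and the example URLs are illustrative assumptions, far simpler than the GeoWeb Crawler's actual detection logic.

```python
import re

# Illustrative URL patterns for a few GeoWeb resource types.
PATTERNS = {
    "OGC WMS": re.compile(r"service=wms", re.IGNORECASE),
    "OGC WFS": re.compile(r"service=wfs", re.IGNORECASE),
    "KML": re.compile(r"\.km[lz]$", re.IGNORECASE),
    "Shapefile": re.compile(r"\.shp(\.zip)?$", re.IGNORECASE),
}

def classify_resource(url):
    """Return the names of all resource-type patterns that match the URL."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(url)]
```

A real crawler would additionally fetch and parse candidate documents (e.g., a WMS GetCapabilities response) to confirm the type before indexing.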
http://www.mdpi.com/2073-8994/8/8/77
This paper investigates the orthogonal-polynomial approach to reshaping symmetric distributions to fit data requirements, extending it to the multivariate case. With this objective in mind, reference is made to the class of spherical distributions, given that they provide a natural multivariate generalization of univariate even densities. After showing how to tailor a spherical distribution via orthogonal polynomials to better comply with kurtosis requirements, we provide operational conditions for the positiveness of the resulting multivariate Gram–Charlier-like expansion, together with its kurtosis range. Finally, the approach proposed here is applied to some selected spherical distributions.
Symmetry, Vol. 8, Issue 8, Article 77, ISSN 2073-8994, published 2016-08-05, doi: 10.3390/sym8080077. Authors: Luca Bagnato, Mario Faliva, Maria Zoia.
<![CDATA[Computation, Vol. 4, Pages 28: Highly Excited States from a Time Independent Density Functional Method]]>
http://www.mdpi.com/2079-3197/4/3/28
A constrained optimized effective potential (COEP) methodology proposed earlier by us for low-lying singly excited states is extended to highly excited states having the same spatial and spin symmetry. Basic tenets of time independent density functional theory and its COEP implementation for excited states are briefly reviewed. The amended Kohn–Sham-like equations for excited state orbitals and their specific features for highly excited states are discussed. The accuracy of the method is demonstrated using exchange-only calculations for highly excited states of the He and Li atoms.
Computation, Vol. 4, Issue 3, Article 28, ISSN 2079-3197, published 2016-08-05, doi: 10.3390/computation4030028. Authors: Vitaly Glushkov, Mel Levy.
<![CDATA[Risks, Vol. 4, Pages 29: Optimal Insurance with Heterogeneous Beliefs and Disagreement about Zero-Probability Events]]>
http://www.mdpi.com/2227-9091/4/3/29
In problems of optimal insurance design, Arrow's classical result on the optimality of the deductible indemnity schedule holds in a situation where the insurer is a risk-neutral Expected-Utility (EU) maximizer, the insured is a risk-averse EU-maximizer, and the two parties share the same probabilistic beliefs about the realizations of the underlying insurable loss. Recently, Ghossoub re-examined Arrow's problem in a setting where the two parties have different subjective beliefs about the realizations of the insurable random loss, and he showed that if these beliefs satisfy a certain compatibility condition that is weaker than the Monotone Likelihood Ratio (MLR) condition, then optimal indemnity schedules exist and are nondecreasing in the loss. However, Ghossoub only gave a characterization of these optimal indemnity schedules in the special case of an MLR. In this paper, we consider the general case, allowing for disagreement about zero-probability events. We fully characterize the class of all optimal indemnity schedules that are nondecreasing in the loss, in terms of their distribution under the insured's probability measure, and we obtain Arrow's classical result, as well as one of the results of Ghossoub, as corollaries. Finally, we formalize Marshall's argument that, in a setting of belief heterogeneity, an optimal indemnity schedule may take "any" shape.
Risks, Vol. 4, Issue 3, Article 29, ISSN 2227-9091, published 2016-08-05, doi: 10.3390/risks4030029. Author: Mario Ghossoub.
<![CDATA[Electronics, Vol. 5, Pages 47: An Embedded Sensing and Communication Platform, and a Healthcare Model for Remote Monitoring of Chronic Diseases]]>
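Arrow's deductible indemnity schedule, the benchmark the paper starts from, is simple to state in code: the insurer pays nothing below the deductible d and pays the full excess above it.

```python
def deductible_indemnity(loss, d):
    """Arrow's deductible schedule: I(x) = max(0, x - d).

    Pays nothing for losses below the deductible d, and the excess above d otherwise.
    """
    return max(0.0, loss - d)
```

Note that this schedule is nondecreasing in the loss, the property that the paper's general characterization preserves under belief heterogeneity.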
http://www.mdpi.com/2079-9292/5/3/47
This paper presents a new remote healthcare model which exploits wireless biomedical sensors, an embedded local unit (gateway) for sensor data acquisition, processing and communication, and a remote e-Health service center, and can be scaled to different telemedicine scenarios. The aim is to avoid hospitalization costs and long waiting lists for patients affected by chronic illness who need continuous, long-term monitoring of some vital parameters. In the "1:1" scenario, the patient has a set of biomedical sensors and a gateway to exchange data and healthcare protocols with the remote service center. In the "1:N" scenario, the use of gateway and sensors is managed by a professional caregiver, e.g., assigned by the Public Health System to a number N of different patients. In the "point of care" scenario, the patient, instead of being hospitalized, can take the needed measurements at a specific health corner, which is then connected to the remote e-Health center. A mix of commercially available sensors and new custom-designed ones is presented. The new custom-designed sensors range from a single-lead electrocardiograph for easy measurements taken by the patients at their home, to a multi-channel biomedical integrated circuit for acquisition of multi-channel biosignals, to a new motion sensor for patient posture estimation and fall detection. Experimental trials in real-world telemedicine applications assess the proposed system in terms of easy usability by patients, specialist and family doctors, and caregivers, in terms of scalability in different scenarios, and in terms of suitability for implementation of needed care plans.
Electronics, Vol. 5, Issue 3, Article 47, ISSN 2079-9292, published 2016-08-04, doi: 10.3390/electronics5030047. Authors: Sergio Saponara, Massimiliano Donati, Luca Fanucci, Alessio Celli.
<![CDATA[Symmetry, Vol. 8, Pages 76: Almost Contact Metric Structures on 5-Dimensional Nilpotent Lie Algebras]]>
http://www.mdpi.com/2073-8994/8/8/76
We study almost contact metric structures on 5-dimensional nilpotent Lie algebras and investigate the class of left invariant almost contact metric structures on the corresponding Lie groups. We determine certain classes of structures with which a five-dimensional nilpotent Lie group cannot be equipped.
Symmetry, Vol. 8, Issue 8, Article 76, ISSN 2073-8994, published 2016-08-04, doi: 10.3390/sym8080076. Authors: Nülifer Özdemir, Mehmet Solgun, Şirin Aktay.
<![CDATA[IJGI, Vol. 5, Pages 134: Measuring Land Take: Usability of National Topographic Databases as Input for Land Use Change Analysis: A Case Study from Germany]]>
http://www.mdpi.com/2220-9964/5/8/134
The implementation of sustainable land policies is in need of monitoring methods that go beyond a mere description of the proportion values of land use classes. The annual statistical surface area report on actual land utilization (German: "Bodenfläche nach Art der tatsächlichen Nutzung"), published by the statistical offices of the German federal states and the federation, provides information on a set of pre-defined land use classes for municipalities, districts and federal states. Due to its surveying method of summing up usage information from cadastral registers, it is not possible to determine previous and subsequent usages of land parcels. Hence, it is hard to precisely indicate to what extent particular land use classes contribute to the settlement area increase. Nevertheless, this information is crucial to the understanding of land use change processes, which is needed for a subsequent identification of driving forces. To overcome this lack of information, a method for the spatial and quantitative determination of previous and subsequent land usages has been developed, implemented and tested. It is based on pre-processed land use data for different time slices, which are derived from authoritative geo-topographical base data. The developed method allows for the identification of land use changes considering small geometric shifts and changes in the underlying data model, which can be adaptively excluded from the balance.
ISPRS International Journal of Geo-Information, Vol. 5, Issue 8, Article 134, ISSN 2220-9964, published 2016-08-04, doi: 10.3390/ijgi5080134. Authors: Martin Schorcht, Tobias Krüger, Gotthard Meinel.
<![CDATA[Systems, Vol. 4, Pages 28: Using Textual Data in System Dynamics Model Conceptualization]]>
http://www.mdpi.com/2079-8954/4/3/28
Qualitative data is an important source of information for system dynamics modeling. It can potentially support any stage of the modeling process, yet it is mainly used in the early steps such as problem identification and model conceptualization. Existing approaches that outline a systematic use of qualitative data in model conceptualization are often not adopted for reasons of time constraints resulting from an abundance of data. In this paper, we introduce an approach that synthesizes the strengths of existing methods. This alternative approach (i) is focused on causal relationships starting from the initial steps of coding; (ii) generates a generalized and simplified causal map without recording individual relationships so that time consumption can be reduced; and (iii) maintains the links from the final causal map to the data sources by using software. We demonstrate an application of this approach in a study about integrated decision making in the housing sector of the UK.
Systems, Vol. 4, Issue 3, Article 28, ISSN 2079-8954, published 2016-08-04, doi: 10.3390/systems4030028. Authors: Sibel Eker, Nici Zimmermann.
<![CDATA[Algorithms, Vol. 9, Pages 53: Faster Force-Directed Graph Drawing with the Well-Separated Pair Decomposition]]>
http://www.mdpi.com/1999-4893/9/3/53
The force-directed paradigm is one of the few generic approaches to drawing graphs. Since force-directed algorithms can be extended easily, they are used frequently. Most of these algorithms are, however, quite slow on large graphs, as they compute a quadratic number of forces in each iteration. We give a new algorithm that takes only O(m + n log n) time per iteration when laying out a graph with n vertices and m edges. Our algorithm approximates the true forces using the so-called well-separated pair decomposition. We perform experiments on a large number of graphs and show that we can strongly reduce the runtime, even on graphs with less than a hundred vertices, without a significant influence on the quality of the drawings (in terms of the number of crossings and deviation in edge lengths).
Algorithms, Vol. 9, Issue 3, Article 53, ISSN 1999-4893, published 2016-08-04, doi: 10.3390/a9030053. Authors: Fabian Lipp, Alexander Wolff, Johannes Zink.
<![CDATA[Computation, Vol. 4, Pages 27: Automatic Generation of Massively Parallel Codes from ExaSlang]]>
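For contrast, the exact repulsive-force computation that the well-separated pair decomposition approximates is the quadratic pairwise sum. A minimal version is sketched below; the inverse-square force model and the constant are illustrative assumptions, not the paper's force definitions.

```python
def repulsive_forces(pos, c_rep=1.0):
    """Exact pairwise repulsive forces, O(n^2) per iteration.

    pos: list of (x, y) vertex positions. Returns one [fx, fy] per vertex.
    The WSPD-based algorithm replaces this pairwise sum with an approximation
    that groups far-apart vertex clusters, cutting the cost to near-linear.
    """
    n = len(pos)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            d2 = dx * dx + dy * dy or 1e-12  # avoid division by zero
            f = c_rep / d2
            forces[i][0] += f * dx
            forces[i][1] += f * dy
            forces[j][0] -= f * dx
            forces[j][1] -= f * dy
    return forces
```

A full layout iteration would add attractive spring forces along edges and move each vertex by a damped step in the direction of its net force.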
http://www.mdpi.com/2079-3197/4/3/27
Domain-specific languages (DSLs) have the potential to provide an intuitive interface for specifying problems and solutions for domain experts. Based on this, code generation frameworks can produce compilable source code. However, apart from optimizing execution performance, parallelization is key for pushing the limits in problem size and an essential ingredient for exascale performance. We discuss necessary concepts for the introduction of such capabilities in code generators. In particular, those for partitioning the problem to be solved and accessing the partitioned data are elaborated. Furthermore, possible approaches to expose parallelism to users through a given DSL are discussed. Moreover, we present the implementation of these concepts in the ExaStencils framework. In its scope, a code generation framework for highly optimized and massively parallel geometric multigrid solvers is developed. It uses specifications from its multi-layered external DSL ExaSlang as input. Based on a general version for generating parallel code, we develop and implement widely applicable extensions and optimizations. Finally, a performance study of generated applications is conducted on the JuQueen supercomputer.
Computation, Vol. 4, Issue 3, Article 27, ISSN 2079-3197, published 2016-08-04, doi: 10.3390/computation4030027. Authors: Sebastian Kuckuk, Harald Köstler.
<![CDATA[Electronics, Vol. 5, Pages 46: Skin Admittance Measurement for Emotion Recognition: A Study over Frequency Sweep]]>
http://www.mdpi.com/2079-9292/5/3/46
The electrodermal activity (EDA) is a reliable physiological signal for monitoring the sympathetic nervous system. Several studies have demonstrated that EDA can be a source of effective markers for the assessment of emotional states in humans. There are two main methods for measuring EDA: endosomatic (internal electrical source) and exosomatic (external electrical source). Even though the exosomatic approach is the most widely used, differences between alternating current (AC) and direct current (DC) methods and their implication in the emotional assessment field have not yet been deeply investigated. This paper aims at investigating how the admittance contribution of EDA, studied at different source frequencies, affects the statistical power of EDA in inferring the subject's arousal level (neutral or aroused). To this end, 40 healthy subjects underwent visual affective elicitations, including neutral and arousing levels, while EDA was gathered through DC and AC sources from 0 to 1 kHz. Results concern the accuracy of an automatic, EDA feature-based arousal recognition system for each frequency source. We show how the frequency of the external electrical source affects the accuracy of arousal recognition. This suggests a role of skin susceptance in the study of affective stimuli through electrodermal response.
Electronics, Vol. 5, Issue 3, Article 46, ISSN 2079-9292, published 2016-08-04, doi: 10.3390/electronics5030046. Authors: Alberto Greco, Antonio Lanata, Luca Citi, Nicola Vanello, Gaetano Valenza, Enzo Scilingo.
<![CDATA[Symmetry, Vol. 8, Pages 74: M&E-NetPay: A Micropayment System for Mobile and Electronic Commerce]]>
http://www.mdpi.com/2073-8994/8/8/74
As an increasing number of people purchase goods and services online, micropayment systems are becoming particularly important for mobile and electronic commerce. We have designed and developed such a system called M&E-NetPay (Mobile and Electronic NetPay). With open interoperability and mobility, M&E-NetPay uses web services to connect brokers and vendors, providing secure, flexible and reliable credit services over the Internet. In particular, M&E-NetPay makes use of a secure, inexpensive and debit-based off-line protocol that allows vendors to interact only with customers, after validating coins. The design of the architecture and protocol of M&E-NetPay are presented, together with the implementation of its prototype in ringtone and wallpaper sites. To validate our system, we have conducted its evaluations on performance, usability and heuristics. Furthermore, we compare our system to CORBA-based (Common Object Request Broker Architecture) off-line micro-payment systems. The results demonstrate that the .NET-based M&E-NetPay outperforms the CORBA-based systems in terms of performance and user satisfaction.
Symmetry, Vol. 8, Issue 8, Article 74, ISSN 2073-8994, published 2016-08-03, doi: 10.3390/sym8080074. Authors: Xiaodi Huang, Jinsong Bao, Xiaoling Dai, Edwin Singh, Weidong Huang, Changqin Huang.
<![CDATA[Symmetry, Vol. 8, Pages 75: Fuzzy System-Based Face Detection Robust to In-Plane Rotation Based on Symmetrical Characteristics of a Face]]>
http://www.mdpi.com/2073-8994/8/8/75
As face recognition technology has developed, it has become widely used in various applications such as door access control, intelligent surveillance, and mobile phone security. One of its applications is its adoption in TV environments to supply viewers with intelligent services and high convenience. In a TV environment, the in-plane rotation of a viewer's face frequently occurs because he or she may decide to watch the TV from a lying position, which degrades the accuracy of the face recognition. Nevertheless, there has been little previous research to deal with this problem. Therefore, we propose a new fuzzy system-based face detection algorithm that is robust to in-plane rotation, based on the symmetrical characteristics of a face. Experimental results on two databases, one of them an open database, show that our method outperforms previous methods.
Symmetry, Vol. 8, Issue 8, Article 75, ISSN 2073-8994, published 2016-08-03, doi: 10.3390/sym8080075. Authors: Hyung Hong, Won Lee, Yeong Kim, Ki Kim, Dat Nguyen, Kang Park.
<![CDATA[Future Internet, Vol. 8, Pages 39: Ontology-Based Representation and Reasoning in Building Construction Cost Estimation in China]]>
http://www.mdpi.com/1999-5903/8/3/39
Cost estimation is one of the most critical tasks for building construction project management. The existing building construction cost estimation methods of many countries, including China, require information from several sources, including material, labor, and equipment, and tend to be manual, time-consuming, and error-prone. To solve these problems, a building construction cost estimation model based on ontology representation and reasoning is established, which includes three major components, i.e., concept model ontology, work item ontology, and construction condition ontology. Using this model, the cost estimation information is modeled into OWL axioms and SWRL rules that leverage the semantically rich ontology representation to reason about cost estimation. Based on OWL axioms and SWRL rules, the cost estimation information can be translated into a set of concept models, work items, and construction conditions associated with the specific construction conditions. The proposed method is demonstrated in Protégé 3.4.8 through case studies based on the Measurement Specifications of Building Construction and Decoration Engineering taken from GB 50500-2013 (the Chinese national mandatory specifications). Finally, this research discusses the limitations of the proposed method and future research directions. The proposed method can help a building construction cost estimator extract information more easily and quickly.
Future Internet, Vol. 8, Issue 3, Article 39, ISSN 1999-5903, published 2016-08-03, doi: 10.3390/fi8030039. Authors: Xin Liu, Zhongfu Li, Shaohua Jiang.
<![CDATA[Risks, Vol. 4, Pages 28: Using Climate and Weather Data to Support Regional Vulnerability Screening Assessments of Transportation Infrastructure]]>
http://www.mdpi.com/2227-9091/4/3/28
Extreme weather and climate change can have a significant impact on all types of infrastructure and assets, regardless of location, with the potential for human casualties, physical damage to assets, disruption of operations, economic and community distress, and environmental degradation. This paper describes a methodology for using extreme weather and climate data to identify climate-related risks and to quantify the potential impact of extreme weather events on certain types of transportation infrastructure as part of a vulnerability screening assessment. This screening assessment can be especially useful when a large number of assets or large geographical areas are being studied, with the results enabling planners and asset managers to undertake a more detailed assessment of vulnerability on a more targeted number of assets or locations. The methodology combines climate, weather, and impact data to identify vulnerabilities to a range of weather and climate related risks over a multi-decadal planning period. The paper applies the methodology to perform an extreme weather and climate change vulnerability screening assessment on transportation infrastructure assets for the State of Tennessee. This paper represents the results of one of the first efforts at spatial vulnerability assessments of transportation infrastructure and provides important insights for any organization considering the impact of climate and weather events on transportation or other critical infrastructure systems.
Risks, Vol. 4, Issue 3, Article 28, ISSN 2227-9091, published 2016-08-03, doi: 10.3390/risks4030028. Authors: Leah Dundon, Katherine Nelson, Janey Camp, Mark Abkowitz, Alan Jones.
<![CDATA[Information, Vol. 7, Pages 49: The Role of Physical Layer Security in IoT: A Novel Perspective]]>
http://www.mdpi.com/2078-2489/7/3/49
This paper deals with the problem of securing the configuration phase of an Internet of Things (IoT) system. The main drawbacks of current approaches are the focus on specific techniques and methods, and the lack of a cross layer vision of the problem. In a smart environment, each IoT device has limited resources and is often battery operated with limited capabilities (e.g., no keyboard). As a consequence, network security must be carefully analyzed in order to prevent security and privacy issues. In this paper, we analyze IoT threats, propose a security framework for device initialization, and show how physical layer security can effectively boost the security of IoT systems.
Information, Vol. 7, Issue 3, Article 49, ISSN 2078-2489, published 2016-08-02, doi: 10.3390/info7030049. Authors: Tommaso Pecorella, Luca Brilli, Lorenzo Mucchi.
<![CDATA[Econometrics, Vol. 4, Pages 33: Econometrics Best Paper Award 2016]]>
http://www.mdpi.com/2225-1146/4/3/33
n/a
Econometrics, Vol. 4, Issue 3, Editorial 33, ISSN 2225-1146, published 2016-08-01, doi: 10.3390/econometrics4030033. Author: Kerry Patterson.
<![CDATA[Future Internet, Vol. 8, Pages 38: A Novel QoS Provisioning Algorithm for Optimal Multicast Routing in WMNs]]>
http://www.mdpi.com/1999-5903/8/3/38
The problem of optimal multicast routing with Quality-of-Service (QoS) provisioning in Wireless Mesh Networks (WMNs), which is Non-Deterministic Polynomial (NP)-complete, is studied in this paper. The existing algorithms are not very efficient or effective. In order to find an approximate optimal solution from the source to the set of destination nodes in feasible time, a novel multicast heuristic approximation (NMHA) algorithm with QoS provisioning is proposed in this paper, combining a previous deterministic algorithm with the well-known Minimum Path Cost Heuristic (MPH) algorithm. Theoretical validations of the proposed algorithm are presented to show its performance and efficiency. After that, random static networks with different numbers of destination nodes are evaluated. Simulations in these networks show that the proposed algorithm can achieve an approximate optimal solution with an approximation factor of 2(1 + ε)(1 − 1/q) and a time complexity of O(qmn^2τ^(K−1)).Future Internet2016-08-0183Article10.3390/fi8030038381999-59032016-08-01doi: 10.3390/fi8030038Weijun YangYuanfeng Chen<![CDATA[MCA, Vol. 21, Pages 34: A Comparison of Information Criteria in Clustering Based on Mixture of Multivariate Normal Distributions]]>
http://www.mdpi.com/2297-8747/21/3/34
Clustering analysis based on a mixture of multivariate normal distributions is commonly used in the clustering of multidimensional data sets. Model selection is one of the most important problems in mixture cluster analysis based on the mixture of multivariate normal distributions. Model selection involves the determination of the number of components (clusters) and the selection of an appropriate covariance structure in the mixture cluster analysis. In this study, the efficiency of information criteria that are commonly used in model selection is examined. The effectiveness of information criteria has been determined according to the success in the selection of the number of components and in the selection of an appropriate covariance matrix.Mathematical and Computational Applications2016-08-01213Article10.3390/mca21030034342297-87472016-08-01doi: 10.3390/mca21030034Serkan AkogulMurat Erisoglu<![CDATA[Administrative Sciences, Vol. 6, Pages 10: Understanding Collaboration in Integrated Forms of Project Delivery by Taking a Risk-Uncertainty Based Perspective]]>
http://www.mdpi.com/2076-3387/6/3/10
Background: Cross-discipline team collaboration between the project ownership team, design team and project delivery team is central to effective management of risk, uncertainty and ambiguity. A recently developed framework, which provides a visualisation tool for various project procurement and delivery forms, has been adapted to answer the research question: How can uncertainty best be managed in complex projects? Methods: The research involved reviewing transcribed recorded interviews with 50 subject matter experts that were originally analysed using axial coding with NVivo 10 software to develop the framework that the paper refers to. It extends that analysis to focus on the risk and uncertainty previously reported upon in that study. Results and Conclusions: The adaptation presents a hypothetical partnering and alliancing project collaboration map taken from a risk and uncertainty management perspective, and it also refines its focus on coping and sensemaking mechanisms to help manage risk-uncertainty in a practical, ‘how to do’ manner. This contributes to theory by extending the relationship-based procurement (RBP) framework from a purely procurement theory focus to application in the risk-uncertainty project management theory domain. It also provides a practice contribution by explaining how the RBP mutation to a collaboration and risk-uncertainty management framework may be applied.Administrative Sciences2016-08-0163Concept Paper10.3390/admsci6030010102076-33872016-08-01doi: 10.3390/admsci6030010Derek WalkerBeverley Lloyd-Walker<![CDATA[Technologies, Vol. 4, Pages 22: Measuring Outcomes for Children with Cerebral Palsy Who Use Gait Trainers]]>
http://www.mdpi.com/2227-7080/4/3/22
Gait trainers are walking devices that provide additional trunk and pelvic support. The primary population of children using gait trainers includes children with cerebral palsy (CP) functioning at Gross Motor Function Classification System (GMFCS) levels IV and V. A recent systematic review found that evidence supporting the effectiveness of gait trainer interventions for children was primarily descriptive and insufficient to draw firm conclusions. A major limitation identified was the lack of valid, sensitive and reliable tools for measuring change in body structure and function, activity and participation outcomes. Twelve different clinical tools were identified in the systematic review and in this paper we review and discuss the evidence supporting their reliability, validity and clinical utility for use with children using gait trainers. We also describe seven additional clinical measurement tools that may be useful with this intervention and population. The Pediatric Evaluation of Disability Inventory (PEDI) rated highest across all areas at this time. Individualized outcome measures, such as the Canadian Occupational Performance Measure (COPM) and Goal Attainment Scaling and measuring user satisfaction with tools, such as the Quebec User Evaluation of Satisfaction with assistive Technology, show potential for gait trainer outcomes research. Spatiotemporal measures appear to be less useful than functional measures with this intervention and population. All tools would benefit from further development for use with children with CP functioning at GMFCS levels IV and V.Technologies2016-08-0143Review10.3390/technologies4030022222227-70802016-08-01doi: 10.3390/technologies4030022Roslyn LivingstoneGinny Paleg<![CDATA[IJGI, Vol. 5, Pages 132: Soil Sealing and the Complex Bundle of Influential Factors: Germany as a Case Study]]>
http://www.mdpi.com/2220-9964/5/8/132
In order to discuss the impact of land consumption, it is first necessary to localize and quantify the extent of sealed surfaces. Since 2010, the monitoring of land use structures and developments in Germany has been provided by the Monitor of Settlement and Open Space Development (IÖR Monitor), a scientific service operated by the Leibniz Institute of Ecological Urban and Regional Development (IÖR). The IÖR Monitor includes an indicator for soil sealing for the years 2006, 2009 and 2012. Using this new source of data, it is possible for the first time to conduct quantitative studies at the level of Germany’s municipalities with the aim of documenting the extent of soil sealing as a form of spatial classification, as well as to investigate possible correlations with other influential factors. Here, we describe a comprehensive data inspection of soil sealing and potential influential factors. Structural interrelationships are identified under the application of classical and spatial regression methods.ISPRS International Journal of Geo-Information2016-08-0158Article10.3390/ijgi50801321322220-99642016-08-01doi: 10.3390/ijgi5080132Martin BehnischHanna PoglitschTobias Krüger<![CDATA[IJGI, Vol. 5, Pages 133: The Göttingen eResearch Alliance: A Case Study of Developing and Establishing Institutional Support for Research Data Management]]>
http://www.mdpi.com/2220-9964/5/8/133
The Göttingen eResearch Alliance is presented as a case study for establishing institutional support for research data management within the context of the Göttingen Campus, a particular alliance of several research institutes at Göttingen. The cross-cutting, “horizontal” approach of the Göttingen eResearch Alliance, established by two research-oriented infrastructure providers, a research library and a computing and IT competence center, aims to coordinate Campus-led activities to establish sustainable and innovative services to support all phases of the research data life cycle. This article describes the core activities of the first phase, which aimed at developing a modular approach to providing research data management support to researchers. It closes with lessons learned and an outlook on future activities.ISPRS International Journal of Geo-Information2016-08-0158Article10.3390/ijgi50801331332220-99642016-08-01doi: 10.3390/ijgi5080133Jens DierkesUlrike Wuttke<![CDATA[Economies, Vol. 4, Pages 15: The Formation of Immigrant Networks in the Short and the Long Run]]>
http://www.mdpi.com/2227-7099/4/3/15
In this paper, we present a formal framework of possible network formations among immigrants. After arriving in the new country, one of the new immigrant’s important decisions is with whom to maintain a link in the foreign country. We find that the behavior of the first two immigrants affects all those who come after them. We also find that in the long run, under specific conditions, the first immigrant will become the leader of the immigrant society. Over time, as the stock of immigrants in the host country increases, the investment in the link with the leader will increase as well.Economies2016-07-3043Article10.3390/economies4030015152227-70992016-07-30doi: 10.3390/economies4030015Gil EpsteinOdelia Heizler-Cohen<![CDATA[Algorithms, Vol. 9, Pages 51: A Multi-Objective Harmony Search Algorithm for Sustainable Design of Floating Settlements]]>
http://www.mdpi.com/1999-4893/9/3/51
This paper is concerned with the application of computational intelligence techniques to the conceptual design and development of a large-scale floating settlement. The settlement in question is a design for the area of Urla, a rural touristic region located on the west coast of Turkey, near the metropolis of Izmir. The problem at hand includes both engineering and architectural aspects that need to be addressed in a comprehensive manner. We thus approach it as a multi-objective constrained real-parameter optimization problem. Specifically, we consider three conflicting objectives. The first aims at maximizing the accessibility of urban functions such as housing and public spaces, as well as special functions, such as a marina for yachts and a yacht club. The second aims at ensuring wind protection of the general areas of the settlement, by adequately placing them in between neighboring land masses. The third aims at maximizing the visibility of the settlement from external observation points, so as to maximize its exposure. To address this complex multi-objective optimization problem and identify promising alternative design solutions, a multi-objective harmony search algorithm (MOHS) is developed and applied in this paper. When compared to the Differential Evolution algorithm developed for the problem in the literature, we demonstrate that MOHS achieves competitive or slightly better performance in terms of hypervolume, and gives promising results when the Pareto front approximation is examined.Algorithms2016-07-3093Article10.3390/a9030051511999-48932016-07-30doi: 10.3390/a9030051Cemre CubukcuogluIoannis ChatzikonstantinouMehmet TasgetirenI. SariyildizQuan-Ke Pan<![CDATA[Symmetry, Vol. 8, Pages 73: Broken versus Non-Broken Time Reversal Symmetry: Irreversibility and Response]]>
http://www.mdpi.com/2073-8994/8/8/73
We review some approaches to macroscopic irreversibility from reversible microscopic dynamics, introducing the contribution of time dependent perturbations within the framework of recent developments in non-equilibrium statistical physics. We show that situations commonly assumed to violate the time reversal symmetry (presence of magnetic fields, rotating reference frames, and some time dependent perturbations) in reality do not violate this symmetry, and can be treated with standard theories and within standard experimental protocols.Symmetry2016-07-2988Article10.3390/sym8080073732073-89942016-07-29doi: 10.3390/sym8080073Sara Dal CengioLamberto Rondoni<![CDATA[JRFM, Vol. 9, Pages 9: The Nexus between Social Capital and Bank Risk Taking]]>
http://www.mdpi.com/1911-8074/9/3/9
This study explores social capital and its relevance to bank risk taking across countries. Our empirical results show that the levels of bank risk taking are lower in countries with higher levels of social capital, and that the impact of social capital is mainly reflected by the reduced value of the standard deviation of return on assets. Moreover, the impact of social capital is found to be weaker when the legal system lacks strength. Furthermore, the study considers the impacts of social capital of the banks’ largest shareholders in these countries and finds that high levels of social capital present in these countries exert a negative effect on bank risk taking, but the effect is not strongly significant.Journal of Risk and Financial Management2016-07-2993Article10.3390/jrfm903000991911-80742016-07-29doi: 10.3390/jrfm9030009Wenjing XieHaoyuan DingTerence Chong