Computer Science & Mathematics
http://www.mdpi.com/journal/computer-math
Latest open access articles published in Computer Science & Mathematics at http://www.mdpi.com/journal/computer-math

IJGI, Vol. 5, Pages 103: Volunteered Geographic Information in Natural Hazard Analysis: A Systematic Literature Review of Current Approaches with a Focus on Preparedness and Mitigation
http://www.mdpi.com/2220-9964/5/7/103
With the rise of new technologies, citizens can contribute to scientific research via Web 2.0 applications for collecting and distributing geospatial data. Integrating local knowledge, personal experience and up-to-date geoinformation is a promising direction for both the theoretical framework and the methods of natural hazard analysis. Our systematic literature review aims at identifying current research and directions for future research on Volunteered Geographic Information (VGI) within natural hazard analysis. Focusing on both the preparedness and mitigation phases yields eleven articles from two literature databases. A qualitative analysis for in-depth information extraction reveals promising approaches regarding community engagement and data fusion, but also important research gaps. Mainly based in Europe and North America, the analysed studies deal primarily with floods and forest fires, applying geodata collected by trained citizens who are improving their knowledge and making their own interpretations. Yet, there is still a lack of common scientific terms and concepts. Future research can use these findings to adapt scientific models of natural hazard analysis so that data from technical sensors can be fused with VGI. The development of such general methods should help establish user integration in various contexts, such as natural hazard analysis.
ISPRS International Journal of Geo-Information, 2016-06-25, Review, doi: 10.3390/ijgi5070103. Authors: Carolin Klonner, Sabrina Marx, Tomás Usón, João Porto de Albuquerque, Bernhard Höfle.

Robotics, Vol. 5, Pages 12: IDC Robocon: A Transnational Teaming Competition for Project-Based Design Education in Undergraduate Robotics
http://www.mdpi.com/2218-6581/5/3/12
This paper presents a robot design competition called ‘IDC Robocon’ as an effective tool for engineering education. The International Design Contest (IDC) Robocon competition offers a meaningful design experience for undergraduate engineering students and has an international flavour, as its participants hail from all around the world. The problem posed to the contestants is to design, build and test mobile robots that are capable of accomplishing a task. A primary goal of the competition is to provide undergraduates with a meaningful design experience with an emphasis on mechanical design, electronic circuits and programming. It is hoped that, by placing the emphasis on design, the course will encourage more undergraduates to go into the field of engineering design. This paper presents the latest 2015 IDC Robocon (the 26th edition) in detail and discusses the course of events and the results in terms of the educational experience. In this edition, a simulated space problem of cleaning debris from orbit was posed. Teams comprising students from multiple countries worked together to develop robotic systems that competed with each other in collecting foam balls and delivering them to a rotating holder.
Robotics, 2016-06-24, Article, doi: 10.3390/robotics5030012. Authors: Ning Tan, Rajesh Mohan, Shaohui Foong, Masaki Yamakita, Masami Iwase, Shoshiro Hatakeyama, Norihiro Kamamichi, Libo Song, You Wang, Qiuguo Zhu.

Econometrics, Vol. 4, Pages 30: Estimation of Gini Index within Pre-Specified Error Bound
http://www.mdpi.com/2225-1146/4/3/30
The Gini index is a widely used measure of economic inequality. This article develops a theory and methodology for constructing a confidence interval for the Gini index with a specified confidence coefficient and a specified width, without assuming any specific distribution of the data. Fixed-sample-size methods cannot simultaneously achieve both a specified confidence coefficient and a fixed width. We develop a purely sequential procedure for interval estimation of the Gini index with a specified confidence coefficient and a specified margin of error. Optimality properties of the proposed method, namely first-order asymptotic efficiency and asymptotic consistency, are proved under mild moment assumptions on the distribution of the data.
Econometrics, 2016-06-24, Article, doi: 10.3390/econometrics4030030. Authors: Bhargab Chattopadhyay, Shyamal De.

IJFS, Vol. 4, Pages 13: Determination of Systemically Important Companies with Cross-Shareholding Network Analysis: A Case Study from an Emerging Market
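As a hedged aside to the Econometrics article above: the sequential procedure itself is not reproduced here, but the point estimator it is built around, the sample Gini index as the normalized mean absolute difference, can be sketched in a few lines (the function name and the toy income data are my own):

```python
from itertools import combinations

def gini(x):
    """Sample Gini index: mean absolute difference divided by twice the mean.

    This is only the fixed-sample point estimator; the article's contribution
    is a sequential stopping rule for the sample size, which is not shown here.
    """
    n = len(x)
    mean = sum(x) / n
    # Sum over ordered pairs = 2 * sum over unordered pairs.
    mad = 2 * sum(abs(a - b) for a, b in combinations(x, 2)) / (n * n)
    return mad / (2 * mean)

print(gini([0, 1]))        # 0.5, the maximal inequality for two incomes
print(gini([1, 1, 1, 1]))  # 0.0, perfect equality
```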
http://www.mdpi.com/2227-7072/4/3/13
Systemic risk events constitute an important issue in current financial systems. A leading course of action used to mitigate such events is the identification of systemically important agents in order to implement prudential policies in a financial system. In this paper, a bi-level cross-shareholding network of the stock market is considered according to direct and integrated ownership structures. Furthermore, different systemic risk indices are applied to identify systemically important companies in an early warning system. Results of applying these indices to cross-shareholding data from the Tehran Stock Exchange (TSE) show that integrated network indices produce more reliable results. Moreover, statistical analysis of the networks indicates the existence of scale-free characteristics in the TSE cross-shareholding network.
International Journal of Financial Studies, 2016-06-24, Article, doi: 10.3390/ijfs4030013. Authors: Hossein Dastkhan, Naser Shams Gharneh.

Computers, Vol. 5, Pages 13: Prediction of Dermoscopy Patterns for Recognition of both Melanocytic and Non-Melanocytic Skin Lesions
http://www.mdpi.com/2073-431X/5/3/13
Differentiating between all types of melanocytic and non-melanocytic skin lesions (MnM–SK) is a challenging task for both computer-aided diagnosis (CAD) systems and dermatologists due to the complex structure of the patterns. Dermatologists widely use pattern analysis as a first step, together with clinical attributes, to recognize all categories of pigmented skin lesions (PSLs). To increase the diagnostic accuracy of CAD systems, a new pattern classification algorithm is proposed that predicts skin lesion patterns by integrating a majority voting scheme with a multi-class support vector machine (MV–SVM). Optimal color and texture features are extracted from each region-of-interest (ROI) dermoscopy image, and these normalized features are fed into the MV–SVM classifier to recognize seven classes. The overall system is evaluated on a dataset of 350 dermoscopy images (50 ROIs per class). On average, a sensitivity of 94%, a specificity of 84%, an accuracy of 93% and an area under the receiver operating characteristic curve (AUC) of 0.94 are achieved by the proposed MnM–SK system, compared to state-of-the-art methods. The results indicate that the MnM–SK system achieves a high level of diagnostic accuracy and can thus be used as an alternative pattern classification system for differentiating among all types of pigmented skin lesions (PSLs).
Computers, 2016-06-24, Article, doi: 10.3390/computers5030013. Authors: Qaisar Abbas, Misbah Sadaf, Anum Akram.

Information, Vol. 7, Pages 37: Design of Hybrid Wired/Wireless Fieldbus Network for Turbine Power Generation System
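The majority voting step of the MV–SVM scheme in the Computers article above reduces, conceptually, to tallying the class labels proposed by the individual classifiers. A minimal sketch (the vote labels are hypothetical and not the paper's seven classes):

```python
from collections import Counter

def majority_vote(predictions):
    """Return the class label predicted most often by an ensemble of
    classifiers; ties are broken in favour of the earliest label seen."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical votes from three pairwise SVMs for one dermoscopy ROI:
print(majority_vote(["reticular", "globular", "reticular"]))  # reticular
```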
http://www.mdpi.com/2078-2489/7/3/37
Hybrid fieldbus networks that integrate wireless networks with existing wired fieldbuses have become a new research direction in industrial automation systems. In comparison to wired fieldbuses, a hybrid wired/wireless fieldbus network has a different system architecture, data transmission mechanism, communication protocol, etc., which leads to different challenges that need to be addressed. This paper proposes a hybrid wired/wireless fieldbus network which consists of a wireless industrial control network (WICN), a wired PROFIBUS-DP (Process Field Bus-Decentralized Periphery) fieldbus network, and a wired MODBUS/TCP (Modbus/Transmission Control Protocol) fieldbus network. They are connected by a new gateway which uses a shared data model to handle data exchange between the different network protocols. In this paper, we describe the architecture of the proposed hybrid wired/wireless fieldbus network and its data transmission mechanisms in detail, and then evaluate the performance of the hybrid fieldbus network via a set of experiments. The experimental results confirm that the proposed hybrid wired/wireless fieldbus network can satisfy the performance requirements of industrial network control systems. Furthermore, to further investigate the feasibility of the proposed hybrid wired/wireless fieldbus network, it was deployed at a steam turbine power generation system, and the performance figures obtained further verify its feasibility and effectiveness.
Information, 2016-06-24, Article, doi: 10.3390/info7030037. Authors: Sheng Xu, Minrui Fei, Haikuan Wang.

Informatics, Vol. 3, Pages 10: Tagging Users’ Social Circles via Multiple Linear Regression
http://www.mdpi.com/2227-9709/3/3/10
A social circle is a category of strong social relationships, such as families, classmates and good friends. Information diffusion among the members of online social circles is frequent and credible, and research on users’ online social circles has become popular in recent years. Many scholars have proposed methods for detecting users’ online social circles. On the other hand, the social meanings and the tags of a social circle are also important for its analysis; however, little work addresses tag discovery for social circles. This paper proposes an algorithm for social circle tag detection based on multiple linear regression. The model solves the data sparsity problem of tags in social circles and successfully combines different categories of features of social circles. We also remap the concept of the social circle onto the "reference circles" of an academic paper. We evaluate our method on datasets from both Facebook and Microsoft Academic Search, and show that it is more effective than other relevant methods.
Informatics, 2016-06-24, Article, doi: 10.3390/informatics3030010. Authors: Hailong Qin, Jing Liu, Chin-Yew Lin, Ting Liu.

Symmetry, Vol. 8, Pages 54: Top-N Recommender Systems Using Genetic Algorithm-Based Visual-Clustering Methods
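The Informatics article above scores candidate tags with multiple linear regression. A self-contained sketch of ordinary least squares via the normal equations; the toy features and coefficients are invented for illustration, and the paper's actual feature set is not reproduced:

```python
def ols(X, y):
    """Multiple linear regression via the normal equations (X^T X) b = X^T y.
    X: list of feature rows; an intercept column of 1s should be included."""
    k = len(X[0])
    # Build the k x k system A b = c.
    A = [[sum(row[p] * row[q] for row in X) for q in range(k)] for p in range(k)]
    c = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
            c[r] -= f * c[col]
    b = [0.0] * k
    for r in reversed(range(k)):
        b[r] = (c[r] - sum(A[r][q] * b[q] for q in range(r + 1, k))) / A[r][r]
    return b

# Toy data generated exactly from score = 1 + 2*f1 + 3*f2 (hypothetical features):
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [1, 3, 4, 6, 8]
print([round(v, 6) for v in ols(X, y)])  # [1.0, 2.0, 3.0]
```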
http://www.mdpi.com/2073-8994/8/7/54
The drastic increase in the number of websites is one of the causes of the recent information overload on the internet. Recommender systems (RSs) have been developed to help users filter information. However, the cold-start and sparsity problems lead to low RS performance. In this paper, we propose several methods: the visual-clustering recommendation (VCR) method, a hybrid of the VCR and user-based methods, and a hybrid of the VCR and item-based methods. The user-item clustering is based on a genetic algorithm (GA). The recommendation performance of the proposed methods was compared with that of traditional methods. The results showed that the GA-based visual clustering can properly cluster user-item binary images. They also demonstrated that the proposed recommendation methods are more efficient than the traditional ones; the proposed VCR2 method yielded an F1 score roughly three times higher than the traditional approaches.
Symmetry, 2016-06-24, Article, doi: 10.3390/sym8070054. Authors: Ukrit Marung, Nipon Theera-Umpon, Sansanee Auephanwiriyakul.

Symmetry, Vol. 8, Pages 53: Three New Classes of Solvable N-Body Problems of Goldfish Type with Many Arbitrary Coupling Constants
http://www.mdpi.com/2073-8994/8/7/53
Three new classes of N-body problems of goldfish type are identified, with N an arbitrary positive integer (N ≥ 2). These models are characterized by nonlinear Newtonian (“accelerations equal forces”) equations of motion describing N equal point-particles moving in the complex z-plane. These highly nonlinear equations feature many arbitrary coupling constants, yet they can be solved by algebraic operations. Some of these N-body problems are isochronous, their generic solutions being all completely periodic with an overall period T independent of the initial data (but quite a few of these solutions are actually periodic with smaller periods T/p, with p a positive integer); other models are isochronous for an open region of initial data, while the motions for other initial data are not periodic, featuring instead scattering phenomena with some of the particles incoming from, or escaping to, infinity in the remote past or future.
Symmetry, 2016-06-24, Article, doi: 10.3390/sym8070053. Author: Francesco Calogero.

Electronics, Vol. 5, Pages 33: Educational Programming on the Raspberry Pi
http://www.mdpi.com/2079-9292/5/3/33
The original aim when creating the Raspberry Pi was to encourage “kids”—pre-university learners—to engage with programming, and to develop an interest in and understanding of programming and computer science concepts. The method to achieve this was to give them their own, low cost computer that they could use to program on, as a replacement for a family PC that often did not allow this option. With the original release, the Raspberry Pi included two programming environments in the standard distribution software: Scratch and IDLE, a Python environment. In this paper, we describe two programming environments that we developed and recently ported and optimised for the Raspberry Pi, Greenfoot and BlueJ, both using the Java programming language. Greenfoot and BlueJ are both now included in the Raspberry Pi standard software distribution, and they differ in many respects from IDLE; they are more graphical, more interactive, more engaging, and illustrate concepts of object orientation more clearly. Thus, they have the potential to support the original aim of the Raspberry Pi by creating a deeper engagement with programming. This paper describes these two environments and how they may be used, and discusses their differences and relationships to the two previously available systems.
Electronics, 2016-06-24, Article, doi: 10.3390/electronics5030033. Author: Michael Kölling.

Electronics, Vol. 5, Pages 32: Benefits of Considering More than Temperature Acceleration for GaN HEMT Life Testing
http://www.mdpi.com/2079-9292/5/3/32
The purpose of this work was to investigate the validity of Arrhenius accelerated-life testing when applied to gallium nitride (GaN) high electron mobility transistor (HEMT) lifetime assessments, where the standard assumption is that the only critical stressor is temperature, derived from operating power, device channel-to-case thermal resistance, and baseplate temperature. We found that power or temperature alone could not explain the differences in observed degradation, and that the accelerated life tests employed by industry can benefit from considering accelerating factors besides temperature. Specifically, we found that the voltage used to reach a desired power dissipation is important, and also that temperature acceleration alone or voltage alone (without much power dissipation) is insufficient to assess lifetime at operating conditions.
Electronics, 2016-06-23, Article, doi: 10.3390/electronics5030032. Authors: Ronald Coutu, Robert Lake, Bradley Christiansen, Eric Heller, Christopher Bozada, Brian Poling, Glen Via, James Theimer, Stephen Tetlak, Ramakrishna Vetury, Jeffrey Shealy.

IJGI, Vol. 5, Pages 101: Simulation and Evaluation of Urban Growth for Germany Including Climate Change Mitigation and Adaptation Measures
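For context on the Electronics article above: the temperature-only acceleration model being questioned is the Arrhenius law, whose acceleration factor between a use and a stress temperature is exp((Ea/k)(1/T_use - 1/T_stress)). A sketch with made-up temperatures and activation energy:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_k, t_stress_k, ea_ev):
    """Arrhenius acceleration factor between a use temperature and a
    stress temperature (both in kelvin) for activation energy ea_ev (eV)."""
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Hypothetical channel temperatures of 398 K (use) vs. 448 K (stress), Ea = 1.0 eV:
print(arrhenius_af(398.0, 448.0, 1.0))  # roughly 26: failures come ~26x faster
```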
http://www.mdpi.com/2220-9964/5/7/101
Decision-makers in the fields of urban and regional planning in Germany face new challenges. High rates of urban sprawl need to be reduced through increased inner-urban development, while settlements have to adapt to climate change and, at the same time, contribute to the reduction of greenhouse gas emissions. In this study, we analyze conflicts in the management of urban areas and develop integrated sustainable land-use strategies for Germany. The spatially explicit land-use change model Land Use Scanner is used to simulate alternative scenarios of land-use change for Germany for 2030. A multi-criteria analysis is set up based on these scenarios and on a set of indicators, which are used to measure whether the mitigation and adaptation objectives can be achieved and to uncover conflicts between these aims. The results show that built-up and transport area development can be influenced, both in magnitude and in spatial distribution, to contribute to climate change mitigation and adaptation. Strengthening inner-urban development is particularly effective in reducing built-up and transport area development: it is possible to reduce it to approximately 30 ha per day in 2030, which matches the sustainability objective of the German Federal Government for the year 2020. In the case of adaptation to climate change, the inclusion of extreme flood events in spatial planning requirements may contribute to a reduction of the damage potential.
ISPRS International Journal of Geo-Information, 2016-06-23, Article, doi: 10.3390/ijgi5070101. Authors: Jana Hoymann, Roland Goetzke.

Risks, Vol. 4, Pages 18: Consistent Re-Calibration of the Discrete-Time Multifactor Vasiček Model
http://www.mdpi.com/2227-9091/4/3/18
The discrete-time multifactor Vasiček model is a tractable Gaussian spot rate model. Typically, two- or three-factor versions allow one to capture the dependence structure between yields with different times to maturity in an appropriate way. In practice, re-calibration of the model to the prevailing market conditions leads to model parameters that change over time. Therefore, the model parameters should be understood as being time-dependent or even stochastic. Following the consistent re-calibration (CRC) approach, we construct models as concatenations of yield curve increments of Hull–White extended multifactor Vasiček models with different parameters. The CRC approach provides attractive tractable models that preserve the no-arbitrage premise. As a numerical example, we fit Swiss interest rates using CRC multifactor Vasiček models.
Risks, 2016-06-23, Article, doi: 10.3390/risks4030018. Authors: Philipp Harms, David Stefanovits, Josef Teichmann, Mario Wüthrich.

IJGI, Vol. 5, Pages 102: Exploring the Influence of Neighborhood Characteristics on Burglary Risks: A Bayesian Random Effects Modeling Approach
http://www.mdpi.com/2220-9964/5/7/102
A Bayesian random effects modeling approach was used to examine the influence of neighborhood characteristics on burglary risks in Jianghan District, Wuhan, China. This random effects model is essentially spatial; a spatially structured random effects term and an unstructured random effects term are added to the traditional non-spatial Poisson regression model. Based on social disorganization and routine activity theories, five covariates extracted from the available data at the neighborhood level were used in the modeling. Three regression models were fitted and compared by the deviance information criterion to identify which model best fit our data. A comparison of the results from the three models indicates that the Bayesian random effects model is superior to the non-spatial models in fitting the data and estimating regression coefficients. Our results also show that neighborhoods with above average bar density and department store density have higher burglary risks. Neighborhood-specific burglary risks and posterior probabilities of neighborhoods having a burglary risk greater than 1.0 were mapped, indicating the neighborhoods that should warrant more attention and be prioritized for crime intervention and reduction. Implications and limitations of the study are discussed in our concluding section.
ISPRS International Journal of Geo-Information, 2016-06-23, Article, doi: 10.3390/ijgi5070102. Authors: Hongqiang Liu, Xinyan Zhu.

Information, Vol. 7, Pages 36: Standard Compliant Hazard and Threat Analysis for the Automotive Domain
http://www.mdpi.com/2078-2489/7/3/36
The automotive industry has successfully collaborated to release the ISO 26262 standard for developing safe software for cars. The standard describes in detail how to conduct hazard analysis and risk assessments to determine the necessary safety measures for each feature. However, the standard does not concern threat analysis for malicious attackers or how to select appropriate security countermeasures. We propose the application of ISO 27001 for this purpose and show how it can be applied together with ISO 26262. We show how ISO 26262 documentation can be re-used and enhanced to satisfy the analysis and documentation demands of the ISO 27001 standard. We illustrate our approach based on an electronic steering column lock system.
Information, 2016-06-23, Article, doi: 10.3390/info7030036. Authors: Kristian Beckers, Jürgen Dürrwang, Dominik Holling.

IJGI, Vol. 5, Pages 100: Mapping Historical Data: Recovering a Forgotten Floristic and Vegetation Database for Biodiversity Monitoring
http://www.mdpi.com/2220-9964/5/7/100
Multitemporal biodiversity data on a forest ecosystem can provide useful information about the evolution of biodiversity in a territory. The present study describes the recovery of an archive used to determine Schmid’s main vegetation belts in Trento Province, Italy. The archive covers 20 years, from the 1970s to the 1990s. During the FORCING project (an Italian acronym for Cingoli Forestali, i.e., forest belts), a comprehensive database recovery was carried out, and missing data were digitized from historical maps, preserving the paper-based maps and documents. All of the maps of 16 forest districts, and the 8000 related transects, have been georeferenced to make the whole database spatially explicit and to evaluate the possibility of performing comparative samplings on up-to-date datasets. The raw floristic data (approximately 200,000 specific identifications, including frequency indices) still retain an important and irreplaceable information value, and the data can now be browsed via a web GIS. We provide a set of examples of the use of this type of data, and we highlight the potential and the limits of this specific dataset and of historical databases in general.
ISPRS International Journal of Geo-Information, 2016-06-23, Article, doi: 10.3390/ijgi5070100. Authors: Francesco Geri, Nicola La Porta, Fabio Zottele, Marco Ciolli.

Safety, Vol. 2, Pages 15: A Review on All Terrain Vehicle Safety
http://www.mdpi.com/2313-576X/2/2/15
All-terrain vehicles (ATVs) have become increasingly popular in many countries around the world, for occupational as well as recreational use. With this increase in popularity, and with heavier and more powerful machines on the market, major traumas and deaths from ATV use are growing concerns for public health and injury prevention professionals. This review of the literature on ATVs focuses on the mechanisms and patterns of ATV-related injuries, the challenges of injury prevention, and the effects of legislation and regulations regarding ATV usage. The increasing burden of injuries and the substantial economic cost of ATV-related traumas and deaths call for an intensification of injury prevention efforts. Modification of risk factors, institution of regulations and legislation, and enforcement of those rules are important steps in preventing ATV-related harm.
Safety, 2016-06-22, Review, doi: 10.3390/safety2020015. Authors: Vanessa Fawcett, Bonnie Tsang, Amir Taheri, Kathy Belton, Sandy Widder.

Algorithms, Vol. 9, Pages 42: Joint Antenna Selection and Beamforming Algorithms for Physical Layer Multicasting with Massive Antennas
http://www.mdpi.com/1999-4893/9/2/42
We investigate the problem of minimizing the total power consumption under a signal-to-noise ratio (SNR) constraint for a physical layer multicasting system with large-scale antenna arrays. In contrast with existing work, we explicitly consider both the transmit power and the circuit power, which scales with the number of antennas. A joint antenna selection and beamforming technique is proposed to minimize the total power consumption. This is a challenging problem, as it aims to minimize a linear combination of an ℓ0-norm and an ℓ2-norm; to the best of our knowledge, this minimization problem has not yet been well solved. A random decremental antenna selection algorithm is designed, which is further modified using an approximation of the minimal transmit power based on the asymptotic orthogonality of the channels. Then, a more efficient decremental antenna selection algorithm is proposed based on minimizing the ℓ0-norm. Performance results show that the ℓ0-norm minimization algorithm greatly outperforms the random selection algorithm in terms of both total power consumption and average run time.
Algorithms, 2016-06-22, Article, doi: 10.3390/a9020042. Authors: Xinhua Wang, Jinlu Sheng.

Systems, Vol. 4, Pages 25: Improved Time Response of Stabilization in Synchronization of Chaotic Oscillators Using Mathematica
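As a deliberately simplified illustration of the trade-off in the Algorithms article above (not the authors' algorithm): each active antenna costs fixed circuit power, while the transmit power needed to meet the SNR target shrinks roughly like 1/n under asymptotic channel orthogonality, so a decremental selection loop keeps switching antennas off while the total still falls. The 1/n transmit-power model is my own toy assumption:

```python
def decremental_selection(n_antennas, circuit_power, c_tx):
    """Greedily switch off antennas while total power keeps decreasing.

    Toy model: transmit power needed to meet the SNR target is c_tx / n
    (channels assumed asymptotically orthogonal), so total power is
    circuit_power * n + c_tx / n.
    """
    def total(n):
        return circuit_power * n + c_tx / n

    n = n_antennas
    while n > 1 and total(n - 1) < total(n):
        n -= 1
    return n, total(n)

n, p = decremental_selection(64, circuit_power=1.0, c_tx=100.0)
print(n, p)  # 10 20.0 : the loop stops at the n minimizing n + 100/n
```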
http://www.mdpi.com/2079-8954/4/2/25
Chaotic dynamics are an interesting topic in nonlinear science that has been studied intensively during the last three decades due to its wide applicability. Motivated by the extensive research on synchronization, the authors of this study have improved the time response of stabilization when parametrically excited Φ6 Van der Pol oscillators (VDPO) and Φ6 Duffing oscillators (DO) are synchronized identically as well as non-identically (with each other) using the Linear Active Control (LAC) technique in Mathematica. Furthermore, the authors have synchronized the same pairs of oscillators using a more robust synchronization technique with a faster time response of stability, called Robust Adaptive Sliding Mode Control (RASMC). A comparative study between the previous results of Njah’s work and our Mathematica-based LAC results has been carried out, and the time response of stabilization of synchronization using RASMC is discussed.
Systems, 2016-06-22, Article, doi: 10.3390/systems4020025. Authors: Mohammad Shahzad, Israr Ahmad, Azizan Saaban, Adyda Ibrahim.

Electronics, Vol. 5, Pages 31: InAlGaN/GaN HEMTs at Cryogenic Temperatures
http://www.mdpi.com/2079-9292/5/2/31
We report on the electron transport properties of the two-dimensional electron gas (2DEG) confined in a quaternary-barrier InAlGaN/AlN/GaN heterostructure down to cryogenic temperatures for the first time. A state-of-the-art electron mobility of 7340 cm2·V−1·s−1, combined with a sheet carrier density of 1.93 × 1013 cm−2, leads to a remarkably low sheet resistance of 44 Ω/□ measured at 4 K. A strong improvement of the direct current (DC) and radio frequency (RF) characteristics is observed at low temperatures. The excellent current and power gain cutoff frequencies (fT/fmax) of 65/180 GHz and 95/265 GHz at room temperature and 77 K, respectively, using a 0.12 μm technology, confirm the outstanding 2DEG properties.
Electronics, 2016-06-22, Article, doi: 10.3390/electronics5020031. Authors: Ezgi Dogmus, Riad Kabouche, Sylvie Lepilliet, Astrid Linge, Malek Zegaoui, Hichem Ben-Ammar, Marie-Pierre Chauvat, Pierre Ruterana, Piero Gamarra, Cédric Lacam, Maurice Tordjman, Farid Medjdoub.

Technologies, Vol. 4, Pages 18: Designing Closed-Loop Brain-Machine Interfaces Using Model Predictive Control
http://www.mdpi.com/2227-7080/4/2/18
Brain-machine interfaces (BMIs) are broadly defined as systems that establish direct communication between living brain tissue and external devices, such as artificial arms. By sensing and interpreting neuronal activity to actuate an external device, BMI-based neuroprostheses hold great promise for rehabilitating motor-disabled subjects, such as amputees. In this paper, we develop a control-theoretic analysis of a BMI-based neuroprosthetic system for a voluntary single-joint reaching task in the absence of visual feedback. Using synthetic data obtained through the simulation of an experimentally validated psycho-physiological cortical circuit model, both Wiener-filter- and Kalman-filter-based linear decoders are developed. We analyze the performance of both decoders in the presence and in the absence of natural proprioceptive feedback information, and show through simulations that the performance of both decoders degrades significantly in the absence of natural proprioception. To recover the performance of these decoders, we pose two problems in the model predictive control framework for designing optimal artificial sensory feedback: tracking the desired position trajectory, and tracking the firing-rate trajectory of the neurons which encode proprioception. Our results indicate that while the position-trajectory-based design can only recover the position and velocity trajectories, the firing-rate-trajectory-based design recovers the performance of the motor task along with the firing rates in other cortical regions. Finally, we extend our design by incorporating a network of spiking neurons and designing artificial sensory feedback in the form of a charge-balanced biphasic stimulating current.
Technologies, 2016-06-22, Article, doi: 10.3390/technologies4020018. Authors: Gautam Kumar, Mayuresh Kothare, Nitish Thakor, Marc Schieber, Hongguang Pan, Baocang Ding, Weimin Zhong.

Informatics, Vol. 3, Pages 9: Back-Off Time Calculation Algorithms in WSN
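The Kalman-filter decoder of the Technologies article above is not reproduced here, but the scalar Kalman recursion it builds on can be sketched with invented numbers (state assumed constant, diffuse prior):

```python
def kalman_constant(measurements, q=0.0, r=1.0, x0=0.0, p0=1e6):
    """Scalar Kalman filter for a constant state (x_{t+1} = x_t + noise).

    q: process-noise variance, r: measurement-noise variance.
    With q = 0 and a diffuse prior p0, the estimate approaches the
    sample mean of the measurements.
    """
    x, p = x0, p0
    for z in measurements:
        p = p + q              # predict: state is constant, uncertainty grows by q
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update with the innovation z - x
        p = (1.0 - k) * p
    return x

# Four noisy firing-rate-like readings around a true value of 1.0 (made up):
est = kalman_constant([1.2, 0.8, 1.1, 0.9])
print(round(est, 3))  # 1.0
```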
http://www.mdpi.com/2227-9709/3/2/9
In a Mobile Wireless Sensor Mesh Network (MWSMN) based on the IEEE 802.15.4 standard, low power consumption is vitally important, since the network devices are mostly battery driven. This is especially true for devices with small form factors, such as those used in wireless sensor networks. This paper proposes four new approaches to reducing the Back-Off Time in the ZigBee standard in order to minimize collisions caused by transmissions between neighbouring nodes within the mesh network. The four alternative algorithms for the Back-Off Time calculation are compared to the standard ZigBee Back-Off Time algorithm with regard to their energy needs, using the simulation suite OPNET Modeler. To study the behaviour of the parameters of all algorithms in all scenarios, a statistical Analysis of Variance (ANOVA) has been used; it shows that the null hypotheses are rejected except in one case. The results show that the two passive algorithms, based on the Tabu Search and Simulated Annealing search techniques, are suitable for battery-driven, energy-sensitive networks. The Ant Colony Optimization (ACO) approaches increase throughput and reduce packet loss, but cost more in terms of energy due to the additional control packets they introduce. To the best of the authors’ knowledge, this is the first approach for MWSMNs that uses Swarm Intelligence techniques and search-solution algorithms for Back-Off Time optimization.
Informatics, 2016-06-22, Article, doi: 10.3390/informatics3020009. Authors: Ali Al-Humairi, Alexander Probst.

JRFM, Vol. 9, Pages 6: Down-Side Risk Metrics as Portfolio Diversification Strategies across the Global Financial Crisis
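For reference, the baseline that the four algorithms in the Informatics article above modify is the standard IEEE 802.15.4 slotted CSMA/CA back-off: a uniformly random number of unit back-off periods in [0, 2^BE - 1]. A sketch, assuming the 2.4 GHz PHY timing (20 symbols of 16 µs per unit back-off period, default macMinBE = 3):

```python
import random

UNIT_BACKOFF_US = 20 * 16  # aUnitBackoffPeriod: 20 symbols x 16 us (2.4 GHz PHY)

def standard_backoff_us(be, rng=random):
    """Standard IEEE 802.15.4 back-off delay for back-off exponent be:
    a uniform number of unit periods in [0, 2**be - 1], in microseconds."""
    return rng.randint(0, 2 ** be - 1) * UNIT_BACKOFF_US

# With the default macMinBE = 3, the first back-off lies in [0, 2240] us:
delays = [standard_backoff_us(3) for _ in range(200)]
print(all(0 <= d <= 7 * UNIT_BACKOFF_US for d in delays))  # True
```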
http://www.mdpi.com/1911-8074/9/2/6
This paper features an analysis of the effectiveness of a range of portfolio diversification strategies, with a focus on down-side risk metrics, in a European market context. We apply these measures to a set of daily arithmetically-compounded returns, in U.S. dollar terms, on a set of ten market indices representing the major European markets for a nine-year period from the beginning of 2005 to the end of 2013. The sample period, which incorporates the periods of both the Global Financial Crisis (GFC) and the subsequent European Debt Crisis (EDC), is a challenging one for the application of portfolio investment strategies. The analysis is undertaken via the examination of multiple investment strategies and a variety of hold-out periods and backtests. We commence by using four two-year estimation periods and a subsequent one-year investment hold-out period, to analyse a naive 1/N diversification strategy and to contrast its effectiveness with Markowitz mean variance analysis with positive weights. Markowitz optimisation is then compared to various down-side investment optimisation strategies. We begin by comparing Markowitz with CVaR, and then proceed to evaluate the relative effectiveness of Markowitz with various draw-down strategies, utilising a series of backtests. Our results suggest that none of the more sophisticated optimisation strategies appear to dominate naive diversification.Journal of Risk and Financial Management2016-06-2192Article10.3390/jrfm902000661911-80742016-06-21doi: 10.3390/jrfm9020006David AllenMichael McAleerRobert PowellAbhay Singh<![CDATA[Algorithms, Vol. 9, Pages 41: Visual and Textual Sentiment Analysis of a Microblog Using Deep Convolutional Neural Networks]]>
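The down-side metric CVaR mentioned above (Conditional Value at Risk, or expected shortfall) is the expected loss beyond the Value at Risk quantile. A minimal historical-simulation sketch on synthetic data (not the paper's dataset or implementation):

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Historical VaR and CVaR (expected shortfall) at confidence alpha.

    Losses are the negatives of returns; VaR is the alpha-quantile of the
    loss distribution and CVaR is the mean loss at or beyond VaR.
    """
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

rng = np.random.default_rng(42)
daily = rng.normal(0.0003, 0.01, size=2500)  # synthetic daily returns
var95, cvar95 = var_cvar(daily, 0.95)        # CVaR >= VaR by construction
```

A CVaR-based optimiser would minimise `cvar95` over portfolio weights rather than the variance used in Markowitz analysis.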
http://www.mdpi.com/1999-4893/9/2/41
Sentiment analysis of online social media has attracted significant interest recently. Many studies have been performed, but most existing methods focus on either only textual content or only visual content. In this paper, we use deep convolutional neural networks to analyze the sentiment of Chinese microblogs from both textual and visual content. We first train a convolutional neural network (CNN) on top of pre-trained word vectors for textual sentiment analysis and employ a deep convolutional neural network (DCNN) with generalized dropout for visual sentiment analysis. We then evaluate our sentiment prediction framework on a dataset collected from a famous Chinese social media network (Sina Weibo) that includes text and related images and demonstrate state-of-the-art results on this Chinese sentiment analysis benchmark.Algorithms2016-06-2192Article10.3390/a9020041411999-48932016-06-21doi: 10.3390/a9020041Yuhai YuHongfei LinJiana MengZhehuan Zhao<![CDATA[Safety, Vol. 2, Pages 14: Bicycle-Bicycle Accidents Emerge from Encounters: An Agent-Based Approach]]>
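The core operation of a CNN over text is sliding convolution filters across a sentence's stacked word vectors and max-pooling each filter's response over time. A hedged numpy sketch of that operation only (shapes and names are ours, not the authors' implementation):

```python
import numpy as np

def text_cnn_features(word_vecs, filters):
    """1-D convolution over word positions + max-over-time pooling.

    word_vecs: (sentence_len, embed_dim) stacked word vectors.
    filters:   (n_filters, window, embed_dim) convolution kernels.
    Returns a (n_filters,) feature vector: one max response per filter.
    """
    n, d = word_vecs.shape
    f, w, _ = filters.shape
    conv = np.empty((f, n - w + 1))
    for i in range(n - w + 1):                      # slide window over words
        window = word_vecs[i:i + w]                 # (w, d) patch
        conv[:, i] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return np.maximum(conv, 0).max(axis=1)          # ReLU + max pooling

rng = np.random.default_rng(0)
feats = text_cnn_features(rng.normal(size=(12, 50)),   # 12 words, 50-d vectors
                          rng.normal(size=(8, 3, 50))) # 8 filters, window 3
```

A full model would learn the filters by backpropagation and feed `feats` into a softmax classifier.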
http://www.mdpi.com/2313-576X/2/2/14
Traditional accident risk prediction models need adequate data on explanatory variables, most importantly data on traffic flows. However, in the case of accidents between bicycles, the availability of such data is often limited. Therefore, alternative bottom-up simulation modelling approaches are expected to complement traditional equation-based models. In this paper we present an agent-based approach to explore bicycle-bicycle accidents. Specifically, we hypothesise that (1) bicycle-bicycle accidents are based on the population of encounters between cyclists rather than on bicycle flows and (2) encounters have a non-linear relationship with flows. Bicycle flows and encounters are simulated by means of an agent-based model that is implemented for the road network of the city of Salzburg. Simulation results are tested against a 10-year dataset of police records on bicycle-bicycle accidents. The results affirm both hypotheses: First, cyclist encounters exhibit a linear relationship to accidents, suggesting that encounters are the true population of bicycle-bicycle accidents. Second, flows relate to both encounters and accidents through a second-order polynomial function.Safety2016-06-2122Article10.3390/safety2020014142313-576X2016-06-21doi: 10.3390/safety2020014Gudrun WallentinMartin Loidl<![CDATA[Administrative Sciences, Vol. 6, Pages 6: The State of Innovation and Entrepreneurship Research]]>
http://www.mdpi.com/2076-3387/6/2/6
Innovation is informed by the ability to see connections, spot opportunities, and take advantage of them.Administrative Sciences2016-06-2162Editorial10.3390/admsci602000662076-33872016-06-21doi: 10.3390/admsci6020006Fernando Muñoz-Bullón<![CDATA[Administrative Sciences, Vol. 6, Pages 5: Advertising between Archetype and Brand Personality]]>
http://www.mdpi.com/2076-3387/6/2/5
The aim of the paper is the alignment of C.G. Jung’s (1954) archetypes and Aaker’s (1997) brand personality framework in the context of advertising. C.G. Jung’s theories had a tremendous impact on psychology. David Aaker and his daughter Jennifer are seen by many as the branding gurus. Despite the fact that both frameworks refer to persons/personalities, there is no publication linking the two. Our research tries to fill this gap by developing a joint framework combining Jung’s and Aaker’s attributes and applying it to the analysis of two distinctively different TV commercials from Asian hotel chains. A total of 102 Executive MBA students watched both TV commercials, then completed an Archetype (C.G. Jung) indicator test and rated the Brand Personality (Aaker) traits of the two commercials. Results show that there is common ground. This has implications for advertisers who may want to specify an archetype and related personality attributes for their promotional campaigns. Game changers in the hospitality sector may want to be seen as the Outlaw, whereas established hotel chains may position themselves as the Lover, with personality attributes such as welcoming, charming, and embraced.Administrative Sciences2016-06-2162Article10.3390/admsci602000552076-33872016-06-21doi: 10.3390/admsci6020005Clemens BechterGiorgio FarinelliRolf-Dieter DanielMichael Frey<![CDATA[Econometrics, Vol. 4, Pages 29: Evaluating Eigenvector Spatial Filter Corrections for Omitted Georeferenced Variables]]>
http://www.mdpi.com/2225-1146/4/2/29
The Ramsey regression equation specification error test (RESET) furnishes a diagnostic for omitted variables in a linear regression model specification (i.e., the null hypothesis is no omitted variables). Integer powers of fitted values from a regression analysis are introduced as additional covariates in a second regression analysis. The former regression model can be considered restricted, whereas the latter model can be considered unrestricted; this first model is nested within this second model. A RESET significance test is conducted with an F-test using the error sums of squares and the degrees of freedom for the two models. For georeferenced data, eigenvectors can be extracted from a modified spatial weights matrix, and included in a linear regression model specification to account for the presence of nonzero spatial autocorrelation. The intuition underlying this methodology is that these synthetic variates function as surrogates for omitted variables. Accordingly, a restricted regression model without eigenvectors should indicate an omitted variables problem, whereas an unrestricted regression model with eigenvectors should result in a failure to reject the RESET null hypothesis. This paper furnishes eleven empirical examples, covering a wide range of spatial attribute data types, that illustrate the effectiveness of eigenvector spatial filtering in addressing the omitted variables problem for georeferenced data as measured by the RESET.Econometrics2016-06-2142Article10.3390/econometrics4020029292225-11462016-06-21doi: 10.3390/econometrics4020029Daniel GriffithYongwan Chun<![CDATA[Symmetry, Vol. 8, Pages 52: Parity-Time Symmetry and the Toy Models of Gain-Loss Dynamics near the Real Kato’s Exceptional Points]]>
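The RESET procedure described above can be sketched directly: fit the restricted model, augment it with integer powers of the fitted values, and form an F statistic from the two error sums of squares. A minimal numpy illustration on synthetic, deliberately misspecified data (not from the paper, and without the eigenvector spatial filtering step):

```python
import numpy as np

def reset_F(y, X, powers=(2, 3)):
    """Ramsey RESET F statistic for omitted-variable misspecification.

    Restricted model: y ~ X.  Unrestricted model: y ~ X plus integer
    powers of the restricted fitted values.  Returns (F, df1, df2).
    """
    def ols(y, X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid, X @ beta              # (SSR, fitted values)

    ssr_r, yhat = ols(y, X)                         # restricted model
    Z = np.column_stack([X] + [yhat ** p for p in powers])
    ssr_u, _ = ols(y, Z)                            # unrestricted model
    df1 = len(powers)
    df2 = len(y) - Z.shape[1]
    return (ssr_r - ssr_u) / df1 / (ssr_u / df2), df1, df2

rng = np.random.default_rng(1)
x = rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
y = 1 + x + 0.5 * x ** 2 + rng.normal(scale=0.1, size=200)  # quadratic truth
F, df1, df2 = reset_F(y, X)   # large F => reject "no omitted variables"
```

Comparing F against the F(df1, df2) critical value completes the test; in the paper's setting, adding the extracted eigenvectors to X should drive F back toward insignificance.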
http://www.mdpi.com/2073-8994/8/6/52
For a given operator D(t) of an observable in theoretical parity-time symmetric quantum physics (or for its evolution-generator analogues in experimental gain-loss classical optics, etc.), the instant t_critical of a spontaneous breakdown of the parity-time alias gain-loss symmetry should be given, in the rigorous language of mathematics, Kato’s name of an “exceptional point”, t_critical = t^(EP). In the majority of conventional applications the exceptional-point (EP) values are not real. In our paper, we pay attention to several exactly tractable toy-model evolutions for which at least some of the values of t^(EP) become real. These values are interpreted as “instants of a catastrophe”, be it classical or quantum. In the classical optical setting the discrete nature of our toy models might make them amenable to simulations. In the latter context the instant of the Big Bang is mentioned as an illustrative sample of the possible physical meaning of such an EP catastrophe in quantum cosmology.Symmetry2016-06-2086Article10.3390/sym8060052522073-89942016-06-20doi: 10.3390/sym8060052Miloslav Znojil<![CDATA[MCA, Vol. 21, Pages 25: Fixed Order Controller for Schur Stability]]>
http://www.mdpi.com/2297-8747/21/2/25
If the characteristic polynomial of a discrete-time system has all its roots in the open unit disc of the complex plane, the system is called Schur stable. In this paper, the Schur stabilization problem of a closed-loop discrete-time system by an affine compensator is considered. For this purpose, the distance function between the Schur stability region and the affine controller subset is investigated.Mathematical and Computational Applications2016-06-20212Article10.3390/mca21020025252297-87472016-06-20doi: 10.3390/mca21020025Taner Büyükköroğlu<![CDATA[IJGI, Vol. 5, Pages 98: Heading Estimation with Real-time Compensation Based on Kalman Filter Algorithm for an Indoor Positioning System]]>
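The Schur stability property itself is easy to check numerically from the roots of the characteristic polynomial; a minimal sketch (the check only, not the paper's affine-compensator design):

```python
import numpy as np

def is_schur_stable(coeffs) -> bool:
    """True if all roots of the polynomial (coefficients listed from the
    highest degree down) lie strictly inside the open unit disc."""
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

# z^2 - 0.5 z + 0.06 has roots 0.2 and 0.3 -> Schur stable
stable = is_schur_stable([1.0, -0.5, 0.06])
# z^2 - 1.7 z + 0.6 has a root at 1.2 -> not Schur stable
unstable = is_schur_stable([1.0, -1.7, 0.6])
```

The stabilization problem then amounts to choosing compensator parameters so that the closed-loop characteristic polynomial passes this check.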
http://www.mdpi.com/2220-9964/5/6/98
The problem of heading drift error when using only a low-cost Micro-Electro-Mechanical Systems (MEMS) Inertial Measurement Unit (IMU) has not been well solved. In this paper, a heading estimation method with real-time compensation based on the Kalman filter, abbreviated as KHD, is proposed. For the KHD method, a unified heading error model is established for the various predictable errors of the magnetic compass in pedestrian navigation, and an effective method is proposed for solving the model parameters in indoor environments with regular structure. In addition, the error model parameters are solved by a Kalman filtering algorithm with building geometry information in order to achieve real-time heading compensation. The experimental results show that the KHD method can not only effectively correct the original heading information, but also effectively inhibit the accumulation of positioning errors. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that applying the KHD method to a PDR (Pedestrian Dead Reckoning) algorithm can reliably achieve meter-level positioning using only a low-cost MEMS IMU.ISPRS International Journal of Geo-Information2016-06-2056Article10.3390/ijgi5060098982220-99642016-06-20doi: 10.3390/ijgi5060098Xin LiJian WangChunyan Liu<![CDATA[IJGI, Vol. 5, Pages 99: Evaluation of Deterministic and Complex Analytical Hierarchy Process Methods for Agricultural Land Suitability Analysis in a Changing Climate]]>
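As a toy illustration of Kalman-filtered heading compensation (our own scalar model, not the paper's KHD formulation), a filter can track a nearly constant heading bias from noisy heading-error observations:

```python
import numpy as np

def estimate_heading_bias(heading_errors, q=1e-6, r=0.01):
    """Scalar Kalman filter tracking a (nearly constant) heading bias.

    heading_errors: compass heading minus the reference heading implied
    by, e.g., building geometry, in radians.  q and r are the process
    and measurement noise variances.  Returns the filtered estimates.
    """
    x, p = 0.0, 1.0                      # state estimate and its variance
    out = []
    for z in heading_errors:
        p += q                           # predict: bias assumed ~constant
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # update with the innovation
        p *= (1 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(7)
true_bias = 0.15                          # rad, e.g. a magnetic disturbance
z = true_bias + rng.normal(0, 0.1, 500)   # noisy heading-error observations
est = estimate_heading_bias(z)            # converges toward true_bias
```

Subtracting the running estimate from each raw compass reading gives the real-time compensation; the paper's method additionally conditions the model on building geometry.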
http://www.mdpi.com/2220-9964/5/6/99
Land suitability analysis is employed to evaluate the appropriateness of land for a particular purpose whilst integrating both qualitative and quantitative inputs, which can be continuous in nature. In agricultural modelling, however, this continuous aspect is often disregarded: some parametric procedures for suitability analysis compartmentalise units into defined membership classes. This imposition of crisp boundaries neglects the continuous formations found throughout nature and overlooks differences and inherent uncertainties in the modelling. This research compares two approaches to suitability analysis over three differing methods. The primary approach uses an Analytical Hierarchy Process (AHP), while the other uses a Fuzzy AHP over two methods: Fitted Fuzzy AHP and Nested Fuzzy AHP. Secondly, each method is assessed on how it behaves in a climate change scenario, to understand and highlight the role of uncertainties in model conceptualisation and structure. Outputs and comparisons between the methods, in relation to area, proportion of membership classes and spatial representation, showed that the fuzzy modelling techniques produced a more robust and continuous output. In particular, the Nested Fuzzy AHP was concluded to be the most pertinent, as it incorporated complex modelling techniques as well as the initial AHP framework. Through this comparison and assessment of model behaviour, an evaluation of each method's predictive capacity and relevance for decision-making purposes in agricultural applications is gained.ISPRS International Journal of Geo-Information2016-06-2056Concept Paper10.3390/ijgi5060099992220-99642016-06-20doi: 10.3390/ijgi5060099Harmen RomeijnRobert FaggianVasco DiogoVictor Sposito<![CDATA[Informatics, Vol. 3, Pages 8: Choosing a Model for eConsult Specialist Remuneration: Factors to Consider]]>
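The AHP step common to all three methods derives criterion weights from a pairwise comparison matrix via its principal eigenvector. A minimal sketch with an illustrative, perfectly consistent matrix (not from the study):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalised principal eigenvector of a
    (reciprocal) pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# A perfectly consistent comparison matrix built from weights (0.5, 0.3, 0.2):
# entry [i, j] states how much more important criterion i is than j.
w_true = np.array([0.5, 0.3, 0.2])
A = w_true[:, None] / w_true[None, :]
w = ahp_weights(A)   # recovers (0.5, 0.3, 0.2) up to rounding
```

The fuzzy variants replace the crisp entries of `A` with fuzzy numbers; the eigenvector (or an equivalent prioritisation) step remains the same in spirit.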
http://www.mdpi.com/2227-9709/3/2/8
Electronic consultation (eConsult) is an innovative solution that allows specialists and primary care providers to communicate electronically, improving access to specialist care. Understanding the cost implications of different remuneration models available to pay specialists is of critical importance as adoption of these services continues to increase. We used data collected through the Champlain BASE (Building Access to Specialists through eConsultation) eConsult service to simulate the cost implications of different remuneration models in Canada. The prorated hourly rate model averaged $45.72 CAD (Canadian Dollar) per eConsult while the prorated hourly rate with incentive averaged $51.90 CAD per eConsult, and the fee for service cost $60.50 CAD per eConsult. Paying all specialty groups to block three hours per week for eConsults averaged $337.44 CAD per eConsult and paying for 1-h blocks averaged $133.41 CAD per eConsult. As the remuneration of specialists is the largest cost driver of an established eConsult service, our findings can inform policymakers considering the implementation of eConsult or wishing to further develop an existing service.Informatics2016-06-1832Article10.3390/informatics302000882227-97092016-06-18doi: 10.3390/informatics3020008Clare LiddyCatherine Deri ArmstrongFanny McKellipsPaul DrosinisAmir AfkhamErin Keely<![CDATA[Fluids, Vol. 1, Pages 19: On Thermomechanics of a Nonlinear Heat Conducting Suspension]]>
http://www.mdpi.com/2311-5521/1/2/19
In this short paper, we discuss and provide constitutive relations for the stress tensor and the heat flux vector for a nonlinear density-gradient dependent (Korteweg-type) fluid. Specifically, we attempt to present a unified thermo-mechanical approach to the two models given in papers of Massoudi (International Journal of Non-Linear Mechanics, 2001, 36(1), pp. 25–37.) and Massoudi (Mathematical Methods in the Applied Sciences, 2006, 29(13), pp. 1599–1613.) where the entropy law is used and restrictions are also obtained on the constitutive parameters. In most thermomechanical studies of nonlinear fluids using the entropy law, the stress tensor is assumed to be nonlinear and the heat flux vector still has the form of the Fourier type, i.e., it is proportional to the temperature gradient. In this paper, we use a generalized (nonlinear) form for the heat flux vector. When our model is linearized we obtain constraints, due to the entropy inequality, which are in agreement with the earlier results.Fluids2016-06-1812Article10.3390/fluids1020019192311-55212016-06-18doi: 10.3390/fluids1020019Mehrdad MassoudiA. Kirwan<![CDATA[Robotics, Vol. 5, Pages 11: State of the Art Robotic Grippers and Applications]]>
http://www.mdpi.com/2218-6581/5/2/11
In this paper, we present a recent survey on robotic grippers. In many cases, modern grippers outperform their older counterparts: they are stronger, more repeatable, and faster. Technological advancements have also contributed to the ability to grip a wider variety of objects, including soft fabrics, microelectromechanical systems, and synthetic sheets. In addition, newer materials are being used to improve the functionality of grippers, including piezoelectric materials, shape memory alloys, smart fluids, carbon fiber, and many more. This paper covers everything from the very first robotic gripper to the newest developments in grasping methods. Unlike other survey papers, we focus on the applications of robotic grippers in industrial and medical settings, and on grippers for fragile objects and soft fabrics. We report on new advancements in grasping mechanisms and discuss their behavior for different purposes. Finally, we present the future trends of grippers in terms of flexibility and performance and their vital applications in the emerging areas of robotic surgery, industrial assembly, space exploration, and micromanipulation. These advancements provide a future outlook on new trends in robotic grippers.Robotics2016-06-1752Review10.3390/robotics5020011112218-65812016-06-17doi: 10.3390/robotics5020011Kevin TaiAbdul-Rahman El-SayedMohammadali ShahriariMohammad BiglarbegianShohel Mahmud<![CDATA[Symmetry, Vol. 8, Pages 51: On Classification of Symmetry Reductions for the Eikonal Equation]]>
http://www.mdpi.com/2073-8994/8/6/51
We study the relationship between the classification of three-dimensional nonconjugate subalgebras of the Lie algebra of the Poincaré group P(1,4) and the types of symmetry reduction of the eikonal equation to ordinary differential equations (ODEs).Symmetry2016-06-1786Article10.3390/sym8060051512073-89942016-06-17doi: 10.3390/sym8060051Vasyl FedorchukVolodymyr Fedorchuk<![CDATA[Econometrics, Vol. 4, Pages 28: Testing Symmetry of Unknown Densities via Smoothing with the Generalized Gamma Kernels]]>
http://www.mdpi.com/2225-1146/4/2/28
This paper improves a kernel-smoothed test of symmetry through combining it with a new class of asymmetric kernels called the generalized gamma kernels. It is demonstrated that the improved test statistic has a normal limit under the null of symmetry and is consistent under the alternative. A test-oriented smoothing parameter selection method is also proposed to implement the test. Monte Carlo simulations indicate superior finite-sample performance of the test statistic. It is worth emphasizing that the performance is grounded on the first-order normal limit and a small number of observations, despite a nonparametric convergence rate and a sample-splitting procedure of the test.Econometrics2016-06-1742Article10.3390/econometrics4020028282225-11462016-06-17doi: 10.3390/econometrics4020028Masayuki HirukawaMari Sakudo<![CDATA[Informatics, Vol. 3, Pages 7: Developing and Improving Student Non-Technical Skills in IT Education: A Literature Review and Model]]>
http://www.mdpi.com/2227-9709/3/2/7
The purpose of this paper is to identify portions of the literature in the areas of Information Technology (IT) management, skills development, and curriculum development that support the design of a holistic conceptual framework for instruction in non-technical skills within the IT higher education context. This review provides a framework for understanding how the critical success factors related to IT and Information Systems (IS) professional success are impacted by developing students’ non-technical skills. The article culminates in a holistic conceptual framework for developing non-technical skills within the IT higher education context. Implications for theory and research are provided.Informatics2016-06-1732Concept Paper10.3390/informatics302000772227-97092016-06-17doi: 10.3390/informatics3020007Marcia HagenDavid Bouchard<![CDATA[IJGI, Vol. 5, Pages 97: Parallel Landscape Driven Data Reduction & Spatial Interpolation Algorithm for Big LiDAR Data]]>
http://www.mdpi.com/2220-9964/5/6/97
Airborne Light Detection and Ranging (LiDAR) topographic data provide highly accurate digital terrain information, which is used widely in applications such as creating flood insurance rate maps, forest and tree studies, coastal change mapping, soil and landscape classification, 3D urban modeling, river bank management, and agricultural crop studies. In this paper, we focus mainly on the use of LiDAR data in terrain modeling/Digital Elevation Model (DEM) generation. Technological advancements in LiDAR sensors have enabled highly accurate and highly dense LiDAR point clouds, which have made possible high-resolution modeling of terrain surfaces. However, high-density data result in massive data volumes, which pose computing issues: the computational time required for dissemination, processing and storage of these data is directly proportional to the volume of the data. We describe a novel technique based on the slope map of the terrain, which addresses a challenging problem in spatial data analysis: reducing dense LiDAR data without sacrificing accuracy. To the best of our knowledge, this is the first landscape-driven data reduction algorithm. We also perform an empirical study, which shows that there is no significant loss in accuracy for the DEM generated from a LiDAR dataset reduced by 52% with our algorithm, compared to the DEM generated from the original, complete LiDAR dataset. For the accuracy of our statistical analysis, we compute the Root Mean Square Error (RMSE) over all of the grid points of the original DEM against the DEM generated from the reduced data, instead of comparing only a few random control points. Moreover, our multi-core data reduction algorithm is highly scalable. 
We also describe a modified parallel Inverse Distance Weighted (IDW) spatial interpolation method and show that it generates DEMs in less time and with better accuracy than the traditional IDW method.ISPRS International Journal of Geo-Information2016-06-1756Article10.3390/ijgi5060097972220-99642016-06-17doi: 10.3390/ijgi5060097Rahil SharmaZewei XuRamanathan SugumaranSuely Oliveira<![CDATA[Information, Vol. 7, Pages 33: A Framework for Measuring Security as a System Property in Cyberphysical Systems]]>
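The sequential inverse distance weighting that such parallel variants build on can be sketched in a few lines (parameter names and the zero-distance handling are ours, not the paper's modified method):

```python
import numpy as np

def idw(sample_xy, sample_z, query_xy, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation at query points.

    Each query value is the distance-weighted average of the sample
    elevations, with weights 1 / d**power; a query that coincides with
    a sample point returns that sample's value exactly.
    """
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    z = (w * sample_z).sum(axis=1) / w.sum(axis=1)
    exact = d.min(axis=1) < eps                 # coincident query points
    z[exact] = sample_z[d.argmin(axis=1)[exact]]
    return z

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
elev = np.array([10.0, 20.0, 30.0])
z = idw(pts, elev, np.array([[0.0, 0.0], [0.5, 0.5]]))
```

Parallelising this amounts to partitioning the query grid across cores or devices, since each output cell is computed independently.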
http://www.mdpi.com/2078-2489/7/2/33
This paper addresses the challenge of measuring security, understood as a system property, of cyberphysical systems, in the category of similar properties, such as safety and reliability. First, it attempts to define precisely what security, as a system property, really is. Then, an application context is presented, in terms of an attack surface in cyberphysical systems. Contemporary approaches related to the principles of measuring software properties are also discussed, with emphasis on building models. These concepts are illustrated in several case studies, based on previous work of the authors, to conduct experimental security measurements.Information2016-06-1772Article10.3390/info7020033332078-24892016-06-17doi: 10.3390/info7020033Janusz ZalewskiIngrid BuckleyBogdan CzejdoSteven DragerAndrew KorneckiNary Subramanian<![CDATA[IJGI, Vol. 5, Pages 96: OpenCL Implementation of a Parallel Universal Kriging Algorithm for Massive Spatial Data Interpolation on Heterogeneous Systems]]>
http://www.mdpi.com/2220-9964/5/6/96
In some digital Earth engineering applications, spatial interpolation algorithms are required to process and analyze large amounts of data. Due to its powerful computing capacity, heterogeneous computing has been used in many applications for data processing in various fields. In this study, we explore the design and implementation of a parallel universal kriging spatial interpolation algorithm using the OpenCL programming model on heterogeneous computing platforms for massive geospatial data processing. This study focuses primarily on transforming the hotspot of the serial algorithm, i.e., the universal kriging interpolation function, into the corresponding kernel function in OpenCL. We also employ parallelization and optimization techniques in our implementation to improve the code performance. Finally, based on the results of experiments performed on two different high-performance heterogeneous platforms, i.e., an NVIDIA graphics processing unit system and an Intel Xeon Phi system (MIC), we show that the parallel universal kriging algorithm can achieve a speedup of up to 40× with a single computing device and up to 80× with multiple devices.ISPRS International Journal of Geo-Information2016-06-1756Article10.3390/ijgi5060096962220-99642016-06-17doi: 10.3390/ijgi5060096Fang HuangShuanshuan BuJian TaoXicheng Tan<![CDATA[MCA, Vol. 21, Pages 24: A Note on Some Solutions of Copper-Water (Cu-Water) Nanofluids in a Channel with Slowly Expanding or Contracting Walls with Heat Transfer]]>
http://www.mdpi.com/2297-8747/21/2/24
A study has been carried out to examine the occurrence of multiple solutions for Copper-Water nanofluid flows in a porous channel with slowly expanding and contracting walls. The governing equations are first transformed to similarity equations by using a similarity transformation. The resulting equations are then solved numerically by using the shooting method. The effects of the wall expansion ratio and the solid volume fraction on the velocity and temperature profiles have been studied. Numerical results are presented graphically for the variations of the different physical parameters. The study reveals that triple solutions exist only for the case of suction.Mathematical and Computational Applications2016-06-16212Article10.3390/mca21020024242297-87472016-06-16doi: 10.3390/mca21020024Jawad RazaAzizah RohniZurni Omar<![CDATA[Symmetry, Vol. 8, Pages 50: Exact and Numerical Solutions of a Spatially-Distributed Mathematical Model for Fluid and Solute Transport in Peritoneal Dialysis]]>
http://www.mdpi.com/2073-8994/8/6/50
The nonlinear mathematical model for solute and fluid transport induced by the osmotic pressure of glucose and albumin with the dependence of several parameters on the hydrostatic pressure is described. In particular, the fractional space available for macromolecules (albumin was used as a typical example) and fractional fluid void volume were assumed to be different functions of hydrostatic pressure. In order to find non-uniform steady-state solutions analytically, some mathematical restrictions on the model parameters were applied. Exact formulae (involving hypergeometric functions) for the density of fluid flux from blood to tissue and the fluid flux across tissues were constructed. In order to justify the applicability of the analytical results obtained, a wide range of numerical simulations were performed. It was found that the analytical formulae can describe with good approximation the fluid and solute transport (especially the rate of ultrafiltration) for a wide range of values of the model parameters.Symmetry2016-06-1686Article10.3390/sym8060050502073-89942016-06-16doi: 10.3390/sym8060050Roman ChernihaKateryna GozakJacek Waniewski<![CDATA[IJGI, Vol. 5, Pages 95: A Fractal Perspective on Scale in Geography]]>
http://www.mdpi.com/2220-9964/5/6/95
Scale is a fundamental concept that has attracted persistent attention in the geography literature over the past several decades. However, it creates enormous confusion and frustration, particularly in the context of geographic information science, because of scale-related issues such as image resolution and the modifiable areal unit problem (MAUP). This paper argues that the confusion and frustration arise from traditional Euclidean geometric thinking, in which locations, directions, and sizes are considered absolute, and it is now time to revise this conventional thinking. Hence, we review fractal geometry, together with its underlying way of thinking, and compare it to Euclidean geometry. Under the paradigm of Euclidean geometry, everything is measurable, no matter how big or small. However, most geographic features, due to their fractal nature, are essentially unmeasurable, or their sizes depend on scale. For example, the length of a coastline, the area of a lake, and the slope of a topographic surface are all scale-dependent. Seen from the perspective of fractal geometry, many scale issues, such as the MAUP, are inevitable. They appear unsolvable, but can be dealt with. To effectively deal with scale-related issues, we present topological and scaling analyses illustrated by street-related concepts such as natural streets, street blocks, and natural cities. We further contend that one of the two spatial properties, spatial heterogeneity, is de facto the fractal nature of geographic features, and that it should be considered the first effect of the two because it is global and universal across all scales; as such, it should receive more attention from practitioners of geography.ISPRS International Journal of Geo-Information2016-06-1556Article10.3390/ijgi5060095952220-99642016-06-15doi: 10.3390/ijgi5060095Bin JiangS. Brandt<![CDATA[Symmetry, Vol. 8, Pages 48: Optimal Face-Iris Multimodal Fusion Scheme]]>
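The coastline example can be made concrete with a Koch-style curve: every refinement replaces each segment by four segments one third as long, so the measured length grows by 4/3 at each step and there is no single scale-free "true" length. A minimal sketch (our own illustration, not the paper's analysis):

```python
import numpy as np

def koch_refine(points):
    """One Koch refinement: replace each segment by four segments,
    each one third as long, so the measured length grows by 4/3."""
    out = [points[0]]
    for a, b in zip(points[:-1], points[1:]):
        v = (b - a) / 3
        spike = a + 1.5 * v + 1j * v * np.sqrt(3) / 2  # apex of the bump
        out += [a + v, spike, a + 2 * v, b]
    return np.array(out)

def length(points):
    return float(np.abs(np.diff(points)).sum())

curve = np.array([0.0 + 0j, 1.0 + 0j])  # unit segment in the complex plane
lengths = []
for _ in range(4):
    curve = koch_refine(curve)
    lengths.append(length(curve))
# lengths grow geometrically: 4/3, (4/3)^2, ... -- no convergent "true" length
```

This is the fractal behaviour behind the scale-dependence of coastline length noted above: finer measuring scales keep revealing more detail.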
http://www.mdpi.com/2073-8994/8/6/48
Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.Symmetry2016-06-1586Article10.3390/sym8060048482073-89942016-06-15doi: 10.3390/sym8060048Omid SharifiMaryam Eskandari<![CDATA[Information, Vol. 7, Pages 35: User in the Loop: Adaptive Smart Homes Exploiting User Feedback—State of the Art and Future Directions]]>
http://www.mdpi.com/2078-2489/7/2/35
Due to the decrease in sensor and actuator prices and their ease of installation, smart homes and smart environments are increasingly exploited in automation and health applications. In these applications, activity recognition has an important place. This article presents a general architecture that is responsible for adapting automation for the different users of a smart home while recognizing their activities. For that, semi-supervised learning algorithms and Markov-based models are used to determine the preferences of the user, considering a combination of: (1) observations of the data that have been acquired since the start of the experiment and (2) feedback of the users on decisions that have been taken by the automation. We present preliminary simulated experimental results regarding the determination of preferences for a user.Information2016-06-1572Article10.3390/info7020035352078-24892016-06-15doi: 10.3390/info7020035Abir KaramiAnthony FleuryJacques BoonaertStéphane Lecoeuche<![CDATA[IJGI, Vol. 5, Pages 94: Analyzing the Impact of Highways Associated with Farmland Loss under Rapid Urbanization]]>
http://www.mdpi.com/2220-9964/5/6/94
Highway construction has accelerated urban growth and induced direct and indirect changes to land use. Although many studies have analyzed the relationship between highway construction and local development, relatively little attention has been paid to clarifying the various impacts of highways associated with farmland loss. This paper integrates GIS spatial analysis, remote sensing, buffer analysis and landscape metrics to analyze the landscape pattern change induced by direct and indirect highway impacts. This paper explores the interaction between the impact of highways and farmland loss, using the case of the Hang-Jia-Hu Plain, a highly urbanized traffic hub in eastern China. Our results demonstrate that the Hang-Jia-Hu Plain experienced extensive highway construction during 1990–2010, with a clear acceleration of expressway development since 2000. This unprecedented highway construction has directly fragmented the regional landscape and indirectly disturbed the regional landscape by attracting a large amount of built-up land converted from farmland during the last two decades. In the highway-effect zone, serious farmland loss initially occurred in the urban region and then spread to the rural region. Moreover, we found that the discontinuous expansion of built-up land scattered the farmland in the rural region and the expressway-effect zone. Furthermore, farmland protection policies in the 1990s had the effect of controlling the total area of farmland loss. However, the cohesive farmland structure was still fragmented by the direct and indirect impacts of highway construction.
This work improves the understanding of regional sustainable development, and provides a scientific basis for balanced urban development with farmland protection in decision-making processes.ISPRS International Journal of Geo-Information2016-06-1556Article10.3390/ijgi5060094942220-99642016-06-15doi: 10.3390/ijgi5060094Jie SongJintian YeEnyan ZhuJinsong DengKe Wang<![CDATA[MCA, Vol. 21, Pages 23: A Recommendation System for Execution Plans Using Machine Learning]]>
http://www.mdpi.com/2297-8747/21/2/23
Generating execution plans is a costly operation for the DataBase Management System (DBMS). An interesting alternative to this operation is to reuse old execution plans that were already generated by the optimizer for past queries in order to execute new queries. In this paper, we present an approach for execution plan recommendation in two phases. We firstly propose a textual representation of our SQL queries and use it to build a Features Extractor module. Then, we present a straightforward solution to identify query similarity. This solution relies only on the comparison of the SQL statements. Next, we show how to build an improved solution enabled by machine learning techniques. The improved version takes into account the features of the queries’ execution plans. By comparing three machine learning algorithms, we find that the improved solution using Classification Based on Associative Rules (CAR) identifies similarity in 91% of the cases.Mathematical and Computational Applications2016-06-15212Article10.3390/mca21020023232297-87472016-06-15doi: 10.3390/mca21020023Jihad ZahirAbderrahim El Qadi<![CDATA[Symmetry, Vol. 8, Pages 49: Neutrino Signals in Electron-Capture Storage-Ring Experiments]]>
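The straightforward, statement-comparison solution could be approximated by a set-based similarity over SQL tokens; this is a minimal sketch of the general idea, not the paper's Features Extractor or its CAR-based improvement:

```python
def tokenize_sql(query):
    """Very rough SQL tokenizer: lowercase, split on whitespace and commas."""
    return set(query.lower().replace(",", " ").split())

def jaccard_similarity(q1, q2):
    """Jaccard similarity between the token sets of two SQL statements."""
    t1, t2 = tokenize_sql(q1), tokenize_sql(q2)
    if not t1 and not t2:
        return 1.0
    return len(t1 & t2) / len(t1 | t2)
```

A new query would then be matched to the stored query with the highest similarity, and that query's cached execution plan reused.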
http://www.mdpi.com/2073-8994/8/6/49
Neutrino signals in electron-capture decays of hydrogen-like parent ions P in storage-ring experiments at GSI are reconsidered, with special emphasis placed on the storage-ring quasi-circular motion of the daughter ions D in two-body decays P → D + νe. It is argued that, to the extent that daughter ions are detected, these detection rates might exhibit modulations with periods of order seconds, similar to those reported in the GSI storage-ring experiments for two-body decay rates. New dedicated experiments in storage rings, or using traps, could explore these modulations.Symmetry2016-06-1586Article10.3390/sym8060049492073-89942016-06-15doi: 10.3390/sym8060049Avraham Gal<![CDATA[Risks, Vol. 4, Pages 17: Ruin Probabilities with Dependence on the Number of Claims within a Fixed Time Window]]>
http://www.mdpi.com/2227-9091/4/2/17
We analyse the ruin probabilities for a renewal insurance risk process with inter-arrival times depending on the claims that arrive within a fixed (past) time window. This dependence could be explained through a regenerative structure. The main inspiration of the model comes from the bonus-malus (BM) feature of pricing car insurance. We first discuss the asymptotic results of ruin probabilities for different regimes of claim distributions. For numerical results, we recognise an embedded Markov additive process, and via an appropriate change of measure, ruin probabilities can be computed via closed-form formulae. Additionally, we employ importance sampling simulations to derive ruin probabilities, which further permit an in-depth analysis of a few concrete cases.Risks2016-06-1542Article10.3390/risks4020017172227-90912016-06-15doi: 10.3390/risks4020017Corina ConstantinescuSuhang DaiWeihong NiZbigniew Palmowski<![CDATA[Information, Vol. 7, Pages 34: Implementation Support of Security Design Patterns Using Test Templates]]>
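For orientation, a crude Monte Carlo estimate of a finite-horizon ruin probability can be sketched for the baseline classical compound Poisson model (constant premium rate, Poisson arrivals, exponential claims); this is the standard reference model, not the paper's claim-dependent renewal model or its importance sampling scheme:

```python
import random

def ruin_probability_mc(u, premium_rate, lam, claim_mean, horizon,
                        n_paths=20000, seed=1):
    """Monte Carlo estimate of P(ruin before `horizon`) for the surplus
    process U(t) = u + premium_rate * t - S(t), where S(t) is a compound
    Poisson sum with rate `lam` and exponential claims of mean `claim_mean`."""
    rng = random.Random(seed)
    ruins = 0
    for _ in range(n_paths):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)            # next claim arrival time
            if t > horizon:
                break                            # survived the horizon
            total_claims += rng.expovariate(1.0 / claim_mean)
            if u + premium_rate * t < total_claims:
                ruins += 1                       # surplus went negative
                break
    return ruins / n_paths
```

Crude simulation like this is inefficient for rare ruin events, which is exactly why exponential change-of-measure and importance sampling, as used in the paper, are preferred.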
http://www.mdpi.com/2078-2489/7/2/34
Security patterns are intended to support software developers as the patterns encapsulate security expert knowledge. However, these patterns may be inappropriately applied because most developers are not security experts, leading to threats and vulnerabilities. Here we propose a support method for security design patterns in the implementation phase of software development. Our method creates a test template from a security design pattern, consisting of an “aspect test template” to observe the internal processing and a “test case template”. Given design information, a tool generates a concrete test from the test template. Because our test template is reusable, a test to validate a security design pattern can easily be performed. In an experiment involving four students majoring in information sciences, we confirm that our method can realize an effective test, verify pattern applications, and support pattern implementation.Information2016-06-1572Article10.3390/info7020034342078-24892016-06-15doi: 10.3390/info7020034Masatoshi YoshizawaHironori WashizakiYoshiaki FukazawaTakao OkuboHaruhiko KaiyaNobukazu Yoshioka<![CDATA[Axioms, Vol. 5, Pages 18: Potential Infinity, Abstraction Principles and Arithmetic (Leśniewski Style)]]>
http://www.mdpi.com/2075-1680/5/2/18
This paper starts with an explanation of how the logicist research program can be approached within the framework of Leśniewski’s systems. One nice feature of the system is that Hume’s Principle is derivable in it from an explicit definition of natural numbers. I generalize this result to show that all predicative abstraction principles corresponding to second-level relations, which are provably equivalence relations, are provable. However, the system fails, despite being much neater than the construction of Principia Mathematica (PM). One of the key reasons is that, just as in the case of the system of PM, without the assumption that infinitely many objects exist, (renderings of) most of the standard axioms of Peano Arithmetic are not derivable in the system. I prove that introducing modal quantifiers meant to capture the intuitions behind potential infinity results in the (renderings of) axioms of Peano Arithmetic (PA) being valid in all relational models (i.e. Kripke-style models, to be defined later on) of the extended language. The second, historical part of the paper contains a user-friendly description of Leśniewski’s own arithmetic and a brief investigation into its properties.Axioms2016-06-1552Article10.3390/axioms5020018182075-16802016-06-15doi: 10.3390/axioms5020018Rafal Urbaniak<![CDATA[IJGI, Vol. 5, Pages 93: Morphological Operations to Extract Urban Curbs in 3D MLS Point Clouds]]>
http://www.mdpi.com/2220-9964/5/6/93
Automatic curb detection is an important issue in road maintenance, three-dimensional (3D) urban modeling, and autonomous navigation fields. This paper is focused on the segmentation of curbs and street boundaries using a 3D point cloud captured by a mobile laser scanner (MLS) system. Our method provides a solution based on the projection of the measured point cloud on the XY plane. Over that plane, a segmentation algorithm is carried out based on morphological operations to determine the location of street boundaries. In addition, a solution to extract curb edges based on the roughness of the point cloud is proposed. The proposed method is valid in both straight and curved road sections and applicable to both laser scanner and stereo vision 3D data due to its independence of the scanning geometry. The proposed method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. The extraction method provides completeness and correctness rates above 90% and quality values higher than 85% in both studied datasets.ISPRS International Journal of Geo-Information2016-06-1456Article10.3390/ijgi5060093932220-99642016-06-14doi: 10.3390/ijgi5060093Borja Rodríguez-CuencaSilverio García-CortésCelestino OrdóñezMaría Alonso<![CDATA[IJGI, Vol. 5, Pages 90: Geospatial Information Categories Mapping in a Cross-lingual Environment: A Case Study of “Surface Water” Categories in Chinese and American Topographic Maps]]>
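The morphological step can be illustrated on a binary occupancy grid (the kind of raster one obtains after projecting points onto the XY plane): boundary pixels are those removed by a binary erosion. This is a sketch of the generic operation, assuming numpy is available; it is not the paper's exact pipeline:

```python
import numpy as np

def erode(img):
    """Binary erosion with a 3x3 square structuring element (zero-padded)."""
    img = img.astype(bool)
    h, w = img.shape
    p = np.pad(img, 1, mode="constant", constant_values=False)
    out = np.ones_like(img, dtype=bool)
    for di in range(3):
        for dj in range(3):
            out &= p[di:di + h, dj:dj + w]   # pixel survives only if all 3x3 neighbours are set
    return out

def boundary(img):
    """Inner boundary of a binary region: pixels that erosion removes."""
    img = img.astype(bool)
    return img & ~erode(img)
```

Applied to an occupied-cell raster of a street, `boundary` keeps only the cells along the edges, which is where curb candidates would be searched.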
http://www.mdpi.com/2220-9964/5/6/90
Integrating geospatial information (GI) data from various heterogeneous sources has become increasingly important for geographic information system (GIS) interoperability. Using domain ontologies to clarify and integrate the semantics of data is considered a crucial step for successful semantic integration in the GI domain. Nevertheless, mechanisms are still needed to facilitate semantic mapping between GI ontologies described in different natural languages. This research establishes a formal ontology model for cross-lingual geospatial information ontology mapping. Semantic primitives are first extracted from the free-text definitions of categories in two GI classification standards written in different natural languages; an ontology-driven approach is then used, and a formal ontology model is established to formally represent these semantic primitives as semantic statements, in which the spatial-related properties and relations are considered crucial statements for the representation and identification of the semantics of the GI categories. Then, an algorithm is proposed to compare these semantic statements in a cross-lingual environment. We further design a similarity calculation algorithm based on the proposed formal ontology model to measure the semantic similarities and identify the mapping relationships between categories. In particular, we work with two GI classification standards for Chinese and American topographic maps. The experimental results demonstrate the feasibility and reliability of the proposed model for cross-lingual geospatial information ontology mapping.ISPRS International Journal of Geo-Information2016-06-1456Article10.3390/ijgi5060090902220-99642016-06-14doi: 10.3390/ijgi5060090Xi KuaiLin LiHeng LuoShen HangZhijun ZhangYu Liu<![CDATA[IJGI, Vol. 5, Pages 91: Towards Narrowing the Curation Gap—Theoretical Considerations and Lessons Learned from Decades of Practice]]>
http://www.mdpi.com/2220-9964/5/6/91
Research as a digital enterprise has created new, often poorly addressed challenges for the management and curation of research to ensure continuity, transparency, and accountability. There is a common misunderstanding that curation can be deferred to a later point in the research cycle or delegated, or that it is too burdensome or too expensive due to a lack of efficient tools. This creates a curation gap between research practice and curation needs. We argue that this gap can be narrowed if curators provide attractive support that befits research needs and if researchers consistently manage their work according to generic concepts from the beginning. An unusually long-term case study demonstrates how such concepts have helped to pragmatically implement a research practice intentionally using only minimalist tools for sustained, self-contained archiving since 1989. The paper sketches the concepts underlying three core research activities: (i) the handling of research data, (ii) reference management as part of scholarly publishing, and (iii) advancing theories through modelling and simulation. These concepts represent a universally transferable best research practice, while technical details are obviously prone to continuous change. We hope it stimulates researchers to manage research similarly and that curators gain a better understanding of the curation challenges research practice actually faces.ISPRS International Journal of Geo-Information2016-06-1456Article10.3390/ijgi5060091912220-99642016-06-14doi: 10.3390/ijgi5060091Ana SesartićAndreas FischlinMatthias Töwe<![CDATA[IJGI, Vol. 5, Pages 92: WiGeR: WiFi-Based Gesture Recognition System]]>
http://www.mdpi.com/2220-9964/5/6/92
Recently, researchers around the world have been striving to develop and modernize human–computer interaction systems by exploiting advances in modern communication systems. The priority in this field involves exploiting radio signals so human–computer interaction will require neither special devices nor vision-based technology. In this context, hand gesture recognition is one of the most important issues in human–computer interfaces. In this paper, we present a novel device-free WiFi-based gesture recognition system (WiGeR) by leveraging the fluctuations in the channel state information (CSI) of WiFi signals caused by hand motions. We extract CSI from any common WiFi router and then filter out the noise to obtain the CSI fluctuation trends generated by hand motions. We design a novel and agile segmentation and windowing algorithm based on wavelet analysis and short-time energy to reveal the specific pattern associated with each hand gesture and detect the duration of the hand motion. Furthermore, we design a fast dynamic time warping algorithm to classify our system’s proposed hand gestures. We implement and test our system through experiments involving various scenarios. The results show that WiGeR can classify gestures with high accuracy, even in scenarios where the signal passes through multiple walls.ISPRS International Journal of Geo-Information2016-06-1456Article10.3390/ijgi5060092922220-99642016-06-14doi: 10.3390/ijgi5060092Mohammed Al-qanessFangmin Li<![CDATA[Fluids, Vol. 1, Pages 18: Rendering the Navier–Stokes Equations for a Compressible Fluid into the Schrödinger Equation for Quantum Mechanics]]>
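The classification step relies on dynamic time warping (DTW); the classic quadratic-time dynamic-programming formulation is sketched below (the paper uses a fast variant, which is not reproduced here):

```python
import math

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance between two
    1-D sequences, using the absolute difference as the local cost."""
    n, m = len(a), len(b)
    # D[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A gesture would be classified by computing the DTW distance between its segmented CSI waveform and a stored template per gesture, then picking the nearest template.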
http://www.mdpi.com/2311-5521/1/2/18
The mass and momentum transfer phenomena in a compressible fluid represented by the Navier–Stokes equations are shown to convert into the Schrödinger equation for quantum mechanics. The complete Navier–Stokes equations render into an extended generalized version of the Schrödinger equation. These results complement Madelung’s (Zeitschrift für Physik 40 (3–4), pp. 322–326, 1926–1927) derivations that show how Schrödinger’s equation in quantum mechanics can be converted into the Euler equations for irrotational compressible flow. The theoretical results presented here join the classical Madelung paper to suggest the possibility that quantum effects at sub-atomic levels deal with a compressible fluid susceptible to wave propagation, rather than a particle. The link between such a fluid and the “quantum particle” is under current investigation.Fluids2016-06-1312Article10.3390/fluids1020018182311-55212016-06-13doi: 10.3390/fluids1020018Peter Vadasz<![CDATA[Symmetry, Vol. 8, Pages 47: A Data Mining Approach for Cardiovascular Disease Diagnosis Using Heart Rate Variability and Images of Carotid Arteries]]>
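As background for the correspondence the abstract describes, the standard Madelung substitution (textbook form, not the paper's extended version) reads:

```latex
% Madelung substitution: write the wave function in polar form
\psi(\mathbf{x},t) = \sqrt{\rho(\mathbf{x},t)}\,
  \exp\!\bigl(i S(\mathbf{x},t)/\hbar\bigr)
% Inserting this into the Schr\"odinger equation yields a continuity equation
\frac{\partial \rho}{\partial t}
  + \nabla\!\cdot\!\Bigl(\rho\,\frac{\nabla S}{m}\Bigr) = 0
% and a quantum Hamilton--Jacobi equation with an extra quantum potential term
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}} = 0
```

With the quantum potential term dropped, these are the Euler equations for an irrotational compressible flow with velocity ∇S/m, which is the direction of Madelung's original derivation that the paper reverses.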
http://www.mdpi.com/2073-8994/8/6/47
In this paper, we propose not only an extraction methodology of multiple feature vectors from ultrasound images for carotid arteries (CAs) and heart rate variability (HRV) of electrocardiogram signal, but also a suitable and reliable prediction model useful in the diagnosis of cardiovascular disease (CVD). To construct the multiple feature vectors, we extract a candidate feature vector through image processing and measurement of the carotid intima-media thickness (IMT). As a complementary way, the linear and/or nonlinear feature vectors are also extracted from HRV, a main index for cardiac disorder. The significance of the multiple feature vectors is tested with several machine learning methods, namely Neural Networks, Support Vector Machine (SVM), Classification based on Multiple Association Rule (CMAR), Decision tree induction and Bayesian classifier. As a result, multiple feature vectors extracted from both CAs and HRV (CA+HRV) showed higher accuracy than the separate feature vectors of CAs and HRV. Furthermore, the SVM and CMAR showed about 89.51% and 89.46%, respectively, in terms of diagnosing accuracy rate after evaluating the diagnosis or prediction methods using the finally chosen multiple feature vectors. Therefore, the multiple feature vectors devised in this paper can be effective diagnostic indicators of CVD. In addition, the feature vector analysis and prediction techniques are expected to be helpful tools in the decisions of cardiologists.Symmetry2016-06-1386Article10.3390/sym8060047472073-89942016-06-13doi: 10.3390/sym8060047Hyeongsoo KimMusa IshagMinghao PiaoTaeil KwonKeun Ryu<![CDATA[J. Imaging, Vol. 2, Pages 20: Detection by Infrared Thermography of the Effect of Local Cryotherapy Exposure on Thermal Spread in Skin]]>
http://www.mdpi.com/2313-433X/2/2/20
The aim of the study is to evaluate the impact of the exposure duration of local cryotherapy on the skin temperature of the thigh and of the knee. Ten subjects performed a low-intensity exercise, rested for 20 min without ice, and then rested for 5 min and 10 min with ice under the right knee. The skin temperatures were measured by infrared thermography to assess the thermal spread. The results of the statistical analysis reveal an increase of skin temperature of the knee after an exposure of 5 min to the cryotherapy (p &lt; 0.05). There are also differences in thermal regulation between the 10-min exposure and the absence of ice pack. Varying the exposure time of local cryotherapy elicits different physiological responses that vary in intensity and in location.Journal of Imaging2016-06-1322Article10.3390/jimaging2020020202313-433X2016-06-13doi: 10.3390/jimaging2020020Matthieu VellardAhlem Arfaoui<![CDATA[Algorithms, Vol. 9, Pages 40: A Direct Search Algorithm for Global Optimization]]>
http://www.mdpi.com/1999-4893/9/2/40
A direct search algorithm is proposed for minimizing an arbitrary real-valued function. The algorithm uses a new function transformation and three simplex-based operations. The function transformation provides global exploration features, while the simplex-based operations guarantee the termination of the algorithm and provide global convergence to a stationary point if the cost function is differentiable and its gradient is Lipschitz continuous. The algorithm’s performance has been extensively tested using benchmark functions and compared to some well-known global optimization algorithms. The results of the computational study show that the algorithm combines both simplicity and efficiency and is competitive with the heuristics-based strategies presently used for global optimization.Algorithms2016-06-1392Article10.3390/a9020040401999-48932016-06-13doi: 10.3390/a9020040Enrique BaeyensAlberto HerrerosJosé Perán<![CDATA[Symmetry, Vol. 8, Pages 46: Coherent States of Harmonic and Reversed Harmonic Oscillator]]>
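For readers unfamiliar with simplex-based direct search, a simplified Nelder-Mead-style loop (reflection, expansion, inside contraction, shrink) is sketched below. This is generic background only: the paper's algorithm uses its own function transformation and three specific simplex operations that are not reproduced here.

```python
def nelder_mead(f, x0, step=0.5, iters=300):
    """Simplified Nelder-Mead simplex minimization of f: list[float] -> float."""
    n = len(x0)
    simplex = [list(x0)]                         # initial simplex: x0 plus n perturbed points
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):
            exp = [3 * centroid[i] - 2 * worst[i] for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                # shrink all vertices toward the best one
                simplex = [best] + [[0.5 * (best[i] + p[i]) for i in range(n)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]
```

On smooth unimodal functions this converges to the minimizer; the transformation described in the abstract is what adds global exploration on top of such local simplex moves.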
http://www.mdpi.com/2073-8994/8/6/46
A one-dimensional wave function is assumed whose logarithm is a quadratic form in the configuration variable with time-dependent coefficients. This trial function allows for general time-dependent solutions both of the harmonic oscillator (HO) and the reversed harmonic oscillator (RO). For the HO, apart from the standard coherent states, a further class of solutions is derived with a time-dependent width parameter. The width of the corresponding probability density fluctuates, or "breathes" periodically with the oscillator frequency. In the case of the RO, one also obtains normalized wave packets which, however, show diffusion through exponential broadening with time. At the initial time, the integration constants give rise to complete sets of coherent states in the three cases considered. The results are applicable to the quantum mechanics of the Kepler-Coulomb problem when transformed to the model of a four-dimensional harmonic oscillator with a constraint. In the classical limit, as was shown recently, the wave packets of the RO basis generate the hyperbolic Kepler orbits, and, by means of analytic continuation, the elliptic orbits are also obtained quantum mechanically.Symmetry2016-06-1386Article10.3390/sym8060046462073-89942016-06-13doi: 10.3390/sym8060046Alexander Rauh<![CDATA[Future Internet, Vol. 8, Pages 26: Elusive Learning—Using Learning Analytics to Support Reflective Sensemaking of Ill-Structured Ethical Problems: A Learner-Managed Dashboard Solution]]>
http://www.mdpi.com/1999-5903/8/2/26
Since the turn of the 21st century, we have seen a surge of studies on the state of U.S. education addressing issues such as cost, graduation rates, retention, achievement, engagement, and curricular outcomes. There is an expectation that graduates should be able to enter the workplace equipped to take on complex and “messy” or ill-structured problems as part of their professional and everyday life. In the context of online learning, we have identified two key issues that are elusive (hard to capture and make visible): learning with ill-structured problems and the interaction of social and individual learning. We believe that the intersection between learning and analytics has the potential, in the long-term, to minimize the elusiveness of deep learning. A proposed analytics model is described in this article that is meant to capture and also support further development of a learner’s reflective sensemaking.Future Internet2016-06-1182Article10.3390/fi8020026261999-59032016-06-11doi: 10.3390/fi8020026Yianna VovidesSarah Inman<![CDATA[JRFM, Vol. 9, Pages 5: Humanizing Finance by Hedging Property Values]]>
http://www.mdpi.com/1911-8074/9/2/5
The recent financial crisis triggered the greatest recession since the 1930s and had a devastating impact on households’ wealth and on their capacity to reduce their indebtedness. In the aftermath, it became clear that there is significant room for improvement in property risk management. While there has been innovation in the management of corporate finance risk, real estate has lagged behind. Now is the time to expand the range of tools available for hedging households’ risks and, thus, to advance the democratization of finance. Property equity represents the major asset in households’ portfolios in developed and undeveloped countries. The present paper analyzes a set of potential innovations in real estate risk management, such as price level-adjusted mortgages, property derivatives, and home equity value insurance. Financial institutions, households, and governments should work together to improve the performance of the financial instruments available and, thus, to help mitigate the worst impacts of economic cycles.Journal of Risk and Financial Management2016-06-1092Article10.3390/jrfm902000551911-80742016-06-10doi: 10.3390/jrfm9020005Jaume Roig Hernando<![CDATA[Symmetry, Vol. 8, Pages 45: Fluctuating Charge Order: A Universal Phenomenon in Unconventional Superconductivity?]]>
http://www.mdpi.com/2073-8994/8/6/45
Unconventional superconductors are characterized by various competing ordering phenomena in the normal state, such as antiferromagnetism, charge order, orbital order or nematicity. According to a widespread view, antiferromagnetic fluctuations are the dominant ordering phenomenon in cuprates and Fe based superconductors and are responsible for electron pairing. In contrast, charge order is believed to be subdominant and compete with superconductivity. Here, we argue that fluctuating charge order in the (0,π) direction is a feature shared by the cuprates and the Fe based superconductors alike. Recent data and theoretical models suggest that superconductivity is brought about by charge order excitations independently from spin fluctuations. Thus, quantum fluctuations of charge order may provide an alternative to spin fluctuations as a mechanism of electron pairing in unconventional superconductors.Symmetry2016-06-1086Article10.3390/sym8060045452073-89942016-06-10doi: 10.3390/sym8060045Erminald BertelAlexander Menzel<![CDATA[Mathematics, Vol. 4, Pages 42: Exponential Energy Decay of Solutions for a Transmission Problem With Viscoelastic Term and Delay]]>
http://www.mdpi.com/2227-7390/4/2/42
In our previous work (Journal of Nonlinear Science and Applications 9: 1202–1215, 2016), we studied the well-posedness and general decay rate for a transmission problem in a bounded domain with a viscoelastic term and a delay term. In this paper, we continue to study a similar problem, but without the frictional damping term. The main difficulty arises since we have no frictional damping term to control the delay term in the estimate of the energy decay. By introducing suitable energy and Lyapunov functionals, we establish an exponential decay result for the energy.Mathematics2016-06-0942Article10.3390/math4020042422227-73902016-06-09doi: 10.3390/math4020042Danhua WangGang LiBiqing Zhu<![CDATA[Algorithms, Vol. 9, Pages 39: Review of Recent Type-2 Fuzzy Controller Applications]]>
http://www.mdpi.com/1999-4893/9/2/39
Type-2 fuzzy logic controllers (T2 FLC) can be viewed as an emerging class of intelligent controllers because of their abilities in handling uncertainties; in many cases, they have been shown to outperform their Type-1 counterparts. This paper presents a literature review on recent applications of T2 FLCs. To follow the developments in this field, we first review general T2 FLCs and the most well-known interval T2 FLS algorithms that have been used for control design. Certain applications of these controllers include robotic control, bandwidth control, industrial systems control, electrical control and aircraft control. The most promising applications are found in the robotics and automotive areas, where T2 FLCs have been demonstrated and proven to perform better than traditional controllers. With the development of enhanced algorithms, along with the advancement in both hardware and software, we shall witness increasing applications of these frontier controllers.Algorithms2016-06-0992Review10.3390/a9020039391999-48932016-06-09doi: 10.3390/a9020039Kevin TaiAbdul-Rahman El-SayedMohammad BiglarbegianClaudia GonzalezOscar CastilloShohel Mahmud<![CDATA[Informatics, Vol. 3, Pages 6: Designing a Situational Awareness Information Display: Adopting an Affordance-Based Framework to Amplify User Experience in Environmental Interaction Design]]>
http://www.mdpi.com/2227-9709/3/2/6
User experience remains a crucial consideration when assessing the successfulness of information visualization systems. The theory of affordances provides a robust framework for user experience design. In this article, we demonstrate a design case that employs an affordance-based framework and evaluate the information visualization display design. SolarWheels is an interactive information visualization designed for large display walls in computer network control rooms to help cybersecurity analysts become aware of network status and emerging issues. Given the critical nature of this context, the status and performance of a computer network must be precisely monitored and remedied in real time. In this study, we consider various aspects of affordances in order to amplify the user experience via visualization and interaction design. SolarWheels visualizes the multilayer multidimensional computer network issues with a series of integrated circular visualizations inspired by the metaphor of the solar system. To amplify user interaction and experience, the system provides a three-zone physical interaction that allows multiple users to interact with the system. Users can read details at different levels depending on their distance from the display. An expert evaluation study, based on a four-layer affordance framework, was conducted to assess and improve the interactive visualization design.Informatics2016-06-0932Article10.3390/informatics302000662227-97092016-06-09doi: 10.3390/informatics3020006Yingjie ChenZhenyu QianWeiran Lei<![CDATA[IJGI, Vol. 5, Pages 89: Implementation of Geographical Conditions Monitoring in Beijing-Tianjin-Hebei, China]]>
http://www.mdpi.com/2220-9964/5/6/89
Increasingly accelerated urbanization and socio-economic development can cause a series of environmental problems. Accurate and efficient monitoring of the geographical conditions is important for achieving sustainable development. This paper presents the first results of the project “Geographical Conditions Monitoring (GCM)” in an exemplified area “Beijing-Tianjin-Hebei (BTH)” in China over the last three decades. It focuses on four hot issues in BTH: distribution of dust surfaces and pollution industries, vegetation coverage, urban sprawl, and ground subsidence. The aim of this project is the detection of geographical condition changes, the description of this development by indicators, and the analysis and evaluation of the effects of such processes on selected environmental aspects. The results have shown the contributions of the applied GCM to urban design planning and nature conservation. Valuable experience gained from this project would be useful for further developing and applying GCM at the national level.ISPRS International Journal of Geo-Information2016-06-0856Article10.3390/ijgi5060089892220-99642016-06-08doi: 10.3390/ijgi5060089Jixian ZhangJiping LiuLiang ZhaiWei Hou<![CDATA[Systems, Vol. 4, Pages 24: Adaptation in E-Learning Content Specifications with Dynamic Sharable Objects]]>
http://www.mdpi.com/2079-8954/4/2/24
Sophisticated dynamic real-time adaptation is not possible with current e-learning technologies. Our proposal is based on changing the approach for the development of e-learning systems by using dynamic languages and including them in both platforms and learning content specifications, thereby making them adaptive. We propose a Sharable Auto-Adaptive Learning Object (SALO), defined as an object that includes learning content and describes its own behaviour supported by dynamic languages. We describe an example implementation of SALO for the delivery and assessment of a web development course using Moodle rubrics. As a result, the learning objects can dynamically adapt their characteristics and behaviour in e-learning platforms.Systems2016-06-0842Article10.3390/systems4020024242079-89542016-06-08doi: 10.3390/systems4020024Ignacio GutiérrezVíctor ÁlvarezM. PauleJuan Pérez-PérezSara de Freitas<![CDATA[Algorithms, Vol. 9, Pages 38: A 3/2-Approximation Algorithm for the Graph Balancing Problem with Two Weights]]>
http://www.mdpi.com/1999-4893/9/2/38
In the pursuit of finding subclasses of the makespan minimization problem on unrelated parallel machines that have approximation algorithms with approximation ratio better than 2, the graph balancing problem has been of current interest. In the graph balancing problem each job can be non-preemptively scheduled on one of at most two machines with the same processing time on either machine. Recently, Ebenlendr, Krčál, and Sgall (Algorithmica 2014, 68, 62–80) presented a 7/4-approximation algorithm for the graph balancing problem. Let r, s ∈ Z+. In this paper we consider the graph balancing problem with two weights, where a job either takes r time units or s time units. We present a 3/2-approximation algorithm for this problem. This is an improvement over the previously best-known approximation algorithm for the problem with approximation ratio 1.652 and it matches the best known inapproximability bound for it.Algorithms2016-06-0892Article10.3390/a9020038381999-48932016-06-08doi: 10.3390/a9020038Daniel PageRoberto Solis-Oba<![CDATA[Computers, Vol. 5, Pages 12: Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm]]>
http://www.mdpi.com/2073-431X/5/2/12
This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desired thin film. The conventional method of optimizing the deposition parameters has been reported to be costly and time consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique was proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desired electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical properties and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the optimal parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing quality thin films in the fabrication process.Computers2016-06-0852Article10.3390/computers5020012122073-431X2016-06-08doi: 10.3390/computers5020012Norlina Mohd SabriNor Md SinMazidah PutehMohamad Mahmood<![CDATA[MCA, Vol. 21, Pages 22: An Improved Interval-Valued Hesitant Fuzzy Multi-Criteria Group Decision-Making Method and Applications]]>
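A minimal sketch of the GSA update rule (Rashedi et al., 2009) is shown below. The sphere function stands in for the real objective, which in the paper maps deposition parameters to measured film properties; the agent count, iteration budget and constants g0 and alpha are our own illustrative choices, not the paper's settings.

```python
import numpy as np

def gsa(objective, bounds, n_agents=20, iters=100, g0=100.0, alpha=20.0, seed=0):
    """Minimal Gravitational Search Algorithm for minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_agents, len(bounds)))
    v = np.zeros_like(x)
    for t in range(iters):
        f = np.apply_along_axis(objective, 1, x)
        best, worst = f.min(), f.max()
        m = (f - worst) / (best - worst - 1e-12)   # better fitness -> larger mass
        m = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-alpha * t / iters)        # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            diff = x - x[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = (g * rng.random(n_agents) * m / dist) @ diff
        v = rng.random(x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    f = np.apply_along_axis(objective, 1, x)
    return x[f.argmin()], f.min()

# Stand-in objective: sphere function over a hypothetical 3-parameter space.
best_x, best_f = gsa(lambda p: float(np.sum(p ** 2)), bounds=[(-5, 5)] * 3)
```

In the paper's setting, `objective` would score a candidate parameter combination against the desired electrical and optical properties rather than a synthetic function.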
http://www.mdpi.com/2297-8747/21/2/22
The Bonferroni mean (BM) can be used in situations where the aggregated arguments are correlated, and it is very useful for solving decision-making problems. For describing fuzziness and vagueness more accurately, the interval-valued hesitant fuzzy set (IVHFS), which is a generalization of the hesitant fuzzy set (HFS), can be used to describe membership degrees with interval numbers. The aim of this paper is to propose the interval-valued hesitant fuzzy Bonferroni mean (IVHFBM) for aggregating interval-valued hesitant fuzzy information. Furthermore, the weighted form of the IVHFBM (IVHFWBM) is put forward and, on this basis, a multi-criteria group decision-making (MCGDM) method is established. A case study on the problem of evaluating research funding applications in China is analyzed. A comparison between the proposed method and existing ones demonstrates its practicability.Mathematical and Computational Applications2016-06-08212Article10.3390/mca21020022222297-87472016-06-08doi: 10.3390/mca21020022Zhenhua DingYingyu Wu<![CDATA[IJGI, Vol. 5, Pages 88: Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services]]>
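For readers unfamiliar with the operator, the classic crisp Bonferroni mean that the IVHFBM generalizes is BM^{p,q}(a_1,…,a_n) = (1/(n(n-1)) Σ_{i≠j} a_i^p a_j^q)^{1/(p+q)}. The sketch below computes it for plain numbers; the paper lifts this definition to interval-valued hesitant fuzzy elements.

```python
def bonferroni_mean(a, p=1.0, q=1.0):
    """Classic Bonferroni mean BM^{p,q} of crisp arguments a_1..a_n.
    The cross terms a_i^p * a_j^q (i != j) capture the interrelationship
    between the aggregated arguments."""
    n = len(a)
    s = sum(a[i] ** p * a[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

print(bonferroni_mean([0.2, 0.5, 0.9]))
```

Note that BM is idempotent: aggregating n copies of the same value returns that value, a property the fuzzy extensions aim to preserve.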
http://www.mdpi.com/2220-9964/5/6/88
One of the most widely-implemented service standards provided by the Open Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS). WMS is widely employed globally, but there is limited knowledge of the global distribution, adoption status or the service quality of these online WMS resources. To fill this void, we investigated global WMSs resources and performed distributed performance monitoring of these services. This paper explicates a distributed monitoring framework that was used to monitor 46,296 WMSs continuously for over one year and a crawling method to discover these WMSs. We analyzed server locations, provider types, themes, the spatiotemporal coverage of map layers and the service versions for 41,703 valid WMSs. Furthermore, we appraised the stability and performance of basic operations for 1210 selected WMSs (i.e., GetCapabilities and GetMap). We discuss the major reasons for request errors and performance issues, as well as the relationship between service response times and the spatiotemporal distribution of client monitoring sites. This paper will help service providers, end users and developers of standards to grasp the status of global WMS resources, as well as to understand the adoption status of OGC standards. The conclusions drawn in this paper can benefit geospatial resource discovery, service performance evaluation and guide service performance improvements.ISPRS International Journal of Geo-Information2016-06-0856Article10.3390/ijgi5060088882220-99642016-06-08doi: 10.3390/ijgi5060088Zhipeng GuiJun CaoXiaojing LiuXiaoqiang ChengHuayi Wu<![CDATA[Symmetry, Vol. 8, Pages 44: On a Reduction Formula for a Kind of Double q-Integrals]]>
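The two WMS operations monitored in the study, GetCapabilities and GetMap, are plain key-value-pair HTTP requests. The sketch below only builds the request URLs (no network access); the endpoint `http://example.org/wms` and the layer/bounding-box values are hypothetical placeholders, not services from the surveyed set.

```python
from urllib.parse import urlencode

def wms_request(base_url, operation, **params):
    """Build an OGC WMS request URL using key-value-pair (KVP) encoding."""
    query = {"SERVICE": "WMS", "REQUEST": operation, **params}
    return base_url + "?" + urlencode(query)

# Hypothetical endpoint, for illustration only.
caps = wms_request("http://example.org/wms", "GetCapabilities",
                   VERSION="1.3.0")
getmap = wms_request("http://example.org/wms", "GetMap", VERSION="1.3.0",
                     LAYERS="roads", CRS="EPSG:4326",
                     BBOX="-90,-180,90,180", WIDTH=256, HEIGHT=256,
                     FORMAT="image/png")
```

A monitoring client like the one described would issue such requests on a schedule and record response codes and latencies per site.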
http://www.mdpi.com/2073-8994/8/6/44
Using the q-integral representation of Sears’ nonterminating extension of the q-Saalschütz summation, we derive a reduction formula for a kind of double q-integrals. This reduction formula is used to derive a curious double q-integral formula, and also allows us to prove a general q-beta integral formula including the Askey–Wilson integral formula as a special case. Using this double q-integral formula and the theory of q-partial differential equations, we derive a general q-beta integral formula, which includes the Nassrallah–Rahman integral as a special case. Our evaluation does not require the orthogonality relation for the q-Hermite polynomials and the Askey–Wilson integral formula.Symmetry2016-06-0886Article10.3390/sym8060044442073-89942016-06-08doi: 10.3390/sym8060044Zhi-Guo Liu<![CDATA[JRFM, Vol. 9, Pages 4: Application of Vine Copulas to Credit Portfolio Risk Modeling]]>
http://www.mdpi.com/1911-8074/9/2/4
In this paper, we demonstrate the superiority of vine copulas over conventional copulas when modeling the dependence structure of a credit portfolio. We show statistical and economic implications of replacing conventional copulas by vine copulas for a subportfolio of the Euro Stoxx 50 and the S&amp;P 500 companies, respectively. Our study includes D-vines and R-vines where the bivariate building blocks are chosen from the Gaussian, the t and the Clayton family. Our findings are (i) the conventional Gauss copula is deficient in modeling the dependence structure of a credit portfolio and economic capital is seriously underestimated; (ii) D-vine structures offer a better statistical fit to the data than classical copulas, but underestimate economic capital compared to R-vines; (iii) when mixing different copula families in an R-vine structure, the best statistical fit to the data can be achieved which corresponds to the most reliable estimate for economic capital.Journal of Risk and Financial Management2016-06-0792Article10.3390/jrfm902000441911-80742016-06-07doi: 10.3390/jrfm9020004Marco GeidoschMatthias Fischer<![CDATA[Systems, Vol. 4, Pages 23: Model-Based Design and Formal Verification Processes for Automated Waterway System Operations]]>
http://www.mdpi.com/2079-8954/4/2/23
Waterway and canal systems are particularly cost effective in the transport of bulk and containerized goods to support global trade. Yet, despite these benefits, they are among the most under-appreciated forms of transportation engineering systems. Looking ahead, the long-term view is not rosy. Failures, delays, incidents and accidents in aging waterway systems are doing little to attract the technical and economic assistance required for modernization and sustainability. In a step toward overcoming these challenges, this paper argues that programs for waterway and canal modernization and sustainability can benefit significantly from system thinking, supported by systems engineering techniques. We propose a multi-level multi-stage methodology for the model-based design, simulation and formal verification of automated waterway system operations. At the front-end of development, semi-formal modeling techniques are employed for the representation of project goals and scenarios, requirements and high-level models of behavior and structure. To assure the accuracy of engineering predictions and the correctness of operations, formal modeling techniques are used for the performance assessment and the formal verification of the correctness of functionality. The essential features of this methodology are highlighted in a case study examination of ship and lock-system behaviors in a two-stage lock system.Systems2016-06-0742Article10.3390/systems4020023232079-89542016-06-07doi: 10.3390/systems4020023Leonard PetngaMark Austin<![CDATA[Axioms, Vol. 5, Pages 17: An Overview of the Fuzzy Axiomatic Systems and Characterizations Proposed at Ghent University]]>
http://www.mdpi.com/2075-1680/5/2/17
During the past 40 years of fuzzy research at the Fuzziness and Uncertainty Modeling research unit of Ghent University, several axiomatic systems and characterizations have been introduced. In this paper we highlight some of them. The main purpose of this paper is to invite the community to continue research on these first attempts to axiomatize important concepts and systems in fuzzy set theory. Currently, these attempts are spread over many journals; with this paper they are now collected in a neat overview. In the literature, many axiom systems have been introduced, but as far as we know the axiomatic system of Huntington concerning a Boolean algebra has been the only one where the axioms have been proven independent. Another line of further research could be the simplification of these systems, by discovering redundancies between the axioms.Axioms2016-06-0752Article10.3390/axioms5020017172075-16802016-06-07doi: 10.3390/axioms5020017Etienne KerreLynn D'eerBart Van Gasse<![CDATA[Data, Vol. 1, Pages 6: The LAB-Net Soil Moisture Network: Application to Thermal Remote Sensing and Surface Energy Balance]]>
http://www.mdpi.com/2306-5729/1/1/6
A set of Essential Climate Variables (ECV) has been defined to be monitored by current and new remote sensing missions. The ECV retrieved at global scale need to be validated in order to provide reliable products for remote sensing applications. For this, test sites are required for the calibration and validation of remote sensing approaches, in order to improve ECV retrievals at global scale. The southern hemisphere has few test sites for calibration and validation field campaigns that focus on soil moisture and land surface temperature retrievals. In Chile, remote sensing applications related to soil moisture estimates have increased during the last decades because of drought and water use conflicts, which generate a strong interest in improved water demand estimates. This work describes the Laboratory for Analysis of the Biosphere (LAB) NETwork, hereinafter called ‘LAB-net’, which was designed to be the first such network in Chile for remote sensing applications. The network comprises four test sites with different cover types: vineyards and olive orchards located in the semi-arid region of Atacama, an irrigated raspberry crop in the Mediterranean climate zone of Chimbarongo, and a rainfed pasture in the south of Chile. At each site, meteorological and radiative flux instrumentation was installed that continuously records the following parameters: soil moisture and temperature at two ground levels (10 and 20 cm), air temperature and relative humidity, net radiation, global radiation, radiometric temperature (8–14 µm), rainfall and soil heat flux. The LAB-net data base post-processing procedure is also described here. As an application, surface remote sensing products such as soil moisture data derived from the Soil Moisture Ocean Salinity (SMOS) mission and Land Surface Temperature (LST) extracted from the MODIS-MOD11A1 and GOES LST Copernicus products were compared to in situ data at the Oromo LAB-net site. Moreover, land surface energy flux estimation is also shown as an application of the LAB-net data base. These applications revealed good agreement between in situ and remote sensing data. The LAB-net data base also provides suitable information for land surface energy budget studies and therefore for water resources management at the cultivar scale. The data base generated by LAB-net is freely available for any research or scientific purpose related to current and future remote sensing applications.Data2016-06-0711Data Descriptor10.3390/data101000662306-57292016-06-07doi: 10.3390/data1010006Cristian MattarAndrés Santamaría-ArtigasClaudio Durán-AlarcónLuis Olivera-GuerraRodrigo FusterDager Borvarán<![CDATA[Technologies, Vol. 4, Pages 17: Correlation Plenoptic Imaging With Entangled Photons]]>
http://www.mdpi.com/2227-7080/4/2/17
Plenoptic imaging is a novel optical technique for three-dimensional imaging in a single shot. It is enabled by the simultaneous measurement of both the location and the propagation direction of light in a given scene. In the standard approach, the maximum spatial and angular resolutions are inversely proportional, and so are the resolution and the maximum achievable depth of focus of the 3D image. We have recently proposed a method to overcome such fundamental limits by combining plenoptic imaging with an intriguing correlation remote-imaging technique: ghost imaging. Here, we theoretically demonstrate that correlation plenoptic imaging can be effectively achieved by exploiting the position-momentum entanglement characterizing spontaneous parametric down-conversion (SPDC) photon pairs. As a proof-of-principle demonstration, we shall show that correlation plenoptic imaging with entangled photons may enable the refocusing of an out-of-focus image at the same depth of focus of a standard plenoptic device, but without sacrificing diffraction-limited image resolution.Technologies2016-06-0742Article10.3390/technologies4020017172227-70802016-06-07doi: 10.3390/technologies4020017Francesco PepeFrancesco Di LenaAugusto GaruccioGiuliano ScarcelliMilena D’Angelo<![CDATA[Electronics, Vol. 5, Pages 30: Two-Dimensional Electronics — Prospects and Challenges]]>
http://www.mdpi.com/2079-9292/5/2/30
For about a decade, 2D (two-dimensional) materials have represented one of the hottest directions in solid-state research.[...]Electronics2016-06-0752Editorial10.3390/electronics5020030302079-92922016-06-07doi: 10.3390/electronics5020030Frank Schwierz<![CDATA[IJGI, Vol. 5, Pages 87: Guided Classification System for Conceptual Overlapping Classes in OpenStreetMap]]>
http://www.mdpi.com/2220-9964/5/6/87
The increased development of Volunteered Geographic Information (VGI) and its potential role in GIScience studies raise questions about the resulting data quality. Several studies address VGI quality from various perspectives, such as completeness, positional accuracy and consistency, and largely agree that data quality is heterogeneous. The problem may be due to the lack of standard procedures for data collection and the absence of quality-control feedback for voluntary participants. In our research, we are concerned with data quality from the classification perspective. Particularly in VGI mapping projects, the limited expertise of participants and the non-strict definition of geographic features lead to conceptually overlapping classes, where an entity could plausibly belong to multiple classes, e.g., lake or pond, park or garden, marsh or swamp. Usually, quantitative and/or qualitative characteristics exist that distinguish between classes. Nevertheless, these characteristics might not be recognizable for non-expert participants. In previous work, we developed a rule-guided classification approach that guides participants to the most appropriate classes. As an exemplification, we tackle the conceptual overlap of some grass-related classes. For a given data set, our approach presents the most highly recommended classes for each entity. In this paper, we present the validation of our approach. We implement a web-based application called Grass&amp;Green that presents recommendations for crowdsourcing validation. The findings show the applicability of the proposed approach. In four months, the application attracted 212 participants from more than 35 countries who checked 2,865 entities. The results indicate that 89% of the contributions fully or partially agree with our recommendations. We then carried out a detailed analysis that demonstrates the potential of this enhanced data classification. 
This research encourages the development of customized applications that target a particular geographic feature.ISPRS International Journal of Geo-Information2016-06-0756Article10.3390/ijgi5060087872220-99642016-06-07doi: 10.3390/ijgi5060087Ahmed AliNuttha SirilertworakulAlexander ZipfAmin Mobasheri<![CDATA[J. Imaging, Vol. 2, Pages 19: Optimized Distributed Hyperparameter Search and Simulation for Lung Texture Classification in CT Using Hadoop]]>
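The idea of rule-guided classification can be illustrated with a toy recommender: attribute-based rules map an entity to the classes it most plausibly belongs to. The tags, thresholds and class names below are entirely our own illustration and are not the rules actually used by Grass&amp;Green.

```python
def recommend_classes(entity):
    """Toy rule-guided recommender for overlapping grass-related classes.
    entity: dict of attributes; returns the recommended class(es)."""
    recs = []
    # Illustrative rules: small leisure areas -> garden, large ones -> park.
    if entity.get("leisure") and entity.get("area_m2", 0) < 2000:
        recs.append("garden")
    if entity.get("leisure") and entity.get("area_m2", 0) >= 2000:
        recs.append("park")
    if entity.get("grazing"):
        recs.append("meadow")
    return recs or ["grass"]          # fall back to the generic class

print(recommend_classes({"leisure": True, "area_m2": 50000}))
```

A real deployment would present such recommendations to the contributor for confirmation, as the Grass&amp;Green application does, rather than applying them automatically.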
http://www.mdpi.com/2313-433X/2/2/19
Many medical image analysis tasks require complex learning strategies to reach a quality of image-based decision support that is sufficient in clinical practice. The analysis of medical texture in tomographic images, for example of lung tissue, is no exception. Via a learning framework, very good classification accuracy can be obtained, but several parameters need to be optimized. This article describes a practical framework for efficient distributed parameter optimization. The proposed solutions are applicable for many research groups with heterogeneous computing infrastructures and for various machine learning algorithms. These infrastructures can easily be connected via distributed computation frameworks. We use the Hadoop framework to run and distribute both grid and random search strategies for hyperparameter optimization and cross-validations on a cluster of 21 nodes composed of desktop computers and servers. We show that significant speedups of up to 364× compared to a serial execution can be achieved using our in-house Hadoop cluster by distributing the computation and automatically pruning the search space while still identifying the best-performing parameter combinations. To the best of our knowledge, this is the first article presenting practical results in detail for complex data analysis tasks on such a heterogeneous infrastructure together with a linked simulation framework that allows for computing resource planning. The results are directly applicable in many scenarios and allow implementing an efficient and effective strategy for medical (image) data analysis and related learning approaches.Journal of Imaging2016-06-0722Article10.3390/jimaging2020019192313-433X2016-06-07doi: 10.3390/jimaging2020019Roger SchaerHenning MüllerAdrien Depeursinge<![CDATA[Axioms, Vol. 5, Pages 15: On the Mutual Definability of the Notions of Entailment, Rejection, and Inconsistency]]>
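The core strategy (random hyperparameter search plus pruning of the search space) can be sketched serially; the article's contribution is distributing such trials over a heterogeneous Hadoop cluster, which is out of scope here. The pruning rule, parameter grid and toy objective below are our own simplified assumptions.

```python
import random

def random_search(score, space, budget=50, prune_after=10, seed=0):
    """Random search with one pruning step: after `prune_after` trials,
    drop parameter values that never appeared in the top half of results."""
    rng = random.Random(seed)
    results = []
    for t in range(budget):
        if t == prune_after and results:
            top = sorted(results, key=lambda r: -r[1])[:max(1, len(results) // 2)]
            for name in space:
                kept = {cfg[name] for cfg, _ in top}
                space[name] = [v for v in space[name] if v in kept]
        cfg = {name: rng.choice(vals) for name, vals in space.items()}
        results.append((cfg, score(cfg)))
    return max(results, key=lambda r: r[1])

# Toy objective standing in for cross-validated classification accuracy.
space = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}
best_cfg, best_score = random_search(
    lambda cfg: -abs(cfg["C"] - 10) - abs(cfg["gamma"] - 0.01), dict(space))
```

In the distributed setting each trial becomes an independent map task, which is why the authors obtain near-linear speedups on their 21-node cluster.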
http://www.mdpi.com/2075-1680/5/2/15
In this paper, two axiomatic theories T− and T′ are constructed, which are dual to Tarski’s theory T+ (1930) of deductive systems based on classical propositional calculus. While in Tarski’s theory T+ the primitive notion is the classical consequence function (entailment) Cn+, in the dual theory T− it is replaced by the notion of Słupecki’s rejection consequence Cn− and in the dual theory T′ it is replaced by the notion of the family Incons of inconsistent sets. The author has proved that the theories T+, T−, and T′ are equivalent.Axioms2016-06-0752Article10.3390/axioms5020015152075-16802016-06-07doi: 10.3390/axioms5020015Urszula Wybraniec-Skardowska<![CDATA[Computation, Vol. 4, Pages 22: Online Adaptive Local-Global Model Reduction for Flows in Heterogeneous Porous Media]]>
http://www.mdpi.com/2079-3197/4/2/22
We propose an online adaptive local-global POD-DEIM model reduction method for flows in heterogeneous porous media. The main idea of the proposed method is to use local online indicators to decide on the global update, which is performed via reduced-cost local multiscale basis functions. This unique local-global online combination allows (1) developing local indicators that are used for both local and global updates and (2) computing global online modes via local multiscale basis functions. The multiscale basis functions consist of offline and some online local basis functions. The approach used for constructing a global reduced system is based on Proper Orthogonal Decomposition (POD) Galerkin projection. The nonlinearities are approximated by the Discrete Empirical Interpolation Method (DEIM). The online adaptation is performed by incorporating new data, which become available at the online stage. Once the criterion for updates is satisfied, we adapt the reduced system online by changing the POD subspace and the DEIM approximation of the nonlinear functions. The main contribution of the paper is that the criterion for adaptation and the construction of the global online modes are based on local error indicators and local multiscale basis functions, which can be cheaply computed. Since the adaptation is performed infrequently, the new methodology does not add significant computational overhead associated with deciding when and how to adapt the reduced basis. Our approach is particularly useful for situations where it is desired to solve the reduced system for inputs or controls that result in a solution outside the span of the snapshots generated in the offline stage. Our method also offers an alternative way of constructing a robust reduced system even if a potentially poor initial choice of snapshots is used. 
Applications to single-phase and two-phase flow problems demonstrate the efficiency of our method.Computation2016-06-0742Article10.3390/computation4020022222079-31972016-06-07doi: 10.3390/computation4020022Yalchin EfendievEduardo GildinYanfang Yang<![CDATA[Mathematics, Vol. 4, Pages 41: Entropic Uncertainty Relations for Successive Generalized Measurements]]>
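The offline POD stage underlying such methods can be sketched in a few lines: collect snapshots, take an SVD, keep the leading left singular vectors, and Galerkin-project the full-order operator. The DEIM treatment of nonlinearities and the online adaptation, which are the paper's actual contributions, are omitted; the dimensions and energy threshold below are illustrative.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD basis: left singular vectors capturing a prescribed fraction
    of the snapshot energy (sum of squared singular values)."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return u[:, :r]

# Synthetic snapshots: 200-dim states that actually live in a 3-dim subspace.
rng = np.random.default_rng(1)
modes = rng.standard_normal((200, 3))
snaps = modes @ rng.standard_normal((3, 40))
phi = pod_basis(snaps)

# Galerkin projection of a (random, stand-in) full-order operator:
A = rng.standard_normal((200, 200))
A_r = phi.T @ A @ phi        # r x r reduced operator
```

Online adaptation, in the paper's sense, would enrich or replace columns of `phi` when the local indicators flag that the current subspace no longer captures the solution.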
http://www.mdpi.com/2227-7390/4/2/41
We derive entropic uncertainty relations for successive generalized measurements by using general descriptions of quantum measurement within two distinctive operational scenarios. In the first scenario, by merging two successive measurements into one we consider successive measurement scheme as a method to perform an overall composite measurement. In the second scenario, on the other hand, we consider it as a method to measure a pair of jointly measurable observables by marginalizing over the distribution obtained in this scheme. In the course of this work, we identify that limits on one’s ability to measure with low uncertainty via this scheme come from intrinsic unsharpness of observables obtained in each scenario. In particular, for the Lüders instrument, disturbance caused by the first measurement to the second one gives rise to the unsharpness at least as much as incompatibility of the observables composing successive measurement.Mathematics2016-06-0742Article10.3390/math4020041412227-73902016-06-07doi: 10.3390/math4020041Kyunghyun BaekWonmin Son<![CDATA[IJGI, Vol. 5, Pages 86: 3-Dimensional Modeling and Simulation of the Cloud Based on Cellular Automata and Particle System]]>
http://www.mdpi.com/2220-9964/5/6/86
In this paper, we combine cellular automata with a particle system to realize three-dimensional modeling and visualization of clouds. First, we use the principle of particle systems to simulate the outline of the cloud: we generate uniform particles in the bounding volumes of the cloud through a random function, build the cloud particle system, and initialize the particle number, size, location and related properties. Then the principle of cellular automata is adopted to process the uniform particles generated by the particle system so that they conform to the rules set by the user, and to calculate the continuous density field. We render the final cloud particles with a texture map to simulate a more realistic three-dimensional cloud. This method not only achieves a realistic effect in the simulation, but also improves rendering performance.ISPRS International Journal of Geo-Information2016-06-0656Article10.3390/ijgi5060086862220-99642016-06-06doi: 10.3390/ijgi5060086Shuoben BiShengjie BiXiaowen ZengYuan LuHao Zhou<![CDATA[IJGI, Vol. 5, Pages 85: Comparative Perspective of Human Behavior Patterns to Uncover Ownership Bias among Mobile Phone Users]]>
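The pipeline above (seed random particles in the bounding volume, then apply a cellular-automaton rule to obtain a continuous density field) can be sketched as follows. The neighbour-averaging update rule is our own illustrative choice, not the specific rules of the paper, and the grid resolution is arbitrary.

```python
import numpy as np

def cloud_density(n_particles=5000, grid=32, steps=4, seed=0):
    """Seed uniform random particles in a unit-cube bounding volume,
    bin them into a 3-D grid, then run a simple CA-style smoothing rule
    to produce a continuous density field for texturing."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_particles, 3))                  # uniform in unit cube
    idx = np.minimum((pts * grid).astype(int), grid - 1)
    dens = np.zeros((grid, grid, grid))
    np.add.at(dens, tuple(idx.T), 1.0)                  # particle histogram
    for _ in range(steps):                              # CA smoothing passes
        dens = sum(np.roll(dens, d, axis=a)
                   for a in range(3) for d in (-1, 1)) / 6.0
    return dens / dens.max()                            # normalise to [0, 1]

field = cloud_density()
```

A renderer would then map each cell's density to the opacity of a textured billboard or volume sample, which is the texturing step the abstract describes.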
http://www.mdpi.com/2220-9964/5/6/85
With the rapid spread of mobile devices, call detail records (CDRs) from mobile phones provide more opportunities to incorporate dynamic aspects of human mobility in addressing societal issues. However, it has been increasingly observed that CDR data are not always representative of the population under study, because they include device users only. To understand the discrepancy between the population captured by CDRs and the general population, we profile the principal populations of CDRs by analyzing routines based on time spent at key locations and compare these data with those of the general population. We employ a topic model to estimate, as topics, the typical routines of mobile phone users from CDRs. Routines are also extracted from field survey data and compared between the general population and mobile phone users. We found that there are two main population groups of mobile phone users in Dhaka: males engaged in an income-generating activity at a specific location other than home, and females performing household tasks and spending most of their time at home. We determine that CDRs tend to omit students, who form a significant component of the Dhaka population.ISPRS International Journal of Geo-Information2016-06-0656Article10.3390/ijgi5060085852220-99642016-06-06doi: 10.3390/ijgi5060085Ayumi AraiZipei FanDunstan MatekenyaRyosuke Shibasaki<![CDATA[Symmetry, Vol. 8, Pages 43: Investigating the Performance of a Fractal Ultrasonic Transducer Under Varying System Conditions]]>
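The key representation here is a per-user time-at-location profile from which typical routines are extracted. As a simplified stand-in for the paper's topic model, the sketch below clusters synthetic hour-at-home profiles with a tiny two-means routine; the two archetypes (home all day vs. away during working hours) mirror the two groups the study found, but all data and parameters are fabricated for illustration.

```python
import numpy as np

def two_means(x, iters=10, seed=0):
    """Tiny 2-cluster k-means with farthest-point initialisation; a crude
    stand-in for the LDA-style topic model used in the paper."""
    rng = np.random.default_rng(seed)
    c0 = x[rng.integers(len(x))]
    c1 = x[np.argmax(((x - c0) ** 2).sum(1))]        # farthest point from c0
    centers = np.stack([c0, c1])
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(2)])
    return labels

# Synthetic 24-hour "fraction of time at home" profiles for two archetypes.
rng = np.random.default_rng(2)
home_all_day = np.clip(rng.normal(0.9, 0.05, (50, 24)), 0, 1)
away_daytime = home_all_day.copy()
away_daytime[:, 9:17] = np.clip(rng.normal(0.1, 0.05, (50, 8)), 0, 1)
labels = two_means(np.vstack([home_all_day, away_daytime]))
```

Comparing the recovered group proportions against survey-based proportions is then what exposes ownership bias, e.g. the under-representation of students.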
http://www.mdpi.com/2073-8994/8/6/43
As applications become more widespread there is an ever-increasing need to improve the accuracy of ultrasound transducers, in order to detect at much finer resolutions. In comparison with naturally occurring ultrasound systems the man-made systems have much poorer accuracy, and the scope for improvement has somewhat plateaued as existing transducer designs have been iteratively improved over many years. The desire to bridge the gap between the man-made and naturally occurring systems has led to recent investigation of transducers with a more complex geometry, in order to replicate the complex structure of the natural systems. These transducers have structures representing fractal geometries, and these have been shown to be capable of delivering improved performance in comparison with standard transducer designs. This paper undertakes a detailed investigation of the comparative performance of a standard transducer design, and a transducer based on a fractal geometry. By considering how these performances vary with respect to the key system parameters, a robust assessment of the fractal transducer performance is provided.Symmetry2016-06-0686Article10.3390/sym8060043432073-89942016-06-06doi: 10.3390/sym8060043Euan BarlowEbrahem AlgehyneAnthony Mulholland<![CDATA[Electronics, Vol. 5, Pages 29: Understanding the Performance of Low Power Raspberry Pi Cloud for Big Data]]>
http://www.mdpi.com/2079-9292/5/2/29
Nowadays, Internet-of-Things (IoT) devices generate data at high speed and large volume. Often the data require real-time processing to support high system responsiveness, which can be supported by localised Cloud and/or Fog computing paradigms. However, there are considerable deployments of IoT, such as sensor networks in remote areas where Internet connectivity is sparse, challenging the localised Cloud and/or Fog computing paradigms. With the advent of the Raspberry Pi, a credit card-sized single board computer, there is a great opportunity to construct low-cost, low-power portable clouds to support real-time data processing next to IoT deployments. In this paper, we extend our previous work on constructing a Raspberry Pi Cloud to study its feasibility for real-time big data analytics under realistic application-level workloads in both native and virtualised environments. We have extensively tested the performance of a single-node Raspberry Pi 2 Model B with httperf and a cluster of 12 nodes with Apache Spark and HDFS (Hadoop Distributed File System). Our results demonstrate that our portable cloud is useful for supporting real-time big data analytics. On the other hand, our results also unveil that the overhead for CPU-bound workloads in a virtualised environment is surprisingly high, at 67.2%. We have found that, for big data applications, the virtualisation overhead is fractional for small jobs but becomes more significant for large jobs, up to 28.6%.Electronics2016-06-0652Article10.3390/electronics5020029292079-92922016-06-06doi: 10.3390/electronics5020029Wajdi HajjiFung Tso<![CDATA[Data, Vol. 1, Pages 7: A MODIS/ASTER Airborne Simulator (MASTER) Imagery for Urban Heat Island Research]]>
http://www.mdpi.com/2306-5729/1/1/7
Thermal imagery is widely used to quantify land surface temperatures to monitor the spatial extent and thermal intensity of the urban heat island (UHI) effect. Previous research has applied Landsat images, Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images, Moderate Resolution Imaging Spectroradiometer (MODIS) images, and other coarse- to medium-resolution remotely sensed imagery to estimate surface temperature. These data are frequently correlated with vegetation, impervious surfaces, and temperature to quantify the drivers of the UHI effect. Because of the coarse to medium resolution of the thermal imagery, researchers are unable to correlate these temperature data with the more generally available high-resolution land cover classifications, which are derived from high-resolution multispectral imagery. The development of advanced thermal sensors providing very high-resolution thermal imagery, such as the MODIS/ASTER airborne simulator (MASTER), enables investigators to quantify the relationship between detailed land cover and land surface temperature. While this is an obvious next step, in the published literature MASTER data are most often used to discriminate burned areas, assess fire severity, and classify urban land cover. Considerably less attention is given to using MASTER data in UHI research. We demonstrate here that MASTER data in combination with high-resolution multispectral data have made it possible to monitor and model the relationship between temperature and detailed land cover such as building rooftops, residential street pavements, and parcel-based landscaping. Here, we report on data sources to conduct this type of UHI research and aim to encourage researchers and scientists to use high-resolution airborne thermal imagery to further explore the UHI effect.Data2016-06-0611Data Descriptor10.3390/data101000772306-57292016-06-06doi: 10.3390/data1010007Qunshan ZhaoElizabeth Wentz<![CDATA[Mathematics, Vol. 
4, Pages 39: Morphisms and Order Ideals of Toric Posets]]>
http://www.mdpi.com/2227-7390/4/2/39
Toric posets are in some sense a natural “cyclic” version of finite posets in that they capture the fundamental features of a partial order but without the notion of minimal or maximal elements. They can be thought of combinatorially as equivalence classes of acyclic orientations under the equivalence relation generated by converting sources into sinks, or geometrically as chambers of toric graphic hyperplane arrangements. In this paper, we define toric intervals and toric order-preserving maps, which lead to toric analogues of poset morphisms and order ideals. We develop this theory, discuss some fundamental differences between the toric and ordinary cases, and outline some areas for future research. Additionally, we provide a connection to cyclic reducibility and conjugacy in Coxeter groups.Mathematics2016-06-0442Article10.3390/math4020039392227-73902016-06-04doi: 10.3390/math4020039Matthew Macauley<![CDATA[Future Internet, Vol. 8, Pages 25: A Methodological Approach to Evaluate Livestock Innovations on Small-Scale Farms in Developing Countries]]>
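The generating move of the equivalence relation, converting a source of an acyclic orientation into a sink, is simple to state in code. The sketch below applies one such flip on a toy triangle orientation; the edge representation is our own, and the full theory (toric intervals, morphisms, order ideals) is of course not captured here.

```python
def flip_source(orientation, v):
    """Convert a source v of an acyclic orientation into a sink by
    reversing all of its out-edges. orientation: list of (tail, head)
    directed edges. Equivalence classes under such flips are the
    chambers-based toric posets of the paper."""
    assert all(head != v for (_, head) in orientation), "v must be a source"
    return [(head, tail) if tail == v else (tail, head)
            for (tail, head) in orientation]

# Acyclic orientation of a triangle: a->b, a->c, b->c; vertex 'a' is a source.
o = [("a", "b"), ("a", "c"), ("b", "c")]
print(flip_source(o, "a"))
```

Iterating flips over all sources enumerates the orientations in one toric equivalence class, which is how one can experiment with small examples of these objects.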
http://www.mdpi.com/1999-5903/8/2/25
The aim of this study was to deepen knowledge of livestock innovations on small-scale farms in developing countries. First, we developed a methodology for identifying potentially appropriate livestock innovations for smallholders and grouped them into innovation areas, defined as sets of well-organized practices with a business purpose. Finally, a process management program (PMP) was evaluated according to the livestock innovation level and viability of the small-scale farms. Logistic regression was used to evaluate the impact of the PMP on the economic viability of the farm. Information from 1650 small-scale livestock farms in Mexico was collected, and the innovations were grouped into five innovation areas: A1. Management, A2. Feeding, A3. Genetic, A4. Reproduction and A5. Animal Health. The resulting innovation level in the system was low (45.7%) and heterogeneous among areas. This study shows the usefulness of the described methodology and confirms that implementing a PMP improves viability by an additional 21%, owing to a better integration of processes and more efficient management.Future Internet2016-06-0382Article10.3390/fi8020025251999-59032016-06-03doi: 10.3390/fi8020025Antón García-MartínezJosé Rivas-RangelJaime Rangel-QuintosJosé EspinosaCecilio BarbaCarmen de-Pablos-Heredero<![CDATA[Econometrics, Vol. 4, Pages 26: Removing Specification Errors from the Usual Formulation of Binary Choice Models]]>
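The evaluation step lends itself to a compact sketch. The Python below fits a logistic regression of a binary viability outcome on PMP adoption and innovation level by plain gradient ascent; the data are synthetic and the feature names and coefficients are our own illustrative assumptions, not the study's 1650-farm dataset:

```python
import numpy as np

# Hedged sketch: logistic regression of farm viability (0/1) on PMP
# adoption and innovation level. All data below are simulated.
rng = np.random.default_rng(0)
n = 500
pmp = rng.integers(0, 2, n).astype(float)   # 1 if the farm implements a PMP
innovation = rng.uniform(0, 1, n)           # overall innovation level in [0, 1]
true_logit = -1.0 + 1.5 * pmp + 2.0 * innovation
viable = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), pmp, innovation])  # intercept + 2 features
beta = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (viable - p) / n    # gradient ascent on log-likelihood

odds_ratio = np.exp(beta[1])  # multiplicative effect of PMP on viability odds
```

An odds ratio above 1 for the PMP coefficient is the kind of evidence the abstract summarizes as improved viability under a PMP.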
http://www.mdpi.com/2225-1146/4/2/26
We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.Econometrics2016-06-0342Article10.3390/econometrics4020026262225-11462016-06-03doi: 10.3390/econometrics4020026P.A.V.B. SwamyI-Lok ChangJatinder MehtaWilliam GreeneStephen HallGeorge Tavlas<![CDATA[Mathematics, Vol. 4, Pages 40: Uncertainty Relations and Possible Experience]]>
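For reference, the usual formulation that the authors modify is the latent-variable binary choice model, written here in standard econometric notation (not taken from the paper itself):

```latex
y_i^* = x_i'\beta + \varepsilon_i, \qquad
y_i = \mathbf{1}\{y_i^* > 0\}, \qquad
\Pr(y_i = 1 \mid x_i) = F(x_i'\beta),
```

where \(F\) is the standard normal CDF (probit) or the logistic CDF (logit), and \(\varepsilon_i\) collects the omitted regressors. The assumption the authors relax is that \(\varepsilon_i\) is mean-independent of \(x_i\), i.e., that the omitted regressors impart no bias to the coefficients of the included regressors.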
http://www.mdpi.com/2227-7390/4/2/40
The uncertainty principle can be understood as a condition of joint indeterminacy of classes of properties in quantum theory. The mathematical expressions most closely associated with this principle have been the uncertainty relations, various inequalities exemplified by the well-known expression regarding position and momentum introduced by Heisenberg. Here, recent work involving a new sort of “logical” indeterminacy principle and associated relations introduced by Pitowsky, expressible directly in terms of probabilities of outcomes of measurements of sharp quantum observables, is reviewed and its quantum nature is discussed. These novel relations are derivable from Boolean “conditions of possible experience” of the quantum realm and have been considered both as fundamentally logical and as fundamentally geometrical. This work focuses on the relationship of indeterminacy to propositions regarding the values of discrete, sharp observables of quantum systems. Reasons for favoring each of these two positions are considered. Finally, with an eye toward future research related to indeterminacy relations, further novel approaches grounded in category theory, intended to capture and reconceptualize the complementarity characteristics of quantum propositions, are discussed in relation to the former.Mathematics2016-06-0342Review10.3390/math4020040402227-73902016-06-03doi: 10.3390/math4020040Gregg Jaeger<![CDATA[IJGI, Vol. 5, Pages 84: GIS and Transport Modeling—Strengthening the Spatial Perspective]]>
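The well-known position-momentum relation the abstract refers to, together with Robertson's general form for arbitrary sharp observables, reads:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\Delta A \, \Delta B \;\ge\; \frac{1}{2}\,\bigl|\langle [A, B] \rangle\bigr|,
```

where \(\Delta\) denotes the standard deviation in the given state and \([A,B] = AB - BA\). Pitowsky's logical indeterminacy relations, by contrast, are stated directly in terms of probabilities of measurement outcomes rather than standard deviations, which is what motivates the “conditions of possible experience” framing.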
http://www.mdpi.com/2220-9964/5/6/84
The movement and transport of people and goods is spatial by its very nature. Thus, geospatial fundamentals of transport systems need to be adequately considered in transport models. Until recently, this was not always the case. Instead, transport research and geography evolved widely independently in domain silos. However, driven by recent conceptual, methodological and technical developments, the need for an integrated approach is obvious. This paper attempts to outline the potential of Geographical Information Systems (GIS) for transport modeling. We identify three fields of transport modeling where the spatial perspective can significantly contribute to a more efficient modeling process and more reliable model results, namely, geospatial data, disaggregated transport models and the role of geo-visualization. For these three fields, available findings from various domains are compiled, before open aspects are formulated as research directions, with exemplary research questions. The overall aim of this paper is to strengthen the spatial perspective in transport modeling and to call for a further integration of GIS in the domain of transport modeling.ISPRS International Journal of Geo-Information2016-06-0356Article10.3390/ijgi5060084842220-99642016-06-03doi: 10.3390/ijgi5060084Martin LoidlGudrun WallentinRita CyganskiAnita GraserJohannes ScholzEva Haslauer<![CDATA[Axioms, Vol. 5, Pages 16: Contribution of Warsaw Logicians to Computational Logic]]>
http://www.mdpi.com/2075-1680/5/2/16
The newly emerging field of computer science received encouragement from the successors of the Warsaw mathematical school: Kuratowski, Mazur, Mostowski, Grzegorczyk, and Rasiowa. Rasiowa realized very early that the spectrum of computer programs should be incorporated into the realm of mathematical logic in order to permit a rigorous treatment of program correctness. This gave rise to the concept of algorithmic logic, developed from the 1970s onward by Rasiowa, Salwicki, Mirkowska, and their followers. Together with Pratt’s dynamic logic, algorithmic logic evolved into a mainstream branch of research: the logic of programs. In the late 1980s, the Warsaw logicians Tiuryn and Urzyczyn categorized various logics of programs depending on the class of programs involved. Quite unexpectedly, they discovered that some persistent open questions about the expressive power of these logics are equivalent to famous open problems in complexity theory. This, along with parallel discoveries by Harel, Immerman and Vardi, contributed to the creation of an important area of theoretical computer science: descriptive complexity. By that time, the modal μ-calculus was recognized as a sort of universal logic of programs. The mid-1990s saw a landmark result by Walukiewicz, who showed the completeness of a natural axiomatization of the μ-calculus proposed by Kozen. The difficult proof of this result, based on automata theory, opened a path to further investigations. Later, Bojanczyk opened a new chapter by introducing an unboundedness quantifier, which allowed some quantitative properties of programs to be expressed. Yet another topic, linking the past with the future, is the subject of automata founded in Fraenkel-Mostowski set theory. The studies on intuitionism found their continuation in the study of the Curry-Howard isomorphism. 
Łukasiewicz’s landmark idea of many-valued logic found its continuation in various approaches to incompleteness and uncertainty.Axioms2016-06-0352Article10.3390/axioms5020016162075-16802016-06-03doi: 10.3390/axioms5020016Damian Niwiński<![CDATA[Information, Vol. 7, Pages 32: Speech Compression]]>
http://www.mdpi.com/2078-2489/7/2/32
Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.Information2016-06-0372Review10.3390/info7020032322078-24892016-06-03doi: 10.3390/info7020032Jerry Gibson<![CDATA[IJGI, Vol. 5, Pages 83: Morphological Principal Component Analysis for Hyperspectral Image Analysis]]>
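The linear prediction model at the core of this evolution can be illustrated in a few lines: each sample is approximated as a weighted sum of the previous p samples, and only the predictor coefficients plus a low-variance residual need to be coded. A hedged sketch in Python, using a synthetic sinusoid rather than real speech (the variable names and prediction order are our own choices):

```python
import numpy as np

# Order-p linear prediction fitted by least squares on a toy signal.
rng = np.random.default_rng(1)
t = np.arange(400)
signal = np.sin(0.3 * t) + 0.01 * rng.standard_normal(400)

p = 4  # prediction order
# Regression design: signal[n] ~ sum_k a[k] * signal[n - k - 1]
X = np.column_stack([signal[p - 1 - k : -1 - k] for k in range(p)])
y = signal[p:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)   # predictor coefficients

residual = y - X @ a
# A large prediction gain means the residual carries far less energy
# than the signal, which is what makes the signal compressible.
prediction_gain = np.var(y) / np.var(residual)
```

Production speech coders solve the same normal equations more efficiently (e.g., via the Levinson-Durbin recursion) and update the coefficients frame by frame, but the modeling step is the one shown here.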
http://www.mdpi.com/2220-9964/5/6/83
This article deals with the issue of reducing the spectral dimension of a hyperspectral image using principal component analysis (PCA). To perform this dimensionality reduction, we propose the addition of spatial information in order to improve the features that are extracted. Several approaches to adding spatial information are discussed in this article. They are based on mathematical morphology operators, namely area opening/closing, granulometries and the grey-scale distance function. We name the proposed family of techniques Morphological Principal Component Analysis (MorphPCA). These approaches provide new feature spaces able to jointly handle the spatial and spectral information of hyperspectral images. They are computationally simple, since the key element is the computation of an empirical covariance matrix that simultaneously integrates both spatial and spectral information. The performance of the different feature spaces is assessed on several tasks in order to demonstrate their practical interest.ISPRS International Journal of Geo-Information2016-06-0356Article10.3390/ijgi5060083832220-99642016-06-03doi: 10.3390/ijgi5060083Gianni FranchiJesús Angulo<![CDATA[Electronics, Vol. 5, Pages 28: An Investigation of Carbon-Doping-Induced Current Collapse in GaN-on-Si High Electron Mobility Transistors]]>
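The covariance-based mechanism can be sketched compactly: augment each pixel's spectrum with a spatially filtered version before computing the empirical covariance that PCA diagonalizes. In the sketch below, a simple 3x3 local mean stands in for the paper's morphological operators (area opening/closing, granulometries, distance functions), which are out of scope here; only the covariance/eigendecomposition machinery is faithful, and the data cube is random toy data:

```python
import numpy as np

rng = np.random.default_rng(2)
h, w, bands = 32, 32, 8
cube = rng.random((h, w, bands))   # toy hyperspectral cube, not real data

def local_mean(img):
    """Crude 3x3 mean filter: a stand-in for a morphological operator."""
    padded = np.pad(img, 1, mode="edge")
    return sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9

# Stack raw spectra with their spatially filtered counterparts per pixel.
spatial = np.stack([local_mean(cube[:, :, b]) for b in range(bands)], axis=-1)
features = np.concatenate([cube, spatial], axis=-1).reshape(-1, 2 * bands)

# The empirical covariance now jointly carries spectral and spatial structure.
centered = features - features.mean(axis=0)
cov = centered.T @ centered / (len(centered) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
components = centered @ eigvecs[:, ::-1][:, :3]   # top-3 joint features
```

Swapping the mean filter for a true morphological operator changes only the `local_mean` step; the PCA that follows is unchanged, which is the computational simplicity the abstract points to.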
http://www.mdpi.com/2079-9292/5/2/28
This paper reports the successful fabrication of a GaN-on-Si high electron mobility transistor (HEMT) with a 1702 V breakdown voltage (BV) and low current collapse. The strain and threading dislocation density were well controlled by 100 pairs of AlN/GaN superlattice buffer layers. Relative to the carbon-doped GaN spacer layer, the AlGaN back barrier layer was grown at a high temperature, resulting in a low carbon-doping concentration. The high-bandgap AlGaN provided an effective barrier for blocking leakage from the channel to the substrate, leading to a BV comparable to that of ordinary carbon-doped GaN HEMTs. In addition, the AlGaN back barrier showed a low dispersion of transiently pulsed ID under substrate bias, implying that buffer traps were effectively suppressed. As a result, we obtained a low dynamic on-resistance with this AlGaN back barrier. Together, the high BV and low current collapse improved the device performance, yielding a device that is reliable for power applications.Electronics2016-06-0252Article10.3390/electronics5020028282079-92922016-06-02doi: 10.3390/electronics5020028An-Jye TzouDan-Hua HsiehSzu-Hung ChenYu-Kuang LiaoZhen-Yu LiChun-Yen ChangHao-Chung Kuo<![CDATA[IJGI, Vol. 5, Pages 82: A High-Efficiency Method of Mobile Positioning Based on Commercial Vehicle Operation Data]]>
http://www.mdpi.com/2220-9964/5/6/82
Commercial vehicle operation (CVO) has been a popular application of intelligent transportation systems. Location determination and route tracing of an on-board unit (OBU) in a vehicle are important capabilities for CVO. However, large location errors from global positioning system (GPS) receivers may occur in cities where GPS signals are shielded. Therefore, a highly efficient mobile positioning method is proposed based on the collection and analysis of the cellular network signals in CVO data. Parallel- and cloud-computing techniques are designed into the proposed method to quickly determine the location of an OBU for CVO. Furthermore, this study proposes analytical models to analyze the availability of the proposed mobile positioning method under various outlier filtering criteria. Experimentally, a CVO system was designed and implemented to collect CVO data from Chunghwa Telecom vehicles and to analyze the cellular network signals in these data for location determination. A case study found that the average location errors using the proposed method and the traditional cell-ID-based location method were 163.7 m and 521.2 m, respectively. Furthermore, the practical results show that the average location error and availability of the proposed method are better than those of GPS or the cell-ID-based location method for each road type, particularly urban roads. Therefore, this approach is a feasible way to determine OBU locations and thereby improve CVO.ISPRS International Journal of Geo-Information2016-06-0256Article10.3390/ijgi5060082822220-99642016-06-02doi: 10.3390/ijgi5060082Chi-Hua ChenJia-Hong LinTa-Sheng KuanKuen-Rong Lo
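The filter-then-average step whose outlier criteria the availability analysis varies over can be sketched as follows. This is a hedged illustration only: the planar coordinates, the componentwise-median filter, and the 500 m threshold are our own assumptions, not the paper's actual criteria or data:

```python
import statistics

def estimate_position(fixes, threshold_m=500.0):
    """fixes: list of (x_m, y_m) planar coordinates derived from cellular
    signal measurements. Drop fixes farther than threshold_m from the
    componentwise median, then average the surviving fixes."""
    mx = statistics.median(x for x, _ in fixes)
    my = statistics.median(y for _, y in fixes)
    kept = [(x, y) for x, y in fixes
            if ((x - mx) ** 2 + (y - my) ** 2) ** 0.5 <= threshold_m]
    n = len(kept)
    return (sum(x for x, _ in kept) / n, sum(y for _, y in kept) / n)

# One wild fix 5 km away is discarded before averaging; the estimate is
# the centroid of the three consistent fixes.
est = estimate_position([(0, 0), (100, 0), (0, 100), (5000, 5000)])
```

Tightening or loosening `threshold_m` trades location error against availability (too tight a filter may leave no fixes at all), which is the trade-off the paper's analytical models quantify.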