Computer Science & Mathematics
http://www.mdpi.com/journal/computer-math
Latest open access articles published in Computer Science & Mathematics at http://www.mdpi.com/journal/computer-math

Algorithms, Vol. 8, Pages 407-414: Implementation of a Parallel Algorithm Based on a Spark Cloud Computing Platform
http://www.mdpi.com/1999-4893/8/3/407
Algorithms such as the ant colony algorithm take a long time when solving large-scale problems. In this paper, the MAX-MIN Ant System (MMAS) algorithm is parallelized to solve the Traveling Salesman Problem (TSP) on a Spark cloud computing platform. We combine MMAS with Spark MapReduce to execute path building and the pheromone operation in a distributed computer cluster. To improve the precision of the solution, the local optimization strategy 2-opt is adopted in MMAS. The experimental results show that Spark greatly accelerates the ant colony algorithm when the city scale of the TSP or the number of ants is relatively large.

Algorithms 2015, 8(3), 407-414; Article; doi: 10.3390/a8030407; published 2015-07-03. Authors: Longhui Wang, Yong Wang, Yudong Xie.

Mathematics, Vol. 3, Pages 604-614: Topological Integer Additive Set-Sequential Graphs
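The 2-opt local search adopted in the MMAS article above works by reversing tour segments whenever that shortens the tour; a minimal sketch (the city coordinates and starting tour are made-up examples, not from the paper):

```python
import math

def tour_length(tour, pts):
    """Length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Reverse tour segments while doing so shortens the tour."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best) + 1):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if tour_length(cand, pts) < tour_length(best, pts) - 1e-12:
                    best, improved = cand, True
    return best

# Unit square: the self-crossing tour 0-2-1-3 untangles to the perimeter.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
best = two_opt([0, 2, 1, 3], pts)
print(best, tour_length(best, pts))  # a perimeter tour of length 4.0
```

In MMAS, a move like this is applied to each ant's constructed tour before the pheromone update, trading extra local computation for better tour quality.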
http://www.mdpi.com/2227-7390/3/3/604
Let \(\mathbb{N}_0\) denote the set of all non-negative integers and \(X\) be any non-empty subset of \(\mathbb{N}_0\). Denote the power set of \(X\) by \(\mathcal{P}(X)\). An integer additive set-labeling (IASL) of a graph \(G\) is an injective function \(f : V(G) \to \mathcal{P}(X)\) such that the image of the induced function \(f^+: E(G) \to \mathcal{P}(\mathbb{N}_0)\), defined by \(f^+(uv)=f(u)+f(v)\), is contained in \(\mathcal{P}(X)\), where \(f(u) + f(v)\) is the sumset of \(f(u)\) and \(f(v)\). If the associated set-valued edge function \(f^+\) is also injective, then such an IASL is called an integer additive set-indexer (IASI). An IASL \(f\) is said to be a topological IASL (TIASL) if \(f(V(G))\cup \{\emptyset\}\) is a topology on the ground set \(X\). An IASL is said to be an integer additive set-sequential labeling (IASSL) if \(f(V(G))\cup f^+(E(G))= \mathcal{P}(X)-\{\emptyset\}\). An IASL of a given graph \(G\) is said to be a topological integer additive set-sequential labeling of \(G\) if it is both a topological integer additive set-labeling and an integer additive set-sequential labeling of \(G\). In this paper, we study the conditions required for a graph \(G\) to admit this type of IASL and describe some important characteristics of the graphs which admit this type of IASL.

Mathematics 2015, 3(3), 604-614; Article; doi: 10.3390/math3030604; published 2015-07-03. Authors: Sudev Naduvath, Germina Augustine, Chithra Sudev.

Technologies, Vol. 3, Pages 162-181: On the Use of Local Search in the Evolution of Neural Networks for the Diagnosis of Breast Cancer
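The sumset \(f(u)+f(v)\) at the heart of the IASL definition above is simply the set of all pairwise sums; a minimal sketch with arbitrary example labels:

```python
def sumset(A, B):
    """Sumset A + B = {a + b : a in A, b in B}."""
    return {a + b for a in A for b in B}

# Two example vertex labels (arbitrary subsets of the non-negative
# integers) and the induced edge label f+(uv) = f(u) + f(v).
f_u, f_v = {0, 1}, {2, 3}
print(sorted(sumset(f_u, f_v)))  # [2, 3, 4]
```

Note that the sumset can have fewer elements than |f(u)|·|f(v)| when different pairs collide (here 0+3 = 1+2), which is exactly what makes the injectivity of \(f^+\) a non-trivial condition.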
http://www.mdpi.com/2227-7080/3/3/162
With the advancement of the field of Artificial Intelligence, there have been considerable efforts to develop technologies for pattern recognition related to medical diagnosis. Artificial Neural Networks (ANNs), a significant branch of Artificial Intelligence, form the base for most of the marvels in this field. However, ANNs trained with the Back Propagation Algorithm (BPA) face the problems of premature convergence at a local minimum and of having to set hyper-parameters (such as the number of neurons and the learning rate) by hand. In this paper, we use the Genetic Algorithm (GA) for the evolution of the ANN, which overcomes the limitations of the BPA. Since the GA alone cannot cope with the high-dimensional, complex and multi-modal optimization landscape of the ANN, the BPA is used as a local search algorithm to aid the evolution. The contributions of the GA and the BPA to the resultant approach are assessed to determine the magnitude of local search necessary for optimization, striking a clear balance between exploration and exploitation in the evolution. The algorithm was applied to the problem of breast cancer diagnosis. Results showed that, under optimal settings, the hybrid algorithm performs better than the BPA or the GA alone.

Technologies 2015, 3(3), 162-181; Article; doi: 10.3390/technologies3030162; published 2015-07-03. Authors: Agam Gupta, Shiva Bhalla, Shishir Dwivedi, Nitin Verma, Rahul Kala.

Axioms, Vol. 4, Pages 213-234: Scientific Endeavors of A.M. Mathai: An Appraisal on the Occasion of his Eightieth Birthday, 28 April 2015
http://www.mdpi.com/2075-1680/4/3/213
A.M. Mathai is Emeritus Professor of Mathematics and Statistics at McGill University, Canada. He is currently the Director of the Centre for Mathematical and Statistical Sciences, India. His research contributions cover a wide spectrum of topics in mathematics, statistics, physics, astronomy, and biology. He is a Fellow of the Institute of Mathematical Statistics and of the National Academy of Sciences of India, and a member of the International Statistical Institute. He is a founder of the Canadian Journal of Statistics and of the Statistical Society of Canada. He was instrumental in the implementation of the United Nations Basic Space Science Initiative (1991–2012). This paper highlights the research results of A.M. Mathai from 1962 to 2015. He has published over 300 research papers and over 25 books.

Axioms 2015, 4(3), 213-234; Editorial; doi: 10.3390/axioms4030213; published 2015-07-03. Authors: Hans Haubold, Arak Mathai.

Information, Vol. 6, Pages 314-338: Mind, Matter, Information and Quantum Interpretations
http://www.mdpi.com/2078-2489/6/3/314
In this paper I give a new information-theoretic analysis of the formalisms and interpretations of quantum mechanics (QM) in general, and of two mainstream interpretations in particular: the Copenhagen interpretation and David Bohm’s interpretation. Adopting Juan G. Roederer’s reading of the notion of pragmatic information, I argue that pragmatic information is not applicable to the Copenhagen interpretation, since that interpretation is primarily concerned with epistemology rather than ontology. However, it fits Bohm’s ontological interpretation of QM well in the realms of biotic and artificial systems. Viewing Bohm’s interpretation of QM in the context of pragmatic information imposes serious limitations on the qualitative aspect of such an interpretation, making his extension of the notion of active information to every level of reality illegitimate. These limitations lead to the idea that, contrary to Bohm’s claim, mind is not a more subtle aspect of reality via the quantum potential as active information; rather, the quantum potential, as it affects particles in the double-slit experiment, represents the non-algorithmic aspect of the mind as a genuine information-processing system. This provides an information-based ground, firstly, for refreshing our views on quantum interpretations and, secondly, for a novel qualitative theory of the relationship of mind and matter in which mind-like properties are exclusive attributes of living systems. To this end, I also take an information-theoretic approach to the notion of intentionality as interpreted by John Searle.

Information 2015, 6(3), 314-338; Article; doi: 10.3390/info6030314; published 2015-07-02. Author: Reza Maleeh.

Computers, Vol. 4, Pages 176-214: A Cloud Based Mobile Dispatching System with Built-in Social CRM Component: Design and Implementation
http://www.mdpi.com/2073-431X/4/3/176
Mobile dispatching applications have become popular for at least two major reasons. The first is a more mobile-centric usage pattern, where users turn to apps to fulfill their different needs; in this respect, a vehicle dispatching application for mobile phones is perceived as a modern way of booking a vehicle. The second is the set of advantages this method has over traditional dispatching systems, such as seeing the vehicle approach on a map, being able to rate a driver and, most importantly, spurring customer retention. The taxi dispatching business tends to be a medium- to lower-fidelity service, where users mostly choose the closest taxi rather than judging quality, which is regarded as relatively consistent. We propose a new approach to the taxi ordering application: a mobile dispatching system that allows for a more engaged user base and offers fidelity rewards, used to enhance customer retention, based on a built-in social customer relationship management (CRM) component. With this approach, we argue that in a business world shifting from consumer-centric marketing to a human-centric model, this app will allow taxi businesses to interact with their clients in a more direct and responsible manner. This distributed system also helps taxi drivers, who can receive orders directly from their clients and can benefit from offering quality services, as they can earn higher ratings.

Computers 2015, 4(3), 176-214; Project Report; doi: 10.3390/computers4030176; published 2015-07-02. Authors: Cosmina Ivan, Razvan Popa.

IJGI, Vol. 4, Pages 1055-1075: Modeling a 3D City Model and Its Levels of Detail as a True 4D Model
http://www.mdpi.com/2220-9964/4/3/1055
The various levels of detail (LODs) of a 3D city model are often stored independently, without links between the representations of the same object, causing inconsistencies, as well as update and maintenance problems. One solution to this problem is to model the LOD as an extra geometric dimension perpendicular to the three spatial ones, resulting in a true 4D model in which a single 4D object (a polychoron) represents a 3D polyhedral object (e.g., a building) at all of its LODs and a multiple-LOD 3D city model is modeled as a 4D cell complex. While such an approach has been discussed before at a conceptual level, our objective in this paper is to describe how it can be realized by appropriately linking existing 3D models of the same object at different LODs. We first present our general methodology to construct such a 4D model, which consists of three steps: (1) finding corresponding 0D–3D cells; (2) creating 1D–4D cells connecting them; and (3) constructing the 4D model. Because of the complex relationships between the objects in different LODs, the creation of the connecting cells can become difficult. We therefore describe four different alternatives to do this, and we discuss the advantages and disadvantages of each in terms of their feasibility in practice and the properties that the resulting 4D model has. We show how the different linking schemes result in objects with different characteristics in several use cases. We also show how our linking method works in practice by implementing the linking of matching cells to construct a 4D model.

ISPRS International Journal of Geo-Information 2015, 4(3), 1055-1075; Article; doi: 10.3390/ijgi4031055; published 2015-07-02. Authors: Ken Ohori, Hugo Ledoux, Filip Biljecki, Jantien Stoter.

Symmetry, Vol. 7, Pages 1176-1210: Real Time MODBUS Transmissions and Cryptography Security Designs and Enhancements of Protocol Sensitive Information
http://www.mdpi.com/2073-8994/7/3/1176
Information technology (IT) security has become a major concern due to the growing demand for information and the massive development of client/server applications running on modern IT infrastructure. How has security been taken into account, and which paradigms are necessary to minimize security issues while increasing efficiency, reducing the influence on transmissions, ensuring protocol independence and achieving substantial performance? We have found cryptography to be an essential security mechanism for client/server architectures, and in this study a new security design was developed for the MODBUS protocol, which is considered to offer phenomenal performance for future development and enhancement of real IT infrastructure. This study is also considered a complete development because security is tested in almost all modes of MODBUS communication. The computed measurements are evaluated to validate the overall development, and the results indicate a substantial improvement in security compared with conventional methods.

Symmetry 2015, 7(3), 1176-1210; Article; doi: 10.3390/sym7031176; published 2015-07-02. Authors: Aamir Shahzad, Malrey Lee, Young-Keun Lee, Suntae Kim, Naixue Xiong, Jae-Young Choi, Younghwa Cho.

Symmetry, Vol. 7, Pages 1164-1175: Hierarchical Clustering Using One-Class Support Vector Machines
http://www.mdpi.com/2073-8994/7/3/1164
This paper presents a novel hierarchical clustering method using support vector machines. A common approach for hierarchical clustering is to use distance for the task. However, different choices for computing inter-cluster distances often lead to fairly distinct clustering outcomes, causing interpretation difficulties in practice. In this paper, we propose to use a one-class support vector machine (OC-SVM) to directly find high-density regions of data. Our algorithm generates nested set estimates using the OC-SVM and exploits the hierarchical structure of the estimated sets. We demonstrate the proposed algorithm on synthetic datasets. The cluster hierarchy is visualized with dendrograms and spanning trees.

Symmetry 2015, 7(3), 1164-1175; Article; doi: 10.3390/sym7031164; published 2015-07-01. Author: Gyemin Lee.

JSAN, Vol. 4, Pages 154-159: Radio/Antenna Mounting System for Wireless Networking under Row-Crop Agriculture Conditions
http://www.mdpi.com/2224-2708/4/3/154
Interest in and deployment of wireless monitoring systems are increasing in many diverse environments, including row-crop agricultural fields. While many studies have been undertaken to evaluate various aspects of wireless monitoring and networking, such as electronic hardware components, data-collection procedures, power management, and communication protocols, little information related to physical deployment issues has been reported. To achieve acceptable wireless transmission capability, the radio/antenna must be positioned properly relative to the ground surface or crop canopy to minimize degradation of the radio signal, usually requiring mounting the radio/antenna above the canopy. This creates obstacles to normal agricultural equipment traffic and production operations, with potential damage to the wireless monitoring system. A simple and rugged radio/antenna mounting system was designed which could be subjected to encounters with agricultural equipment without suffering physical damage. The mounting system was deployed and tested, and operated successfully following repeated encounters with various agricultural machines and implements. The radio/antenna mount is simple and inexpensive to fabricate using locally available components.

Journal of Sensor and Actuator Networks 2015, 4(3), 154-159; Communication; doi: 10.3390/jsan4030154; published 2015-07-01. Author: Daniel Fisher.

Systems, Vol. 3, Pages 62-80: Dealing with Multi-Level Governance and Wicked Problems in Urban Transportation Systems: The Case of Palermo Municipality
http://www.mdpi.com/2079-8954/3/3/62
Italian New Public Management (NPM) has been mainly characterized by a political orientation toward power decentralization to local governments and privatization of public companies. Nowadays, local utilities in Italy are often run by joint stock companies controlled by public agencies such as Regional and Municipal Administrations. Due to this transformation, these companies must comply with a set of diverse expectations coming from a wide range of stakeholders, related to their financial, competitive and social performance. Such fragmented governance increases the presence of “wicked” problems in the decision-making sphere of these entities. Given this multi-level governance structure, how do these agents influence public service performance? In recent years, coordination and inter-institutional joint action have been identified as possible approaches for dealing with governance fragmentation and the wicked problems deriving from it. How can a performance management perspective be adopted to help reform the system and improve collaboration between the stakeholders involved? In order to address and discuss these research questions, a case study is developed. The case concerns AMAT, the local utility providing the public transportation service in the Municipality of Palermo (Italy). The result of this study is a dynamic model including a set of performance indicators that help us understand the impact of the governing structure on the system’s performance.

Systems 2015, 3(3), 62-80; Article; doi: 10.3390/systems3030062; published 2015-06-30. Authors: Guido Noto, Carmine Bianchi.

Econometrics, Vol. 3, Pages 466-493: A New Approach to Model Verification, Falsification and Selection
http://www.mdpi.com/2225-1146/3/3/466
This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model’s validity, both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern’s entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. The lower a hypothesized structural sign pattern’s entropy, the more subject it is to type 1 error and the less subject to type 2 error. Three cases illustrate the approach taken here.

Econometrics 2015, 3(3), 466-493; Article; doi: 10.3390/econometrics3030466; published 2015-06-29. Authors: Andrew Buck, George Lady.

Algorithms, Vol. 8, Pages 395-406: Algorithms for Computerized Fetal Heart Rate Diagnosis with Direct Reporting
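To illustrate the entropy idea in the econometrics abstract above with a deliberately simplified counting scheme (this is not the authors' exact measure): a structural hypothesis that fixes the signs of k of the n reduced-form entries admits \(2^{n-k}\) of the \(2^n\) possible sign patterns, so the entropy of a uniform distribution over the admissible patterns is \(n-k\) bits; fixing more signs lowers the entropy and proposes more a priori information.

```python
import math

def sign_pattern_entropy(n_entries, n_fixed):
    """Entropy in bits of a uniform distribution over the reduced-form
    sign patterns still admissible when n_fixed of n_entries signs are
    pinned down by the structural hypothesis."""
    admissible = 2 ** (n_entries - n_fixed)
    return math.log2(admissible)

# A hypothesis fixing 3 of 6 signs is more informative (lower entropy)
# than one fixing only 1 of 6.
print(sign_pattern_entropy(6, 3), sign_pattern_entropy(6, 1))  # 3.0 5.0
```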
http://www.mdpi.com/1999-4893/8/3/395
Aims: Since visual pattern classification of fetal heart rate (FHR) was subjective and enlarged interobserver differences, objective FHR analysis was achieved with computerized FHR diagnosis. Methods: The computer algorithm was composed of an experts’ knowledge system, including FHR analysis and FHR score calculation, and of an objective artificial neural network system with software. In addition, the FHR frequency spectrum was studied to detect ominous sinusoidal FHR and the loss of baseline variability related to fetal brain damage. The algorithms were installed in a central-computerized automatic FHR monitoring system, which delivered the diagnosis rapidly and directly to the attending doctor. Results: Clinically, perinatal mortality decreased significantly, and no cerebral palsy developed after introduction of the centralized system. Conclusion: The automatic multichannel FHR monitoring system improved monitoring, increased the objectivity of FHR diagnosis and improved clinical outcomes.

Algorithms 2015, 8(3), 395-406; Review; doi: 10.3390/a8030395; published 2015-06-29. Authors: Kazuo Maeda, Yasuaki Noguchi, Masaji Utsu, Takashi Nagassawa.

Mathematics, Vol. 3, Pages 563-603: Singular Bilinear Integrals in Quantum Physics
http://www.mdpi.com/2227-7390/3/3/563
Bilinear integrals of operator-valued functions with respect to spectral measures and integrals of scalar functions with respect to the product of two spectral measures arise in many problems in scattering theory and spectral analysis. Unfortunately, the theory of bilinear integration with respect to a vector measure originating from the work of Bartle cannot be applied due to the singular variational properties of spectral measures. In this work, it is shown how "decoupled" bilinear integration may be used to find solutions \(X\) of operator equations \(AX-XB=Y\) with respect to the spectral measure of \(A\) and to apply such representations to the spectral decomposition of block operator matrices. A new proof is given of Peller's characterisation of the space \(L^1((P\otimes Q)_{\mathcal L(\mathcal H)})\) of double operator integrable functions for spectral measures \(P\), \(Q\) acting in a Hilbert space \(\mathcal H\) and applied to the representation of the trace of \(\int_{\Lambda\times\Lambda}\varphi\,d(PTP)\) for a trace class operator \(T\). The method of double operator integrals due to Birman and Solomyak is used to obtain an elementary proof of the existence of Krein's spectral shift function.

Mathematics 2015, 3(3), 563-603; Article; doi: 10.3390/math3030563; published 2015-06-29. Author: Brian Jefferies.

Future Internet, Vol. 7, Pages 214-224: Tracking Positioning Algorithm for Direction of Arrival Based on Direction Lock Loop
http://www.mdpi.com/1999-5903/7/3/214
To address the poor real-time performance, low accuracy and high computational complexity of traditional methods for locating and tracking the Direction of Arrival (DOA) of moving targets, this paper proposes a DOA algorithm based on the Direction Lock Loop (DILL), which adopts a lock-loop structure to estimate and locate the DOA and can adjust the direction automatically as the signal’s angle changes, thereby tracking the signal’s position. Meanwhile, to reduce the influence of nonlinearity and noise on its performance, an Unscented Kalman Filter (UKF) is designed to eliminate interference in the estimated target signal, improving the accuracy of signal tracking and the stability of the system. Simulation results show that the algorithm can not only obtain a high-resolution DOA estimate but can also locate and track multiple mobile targets effectively with enhanced accuracy, efficiency and stability.

Future Internet 2015, 7(3), 214-224; Article; doi: 10.3390/fi7030214; published 2015-06-26. Authors: Xiu-Zhi Cheng, Da-Rong Zhu, Shen Zhang, Ping He.

IJFS, Vol. 3, Pages 177-193: Net Interoffice Accounts of Global Banks: The Role of Domestic Funding
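For context on the geometry behind DOA estimation in the article above: with two sensors a distance d apart, a plane wave arriving at angle θ produces a phase difference \(\Delta\varphi = 2\pi d \sin\theta / \lambda\), which can be inverted for θ. A sketch with made-up values (this is the textbook plane-wave relation, not the DILL algorithm itself):

```python
import math

def doa_from_phase(delta_phi, d, lam):
    """Direction of arrival (radians) from the phase difference between
    two sensors spaced d apart, for wavelength lam (plane-wave model)."""
    return math.asin(delta_phi * lam / (2 * math.pi * d))

# A wave arriving at 30 degrees onto sensors spaced half a wavelength
# apart produces a phase difference of pi * sin(30 deg) = pi/2.
theta = doa_from_phase(math.pi / 2, d=0.5, lam=1.0)
print(math.degrees(theta))  # ~30 degrees
```

A lock-loop approach repeatedly nudges its angle estimate to drive the residual phase error to zero, rather than solving this equation once per snapshot.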
http://www.mdpi.com/2227-7072/3/3/177
Existing literature has identified domestic restrictive monetary policy and deteriorating funding conditions as the predominant factors explaining the increase in net interoffice accounts of global banks, that is, the net liabilities of parent offices due to their related foreign offices. The purpose of this research is twofold. Firstly, it quantifies the responsiveness of net interoffice accounts to variations in different types of domestic funding. Secondly, the paper assesses whether the relationship between net interoffice accounts and domestic policy-steered rates depends on cross-sectional differences in the funding structure of global banks. Using US banks’ balance-sheet data collected by the Federal Financial Institutions Examination Council, the results highlight the importance of domestic repo borrowings in explaining net interoffice accounts, especially for larger banks during the crisis. On the other hand, a negative relationship between policy rates and net interoffice accounts is observed only for those global banks with a relatively higher share of repo borrowings.

International Journal of Financial Studies 2015, 3(3), 177-193; Article; doi: 10.3390/ijfs3030177; published 2015-06-26. Author: Carmela D'Avino.

IJGI, Vol. 4, Pages 1033-1054: Hybrid 3D Rendering of Large Map Data for Crisis Management
http://www.mdpi.com/2220-9964/4/3/1033
In this paper we investigate the use of games technologies for the research and development of 3D representations of real environments captured from GIS information and open source map data. A key challenge in this area is the size of the datasets involved. Moreover, some existing map data contain errors and are incomplete, which makes the generation of realistic and accurate 3D environments problematic. Our domain of application is crisis management, which requires very accurate GIS or map information. We believe that creating a 3D virtual environment from real map data, while correcting and completing the missing data, improves the quality and performance of crisis management decision support systems and provides a more natural and intuitive interface for crisis managers. Consequently, we present a case study of the issues involved in combining multiple large datasets to create an accurate representation of a novel, multi-layered, hybrid real-world map. The hybrid map generation combines LiDAR, Ordnance Survey, and OpenStreetMap data to generate 3D cities spanning 1 km². An evaluation of initial visualised scenes is presented. Initial tests consist of a 1 km² landscape map containing up to 16 million vertices and run at an optimal 51.66 frames per second.

ISPRS International Journal of Geo-Information 2015, 4(3), 1033-1054; Article; doi: 10.3390/ijgi4031033; published 2015-06-26. Authors: David Tully, Abdennour Rhalibi, Christopher Carter, Sud Sudirman.

Algorithms, Vol. 8, Pages 380-394: Improving CLOPE’s Profit Value and Stability with an Optimized Agglomerative Approach
http://www.mdpi.com/1999-4893/8/3/380
CLOPE (Clustering with sLOPE) is a simple and fast histogram-based clustering algorithm for categorical data. However, given the same data set and the same input parameter, the clustering results of this algorithm can differ if the transactions are input in a different sequence. In this paper, a hierarchical clustering framework is proposed as an extension of CLOPE to generate stable and satisfactory clustering results based on an optimized agglomerative merge process. A new clustering profit is defined as the merge criterion, and a cluster graph structure is proposed to optimize the merge iteration process. Experiments conducted on two datasets both demonstrate that the agglomerative approach achieves stable clustering results with a better profit value, but takes much more time due to its higher complexity.

Algorithms 2015, 8(3), 380-394; Article; doi: 10.3390/a8030380; published 2015-06-26. Authors: Yefeng Li, Jiajin Le, Mei Wang.

Symmetry, Vol. 7, Pages 1151-1163: Multiple Minimum Support-Based Rare Graph Pattern Mining Considering Symmetry Feature-Based Growth Technique and the Differing Importance of Graph Elements
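For context, the profit criterion that CLOPE-style clustering maximizes can be sketched as follows; this follows the commonly cited definition with repulsion parameter r (cluster size over width^r, weighted by cluster cardinality) and is an illustrative reading, not the modified clustering profit defined in the paper above:

```python
def clope_profit(clusters, r=2.0):
    """Profit of a clustering of transactions (lists of items):
    sum over clusters of S(C) / W(C)^r * |C|, normalized by the total
    transaction count; S = total item occurrences, W = distinct items."""
    total_tx = sum(len(c) for c in clusters)
    gain = 0.0
    for c in clusters:  # each cluster is a list of transactions
        size = sum(len(t) for t in c)
        width = len({item for t in c for item in t})
        gain += size / width ** r * len(c)
    return gain / total_tx

# Two toy clusterings of the same four transactions: grouping similar
# transactions together yields a higher profit (taller, narrower
# histograms per cluster).
good = [[["a", "b"], ["a", "b", "c"]], [["d", "e"], ["d"]]]
bad = [[["a", "b"], ["d"]], [["a", "b", "c"], ["d", "e"]]]
print(clope_profit(good) > clope_profit(bad))  # True
```

The order sensitivity the abstract mentions arises because CLOPE greedily places each incoming transaction in the cluster maximizing this profit, so a different input sequence can lock in a different greedy path.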
http://www.mdpi.com/2073-8994/7/3/1151
Frequent graph pattern mining is one of the most interesting areas in data mining, and many researchers have developed a variety of approaches, integrating fundamental graph mining with other advanced mining techniques. However, previous graph mining approaches share a serious limitation: they cannot model important characteristics of the real world, because they process neither (1) differing element importance nor (2) multiple minimum support thresholds suitable for each graph element. In other words, graph elements in the real world have not only frequency but also their own importance, and the various elements composing graphs may require different thresholds according to their characteristics; traditional methods consider neither feature. To overcome these issues, we propose a new frequent graph pattern mining method that can deal with both differing element importance and multiple minimum support thresholds. Through the devised algorithm, we can obtain more meaningful graph pattern results with higher importance. We also demonstrate that the proposed algorithm outperforms previous state-of-the-art approaches in terms of graph pattern generation, runtime, and memory usage.

Symmetry 2015, 7(3), 1151-1163; Article; doi: 10.3390/sym7031151; published 2015-06-26. Authors: Gangin Lee, Unil Yun, Heungmo Ryang, Donggyu Kim.

Information, Vol. 6, Pages 300-313: ANFIS Based Time Series Prediction Method of Bank Cash Flow Optimized by Adaptive Population Activity PSO Algorithm
http://www.mdpi.com/2078-2489/6/3/300
To improve the accuracy and timeliness of all kinds of information in the cash business, and to address the low accuracy and stability of the data linkage between cash inventory forecasting and cash management information in commercial banks, a hybrid learning algorithm is proposed based on the adaptive population activity particle swarm optimization (APAPSO) algorithm combined with the least squares method (LMS) to optimize the parameters of an adaptive network-based fuzzy inference system (ANFIS) model. By introducing a metric of population diversity to preserve the diversity of the population, and by adaptively changing the inertia weight and learning factors, the optimization ability of the particle swarm optimization (PSO) algorithm is improved and its premature convergence problem is avoided. Simulation comparison experiments against the BP-LMS algorithm and standard PSO-LMS, using real commercial banks’ cash flow data, verify the effectiveness of the proposed improved PSO-ANFIS time series prediction method for bank cash flow. Simulation results show that the optimization is faster and the prediction accuracy is higher.

Information 2015, 6(3), 300-313; Article; doi: 10.3390/info6030300; published 2015-06-24. Authors: Jie-Sheng Wang, Chen-Xu Ning.

Information, Vol. 6, Pages 287-299: Robust Sparse Representation for Incomplete and Noisy Data
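A bare-bones global-best PSO loop with a linearly decreasing inertia weight, in the spirit of the adaptive inertia scheme described in the ANFIS/PSO abstract above (the sphere test function, swarm size, coefficients and schedule are illustrative assumptions, not the APAPSO algorithm):

```python
import random

def sphere(x):
    """Test objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)

def pso(dim=2, particles=20, iters=200, seed=0):
    """Global-best PSO; the inertia weight decays from 0.9 toward 0.4."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters  # decreasing inertia weight
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest

print(sphere(pso()))  # a small value near zero
```

A high inertia weight early on favors exploration; shrinking it over the run shifts the swarm toward exploitation, which is the premature-convergence trade-off the diversity metric in the paper is designed to manage.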
http://www.mdpi.com/2078-2489/6/3/287
Owing to its robustness to large sparse corruptions and its use of the discriminative information in class labels, sparse signal representation has become one of the most advanced techniques in pattern classification, computer vision, machine learning and related fields. This paper investigates the problem of robust face classification when a test sample has missing values. Firstly, we propose a classification method based on incomplete sparse representation. This representation reduces to an l1 minimization problem, and an alternating direction method of multipliers is employed to solve it. Then, we provide a convergence analysis and a model extension of incomplete sparse representation. Finally, we conduct experiments on two real-world face datasets and compare the proposed method with the nearest neighbor classifier and sparse representation-based classification. The experimental results demonstrate that the proposed method is superior in classification accuracy, completion of missing entries and recovery of noise.

Information 2015, 6(3), 287-299; Article; doi: 10.3390/info6030287; published 2015-06-24. Authors: Jiarong Shi, Xiuyun Zheng, Wei Yang.

Algorithms, Vol. 8, Pages 366-379: Identification of Dual-Rate Sampled Hammerstein Systems with a Piecewise-Linear Nonlinearity Using the Key Variable Separation Technique
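The l1 subproblem inside an alternating direction method of multipliers, as used in the sparse-representation article above, is typically solved by elementwise soft-thresholding; a minimal sketch of that proximal operator (the input vector and threshold are arbitrary examples):

```python
def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero
    by t, zeroing entries whose magnitude is at most t."""
    return [v - t if v > t else v + t if v < -t else 0.0 for v in x]

print(soft_threshold([3.0, -0.5, 1.25], 1.0))  # [2.0, 0.0, 0.25]
```

Each ADMM iteration alternates this shrinkage step with a least-squares update and a dual-variable update; the shrinkage is what drives most coefficients exactly to zero, producing the sparsity.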
http://www.mdpi.com/1999-4893/8/3/366
The identification difficulties for a dual-rate Hammerstein system lie in two aspects. First, the identification model of the system contains the products of the parameters of the nonlinear block and the linear block, and a standard least squares method cannot be directly applied to the model; second, the traditional single-rate discrete-time Hammerstein model cannot be used as the identification model for the dual-rate sampled system. In order to solve these problems, by combining the polynomial transformation technique with the key variable separation technique, this paper converts the Hammerstein system into a dual-rate linear regression model about all parameters (linear-in-parameter model) and proposes a recursive least squares algorithm to estimate the parameters of the dual-rate system. The simulation results verify the effectiveness of the proposed algorithm.

Algorithms 2015, 8(3), 366-379; Article; doi: 10.3390/a8030366; published 2015-06-24. Authors: Ying-Ying Wang, Xiang-Dong Wang, Dong-Qing Wang.

Informatics, Vol. 2, Pages 4-19: Human–Information Interaction with Complex Information for Decision-Making
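A generic recursive least squares update for a linear-in-parameter model \(y = \theta^T x\), of the kind applied to the converted dual-rate model above (the two-parameter system and the noise-free data are invented for illustration, not the paper's simulation example):

```python
def rls(data, dim=2, p0=1e6):
    """Recursive least squares for y = theta^T x with dim parameters.
    P is the (initially large) inverse-information matrix."""
    theta = [0.0] * dim
    P = [[p0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for x, y in data:
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = 1.0 + sum(x[i] * Px[i] for i in range(dim))
        K = [Px[i] / denom for i in range(dim)]          # gain vector
        err = y - sum(theta[i] * x[i] for i in range(dim))
        theta = [theta[i] + K[i] * err for i in range(dim)]
        xP = [sum(x[i] * P[i][j] for i in range(dim)) for j in range(dim)]
        P = [[P[i][j] - K[i] * xP[j] for j in range(dim)] for i in range(dim)]
    return theta

# Noise-free observations of y = 2*x0 + 3*x1.
data = [([1.0, 0.5], 3.5), ([0.2, 1.0], 3.4),
        ([1.5, -0.3], 2.1), ([0.7, 0.9], 4.1)]
theta = rls(data)
print(theta)  # close to [2.0, 3.0]
```

In the dual-rate setting, each regressor x is built from the available slow-rate inputs and outputs after the key-variable-separation rewrite, so the same update applies unchanged.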
http://www.mdpi.com/2227-9709/2/2/4
Human–information interaction (HII) differs between simple and complex information because people’s goals and information needs differ between the two cases. With complex information, comprehension comes from understanding the relationships and interactions within the information, many of which are factors outside of a design team’s control. Yet, a design team must consider all of these within an HII design in order to maximize the communication potential. This paper examines why simple and complex information call for different design strategies and how those strategies differ.Informatics2015-06-2322Article10.3390/informatics20200044192227-97092015-06-23doi: 10.3390/informatics2020004Michael Albers<![CDATA[Axioms, Vol. 4, Pages 194-212: On T-Characterized Subgroups of Compact Abelian Groups]]>
http://www.mdpi.com/2075-1680/4/2/194
A sequence \(\{ u_n \}_{n\in \omega}\) in an abstract additively written Abelian group \(G\) is called a \(T\)-sequence if there is a Hausdorff group topology on \(G\) relative to which \(\lim_n u_n =0\). We say that a subgroup \(H\) of an infinite compact Abelian group \(X\) is \(T\)-characterized if there is a \(T\)-sequence \(\mathbf{u} =\{ u_n \}\) in the dual group of \(X\), such that \(H=\{ x\in X: \; (u_n, x)\to 1 \}\). We show that a closed subgroup \(H\) of \(X\) is \(T\)-characterized if and only if \(H\) is a \(G_\delta\)-subgroup of \(X\) and the annihilator of \(H\) admits a Hausdorff minimally almost periodic group topology. All closed subgroups of an infinite compact Abelian group \(X\) are \(T\)-characterized if and only if \(X\) is metrizable and connected. We prove that every compact Abelian group \(X\) of infinite exponent has a \(T\)-characterized subgroup which is not an \(F_{\sigma}\)-subgroup of \(X\), which gives a negative answer to Problem 3.3 in Dikranjan and Gabriyelyan (Topol. Appl. 2013, 160, 2427–2442).Axioms2015-06-1942Article10.3390/axioms40201941942122075-16802015-06-19doi: 10.3390/axioms4020194Saak Gabriyelyan<![CDATA[Computation, Vol. 3, Pages 326-335: Fast Computation of the Non-Central Chi Square PDF Outside the HDR Under a Requisite Precision Constraint]]>
http://www.mdpi.com/2079-3197/3/2/326
Computation of the non-central chi square probability density function is encountered in diverse fields of applied statistics and engineering. The distribution is commonly computed as a Poisson mixture of central chi square densities, where the terms of the sum are computed starting with the integer nearest the non-centrality parameter. However, for computation of values in either tail region these terms are not the most significant, and starting with them results in an increased computational load without a corresponding increase in accuracy. The most significant terms are shown to be a function of the non-centrality parameter, the degrees of freedom and the point of evaluation. A computationally simple approximation to the location of the most significant terms is presented, along with the exact solution based on a Newton–Raphson iteration. A quadratic approximation of the interval of summation is also developed in order to meet a requisite number of significant digits of accuracy. Computationally efficient recursions are used over these improved intervals. The method provides a means of computing the non-central chi square probability density function to a requisite accuracy as a Poisson mixture over all domains of interest.Computation2015-06-1932Article10.3390/computation30203263263352079-31972015-06-19doi: 10.3390/computation3020326Paul Gendron<![CDATA[Algorithms, Vol. 8, Pages 336-365: MAKHA—A New Hybrid Swarm Intelligence Global Optimization Algorithm]]>
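The Poisson-mixture evaluation can be sketched as below: start near a guessed dominant index and sum outward until the added terms are negligible. The crude starting guess (the mode of the Poisson weights) stands in for the paper's approximate and Newton–Raphson solutions for the most significant term, and the function names are our own.

```python
import math

def central_chi2_pdf(x, k):
    """PDF of the central chi-square distribution with k degrees of freedom."""
    return math.exp((k / 2.0 - 1.0) * math.log(x) - x / 2.0
                    - (k / 2.0) * math.log(2.0) - math.lgamma(k / 2.0))

def ncx2_pdf(x, k, lam, tol=1e-14):
    """Non-central chi-square PDF as a Poisson mixture of central chi-square
    densities, summed outward from a guessed most-significant index."""
    if lam == 0.0:
        return central_chi2_pdf(x, k)
    half = lam / 2.0
    def term(i):
        # Poisson(half) weight times the central density with k + 2i dof
        return math.exp(-half + i * math.log(half) - math.lgamma(i + 1)) \
               * central_chi2_pdf(x, k + 2 * i)
    i0 = int(round(half))       # crude guess: mode of the Poisson weights
    total = term(i0)
    lo, hi = i0 - 1, i0 + 1
    while True:
        t_hi = term(hi)
        t_lo = term(lo) if lo >= 0 else 0.0
        total += t_hi + t_lo
        lo, hi = lo - 1, hi + 1
        if t_hi + t_lo <= tol * total:
            return total
```

Because the terms decay factorially away from the dominant index, the outward sum terminates quickly and agrees with a brute-force summation over all indices.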
http://www.mdpi.com/1999-4893/8/2/336
The search for efficient and reliable bio-inspired optimization methods continues to be an active topic of research due to the wide application of the developed methods. In this study, we developed a reliable and efficient optimization method via the hybridization of two bio-inspired swarm intelligence optimization algorithms, namely, the Monkey Algorithm (MA) and the Krill Herd Algorithm (KHA). The hybridization made use of the efficient steps in each of the two original algorithms and provided a better balance between the exploration/diversification steps and the exploitation/intensification steps. The new hybrid algorithm, MAKHA, was rigorously tested with 27 benchmark problems and its results were compared with the results of the two original algorithms. MAKHA proved to be considerably more reliable and more efficient in tested problems.Algorithms2015-06-1982Article10.3390/a80203363363651999-48932015-06-19doi: 10.3390/a8020336Ahmed KhalilSeif-Eddeen FateenAdrián Bonilla-Petriciolet<![CDATA[Symmetry, Vol. 7, Pages 1122-1150: Symmetry-Breaking as a Paradigm to Design Highly-Sensitive Sensor Systems]]>
http://www.mdpi.com/2073-8994/7/2/1122
A large class of dynamic sensors have nonlinear input-output characteristics, often corresponding to a bistable potential energy function that controls the evolution of the sensor dynamics. These sensors include magnetic field sensors, e.g., the simple fluxgate magnetometer and the superconducting quantum interference device (SQUID), ferroelectric sensors and mechanical sensors, e.g., acoustic transducers, made with piezoelectric materials. Recently, the possibilities offered by new technologies and materials in realizing miniaturized devices with improved performance have led to renewed interest in a new generation of inexpensive, compact and low-power fluxgate magnetometers and electric-field sensors. In this article, we review the analysis of an alternative approach: a symmetry-based design for highly-sensitive sensor systems. The design incorporates a network architecture that produces collective oscillations induced by the coupling topology, i.e., which sensors are coupled to each other. Under certain symmetry groups, the oscillations in the network emerge via an infinite-period bifurcation, so that at birth, they exhibit a very large period of oscillation. This characteristic renders the oscillatory wave highly sensitive to symmetry-breaking effects, thus leading to a new detection mechanism. Model equations and bifurcation analysis are discussed in great detail. Results from experimental works on networks of fluxgate magnetometers are also included.Symmetry2015-06-1972Review10.3390/sym7021122112211502073-89942015-06-19doi: 10.3390/sym7021122Antonio PalaciosVisarath InPatrick Longhini<![CDATA[Systems, Vol. 3, Pages 60-61: Introduction to the Special Issue “Dynamic Decision-Making in Controlled Experiments”]]>
http://www.mdpi.com/2079-8954/3/2/60
While research so far has mostly focused on static (one-shot) decisions, it is obvious that many (or even most) crucial real-life decisions are dynamic. [...]Systems2015-06-1832Editorial10.3390/systems302006060612079-89542015-06-18doi: 10.3390/systems3020060Andreas GrößlerHendrik Stouten<![CDATA[Algorithms, Vol. 8, Pages 321-335: Time Domain Simulation of Sound Waves Using Smoothed Particle Hydrodynamics Algorithm with Artificial Viscosity]]>
http://www.mdpi.com/1999-4893/8/2/321
Smoothed particle hydrodynamics (SPH), as a Lagrangian, meshfree method, holds promise for solving acoustic problems, such as combustion noise, bubble acoustics, etc., and has gradually come into use for sound wave computation. However, unphysical oscillations in sound wave simulations cannot be ignored. In this paper, an artificial viscosity term is added to the standard SPH algorithm used for solving linearized acoustic wave equations. SPH algorithms with and without artificial viscosity are both built to compute sound propagation and interference in the time domain. Then, the effects of the smoothing kernel function, particle spacing and Courant number on the SPH algorithms for sound waves are discussed. A comparison of SPH simulation results with theoretical solutions shows that the SPH algorithm with the added artificial viscosity term agrees well with the theoretical solution by effectively reducing unphysical oscillations. In addition, suitable computational parameters for the SPH algorithms are proposed based on an analysis of the sound pressure errors in simulating sound waves.Algorithms2015-06-1782Article10.3390/a80203213213351999-48932015-06-17doi: 10.3390/a8020321Xu LiTao ZhangYong Zhang<![CDATA[Robotics, Vol. 4, Pages 223-252: Learning Task Knowledge from Dialog and Web Access]]>
http://www.mdpi.com/2218-6581/4/2/223
We present KnoWDiaL, an approach for Learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes an autonomous agent that performs tasks requested by humans through speech. The agent needs to “understand” the request (i.e., to fully ground the task until it can proceed to plan for and execute it). KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the Web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust to speech recognition errors, and is able to learn commands involving referring expressions in an open domain (i.e., without requiring a lexicon). We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from dialog and Web access through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and present a few corresponding example sequences extracted from captured videos.Robotics2015-06-1742Article10.3390/robotics40202232232522218-65812015-06-17doi: 10.3390/robotics4020223Vittorio PereraRobin SoetensThomas KollarMehdi SamadiYichao SunDaniele NardiRené van de MolengraftManuela Veloso<![CDATA[JSAN, Vol. 4, Pages 67-153: Directional Medium Access Control (MAC) Protocols in Wireless Ad Hoc and Sensor Networks: A Survey]]>
http://www.mdpi.com/2224-2708/4/2/67
This survey paper presents the state-of-the-art directional medium access control (MAC) protocols in wireless ad hoc and sensor networks (WAHSNs). The key benefits of directional antennas over omni-directional antennas are longer communication range, less multipath interference, more spatial reuse, more secure communications, higher throughput and reduced latency. However, directional antennas give rise to single-/multi-channel directional hidden/exposed terminals, deafness, neighborhood head-of-line blocking, and MAC-layer capture, which need to be overcome. Addressing these problems while preserving the benefits of directional antennas leads to many classes of directional MAC protocols in WAHSNs. The classes of directional MAC protocols presented in this survey include single-channel, multi-channel, cooperative and cognitive directional MACs. Single-channel directional MAC protocols can be classified as contention-based, non-contention-based or hybrid, while multi-channel directional MAC protocols commonly use a common control channel for control packets/tones and one or more data channels for directional data transmissions. Cooperative directional MAC protocols improve throughput in WAHSNs via directional multi-rate/single-relay/multiple-relay/two frequency channels/polarization, while cognitive directional MAC protocols build on conventional directional MAC protocols with new twists to address dynamic spectrum access. All of these directional MAC protocols are the pillars for the design of future directional MAC protocols in WAHSNs.Journal of Sensor and Actuator Networks2015-06-1642Article10.3390/jsan4020067671532224-27082015-06-16doi: 10.3390/jsan4020067David WongQian ChenFrancois Chin<![CDATA[Econometrics, Vol. 3, Pages 443-465: Bayesian Approach to Disentangling Technical and Environmental Productivity]]>
http://www.mdpi.com/2225-1146/3/2/443
This paper models the firm’s production process as a system of simultaneous technologies for desirable and undesirable outputs. Desirable outputs are produced by transforming inputs via the conventional transformation function, whereas (consistent with the material balance condition) undesirable outputs are by-produced via the so-called “residual generation technology”. By separating the production of undesirable outputs from that of desirable outputs, not only do we ensure that undesirable outputs are not modeled as inputs and thus satisfy costly disposability, but we are also able to differentiate between the traditional (desirable-output-oriented) technical productivity and the undesirable-output-oriented environmental, or so-called “green”, productivity. To measure the latter, we derive a Solow-type Divisia environmental productivity index which, unlike conventional productivity indices, allows crediting the ceteris paribus reduction in undesirable outputs. Our index also provides a meaningful way to decompose environmental productivity into environmental technological and efficiency changes.Econometrics2015-06-1632Article10.3390/econometrics30204434434652225-11462015-06-16doi: 10.3390/econometrics3020443Emir MalikovSubal KumbhakarEfthymios Tsionas<![CDATA[Mathematics, Vol. 3, Pages 527-562: The Schwartz Space: Tools for Quantum Mechanics and Infinite Dimensional Analysis]]>
http://www.mdpi.com/2227-7390/3/2/527
An account of the Schwartz space of rapidly decreasing functions as a topological vector space with additional special structures is presented in a manner that provides all the essential background ideas for some areas of quantum mechanics along with infinite-dimensional distribution theory.Mathematics2015-06-1632Article10.3390/math30205275275622227-73902015-06-16doi: 10.3390/math3020527Jeremy BecnelAmbar Sengupta<![CDATA[Robotics, Vol. 4, Pages 194-222: Deliberation on Design Strategies of Automatic Harvesting Systems: A Survey]]>
http://www.mdpi.com/2218-6581/4/2/194
In Asia, decreasing farmer and labor populations due to various factors is a serious problem that leads to increases in labor costs, higher harvesting input energy consumption and less resource utilization. To solve these problems, researchers are engaged in providing long term and low-tech alternatives in terms of mechanization and automation of agriculture by way of efficient, low cost and easy to use solutions. This paper reviews various design strategies in recognition and picking systems, as well as developments in fruit harvesting robots during the past 30 years in several countries. The main objectives of this paper are to gather all information on fruit harvesting robots; focus on the technical developments so far achieved in picking devices; highlight the problems still to be solved; and discuss the future prospects of fruit harvesting robots.Robotics2015-06-1642Review10.3390/robotics40201941942222218-65812015-06-16doi: 10.3390/robotics4020194Shivaji Bachche<![CDATA[Symmetry, Vol. 7, Pages 1080-1121: An Extended Detailed Investigation of First and Second Order Supersymmetries for Off-Shell N = 2 and N = 4 Supermultiplets]]>
http://www.mdpi.com/2073-8994/7/2/1080
This paper investigates the d = 4, N = 4 Abelian, global Super-Yang Mills system (SUSY-YM). It is shown how the N = 2 Fayet Hypermultiplet (FH) and N = 2 vector multiplet (VM) are embedded within. The central charges and internal symmetries provide a plethora of information as to further symmetries of the Lagrangian. Several of these symmetries are calculated to second order. It is hoped that investigations such as these may yield avenues to help solve the auxiliary field closure problem for d = 4, N = 4, SUSY-YM and the d = 4, N = 2 Fayet-Hypermultiplet, without using an infinite number of auxiliary fields.Symmetry2015-06-1672Article10.3390/sym7021080108011212073-89942015-06-16doi: 10.3390/sym7021080Sylvester GatesJames ParkerVincent RodgersLeo RodriguezKory Stiffler<![CDATA[IJGI, Vol. 4, Pages 1013-1032: An Open Source WebGIS Application for Civic Education on Peace and Conflict]]>
http://www.mdpi.com/2220-9964/4/2/1013
By developing an interactive open source-based WebGIS information portal on war and peace for the online services of the Federal Agency for Civic Education, the Bonn International Center for Conversion (BICC) translates scientific knowledge into easily understandable and subsumable up-to-date information for the general public and young scholars. By aggregating globally scattered data and information on various peace- and conflict-related topics as well as presenting their spatial visualization through interactive maps, BICC contributes to a better understanding of peace and conflict processes. Users are invited to explore the relationship of various variables and their decisive roles in such processes.ISPRS International Journal of Geo-Information2015-06-1542Article10.3390/ijgi4021013101310322220-99642015-06-15doi: 10.3390/ijgi4021013Lars Wirkus<![CDATA[Algorithms, Vol. 8, Pages 309-320: An Optimal Eighth-Order Derivative-Free Family of Potra-Pták’s Method]]>
http://www.mdpi.com/1999-4893/8/2/309
In this paper, we present a new three-step derivative-free family based on Potra-Pták’s method for solving nonlinear equations numerically. In terms of computational cost, each member of the proposed family requires only four functional evaluations per full iteration to achieve optimal eighth-order convergence. Further, computational results demonstrate that the proposed methods are highly efficient as compared with many well-known methods.Algorithms2015-06-1582Article10.3390/a80203093093201999-48932015-06-15doi: 10.3390/a8020309Munish KansalVinay KanwarSaurabh Bhatia<![CDATA[Mathematics, Vol. 3, Pages 510-526: Effective Summation and Interpolation of Series by Self-Similar Root Approximants]]>
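For orientation, the underlying two-step Potra–Pták scheme can be written in a derivative-free, Steffensen-type form as below, replacing the derivative with the divided difference f[x, x + f(x)]. This is only a sketch of the base scheme under our own naming; the paper's optimal eighth-order family adds a third step and weight functions that we do not reproduce here.

```python
def potra_ptak_derivative_free(f, x0, tol=1e-12, max_iter=50):
    """Two-step Potra-Ptak iteration with the derivative replaced by the
    Steffensen-type divided difference f[x, x + f(x)] (base scheme only,
    not the paper's optimal eighth-order family)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = (f(x + fx) - fx) / fx      # divided-difference slope estimate
        y = x - fx / d                 # Newton-like first step
        x = x - (fx + f(y)) / d        # Potra-Ptak correction
    return x
```

Each full step of this base scheme uses three functional evaluations and no derivatives, which is the trade-off the eighth-order family builds on.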
http://www.mdpi.com/2227-7390/3/2/510
We describe a simple analytical method for the effective summation of series, including divergent series. The method is based on self-similar approximation theory resulting in self-similar root approximants. The method is shown to be general and applicable to different problems, as illustrated by a number of examples. The accuracy of the method is no worse, and in many cases better, than that of Padé approximants, when the latter can be defined.Mathematics2015-06-1532Article10.3390/math30205105105262227-73902015-06-15doi: 10.3390/math3020510Simon GluzmanVyacheslav Yukalov<![CDATA[Symmetry, Vol. 7, Pages 1061-1079: Symmetry Extraction in High Sensitivity Melanoma Diagnosis]]>
http://www.mdpi.com/2073-8994/7/2/1061
Melanoma diagnosis depends on the experience of doctors. Symmetry is one of the most important factors to measure, since asymmetry indicates uncontrolled cell growth, which leads to melanoma. A high-sensitivity system for detecting melanoma in the diagnosis of melanocytic diseases is proposed here. Two different sets of features are extracted based on the importance of the ABCD rule and symmetry evaluation to develop a new architecture. Support Vector Machines are used to classify the extracted sets by means of both an alternative labeling method and a structure divided into two different classifiers that prioritize sensitivity. Although the feature extraction is based on former works, the novelty lies in the importance given to symmetry and in the proposed architecture, which combines two different feature sets to obtain a high sensitivity, prioritizing the medical aspect of diagnosis. In particular, a database provided by Hospital Universitario de Gran Canaria Doctor Negrín was used for testing, obtaining a sensitivity of 100% and a specificity of 66.66% with a leave-one-out validation method. These results show that 66.66% of biopsies would be avoided if this system were applied to lesions that are difficult for doctors to classify.Symmetry2015-06-1572Article10.3390/sym7021061106110792073-89942015-06-15doi: 10.3390/sym7021061Elyoenai Guerra-SeguraCarlos Travieso-GonzálezJesús Alonso-HernándezAntonio Ravelo-GarcíaGregorio Carretero<![CDATA[Electronics, Vol. 4, Pages 359-379: HMC-MAC Protocol for High Data Rate Wireless Sensor Networks]]>
http://www.mdpi.com/2079-9292/4/2/359
Using multiple channels in wireless sensor networks helps increase the overall throughput and avoid interference. In addition, introducing multi-interface nodes further helps increase the packet delivery rate for those specific nodes. In this paper, we evaluate a channel allocation method based on neighborhood discovery up to 3 hops and a hybrid MAC protocol designed for high data rate wireless sensor networks. We propose a network segmentation that takes into consideration the existence of a multi-interface sink in order to further increase the packet delivery ratio. Our protocol, called HMC-MAC, uses Time Division Multiple Access (TDMA) for sequencing node activity, and Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) together with Frequency Division Multiple Access (FDMA) for simultaneous data exchange. We evaluated our method using the NS-2 simulator, and the results show that the HMC-MAC protocol improves the overall network performance compared to other protocols, especially under high data rates and bursty traffic.Electronics2015-06-1542Article10.3390/electronics40203593593792079-92922015-06-15doi: 10.3390/electronics4020359Gerard ChalhoubRana DiabMichel Misson<![CDATA[Electronics, Vol. 4, Pages 339-358: Use of Three-Level Power Converters in Wind-Driven Permanent-Magnet Synchronous Generators with Unbalanced Loads]]>
http://www.mdpi.com/2079-9292/4/2/339
This paper describes the design and implementation of three-level power converters for wind-driven permanent-magnet synchronous generators with unbalanced loads. To increase voltage stress and reduce current harmonics in the electrical power generated by a wind generator, a three-phase, three-level rectifier is used. Because a synchronous rotating frame is used on the AC-input side, the use of a neutral-point-clamped controller is proposed to increase the power factor to unity and reduce current harmonics. Furthermore, a novel six-leg inverter is proposed for transferring energy from the DC voltage to a three-phase, four-wire AC source with a constant voltage and a constant frequency. The power converters also contain output transformers and filters for power buffering and filtering, respectively. All three output phase voltages are fed back to control the inverter output during load variations. A digital signal processor is used as the core control device for implementing a 1.5 kV, 75 kW drive system. Experimental data show that the power factor is successfully increased to unity and the total current harmonic distortion is 3.2% on the AC-input side. The entire system can attain an efficiency of 91%, and the voltage error between the upper and lower capacitors is approximately zero. Experimental results that confirm the high performance of the proposed system are presented.Electronics2015-06-1542Article10.3390/electronics40203393393582079-92922015-06-15doi: 10.3390/electronics4020339Ming-Hung Chen<![CDATA[Mathematics, Vol. 3, Pages 487-509: The Fractional Orthogonal Difference with Applications]]>
http://www.mdpi.com/2227-7390/3/2/487
This paper is a follow-up to a previous paper by the author, published in the journal Mathematics in 2015, which treats the so-called continuous fractional orthogonal derivative. In this paper, we treat the discrete case using the fractional orthogonal difference. The theory is illustrated with an application of a fractional differentiating filter. In particular, graphs of the modulus of the frequency response are presented. These make clear that, for a good insight into the behavior of a fractional differentiating filter, one has to look at the modulus of its frequency response in a log-log plot, rather than at plots in the time domain.Mathematics2015-06-1232Article10.3390/math30204874875092227-73902015-06-12doi: 10.3390/math3020487Enno Diekema<![CDATA[IJGI, Vol. 4, Pages 989-1012: Modelling of Building Interiors with Mobile Phone Sensor Data]]>
http://www.mdpi.com/2220-9964/4/2/989
Creating as-built plans of building interiors is a challenging task. In this paper we present a semi-automatic modelling system for creating residential building interior plans and their integration with existing map data to produce building models. Taking a set of imprecise measurements made with an interactive mobile phone room mapping application, the system performs spatial adjustments in accordance with soft and hard constraints imposed on the building plan geometry. The approach uses an optimisation model that exploits a high accuracy building outline, such as can be found in topographic map data, and the building topology to improve the quality of interior measurements and generate a standardised output. We test our system on building plans of five residential homes. Our evaluation shows that the approach enables construction of accurate interior plans from imprecise measurements. The experiments report an average accuracy of 0.24 m, close to the 0.20 m recommended by the CityGML LoD4 specification.ISPRS International Journal of Geo-Information2015-06-1242Article10.3390/ijgi402098998910122220-99642015-06-12doi: 10.3390/ijgi4020989Julian RosserJeremy MorleyGavin Smith<![CDATA[IJGI, Vol. 4, Pages 974-988: Routing in Dense Human Crowds Using Smartphone Movement Data and Optical Aerial Imagery]]>
http://www.mdpi.com/2220-9964/4/2/974
In this paper, we propose a navigation approach for smartphones that enables visitors of major events to avoid crowded areas or narrow streets and to navigate out of dense crowds quickly. Two types of sensor data are integrated. Real-time optical images acquired and transmitted by an airborne camera system are used to compute an estimation of a crowd density map. For this purpose, a patch-based approach with a Gabor filter bank for texture classification in combination with an interest point detector and a smoothing function is applied. Furthermore, the crowd density is estimated based on location and movement speed of in situ smartphone measurements. This information allows for the enhancement of the overall crowd density layer. The composed density information is input to a least-cost routing workflow. Two possible use cases are presented, namely (i) an emergency application and (ii) a basic routing application. A prototypical implementation of the system is conducted as proof of concept. Our approach is capable of increasing the security level for major events. Visitors are able to avoid dense crowds by routing around them, while security and rescue forces are able to find the fastest way into the crowd.ISPRS International Journal of Geo-Information2015-06-1242Article10.3390/ijgi40209749749882220-99642015-06-12doi: 10.3390/ijgi4020974Florian HillenOliver MeynbergBernhard Höfle<![CDATA[Future Internet, Vol. 7, Pages 196-213: Self-Optimization of Handover Parameters for Long-Term Evolution with Dual Wireless Mobile Relay Nodes]]>
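The least-cost routing step over the composed density layer can be sketched with a standard Dijkstra search, treating each grid cell's estimated crowd density as the cost of entering it. This is our own minimal formulation of least-cost routing, not the paper's implementation.

```python
import heapq

def least_cost_route(density, start, goal):
    """Dijkstra least-cost routing over a crowd-density grid: each cell's
    density value is the cost of stepping into it (4-connected moves).
    Returns the path as a list of (row, col) cells and its total cost."""
    rows, cols = len(density), len(density[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float('inf')):
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + density[nr][nc]
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                  # walk predecessors back to start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

On a small grid with one dense cell, the returned route detours around the dense cell whenever the detour is cheaper than crossing it.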
http://www.mdpi.com/1999-5903/7/2/196
In recent years, train passengers have been transferring increasing amounts of data using mobile devices. Wireless networks with mobile relay nodes support broadband wireless communications for passengers of such vehicles using backhaul links. However, the mobility management entity reuses the handover procedure of existing user equipment, which makes Long-Term Evolution handover unsuitable for user equipment within the cabins of vehicles traveling at high speed. In this paper, we propose a self-optimizing handover hysteresis scheme with dual mobile relay nodes for wireless networks in high-speed mobile environments. The proposed mechanism tunes the hysteresis and cell individual offset handover parameters based on the velocity of the vehicle and the handover performance indicator, which affect the handover triggering decision and performance. Simulation results comparing the performance of the proposed scheme with that of an existing scheme show that the proposed scheme can reduce the number of radio link failures and service interruptions during handover procedures.Future Internet2015-06-1172Article10.3390/fi70201961962131999-59032015-06-11doi: 10.3390/fi7020196Battulga DavaasambuuKeping YuTakuro Sato<![CDATA[Algorithms, Vol. 8, Pages 292-308: Training Artificial Neural Networks by a Hybrid PSO-CS Algorithm]]>
http://www.mdpi.com/1999-4893/8/2/292
Presenting a satisfactory and efficient training algorithm for artificial neural networks (ANN) has been a challenging task in the supervised learning area. Particle swarm optimization (PSO) is one of the most widely used algorithms due to its simplicity of implementation and fast convergence speed. On the other hand, Cuckoo Search (CS) algorithm has been proven to have a good ability for finding the global optimum; however, it has a slow convergence rate. In this study, a hybrid algorithm based on PSO and CS is proposed to make use of the advantages of both PSO and CS algorithms. The proposed hybrid algorithm is employed as a new training method for feedforward neural networks (FNNs). To investigate the performance of the proposed algorithm, two benchmark problems are used and the results are compared with those obtained from FNNs trained by original PSO and CS algorithms. The experimental results show that the proposed hybrid algorithm outperforms both PSO and CS in training FNNs.Algorithms2015-06-1182Article10.3390/a80202922923081999-48932015-06-11doi: 10.3390/a8020292Jeng-Fung ChenQuang DoHo-Nien Hsieh<![CDATA[Symmetry, Vol. 7, Pages 1043-1060: Kinematic Skeleton Based Control of a Virtual Simulator for Military Training]]>
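The PSO half of the hybrid can be sketched as a plain particle swarm minimizing a benchmark function. Parameter values (`w`, `c1`, `c2`, swarm size) are illustrative defaults, and the Cuckoo Search component and the FNN training objective are omitted; this is not the paper's hybrid algorithm.

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization: each velocity is pulled toward the
    particle's personal best and the swarm's global best position."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# sphere benchmark: global minimum 0 at the origin
best, best_val = pso_minimize(lambda x: sum(t * t for t in x), dim=5)
```

In the hybrid setting, the decision variables would instead be the flattened weight vector of the feedforward network and `f` its training error.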
http://www.mdpi.com/2073-8994/7/2/1043
Virtual simulation technology has been considered as a highly efficient and cost-effective solution for a soldier training system, and evolved into diverse combinations of hardware and software. To maximize the virtual reality effect within a restricted space, a locomotion interface such as an omni-directional treadmill is introduced as a major component of a virtual simulator, therefore real time interaction between human and the virtual simulator becomes very important. Displacement and heading changes of the trainee are crucial information to control the virtual simulator when we implement highly reactive motion control for the omni-directional treadmill and interaction control of the virtual contents. This paper proposes a control parameter estimation algorithm for the virtual training simulator by using two types of motion capture sensors and presents the experimental results. Kinematic joint positions are analyzed to estimate the trainee’s location and velocity for feedback and feedforward control of the omni-directional treadmill. The accuracy of two approaches is evaluated by comparing with the reference system, which gives a ground truth value.Symmetry2015-06-1172Article10.3390/sym7021043104310602073-89942015-06-11doi: 10.3390/sym7021043Soyeon LeeSangjoon ParkKyoil ChungChoongho Cho<![CDATA[Electronics, Vol. 4, Pages 329-338: Unified Multi-Layer among Software Defined Multi-Domain Optical Networks (Invited)]]>
http://www.mdpi.com/2079-9292/4/2/329
Software-defined networking (SDN), enabled by the OpenFlow protocol, has gained popularity because it makes the network programmable and able to accommodate both fixed and flexible bandwidth services. In this paper, we present a unified multi-layer (UML) architecture with multiple controllers and a dynamic orchestra plane (DOP) for software-defined multi-domain optical networks. The proposed architecture can shield the differences among various optical devices from multiple vendors and the details of connecting heterogeneous networks. Cross-domain services with on-demand bandwidth can be deployed via unified interfaces provided by the dynamic orchestra plane. Additionally, the globalization strategy and a practical capture of signal processing are presented based on the architecture. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of our OpenFlow-based testbed. The performance of the globalization strategy under a heavy traffic load scenario is also quantitatively evaluated based on the UML architecture and compared with other strategies in terms of blocking probability, average hops, and average resource consumption.Electronics2015-06-1142Article10.3390/electronics40203293293382079-92922015-06-11doi: 10.3390/electronics4020329Hui YangYadi CuiJie Zhang<![CDATA[Information, Vol. 6, Pages 275-286: ODQ: A Fluid Office Document Query Language]]>
http://www.mdpi.com/2078-2489/6/2/275
Fluid office documents, as semi-structured data often represented in Extensible Markup Language (XML), are an important part of Big Data. These office documents have different formats, and their matching Application Programming Interfaces (APIs) depend on the development platform and version, which makes custom development and information retrieval from them difficult. To solve this problem, we have been developing an office document query (ODQ) language which provides a uniform method to retrieve content from documents with different formats and versions. ODQ builds a common document model ontology to conceal the format details of documents and provides a uniform operation interface to handle office documents with different formats. The results show that ODQ has advantages in format independence and can help users develop document-processing systems with good interoperability.Information2015-06-1162Article10.3390/info60202752752862078-24892015-06-11doi: 10.3390/info6020275Xuhong LiuNing LiYunmei ShiXia Hou<![CDATA[Information, Vol. 6, Pages 258-274: Efficiency and Privacy Enhancement for a Track and Trace System of RFID-Based Supply Chains]]>
http://www.mdpi.com/2078-2489/6/2/258
One of the major applications of Radio Frequency Identification (RFID) technology is in supply chain management, as it promises to provide real-time visibility based on the function of track and trace. However, such an RFID-based track and trace system raises new security and privacy challenges due to the restricted resources of tags. In this paper, we refine three privacy-related models (i.e., privacy, path unlinkability, and tag unlinkability) of RFID-based track and trace systems, and clarify the relations among these privacy models. Specifically, we have proven that privacy is equivalent to path unlinkability and that tag unlinkability implies privacy. Our results simplify the privacy concept and protocol design for RFID-based track and trace systems. Furthermore, we propose an efficient track and trace scheme, Tracker+, which allows for authentic and private identification of RFID-tagged objects in supply chains. In Tracker+, no computational ability is required of tags; only a few bytes of storage (as on EPC Class 1 Gen 2 tags) are needed to store the tag state. Indeed, Tracker+ reduces the memory requirements for each tag by one group element compared to the Tracker presented in the literature. Moreover, Tracker+ provides privacy against supply-chain insider attacks.Information2015-06-0862Article10.3390/info60202582582742078-24892015-06-08doi: 10.3390/info6020258Xunjun ChenYuelong ZhuJiguo LiYamin WenZheng Gong<![CDATA[Computation, Vol. 3, Pages 299-325: On Roof Geometry for Urban Wind Energy Exploitation in High-Rise Buildings]]>
http://www.mdpi.com/2079-3197/3/2/299
The European program HORIZON2020 aims to have 20% of electricity produced by renewable sources. The building sector represents 40% of the European Union energy consumption. Reducing energy consumption in buildings is therefore a priority for energy efficiency. The present investigation explores the most adequate roof shapes compatible with the placement of different types of small wind energy generators on high-rise buildings for urban wind energy exploitation. The wind flow around traditional state-of-the-art roof shapes is considered. In addition, the influence of the roof edge on the wind flow on high-rise buildings is analyzed. These geometries are investigated, both qualitatively and quantitatively, and the turbulence intensity threshold for horizontal axis wind turbines is considered. The most adequate shapes for wind energy exploitation are identified, studying vertical profiles of velocity, turbulent kinetic energy and turbulence intensity. Curved shapes are the most interesting building roof shapes from the wind energy exploitation point of view, leading to the highest speed-up and the lowest turbulence intensity.Computation2015-06-1032Article10.3390/computation30202992993252079-31972015-06-10doi: 10.3390/computation3020299Francisco Toja-SilvaCarlos PeraltaOscar Lopez-GarciaJorge NavarroIgnacio Cruz<![CDATA[Robotics, Vol. 4, Pages 169-193: How? Why? What? Where? When? Who? Grounding Ontology in the Actions of a Situated Social Agent]]>
http://www.mdpi.com/2218-6581/4/2/169
Robotic agents are spreading, incarnated as embodied entities exploring the tangible world and interacting with us, or as virtual agents crawling the web, parsing and generating data. In both cases, they require: (i) processes to acquire information; (ii) structures to model and store information as usable knowledge; (iii) reasoning systems to interpret the information; and (iv) finally, ways to express their interpretations. The H5W (How, Why, What, Where, When, Who) framework is a conceptualization of the problems faced by any agent situated in a social environment, which has defined several robotic studies. We introduce the H5W framework through a description of its underlying neuroscience and the psychological considerations it embodies, and we then demonstrate a specific implementation of the framework. We focus on the motivation and implications of the pragmatic decisions we have taken. We report the numerous studies that have relied upon this technical implementation as proof of its robustness and polyvalence; moreover, we conduct an additional validation of its applicability to the natural language domain by designing an information exchange task as a benchmark.Robotics2015-06-1042Article10.3390/robotics40201691691932218-65812015-06-10doi: 10.3390/robotics4020169Stephane LalleePaul Verschure<![CDATA[Future Internet, Vol. 7, Pages 184-195: Utilizing the ECHO Model in the Veterans Health Affairs System: Guidelines for Setup, Operations and Preliminary Findings]]>
http://www.mdpi.com/1999-5903/7/2/184
Background: In 2011, the Veterans Health Administration (VHA) consulted with the Project ECHO (Extension for Community Healthcare Outcomes) team at the University of New Mexico, Albuquerque, to reproduce their successful model within the VHA. Methods: The VHA launched SCAN-ECHO (Specialty Care Access Network-Extension for Community Healthcare Outcomes), a multisite videoconferencing system to conduct live clinical consultations between specialists at a VHA Medical Center (hospital) and primary care providers stationed at satellite VHA CBOCs (Community-Based Outpatient Clinics). Results: Analysis of the first three years rendered a mean attendee satisfaction of 89.53% and a consultation satisfaction score of 88.10%. About half of the SCAN-ECHO consultations resulted in patients receiving their treatment from their local primary care providers; the remaining half were referred to the VHA Medical Center when the treatment involved equipment or services not available at the CBOCs (e.g., MRI, surgery). Conclusion: This paper details the setup, operation logistics and preliminary findings, suggesting that SCAN-ECHO is a viable model for providing quality specialty clinical consultation services, prompt access to care, reduced commutes and continuing education. Additionally, the use of a secured Internet-based videoconferencing system that supports connectivity to multiple (mobile) devices could expand the utilization of this service.Future Internet2015-06-0872Project Report10.3390/fi70201841841951999-59032015-06-08doi: 10.3390/fi7020184Herschel KnappSanjog Pangarkar<![CDATA[Information, Vol. 6, Pages 246-257: An Approach to an Intersection Traffic Delay Study Based on Shift-Share Analysis]]>
http://www.mdpi.com/2078-2489/6/2/246
Intersection traffic delay research has traditionally placed greater emphasis on the study of through and left-turning vehicles than right-turning ones, which often renders existing methods or models inapplicable to intersections with heavy pedestrian and non-motorized traffic. In the meantime, there is also a need for understanding the relations between different types of delay and how they each contribute to the total delay of the entire intersection. In order to address these issues, this paper first examines models that focus on through and left-turn traffic delays, taking into account the presence of heavy mixed traffic flows that are prevalent in developing countries, then establishes a model for calculating right-turn traffic delay and, last, proposes an approach to analyzing how much each of the three types of traffic delay contributes to the total delay of the intersection, based on the application of shift-share analysis (SSA), which has been applied extensively in the field of economics.Information2015-06-0862Article10.3390/info60202462462572078-24892015-06-08doi: 10.3390/info6020246Jianfeng XiWei LiShengli WangChuanjiu Wang<![CDATA[Mathematics, Vol. 3, Pages 481-486: The Complement of Binary Klein Quadric as a Combinatorial Grassmannian]]>
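To make the shift-share idea above concrete, the sketch below applies a simplified two-effect decomposition (an overall-growth effect plus a differential-shift effect) to one delay component. The mapping of variable roles onto intersection delays is an illustrative assumption, not the paper's model, and the classic three-effect SSA additionally requires a sector-mix level.

```python
def shift_share(base, current, total_base, total_current):
    """Simplified two-effect shift-share decomposition for one component
    (e.g. right-turn delay, hypothetically): the overall-growth effect and
    the differential-shift effect sum exactly to the component's change.
    """
    g_total = (total_current - total_base) / total_base  # growth of total delay
    g_comp = (current - base) / base                     # growth of this component
    overall_effect = base * g_total                      # change expected if it grew like the total
    shift_effect = base * (g_comp - g_total)             # component-specific deviation
    return overall_effect, shift_effect
```

For example, a component growing from 100 to 130 while the intersection total grows from 1000 to 1200 splits its change of 30 into an overall-growth effect of 20 and a shift effect of 10.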
http://www.mdpi.com/2227-7390/3/2/481
Given a hyperbolic quadric of PG(5, 2), there are 28 points off this quadric and 56 lines skew to it. It is shown that the \((28_6, 56_3)\)-configuration formed by these points and lines is isomorphic to the combinatorial Grassmannian of type \(G_2(8)\). It is also pointed out that a set of seven points of \(G_2(8)\) whose labels share a mark corresponds to a Conwell heptad of PG(5, 2). Gradual removal of Conwell heptads from the \((28_6, 56_3)\)-configuration yields a nested sequence of binomial configurations identical with part of that found to be associated with Cayley-Dickson algebras (arXiv:1405.6888).Mathematics2015-06-0832Letter10.3390/math30204814814862227-73902015-06-08doi: 10.3390/math3020481Metod Saniga<![CDATA[Symmetry, Vol. 7, Pages 1040-1042: Symmetry Best Paper Award 2015]]>
http://www.mdpi.com/2073-8994/7/2/1040
In order to recognize outstanding papers that have been published in Symmetry, the Editorial Board established an annual “Symmetry Best Paper Award”. We are pleased to announce the first “Symmetry Best Paper Award” for 2015. Nominations were selected by the Editorial Board Office, with all research articles published in 2013 and 2014 eligible for consideration. Following review and voting by the Symmetry Best Paper Award Committee, the following three papers have won Symmetry Best Paper Awards for 2015:Symmetry2015-06-0872Editorial10.3390/sym7021040104010422073-89942015-06-08doi: 10.3390/sym7021040Sergei Odintsov<![CDATA[Symmetry, Vol. 7, Pages 1028-1039: Development of Patient Status-Based Dynamic Access System for Medical Information Systems]]>
http://www.mdpi.com/2073-8994/7/2/1028
Recently, hospital information system environments that use IT communication technology, together with the utilization of medical information, have been increasing. In the medical field, the medical information system only supports the transfer of patient information to medical staff through an electronic health record, without information about patient status; hence, a method for real-time monitoring of the patient is needed. In this environment, a secure method of accessing healthcare through various smart devices is also required. Therefore, in this paper, in order to classify the status of patients, we propose a dynamic access approach for the medical information system in a hospital information environment, using a dynamic access control method. We also applied the symmetric AES (Advanced Encryption Standard) algorithm, which is well suited for sending and receiving biological information. The usefulness of the proposed system is demonstrated by the dynamic access application service built on its final result. The proposed system is expected to provide a new solution for a convenient medical information system.Symmetry2015-06-0872Article10.3390/sym7021028102810392073-89942015-06-08doi: 10.3390/sym7021028Chang JeongVathana BanKwon YoonSu Joo<![CDATA[Risks, Vol. 3, Pages 219-233: Multiscale Analysis of the Predictability of Stock Returns]]>
http://www.mdpi.com/2227-9091/3/2/219
Due to the strong complexity of financial markets, economics does not have a unified theory of price formation in financial markets. The most common assumption is the Efficient-Market Hypothesis, which has been attacked by a number of researchers, using different tools. There were varying degrees to which these tools complied with the formal definitions of efficiency and predictability. In our earlier work, we analysed the predictability of stock returns at two time scales using the entropy rate, which can be directly linked to the mathematical definition of predictability. Nonetheless, none of the above-mentioned studies allow any general understanding of how the financial markets work, beyond disproving the Efficient-Market Hypothesis. In our previous study, we proposed the Maximum Entropy Production Principle, which uses the entropy rate to create a general principle underlying the price formation processes. Both of these studies show that the predictability of price changes is higher at the transaction level intraday scale than the scale of daily returns, but ignore all scales in between. In this study we extend these ideas using the multiscale entropy analysis framework to enhance our understanding of the predictability of price formation processes at various time scales.Risks2015-06-0832Article10.3390/risks30202192192332227-90912015-06-08doi: 10.3390/risks3020219Paweł Fiedor<![CDATA[Administrative Sciences, Vol. 5, Pages 90-124: The Balanced Scorecard: Fashion or Virus?]]>
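As an illustration of the multiscale idea described above, the sketch below coarse-grains a return series at a given scale and then estimates a crude pattern-entropy proxy on the coarse-grained series. This is a toy stand-in, not the entropy-rate estimator used in the cited studies.

```python
import math
from collections import Counter

def coarse_grain(series, scale):
    """Average non-overlapping windows of length `scale` (the multiscale step)."""
    return [sum(series[i:i + scale]) / scale
            for i in range(0, len(series) - scale + 1, scale)]

def pattern_entropy(series, word_len=2):
    """Shannon entropy (bits) of up/down words of the series, a crude
    proxy for predictability: 0 bits means perfectly predictable patterns.
    """
    signs = ["u" if b >= a else "d" for a, b in zip(series, series[1:])]
    words = ["".join(signs[i:i + word_len])
             for i in range(len(signs) - word_len + 1)]
    total = len(words)
    counts = Counter(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A monotone series yields 0 bits (fully predictable direction), while a strictly alternating series yields 1 bit per two-step word; scanning `pattern_entropy(coarse_grain(returns, tau))` over scales `tau` mimics the multiscale analysis.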
http://www.mdpi.com/2076-3387/5/2/90
The Balanced Scorecard (BSC) is one of the most popular and contagious management ideas of our time. In previous research, the diffusion and institutionalization of the BSC have been viewed through different theoretical lenses, most notably the management fashion perspective. Recently, the virus perspective has been introduced as an alternative theory of how management ideas spread, but so far no study has applied this theory in the context of the BSC. In this paper, we show that the fashion and virus perspectives provide complementary insights into the diffusion and institutionalization of the BSC. The fashion perspective is particularly well suited for explaining the infectiousness of the BSC and the ways in which organizations are exposed to the BSC idea. The virus theory can better explain how the BSC idea is implemented as a practice in organizations, and the different trajectories that the BSC idea may take in different contexts. A combination of these two perspectives provides a fuller picture of the diffusion and institutionalization of the BSC.Administrative Sciences2015-06-0852Article10.3390/admsci5020090901242076-33872015-06-08doi: 10.3390/admsci5020090Dag MadsenKåre Slåtten<![CDATA[Mathematics, Vol. 3, Pages 444-480: Sinc-Approximations of Fractional Operators: A Computing Approach]]>
http://www.mdpi.com/2227-7390/3/2/444
We discuss a new approach to representing fractional operators by Sinc approximation using convolution integrals. A spin-off of the convolution representation is an effective inverse Laplace transform. Several examples demonstrate the application of the method to different practical problems.Mathematics2015-06-0532Article10.3390/math30204444444802227-73902015-06-05doi: 10.3390/math3020444Gerd BaumannFrank Stenger<![CDATA[Symmetry, Vol. 7, Pages 1017-1027: Supersymmetric Displaced Number States]]>
http://www.mdpi.com/2073-8994/7/2/1017
We introduce, generate and study a family of supersymmetric displaced number states (SDNS) that can be considered generalized coherent states of the supersymmetric harmonic oscillator. The family is created from the seminal supersymmetric boson-fermion entangling annihilation operator introduced by Aragone and Zypman and later expanded by Kornbluth and Zypman. Using the momentum representation, the states are obtained analytically in compact form as displaced supersymmetric number states. We study their position-momentum uncertainties, and their bunchiness by classifying them according to their Mandel Q-parameter in phase space. We were also able to find closed form analytical representations in the space and number basis.Symmetry2015-06-0572Article10.3390/sym7021017101710272073-89942015-06-05doi: 10.3390/sym7021017Fredy Zypman<![CDATA[Technologies, Vol. 3, Pages 142-161: Automated Segmentation of MS Lesions in MR Images Based on an Information Theoretic Clustering and Contrast Transformations]]>
http://www.mdpi.com/2227-7080/3/2/142
Magnetic Resonance Imaging (MRI) plays a significant role in the current characterization and diagnosis of multiple sclerosis (MS) in radiological imaging. However, early detection of MS lesions from MRI still remains a challenging problem. In the present work, an information theoretic approach is applied to cluster the voxels of MS lesions for automatic segmentation of lesions of various sizes in multi-contrast (T1, T2, PD-weighted) MR images. For accurate detection of MS lesions of various sizes, the skull-stripped brain data are rescaled and histogram manipulated prior to mapping the multi-contrast data to pseudo-color images. For automated segmentation of MS lesions in multi-contrast MRI, the improved jump method (IJM) clustering has been enhanced via edge suppression for improved segmentation of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF) and MS lesions, if present. From this preliminary clustering, a pseudo-color to grayscale conversion is designed to equalize the intensities of the normal brain tissues, leaving the MS lesions as outliers. Binary discrete and 8-bit fuzzy labels are then assigned to segment the MS lesions throughout the full brain. For validation of the proposed method, three brains, with mild, moderate and severe hyperintense MS lesions labeled as ground truth, were selected. The MS lesions of mild, moderate and severe categories were detected with sensitivities of 80%, 96%, and 94%, and with corresponding Dice similarity coefficients (DSC) of 0.5175, 0.8739, and 0.8266, respectively. The MS lesions can also be clearly visualized in a transparent pseudo-color computer-rendered 3D brain.Technologies2015-06-0532Article10.3390/technologies30201421421612227-70802015-06-05doi: 10.3390/technologies3020142Jason HillKevin MatlockBrian NutterSunanda Mitra<![CDATA[Economies, Vol. 3, Pages 100-127: Understanding Cultural Geography as a Pseudo-Diffusion Process: The Case of the Veneto Region]]>
http://www.mdpi.com/2227-7099/3/2/100
In this paper, we study the cultural geography of the Veneto Region on the basis of a pseudo-diffusion approach to the analysis of the inherent semantic spatial data. We find somewhat surprising results, and, in particular, that Venice, indisputably the Region’s cultural hub in terms of concentration of activities and facilities, global visibility and attraction of resources, plays a marginal role in determining the momentum of cultural initiative at the regional level as of 2007 data. The areas with the greater momentum are relatively marginal ones but characterized by a strong presence of design-oriented companies that are actively engaging in culture-driven innovation in a context of gradually horizontally-integrated clusters. Our findings call for a revision of the traditional policy approaches that identify centralities in terms of concentration of activities and facilities based on past dynamics, and to design policies accordingly. We argue in favour of a more forward-looking, evidence-based approach.Economies2015-06-0432Article10.3390/economies30201001001272227-70992015-06-04doi: 10.3390/economies3020100Guido FerilliPier SaccoMassimo BuscemaGiorgio Blessi<![CDATA[Symmetry, Vol. 7, Pages 994-1016: Multi-Classifier Based on a Query-by-Singing/Humming System]]>
http://www.mdpi.com/2073-8994/7/2/994
With the increase in the number of music files on various devices, it can be difficult to locate a desired file, especially when the title of the song or the name of the singer is not known. We propose a new query-by-singing/humming (QbSH) system that can find music files that match what the user is singing or humming. This research is novel in the following three ways: first, the Fourier descriptor (FD) method is proposed as the first classifier; it transforms the humming or music waveform into the frequency domain. Second, quantized dynamic time warping (QDTW) using symmetrical search space and quantized linear scaling (QLS) are used as the second and third classifiers, respectively, which increase the accuracy of the QbSH system compared to the conventional DTW and LS methods. Third, five classifiers, which include the three already mentioned along with the conventional DTW using symmetrical search space and LS methods, are combined using score level fusion, which further enhances performance. Experimental results with the 2009 MIR-QbSH corpus and the AFA MIDI 100 databases show that the proposed method outperforms those using a single classifier and other fusion methods.Symmetry2015-06-0472Article10.3390/sym702099499410162073-89942015-06-04doi: 10.3390/sym7020994Gi NamKang Park<![CDATA[Information, Vol. 6, Pages 228-245: CIMS: A Context-Based Intelligent Multimedia System for Ubiquitous Cloud Computing]]>
http://www.mdpi.com/2078-2489/6/2/228
Mobile users spend a tremendous amount of time surfing multimedia contents over the Internet to pursue their interests. A resource-constrained smart device demands more intensive computing tasks and lessens the battery life. To address the resource limitations (i.e., memory, lower maintenance cost, easier access, computing tasks) in mobile devices, mobile cloud computing is needed. Several approaches have been proposed to confront the challenges of mobile cloud computing, but difficulties still remain. However, in the coming years, context collecting, processing, and interchanging the results on a heavy network will cause vast computations and reduce the battery life in mobiles. In this paper, we propose a “context-based intelligent multimedia system” (CIMS) for ubiquitous cloud computing. The main goal of this research is to reduce the computing load, storage complexity, and battery drain for mobile users by using pervasive cloud computing. Moreover, to reduce the computing and storage concerns in mobiles, the cloud server collects several groups of user profiles with similarities by executing K-means clustering on users’ data (context and multimedia contents). The distribution process conveys real-time notifications to smartphone users according to what is stated in their profiles. We considered a mobile cloud offloading system, which decides the offloading actions to/from cloud servers. Context-aware decision-making (CAD) customizes the mobile device performance with different specifications such as short response time and lower energy consumption. The analysis shows that our CIMS takes advantage of cost-effective features to produce high-quality information for mobile (or smart device) users in real time. Moreover, our CIMS lessens the computation and storage complexities for mobile users as well as cloud servers. Simulation analysis suggests that our approach is more efficient than existing approaches.Information2015-06-0462Article10.3390/info60202282282452078-24892015-06-04doi: 10.3390/info6020228Abhilash SreeramaneniHyungjin ImWon KangChan KohJong Park<![CDATA[Information, Vol. 6, Pages 212-227: Identifying Travel Mode with GPS Data Using Support Vector Machines and Genetic Algorithm]]>
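The server-side profile-grouping step described above can be pictured with a plain Lloyd's K-means over two numeric context features. This is a generic sketch under assumed two-dimensional feature vectors, not the CIMS implementation.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's K-means; `points` are (x, y) feature tuples.

    Stands in for clustering user context profiles into similar groups;
    the two features here are hypothetical placeholders.
    """
    rng = random.Random(seed)
    centers = list(rng.sample(points, k))  # initialize from k distinct points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda c: (p[0] - centers[c][0]) ** 2
                                    + (p[1] - centers[c][1]) ** 2)
            clusters[idx].append(p)
        # update step: move each center to its cluster mean
        for c, members in enumerate(clusters):
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return centers, clusters
```

On two well-separated blobs of profiles the algorithm settles on one cluster per blob within a few iterations, regardless of which two points seed the centers.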
http://www.mdpi.com/2078-2489/6/2/212
Travel mode identification is one of the essential steps in travel information detection with Global Positioning System (GPS) survey data. This paper presents a Support Vector Classification (SVC) model for travel mode identification with GPS data. Genetic algorithm (GA) is employed for optimizing the parameters in the model. The travel modes of walking, bicycle, subway, bus, and car are recognized in this model. The results indicate that the developed model shows a high level of accuracy for mode identification. The estimation results also present GA’s contribution to the optimization of the model. The findings can be used to identify travel mode based on GPS survey data, which will significantly enhance the efficiency and accuracy of travel survey and data processing. By providing crucial trip information, the results also contribute to the modeling and analyzing of travel behavior and are readily applicable to a wide range of transportation practices.Information2015-06-0462Article10.3390/info60202122122272078-24892015-06-04doi: 10.3390/info6020212Fang ZongYu BaiXiao WangYixin YuanYanan He<![CDATA[Risks, Vol. 3, Pages 183-218: A Two-Account Life Insurance Model for Scenario-Based Valuation Including Event Risk]]>
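A minimal view of how a genetic algorithm can tune classifier parameters, as in the SVC model above: since the abstract does not give the authors' GA operators, this sketch uses generic elitism, blend crossover and Gaussian mutation, and replaces the cross-validated SVC accuracy with a toy fitness function peaked at a hypothetical optimum, so only the search mechanics are illustrated.

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Minimal real-valued GA: elitism, midpoint crossover over the fitter
    half, and Gaussian mutation clipped to the search bounds.

    In the paper's setting `fitness` would be cross-validated SVC accuracy
    over (C, gamma); here it can be any function of a parameter vector.
    """
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, d):
        lo, hi = bounds[d]
        return min(max(x, lo), hi)

    pop = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                            # elitism: keep the best two
        while len(next_pop) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)  # parents from fitter half
            child = [clip((x + y) / 2 + rng.gauss(0, 0.1), d)
                     for d, (x, y) in enumerate(zip(a, b))]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy stand-in for CV accuracy, peaked at C=1.0, gamma=0.5 (hypothetical values).
best = genetic_search(lambda p: -((p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2),
                      bounds=[(0.01, 10.0), (0.01, 2.0)])
```

With a unimodal fitness like this, the population contracts onto the peak within a few dozen generations; a real run would plug in the SVC cross-validation score instead.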
http://www.mdpi.com/2227-9091/3/2/183
Using a two-account model with event risk, we model life insurance contracts taking into account both guaranteed and non-guaranteed payments in participating life insurance as well as in unit-linked insurance. Here, event risk is used as a generic term for life insurance events, such as death, disability, etc. In our treatment of participating life insurance, we have special focus on the bonus schemes “consolidation” and “additional benefits”, and one goal is to formalize how these work and interact. Another goal is to describe similarities and differences between participating life insurance and unit-linked insurance. By use of a two-account model, we are able to illustrate general concepts without making the model too abstract. To allow for complicated financial markets without dramatically increasing the mathematical complexity, we focus on economic scenarios. We illustrate the use of our model by conducting scenario analysis based on Monte Carlo simulation, but the model applies to scenarios in general and to worst-case and best-estimate scenarios in particular. In addition to easy computations, our model offers a common framework for the valuation of life insurance payments across product types. This enables comparison of participating life insurance products and unit-linked insurance products, thus building a bridge between the two different ways of formalizing life insurance products. Finally, our model distinguishes itself from the existing literature by taking into account the Markov model for the state of the policyholder and, hereby, facilitating event risk.Risks2015-06-0432Article10.3390/risks30201831832182227-90912015-06-04doi: 10.3390/risks3020183Ninna JensenKristian Schomacker<![CDATA[IJGI, Vol. 4, Pages 957-973: The Spatiotemporal Dynamics of Forest–Heathland Communities over 60 Years in Fontainebleau, France]]>
http://www.mdpi.com/2220-9964/4/2/957
According to the EU Habitats Directive, heathlands are “natural habitats of community interest”. Heathland management aims at conserving these habitats threatened by various changes, including successional processes leading to forest vegetation. We investigate the dynamics of woody species to the detriment of heathland over a period of 60 years in the Fontainebleau forest and we examine the effects of soil types, soil depth and topography parameters on heathland stability. We assess changes in forest cover between 1946 and 2003 by comparing vegetation maps derived from aerial photographs coupled to GIS analyses. The results show the loss of more than 75% of heathland during 1946–2003 due to tree colonisation of abandoned heathland. We detected differences in the dynamics of colonisation between coniferous and deciduous trees. The colonisation of heathland by coniferous species was faster over the last 20 years of our study period. Tree encroachment was faster in north-facing areas and in areas of acidic luvisols. While this dynamic was very slow in acid sandstone soils, heathland stability was more important in shallow soils on flat and south facing areas. Our study has the potential to assist land managers in selecting those heathland areas that will be easier to conserve and/or to restore by focusing on areas and spatial conditions that prevent forest colonisation and hence favour the long-term stability of heathland.ISPRS International Journal of Geo-Information2015-06-0342Article10.3390/ijgi40209579579732220-99642015-06-03doi: 10.3390/ijgi4020957Samira MobaiedNathalie MachonArnault LalanneBernard Riera<![CDATA[Electronics, Vol. 4, Pages 311-328: An Improved Asymmetric Cascaded Multilevel D–STATCOM with Enhanced Hybrid Modulation]]>
http://www.mdpi.com/2079-9292/4/2/311
Problems related to power quality, which in the last years were responsible only for small losses in low-voltage distribution systems, are now causing damage to power apparatuses and financial losses also in medium-voltage systems. The necessity of a better quality of power supply encourages the development of new specific custom power devices directly connected in medium-voltage distribution systems. It is well know that the multilevel converters are capable of being installed directly in the medium voltage, and presents several advantages when compared with conventional two-level converters. Some topologies, like the asymmetric cascaded multilevel converter, presents difficulties in regulating the voltages of all isolated dc-link capacitors. In this context, this article presents an asymmetric nineteen-level D–STATCOM (Distribution Static Synchronous Compensator) with a reactive power and dc-link regulation control loops for generic cascaded multilevel converters in order to improve the power quality in medium-voltage distribution systems. The performance of the proposed control method for a multilevel D–STATCOM is presented and evaluated in a downscaled prototype.Electronics2015-06-0342Article10.3390/electronics40203113113282079-92922015-06-03doi: 10.3390/electronics4020311Weder NunesLucas EncarnaçãoMauricio Aredes<![CDATA[Computation, Vol. 3, Pages 285-298: Computational Recognition of RNA Splice Sites by Exact Algorithms for the Quadratic Traveling Salesman Problem]]>
http://www.mdpi.com/2079-3197/3/2/285
One fundamental problem of bioinformatics is the computational recognition of DNA and RNA binding sites. Given a set of short DNA or RNA sequences of equal length such as transcription factor binding sites or RNA splice sites, the task is to learn a pattern from this set that allows the recognition of similar sites in another set of DNA or RNA sequences. Permuted Markov (PM) models and permuted variable length Markov (PVLM) models are two powerful models for this task, but the problem of finding an optimal PM model or PVLM model is NP-hard. While the problem of finding an optimal PM model or PVLM model of order one is equivalent to the traveling salesman problem (TSP), the problem of finding an optimal PM model or PVLM model of order two is equivalent to the quadratic TSP (QTSP). Several exact algorithms exist for solving the QTSP, but it is unclear if these algorithms are capable of solving QTSP instances resulting from RNA splice sites of at least 150 base pairs in a reasonable time frame. Here, we investigate the performance of three exact algorithms for solving the QTSP for ten datasets of splice acceptor sites and splice donor sites of five different species and find that one of these algorithms is capable of solving QTSP instances of up to 200 base pairs with a running time of less than two days.Computation2015-06-0332Article10.3390/computation30202852852982079-31972015-06-03doi: 10.3390/computation3020285Anja FischerFrank FischerGerold JägerJens KeilwagenPaul MolitorIvo Grosse<![CDATA[Computation, Vol. 3, Pages 274-284: Effects of a Sprinkler on Evacuation Dynamics in Fire]]>
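For intuition about the QTSP objective above, a brute-force solver can be sketched: the tour cost sums ordinary edge costs plus, as in the quadratic TSP, a cost for each pair of consecutive edges. Enumeration is feasible only for tiny instances, unlike the exact algorithms studied in the paper.

```python
from itertools import permutations

def qtsp_bruteforce(n, edge_cost, pair_cost):
    """Exact quadratic TSP by enumeration over all tours (tiny n only).

    edge_cost(i, j) prices the edge i->j; pair_cost(i, j, k) prices the
    pair of consecutive edges i->j->k, which is the quadratic term.
    """
    best_tour, best_val = None, float("inf")
    for perm in permutations(range(1, n)):      # fix city 0 to break symmetry
        tour = (0,) + perm
        val = sum(edge_cost(tour[i], tour[(i + 1) % n]) for i in range(n))
        val += sum(pair_cost(tour[i - 1], tour[i], tour[(i + 1) % n])
                   for i in range(n))
        if val < best_val:
            best_tour, best_val = tour, val
    return best_tour, best_val
```

Setting `pair_cost` to zero recovers the ordinary TSP, which is exactly how the order-one PM/PVLM case reduces to the TSP while the order-two case needs the quadratic term.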
http://www.mdpi.com/2079-3197/3/2/274
A fire in an enclosed space, such as a room in a building, is generally called a compartment fire. To prevent compartment fires, sprinklers for first-aid fire-fighting are installed in rooms. However, it is difficult to determine the degree to which smoke generation and fire spreading will be inhibited when sprinklers are on. In particular, demonstrating evacuation behavior in an actual fire is impossible. In this study, we evaluated the effectiveness of the sprinkler by numerical simulations. To model evacuation dynamics, real-coded cellular automata (RCA) were used, with which we can freely set the direction and velocity of an evacuee based on a floor field model. To model the conditions in the room fire, we used the Fire Dynamics Simulator (FDS). Two cases, with and without the sprinkler, were compared to assess the effect of the sprinkler on evacuation dynamics. The effect of smoke and the expansion of the fire-spreading region were discussed. The results show that, since the fire-spreading region disappears when the sprinkler is actuated, the evacuation time decreases. Even when the sprinkler is actuated, the smoke generated at the beginning of a fire diffuses throughout the whole room; however, the period during which evacuees are overwhelmed by smoke is shorter, because the amount of smoke generated by the pyrolysis reaction is much reduced.Computation2015-06-0332Article10.3390/computation30202742742842079-31972015-06-03doi: 10.3390/computation3020274Kazuhiro YamamotoYuki TakeuchiShinnosuke Nishiki<![CDATA[Risks, Vol. 3, Pages 164-182: The Impact of Reinsurance Strategies on Capital Requirements for Premium Risk in Insurance]]>
http://www.mdpi.com/2227-9091/3/2/164
New risk-based solvency requirements for insurance companies across European markets have been introduced by Solvency II and will come into force on 1 January 2016. These requirements, derived by a Standard Formula or an Internal Model, will be far more risk-sensitive than the required solvency margin provided by the current legislation. In this regard, a Partial Internal Model for Premium Risk is developed here for a multi-line Non-Life insurer. We follow a classical approach based on a Collective Risk Model properly extended in order to consider not only the volatility of aggregate claim amounts but also expense volatility. To measure the effect of risk mitigation, suitable reinsurance strategies are pursued. We analyze how naïve coverages such as conventional Quota Share and Excess of Loss reinsurance may modify the exact moments of the distribution of technical results. Furthermore, we investigate how alternative choices of commission rates in proportional treaties may affect the variability of the distribution. Numerical results are presented in the last part of the paper, with evidence of different effects for small and large companies. The main reasons for these differences are pointed out.Risks2015-06-0332Article10.3390/risks30201641641822227-90912015-06-03doi: 10.3390/risks3020164Gian ClementeNino SavelliDiego Zappa<![CDATA[Symmetry, Vol. 7, Pages 976-993: Consistent Positive Co-Variation between Fluctuating Asymmetry and Sexual Trait Size: A Challenge to the Developmental Instability-Sexual Selection Hypothesis]]>
http://www.mdpi.com/2073-8994/7/2/976
The developmental instability (DI)-sexual selection hypothesis proposes that large size and symmetry in secondary sexual traits are favored by sexual selection because they reveal genetic quality. A critical prediction of this hypothesis is that there should exist negative correlations between trait fluctuating asymmetry (FA) and size of condition dependent sexual traits; condition dependent traits should reveal an organism’s overall health and vigor, and be influenced by a multitude of genetic loci. Here, we tested for the predicted negative FA-size correlations in the male sex comb of Drosophila bipectinata. Among field-caught males from five widely separated geographic localities, FA-size correlations were consistently positive, despite evidence that sex comb size is condition dependent. After controlling for trait size, FA was significantly negatively correlated with body size within several populations, indicating that developmental instability in the comb may reveal individual genetic quality. We suggest the possibility that condition dependent traits in some cases tap into independent units of the genome (a restricted set of genes), rather than signaling overall genetic properties of the organism. There were pronounced among-population differences in both comb FA and size, and these traits were positively correlated across populations, recapitulating the within-population patterns. We conclude that the results are inconsistent with the DI-sexual selection hypothesis, and discuss potential reasons for positive FA-size co-variation in sexual traits.Symmetry2015-06-0372Article10.3390/sym70209769769932073-89942015-06-03doi: 10.3390/sym7020976Michal PolakKassie HookerFrances Tyler<![CDATA[Symmetry, Vol. 7, Pages 962-975: Characterizations of Network Structures Using Eigenmode Analysis]]>
http://www.mdpi.com/2073-8994/7/2/962
We introduced an eigenmode analysis to characterize the structure of two-dimensional regular and amorphous networks. The analysis was shown to reliably determine the global network rigidity and can also identify local floppy regions in mixtures of rigid and floppy regions. The eigenmode analysis explores the structural properties of various networks through their eigenvalue spectra. It determines the general structural stability of networks, which the traditional Maxwell counting scheme based on the statistics of nodes (degrees of freedom) and bonds (constraints) does not provide. A visual characterization scheme was introduced to examine the local structure of the networks. The eigenmode analysis is under development for various practical applications to more general network structures characterized by coordination numbers and nodal connectivity, such as graphene and proteins.Symmetry2015-06-0372Article10.3390/sym70209629629752073-89942015-06-03doi: 10.3390/sym7020962Youngho ParkSangil Hyun<![CDATA[Symmetry, Vol. 7, Pages 949-961: Conservation Laws and Exact Solutions of a Generalized Zakharov–Kuznetsov Equation]]>
http://www.mdpi.com/2073-8994/7/2/949
In this paper, we study a generalized Zakharov–Kuznetsov equation in three variables, which has applications in the nonlinear development of ion-acoustic waves in a magnetized plasma. Conservation laws for this equation are constructed for the first time by using the new conservation theorem of Ibragimov. Furthermore, new exact solutions are obtained by employing the Lie symmetry method along with the simplest equation method.Symmetry2015-06-0372Article10.3390/sym70209499499612073-89942015-06-03doi: 10.3390/sym7020949Dimpho MothibiChaudry Khalique<![CDATA[Future Internet, Vol. 7, Pages 170-183: Output from Statistical Predictive Models as Input to eLearning Dashboards]]>
http://www.mdpi.com/1999-5903/7/2/170
We describe how statistical predictive models might play an expanded role in educational analytics by giving students automated, real-time information about what their current performance means for eventual success in eLearning environments. We discuss how an online messaging system might tailor information to individual students using predictive analytics. The proposed system would be data-driven and quantitative; e.g., a message might furnish the probability that a student will successfully complete the certificate requirements of a massive open online course. Repeated messages would prod underperforming students and alert instructors to those in need of intervention. Administrators responsible for accreditation or outcomes assessment would have ready documentation of learning outcomes and actions taken to address unsatisfactory student performance. The article’s brief introduction to statistical predictive models sets the stage for a description of the messaging system. Resources and methods needed to develop and implement the system are discussed.Future Internet2015-06-0272Article10.3390/fi70201701701831999-59032015-06-02doi: 10.3390/fi7020170Marlene Smith<![CDATA[Algorithms, Vol. 8, Pages 280-291: Model Equivalence-Based Identification Algorithm for Equation-Error Systems with Colored Noise]]>
http://www.mdpi.com/1999-4893/8/2/280
For equation-error autoregressive (EEAR) systems, this paper proposes an identification algorithm based on a model equivalence transformation. The basic idea is to eliminate the autoregressive term in the model through the model transformation, to estimate the parameters of the converted system, and then to compute the parameter estimates of the original system by comparing coefficients and applying the model equivalence principle. For comparison, the recursive generalized least squares algorithm is briefly presented. The simulation results verify that the proposed algorithm is effective and produces more accurate parameter estimates.Algorithms2015-06-0282Article10.3390/a80202802802911999-48932015-06-02doi: 10.3390/a8020280Dandan MengFeng Ding<![CDATA[Systems, Vol. 3, Pages 45-59: Integration of Standardized Management Systems: A Dilemma?]]>
http://www.mdpi.com/2079-8954/3/2/45
The growing proliferation of management system standards (MSSs), and their individualized implementation, is a real problem faced by organizations. On the other hand, MSSs are aimed at improving the efficiency and effectiveness of organizational responses in order to satisfy the requirements, needs and expectations of the stakeholders. Each organization has its own identity, and this is an issue that cannot be neglected; hence, two possible approaches can be considered: first, to continue with the implementation of individualized management systems (MSs); or, second, to integrate the several MSs related to these MSSs into an integrated management system (IMS). In this context, organizations therefore face a dilemma resulting from the increasing proliferation and diversity of MSSs. This paper draws on the knowledge gained through a case study conducted in a Portuguese company and unveils some of the advantages and disadvantages of integration. A methodology is also proposed to support organizations in developing and structuring the integration process of their individualized MSs, and consequently to minimize problems that generate inefficiencies, value destruction and loss of competitiveness. The obtained results provide relevant information that can support top management's decision in resolving that dilemma and consequently promote a successful integration, including better control of the business risks associated with MSS requirements and enhanced sustainable performance, considering the context in which organizations operate.Systems2015-06-0232Article10.3390/systems302004545592079-89542015-06-02doi: 10.3390/systems3020045Manuel RebeloGilberto SantosRui Silva<![CDATA[Symmetry, Vol. 7, Pages 935-948: Design of a Distributed Personal Information Access Control Scheme for Secure Integrated Payment in NFC]]>
http://www.mdpi.com/2073-8994/7/2/935
At the center of the core technologies for a future cyber world, such as the Internet of Things (IoT) or big data, is the context-rich system that offers services by using situational information. The field where context-rich systems were first introduced is near-field communication (NFC)-based electronic payment. NFC-integrated payment services collect credit card payment information and location information to generate patterns in the user’s consumption or movement through big data technology. Based on such pattern information, tailored services, such as advertisements, are offered to users. However, it is difficult to control access to personal information, as the collaborative relationship centered on the trusted service manager (TSM) depends on closely shared personal information. Moreover, Hadoop, one of the many big data analysis technologies, offers access control functions but no means of authorizing the processing of personal information, making it impossible for service providers to grant one another the authority to process information. This paper therefore proposes a key generation and distribution method, as well as a secure communication protocol. The analysis shows greater efficiency in both security and performance compared to related work.Symmetry2015-06-0272Article10.3390/sym70209359359482073-89942015-06-02doi: 10.3390/sym7020935Jungho KangJong ParkSangkee Suk<![CDATA[Symmetry, Vol. 7, Pages 843-934: Analyzing Fluctuating Asymmetry with Geometric Morphometrics: Concepts, Methods, and Applications]]>
http://www.mdpi.com/2073-8994/7/2/843
Approximately two decades after the first pioneering analyses, the study of shape asymmetry with the methods of geometric morphometrics has matured and is a burgeoning field. New technology for data collection and new methods and software for analysis are widely available and have led to numerous applications in plants and animals, including humans. This review summarizes the concepts and morphometric methods for studying asymmetry of shape and size. After a summary of mathematical and biological concepts of symmetry and asymmetry, a section follows that explains the methods of geometric morphometrics and how they can be used to analyze asymmetry of biological structures. Geometric morphometric analyses not only tell how much asymmetry there is, but also provide information about the patterns of covariation in the structure under study. Such patterns of covariation in fluctuating asymmetry can provide valuable insight about the developmental basis of morphological integration, and have become important tools for evolutionary developmental biology. The genetic basis of fluctuating asymmetry has been studied from empirical and theoretical viewpoints, but serious challenges remain in this area. There are many promising areas for further research that are only little explored at present.Symmetry2015-06-0272Review10.3390/sym70208438439342073-89942015-06-02doi: 10.3390/sym7020843Christian Klingenberg<![CDATA[Administrative Sciences, Vol. 5, Pages 88-89: Administrative Sciences Introduces Double Blind Peer-Review]]>
http://www.mdpi.com/2076-3387/5/2/88
Peer-review has become, and remains, the cornerstone on which scientific publication is built. Despite criticism, it remains almost the only method by which acceptance into the corpus of research literature is based. [...]Administrative Sciences2015-06-0152Editorial10.3390/admsci502008888892076-33872015-06-01doi: 10.3390/admsci5020088Martyn RittmanJoseph Roberts<![CDATA[IJGI, Vol. 4, Pages 942-956: Integrating Free and Open Source Solutions into Geospatial Science Education]]>
http://www.mdpi.com/2220-9964/4/2/942
While free and open source software becomes increasingly important in geospatial research and industry, open science perspectives are generally less reflected in universities’ educational programs. We present an example of how free and open source software can be incorporated into geospatial education to promote open and reproducible science. Since 2008, graduate students at North Carolina State University have the opportunity to take a course on geospatial modeling and analysis that is taught with both proprietary and free and open source software. In this course, students perform geospatial tasks simultaneously in the proprietary package ArcGIS and the free and open source package GRASS GIS. By ensuring that students learn to distinguish between geospatial concepts and software specifics, students become more flexible and stronger spatial thinkers when choosing solutions for their independent work in the future. We also discuss ways to continually update and improve our publicly available teaching materials for reuse by teachers, self-learners and other members of the GIS community. Only when free and open source software is fully integrated into geospatial education will we be able to encourage a culture of openness and, thus, enable greater reproducibility in research and development applications.ISPRS International Journal of Geo-Information2015-06-0142Article10.3390/ijgi40209429429562220-99642015-06-01doi: 10.3390/ijgi4020942Vaclav PetrasAnna PetrasovaBrendan HarmonRoss MeentemeyerHelena Mitasova<![CDATA[IJGI, Vol. 4, Pages 928-941: Mapping of Asbestos Cement Roofs and Their Weathering Status Using Hyperspectral Aerial Images]]>
http://www.mdpi.com/2220-9964/4/2/928
The aims of this study were: (i) the mapping of asbestos cement roofs in an urban area; and (ii) the development of a spectral index related to the roof weathering status. Aerial images were collected through the Multispectral Infrared and Visible Imaging Spectrometer (MIVIS) sensor, which acquires data in 102 channels from the visible to the thermal infrared spectral range. An image-based supervised classification was performed using the Spectral Angle Mapper (SAM) algorithm. The SAM was trained through a set of pixels selected on roofs of different materials. The map showed an average producer’s accuracy (PA) of 86% and a user’s accuracy (UA) of 89% for the asbestos cement class. A novel spectral index, the “Index of Surface Deterioration” (ISD), was defined based on measurements collected with a portable spectroradiometer on asbestos cement roofs that were characterized by different weathering statuses. The ISD was then calculated on the MIVIS images, allowing the distinction of two weathering classes (i.e., high and low). The asbestos cement map was handled in a Geographic Information System (GIS) in order to supply the municipalities with the cadastral references of each property having an asbestos cement roof. This tool can be proposed to municipalities as an aid to prioritizing asbestos removal, based on roof weathering status.ISPRS International Journal of Geo-Information2015-06-0142Article10.3390/ijgi40209289289412220-99642015-06-01doi: 10.3390/ijgi4020928Chiara CiliaCinzia PanigadaMicol RossiniGabriele CandianiMonica PepeRoberto Colombo<![CDATA[Econometrics, Vol. 3, Pages 412-442: Strategic Interaction Model with Censored Strategies]]>
http://www.mdpi.com/2225-1146/3/2/412
In this paper, we develop a new model of a static game of incomplete information with a large number of players. The model has two key distinguishing features. First, the strategies are subject to threshold effects, and can be interpreted as dependent censored random variables. Second, in contrast to most of the existing literature, our inferential theory relies on a large number of players, rather than a large number of independent repetitions of the same game. We establish existence and uniqueness of the pure strategy equilibrium, and prove that the censored equilibrium strategies satisfy a near-epoch dependence property. We then show that the normal maximum likelihood and least squares estimators of this censored model are consistent and asymptotically normal. Our model can be useful in a wide variety of settings, including investment, R&D, labor supply, and social interaction applications.Econometrics2015-06-0132Article10.3390/econometrics30204124124422225-11462015-06-01doi: 10.3390/econometrics3020412Nazgul Jenish<![CDATA[IJFS, Vol. 3, Pages 162-176: An Improved Valuation Model for Technology Companies]]>
http://www.mdpi.com/2227-7072/3/2/162
This paper estimates some of the parameters of the Schwartz and Moon (2001) model using cross-sectional data. Stochastic costs, future financing, capital expenditures and depreciation are taken into account. Some special conditions are also set: the speed of adjustment parameters are equal; the implied half-life of the sales growth process is linked to analyst forecasts; and the risk-adjustment parameter is inferred from the company’s observed stock price beta. The model is illustrated in the valuation of Google, Amazon, eBay, Facebook and Yahoo. The improved model is far superior to the Schwartz and Moon (2001) model.International Journal of Financial Studies2015-06-0132Article10.3390/ijfs30201621621762227-70722015-06-01doi: 10.3390/ijfs3020162Ako Doffou<![CDATA[JRFM, Vol. 8, Pages 266-284: Network Analysis of the Shanghai Stock Exchange Based on Partial Mutual Information]]>
http://www.mdpi.com/1911-8074/8/2/266
Analyzing social systems, particularly financial markets, using a complex network approach has become one of the most popular fields within econophysics. A similar trend is currently appearing within the econometrics and finance communities, as well. In this study, we present a state-of-the-art method for analyzing the structure and risk within stock markets, treating them as complex networks using model-free, nonlinear dependency measures based on information theory. This study is the first network analysis of the stock market in Shanghai using a nonlinear network methodology. Further, it is often assumed that markets outside the United States and Western Europe are inherently riskier. We find that the Chinese stock market is not structurally risky, contradicting this popular opinion. We use partial mutual information to create filtered networks representing the Shanghai stock exchange, comparing them to networks based on Pearson’s correlation. Consequently, we discuss the structure and characteristics of both the presented methods and the Shanghai stock exchange. This paper provides an insight into the cutting-edge methodology designed for analyzing complex financial networks, as well as analyzing the structure of the market in Shanghai and, as such, is of interest to both researchers and financial analysts.Journal of Risk and Financial Management2015-06-0182Article10.3390/jrfm80202662662841911-80742015-06-01doi: 10.3390/jrfm8020266Tao YouPaweł FiedorArtur Hołda<![CDATA[Algorithms, Vol. 8, Pages 271-279: Dynamics and Fractal Dimension of Steffensen-Type Methods]]>
http://www.mdpi.com/1999-4893/8/2/271
In this paper, the dynamical behavior of different optimal iterative schemes of increasing order for solving nonlinear equations is studied. The tendency of the complexity of the Julia set is analyzed and related to the fractal dimension. In fact, this fractal dimension can be shown to be a powerful tool to compare iterative schemes that estimate the solution of a nonlinear equation. Based on the box-counting algorithm, several iterative derivative-free methods of different convergence orders are compared.Algorithms2015-06-0182Article10.3390/a80202712712791999-48932015-06-01doi: 10.3390/a8020271Francisco ChicharroAlicia CorderoJuan Torregrosa<![CDATA[Mathematics, Vol. 3, Pages 428-443: The 1st Law of Thermodynamics for the Mean Energy of a Closed Quantum System in the Aharonov-Vaidman Gauge]]>
http://www.mdpi.com/2227-7390/3/2/428
The Aharonov-Vaidman gauge additively transforms the mean energy of a quantum mechanical system into a weak valued system energy. In this paper, the equation of motion of this weak valued energy is used to provide a mathematical statement of an extended 1st Law of Thermodynamics that is applicable to the mean energy of a closed quantum system when the mean energy is expressed in the Aharonov-Vaidman gauge, i.e., when the system’s energy is weak valued. This is achieved by identifying the generalized heat and work exchange terms that appear in the equation of motion for weak valued energy. The complex valued contributions of the additive gauge term to these generalized exchange terms are discussed and this extended 1st Law is shown to subsume the usual 1st Law that is applicable for the mean energy of a closed quantum system. It is found that the gauge transformation introduces an additional energy uncertainty exchange term that—while it is neither a heat nor a work exchange term—is necessary for the conservation of weak valued energy. A spin-1/2 particle in a uniform magnetic field is used to illustrate aspects of the theory. It is demonstrated for this case that the extended 1st Law implies the existence of a gauge potential ω and that it generates a non-vanishing gauge field F. It is also shown for this case that the energy uncertainty exchange accumulated during the evolution of the system along a closed evolutionary cycle C in an associated parameter space is a geometric phase. This phase is equal to both the path integral of ω along C and the integral of the flux of F through the area enclosed by C.Mathematics2015-06-0132Article10.3390/math30204284284432227-73902015-06-01doi: 10.3390/math3020428Allen Parks<![CDATA[JRFM, Vol. 8, Pages 227-265: Dependency Relations among International Stock Market Indices]]>
http://www.mdpi.com/1911-8074/8/2/227
We develop networks of international stock market indices using information- and correlation-based measures. We use 83 stock market indices of a diversity of countries, as well as their single-day lagged values, to probe the correlation and the flow of information from one stock index to another, taking into account different operating hours. Additionally, we apply the formalism of partial correlations to build the dependency network of the data, and calculate the partial Transfer Entropy to quantify the indirect influence that indices have on one another. We find that Transfer Entropy is an effective way to quantify the flow of information between indices, and that a high degree of information flow between indices lagged by one day coincides with same-day correlation between them.Journal of Risk and Financial Management2015-05-2982Article10.3390/jrfm80202272272651911-80742015-05-29doi: 10.3390/jrfm8020227Leonidas JuniorAsher MullokandovDror Kenett<![CDATA[Symmetry, Vol. 7, Pages 815-842: Cosmological Probes for Supersymmetry]]>
http://www.mdpi.com/2073-8994/7/2/815
The multi-parameter character of supersymmetric dark-matter models implies the combination of their experimental studies with astrophysical and cosmological probes. The physics of the early Universe provides nontrivial effects of non-equilibrium particles and primordial cosmological structures. Primordial black holes (PBHs) are a profound signature of such structures that may arise as a cosmological consequence of supersymmetric (SUSY) models. SUSY-based mechanisms of baryosynthesis can lead to the possibility of antimatter domains in a baryon asymmetric Universe. In the context of cosmoparticle physics, which studies the fundamental relationship of the micro- and macro-worlds, the development of SUSY illustrates the main principles of this approach, as the physical basis of the modern cosmology provides cross-disciplinary tests in physical and astronomical studies.Symmetry2015-05-2972Review10.3390/sym70208158158422073-89942015-05-29doi: 10.3390/sym7020815Maxim Khlopov<![CDATA[Electronics, Vol. 4, Pages 303-310: A Theoretical Study on the Operation Principle of Hybrid Solar Cells]]>
http://www.mdpi.com/2079-9292/4/2/303
In this work, the operation mechanism of hybrid solar cells is studied explicitly. The excitation, diffusion and dissociation of singlet and triplet excitons and charge transport of free charge carriers are studied and their corresponding rates are calculated for a flexible P3HT:SiNW hybrid solar cell. The rates are found to be faster for singlet than triplet excitons. Possible loss mechanisms in hybrid solar cells have also been highlighted.Electronics2015-05-2942Article10.3390/electronics40203033033102079-92922015-05-29doi: 10.3390/electronics4020303Monishka NarayanJai Singh<![CDATA[Symmetry, Vol. 7, Pages 788-814: Supersymmetry with Radiatively-Driven Naturalness: Implications for WIMP and Axion Searches]]>
http://www.mdpi.com/2073-8994/7/2/788
By insisting on naturalness in both the electroweak and quantum chromodynamics (QCD) sectors of the minimal supersymmetric standard model (MSSM), the portrait for dark matter production is seriously modified from the usual weakly interacting massive particle (WIMP) miracle picture. In supersymmetry (SUSY) models with radiatively-driven naturalness (radiative natural SUSY, or RNS), which include a Dine–Fischler–Srednicki–Zhitnitsky (DFSZ)-like solution to the strong charge-conjugation-parity (CP) and SUSY \(\mu\) problems, dark matter is expected to be an admixture of both axions and higgsino-like WIMPs. The WIMP/axion abundance calculation requires simultaneous solution of a set of coupled Boltzmann equations which describe quasi-stable axinos and saxions. In most of parameter space, axions make up the dominant contribution of dark matter although regions of WIMP dominance also occur. We show the allowed range of Peccei-Quinn (PQ) scale \(f_a\) and compare to the values expected to be probed by the axion dark matter search experiment (ADMX) axion detector in the near future. We also show WIMP detection rates, which are suppressed from usual expectations, because now WIMPs comprise only a fraction of the total dark matter. Nonetheless, ton-scale noble liquid detectors should be able to probe the entirety of RNS parameter space. Indirect WIMP detection rates are less propitious since they are reduced by the square of the depleted WIMP abundance.Symmetry2015-05-2872Article10.3390/sym70207887888142073-89942015-05-28doi: 10.3390/sym7020788Kyu BaeHoward BaerVernon BargerMichael SavoyHasan Serce<![CDATA[Symmetry, Vol. 7, Pages 774-787: Flexible Polyhedral Surfaces with Two Flat Poses]]>
http://www.mdpi.com/2073-8994/7/2/774
We present three types of polyhedral surfaces, which are continuously flexible and have not only an initial pose, where all faces are coplanar, but pass during their self-motion through another pose with coplanar faces (“flat pose”). These surfaces are examples of so-called rigid origami, since we only admit exact flexions, i.e., each face remains rigid during the motion; only the dihedral angles vary. We analyze the geometry behind Miura-ori and address Kokotsakis’ example of a flexible tessellation with the particular case of a cyclic quadrangle. Finally, we recall Bricard’s octahedra of Type 3 and their relation to strophoids.Symmetry2015-05-2772Article10.3390/sym70207747747872073-89942015-05-27doi: 10.3390/sym7020774Hellmuth Stachel<![CDATA[Future Internet, Vol. 7, Pages 152-169: Receiver-Triggered Handshake Protocol for DTN in Disaster Area]]>
http://www.mdpi.com/1999-5903/7/2/152
When a disaster hits a wide area, communication services for public use will be rendered unavailable. This will make it difficult to confirm the safety of people in the disaster area. A solution to this problem is to form delay/disruption tolerant networks (DTN) using mobile terminals of victims, those of rescuers, who serve as information carriers, and terminals (servers) in shelters. In this paper, we propose using a receiver-triggered handshake protocol for communication between these terminals. We have developed the bundle layer protocol for this handshake method. The proposed method has been implemented on a network simulator to build an evaluation environment. The disaster area has been modeled on an area around Shinjuku Station in Tokyo. Victims are randomly distributed in the area. We have compared the proposed method with Epidemic Routing and Spray and Wait in terms of the delivery rate at which messages reach their destinations, and the length of time taken for messages to reach their destinations. We have found that the delivery rates of the three methods are more or less the same, but that the proposed method is superior to the other two methods in terms of storage usage and battery consumption of terminals, and the number of bundles generated in the network.Future Internet2015-05-2772Article10.3390/fi70201521521691999-59032015-05-27doi: 10.3390/fi7020152Ryoma YamashitaKazumasa Takami<![CDATA[Computation, Vol. 3, Pages 262-273: LES: Unsteady Atmospheric Turbulent Layer Inlet. A Precursor Method Application and Its Quality Check]]>
http://www.mdpi.com/2079-3197/3/2/262
The motivation of this work is to bridge the gap between experimental approaches in wind tunnel testing and numerical computations, in the field of structural design against strong winds. This paper focuses on the generation of an unsteady flow field, representative of a natural wind field, but still compatible with Computational Fluid Dynamics inlet requirements. A simple and “naive” procedure is explained, and the results are in good agreement with some international standards.Computation2015-05-2632Article10.3390/computation30202622622732079-31972015-05-26doi: 10.3390/computation3020262Julien Berthaut-GerentesDidier Delaunay<![CDATA[Mathematics, Vol. 3, Pages 412-427: Subordination Principle for a Class of Fractional Order Differential Equations]]>
http://www.mdpi.com/2227-7390/3/2/412
The fractional order differential equation \(u'(t)=Au(t)+\gamma D_t^{\alpha} Au(t)+f(t), \ t>0\), \(u(0)=a\in X\) is studied, where \(A\) is an operator generating a strongly continuous one-parameter semigroup on a Banach space \(X\), \(D_t^{\alpha}\) is the Riemann–Liouville fractional derivative of order \(\alpha \in (0,1)\), \(\gamma>0\) and \(f\) is an \(X\)-valued function. Equations of this type appear in the modeling of unidirectional viscoelastic flows. Well-posedness is proven, and a subordination identity is obtained relating the solution operator of the considered problem and the \(C_{0}\)-semigroup, generated by the operator \(A\). As an example, the Rayleigh–Stokes problem for a generalized second-grade fluid is considered.Mathematics2015-05-2632Article10.3390/math30204124124272227-73902015-05-26doi: 10.3390/math3020412Emilia Bazhlekova<![CDATA[Algorithms, Vol. 8, Pages 248-270: On String Matching with Mismatches]]>
http://www.mdpi.com/1999-4893/8/2/248
In this paper, we consider several variants of the pattern matching with mismatches problem. In particular, given a text \(T=t_1 t_2\cdots t_n\) and a pattern \(P=p_1p_2\cdots p_m\), we investigate the following problems: (1) pattern matching with mismatches: for every \(i\), \(1\leq i \leq n-m+1\), output the distance between \(P\) and \(t_i t_{i+1}\cdots t_{i+m-1}\); and (2) pattern matching with \(k\) mismatches: output those positions \(i\) where the distance between \(P\) and \(t_i t_{i+1}\cdots t_{i+m-1}\) is less than a given threshold \(k\). The distance metric used is the Hamming distance. We present novel algorithms and techniques for solving these problems, offering deterministic, randomized and approximation algorithms, and we consider variants in which wild cards may occur in the text, the pattern, or both. We also present an experimental evaluation of these algorithms. The source code is available at http://www.engr.uconn.edu/\(\sim\)man09004/kmis.zip.Algorithms2015-05-2682Article10.3390/a80202482482701999-48932015-05-26doi: 10.3390/a8020248Marius NicolaeSanguthevar Rajasekaran<![CDATA[IJGI, Vol. 4, Pages 900-927: Mapping the Socio-Economic and Ecological Resilience of Japanese Coral Reefscapes across a Decade]]>
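Problems (1) and (2) above can be made concrete with a naive \(O(nm)\) baseline (a sketch of ours for illustration only, not the paper's algorithms; the function names are our own):

```python
def hamming_distances(text, pattern):
    """Problem (1): for every alignment i, return the Hamming distance
    between the pattern and text[i : i + len(pattern)].
    Naive O(n*m) baseline, without wild cards."""
    n, m = len(text), len(pattern)
    return [sum(1 for a, b in zip(text[i:i + m], pattern) if a != b)
            for i in range(n - m + 1)]


def matches_with_k_mismatches(text, pattern, k):
    """Problem (2): positions whose distance is below the threshold k."""
    return [i for i, d in enumerate(hamming_distances(text, pattern)) if d < k]
```

For example, `hamming_distances("abcabc", "abc")` yields `[0, 3, 3, 0]`, and `matches_with_k_mismatches("abcabc", "abc", 1)` yields `[0, 3]`. The algorithms in the paper improve substantially on this quadratic baseline, e.g., via convolution-based and sampling techniques.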
http://www.mdpi.com/2220-9964/4/2/900
Shallow coral reefs threatened by climate change must be spatio-temporally analyzed in terms of their protection of coastal human populations. This study combines Japanese spatio-temporal gradients of population/asset and coral buffering exposure to stress-inducing and stress-mitigating factors so that the socio-economic and ecological (SEE) resilience tied to coral reefscapes can be regionally mapped (1200 km) at a fine resolution (1 arcsec) over a decade (11 years). Fuzzy logic was employed to associate environmental factors with the related population/asset/coral buffering responses, as found in the literature. Once the factors were weighted according to their resilience contributions, temporally static patterns were evident: (1) a negative correlation occurs between coral buffering resilience and latitude; (2) the least resilient islands are low-lying, deprived of wide reef barriers, and located on the eastern and southern boundaries of the Nansei archipelago; (3) the southwestern-most, middle and northeastern-most islands have the same SEE resilience; and (4) Sekisei Lagoon islands have a very high coral buffering resilience. To overcome uncertainty, future studies should focus on the socio-ecological adaptive capacity, fine-scale ecological processes (such as coral and fish functional groups) and the prediction of flood risks in the coming decades.ISPRS International Journal of Geo-Information2015-05-2642Article10.3390/ijgi40209009009272220-99642015-05-26doi: 10.3390/ijgi4020900Antoine CollinKazuo NadaokaLawrence Bernardo<![CDATA[Symmetry, Vol. 7, Pages 726-773: Probing Majorana Neutrinos and their CP Violation in Decays of Charged Scalar Mesons π, K, D, Ds, B, Bc]]>
http://www.mdpi.com/2073-8994/7/2/726
Some of the outstanding questions of particle physics today concern the neutrino sector, in particular whether there are more neutrinos than those already known and whether they are Dirac or Majorana particles. There are different ways to explore these issues. In this article we describe neutrino-mediated decays of charged pseudoscalar mesons such as π±, K± and B±, in scenarios where extra neutrinos are heavy and can be on their mass shell. We discuss semileptonic and leptonic decays of this kind. We investigate possible ways of using these decays to distinguish between the Dirac and Majorana character of neutrinos. Further, we argue that there are significant possibilities of detecting CP violation in such decays when at least two almost degenerate Majorana neutrinos are involved. This latter type of scenario fits well into the known neutrino minimal standard model (νMSM), which could simultaneously explain the Dark Matter and Baryon Asymmetry of the Universe.Symmetry2015-05-2672Article10.3390/sym70207267267732073-89942015-05-26doi: 10.3390/sym7020726Gorazd CvetičClaudio DibChoong KimJilberto Zamora-Saá<![CDATA[Mathematics, Vol. 3, Pages 398-411: Implicit Fractional Differential Equations via the Liouville–Caputo Derivative]]>
http://www.mdpi.com/2227-7390/3/2/398
We study an initial value problem for an implicit fractional differential equation with the Liouville–Caputo fractional derivative. By using fixed point theory and an approximation method, we obtain some existence and uniqueness results.Mathematics2015-05-2532Article10.3390/math30203983984112227-73902015-05-25doi: 10.3390/math3020398Juan NietoAbelghani OuahabVenktesh Venktesh