
Special Issue "Intelligent Systems for Exploiting Real Sensed Data in Smart City Applications"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (8 October 2018).

Special Issue Editor

Prof. Dr. Enrique Alba
Guest Editor
ITIS Software, Universidad de Málaga, E.T.S.I. Informática (3-2-12), B. Louis Pasteur s/n, 29071 Málaga, Spain
Interests: smart cities; holistic vision of city applications; smart sensors and actuators; software quality for smart city; intelligent systems; security by design; complex optimization; parallel computing systems; smart mobility; prediction; machine learning; data science; artificial intelligence; big data; app and web services for complex services; metaheuristics; bioinspired techniques; multiobjective; dynamic algorithms; decentralized techniques; parallel algorithms; HPC; hybrid algorithms; theory; synergies between theory and practice

Special Issue Information

Dear Colleagues,

Smart cities are everywhere today. The initiatives to build better services for our citizens involve city managers, companies, research centers, and academia. A paramount part of the work in smart cities is linked to the use of sensors, either static ones (measuring vehicles, energy consumption, or air pollution) or dynamic ones (smartphones, wearables, floating cars, etc.). Open data and big data are also key components of the cyber-physical system; they represent a needed layer of information for the design, implementation, and exploitation of new desktop and app services towards a smarter city.

In this Special Issue, we welcome applications using real sensed data from a city to create innovative, original city services in the standard six axes of smart cities: Economy, People, Living, Mobility, Environment, and Governance. We expect new final applications showing unseen efficacy and efficiency in delivering high-quality services to managers and citizens based on intelligent systems. Thus, we encourage submissions using machine learning, bio-inspired algorithms, fuzzy logic, recurrent neural networks, extensions to traditional techniques, and artificial intelligence in general. We also welcome the use of new technologies to run these services, such as FIWARE, cloud computing, IoT techniques, microservices, Spark, blockchain, and the whole related ecosystem of tools that help intelligent algorithms reach ambitious goals and transform our society for the better.

We welcome submissions with a holistic vision, focusing not just on one problem but on its relations to other problems and their impact on the city. Both research papers and review articles will be considered. If you are interested in contributing to this Special Issue, we would very much appreciate receiving the tentative title of your contribution by email first.

Prof. Dr. Enrique Alba
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (13 papers)


Research

Article
Exploiting Mobile Edge Computing for Enhancing Vehicular Applications in Smart Cities
Sensors 2019, 19(5), 1073; https://doi.org/10.3390/s19051073 - 02 Mar 2019
Cited by 5 | Viewed by 1524
Abstract
Mobile edge computing (MEC) has been recently proposed to bring computing capabilities closer to mobile endpoints, with the aim of providing low latency and real-time access to network information via applications and services. Several attempts have been made to integrate MEC in intelligent transportation systems (ITS), including new architectures, communication frameworks, deployment strategies and applications. In this paper, we explore existing architecture proposals for integrating MEC in vehicular environments, which would allow the evolution of the next generation ITS in smart cities. Moreover, we classify the desired applications into four major categories. We rely on a MEC architecture with three layers to propose a data dissemination protocol, which can be utilized by traffic safety and travel convenience applications in vehicular networks. Furthermore, we provide a simulation-based prototype to evaluate the performance of our protocol. Simulation results show that our proposed protocol can significantly improve the performance of data dissemination in terms of data delivery, communication overhead and delay. In addition, we highlight challenges and open issues to integrate MEC in vehicular networking environments for further research. Full article

Article
Model Driven Development Applied to Complex Event Processing for Near Real-Time Open Data
Sensors 2018, 18(12), 4125; https://doi.org/10.3390/s18124125 - 24 Nov 2018
Cited by 4 | Viewed by 1418
Abstract
Nowadays, data are being produced like never before, because the use of the Internet of Things, social networks, and communication in general is increasing exponentially. Many of these data, especially those from public administrations, are freely offered using the open data concept, where data are published to improve their reutilisation and transparency. Initially, the data involved information that is not updated continuously, such as budgets, tourist information, office information, pharmacy information, etc. This kind of information does not change over large periods of time, such as days, weeks, or months. However, when open data are produced in near real-time, as with air quality sensors or people counters, suitable methodologies and tools to identify, consume, and analyse them are lacking. This work presents a methodology to tackle the analysis of open data sources using Model-Driven Development (MDD) and Complex Event Processing (CEP), which help users to raise the abstraction level utilised to manage and analyse open data sources. This means that users can manage heterogeneous and complex technology by using domain concepts defined by a model that can be used to generate specific code. Thus, this methodology is supported by a domain-specific language (DSL) called OpenData2CEP, which includes a metamodel, a graphical concrete syntax, and a model-to-text transformation to specific platforms, such as complex event processing engines. Finally, the methodology and the DSL have been applied to two near real-time contexts: the analysis of air quality for citizens’ proposals and the analysis of earthquake data. Full article

Article
BiPred: A Bilevel Evolutionary Algorithm for Prediction in Smart Mobility
Sensors 2018, 18(12), 4123; https://doi.org/10.3390/s18124123 - 24 Nov 2018
Cited by 7 | Viewed by 1405
Abstract
This article develops the design, installation, exploitation, and final utilization of intelligent techniques, hardware, and software for understanding mobility in a modern city. We focus on a smart-campus initiative at the University of Malaga as the scenario for building this cyber–physical system at a low cost, and then present the details of BiPred, a newly proposed evolutionary algorithm used to better train machine-learning techniques. We model and solve the task of reducing the size of the dataset used for learning about campus mobility. Our conclusions show an important reduction of the data required to learn mobility patterns, by more than 90%, while at the same time improving the precision of the predictions of the applied machine-learning method (by up to 15%). All this was done along with the construction of a real system in a city, resulting in a comprehensive piece of work on smart cities using sensors. Full article

Article
Sensing Urban Transportation Events from Multi-Channel Social Signals with the Word2vec Fusion Model
Sensors 2018, 18(12), 4093; https://doi.org/10.3390/s18124093 - 22 Nov 2018
Cited by 9 | Viewed by 1280
Abstract
Social sensors perceive the real world through social media and online web services, which have the advantages of low cost and large coverage over traditional physical sensors. In intelligent transportation research, sensing and analyzing such social signals provides a new path to monitor, control, and optimize transportation systems. However, current research is largely focused on using single-channel online social signals to extract and sense traffic information. Clearly, sensing and exploiting multi-channel social signals could provide a deeper understanding of traffic incidents. In this paper, we utilize cross-platform online data, i.e., Sina Weibo and news, as multi-channel social signals, and propose a word2vec-based event fusion (WBEF) model for sensing, detecting, representing, linking, and fusing urban traffic incidents. Thus, each traffic incident can be comprehensively described from multiple aspects, and finally the whole picture of urban traffic events can be obtained and visualized. The proposed WBEF architecture was trained on about 1.15 million multi-channel online data items from Qingdao (a coastal city in China), and the experiments show our method surpasses the baseline model, achieving an 88.1% F1 score in urban traffic incident detection. The model also demonstrates its effectiveness in the open scenario test. Full article
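As a rough illustration of the event-linking step described above, the sketch below links a Weibo event embedding to news event embeddings via cosine similarity. The `link_events` helper, the toy vectors, and the 0.8 threshold are illustrative assumptions, not the paper's actual WBEF implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def link_events(weibo_vec, news_vecs, threshold=0.8):
    """Return indices of news events whose embedding is similar
    enough to the Weibo event embedding (threshold is illustrative)."""
    return [i for i, v in enumerate(news_vecs) if cosine(weibo_vec, v) > threshold]
```

In practice the embeddings would come from a word2vec model averaged over event keywords; here they are plain lists of floats.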

Article
Identifying Stops and Moves in WiFi Tracking Data
Sensors 2018, 18(11), 4039; https://doi.org/10.3390/s18114039 - 19 Nov 2018
Cited by 1 | Viewed by 1162
Abstract
There are multiple methods for tracking individuals, but the classical ones, such as GPS or video surveillance systems, do not scale or have large costs. The need for large-scale tracking, of thousands or even millions of individuals, over large areas such as cities requires the use of alternative techniques. WiFi tracking is a scalable solution that has gained attention recently. This method permits unobtrusive tracking of large crowds at a reduced cost. However, extracting knowledge from the data gathered through WiFi tracking is not simple, due to the low positional accuracy and the dependence on signals generated by the tracked device, which are irregular and sparse. To facilitate further data analysis, we can partition individual trajectories into periods of stops and moves. This abstraction level is fundamental, and it opens the way for answering complex questions about visited locations or even social behavior. Determining stops and moves has previously been addressed for tracking data gathered using GPS. GPS trajectories have higher positional accuracy at a fixed, higher frequency as compared to trajectories obtained through WiFi. However, even with the increase in accuracy, the problem of separating traces into periods of stops and moves remains similar to the one we encountered for WiFi tracking. In this paper, we study three algorithms for determining stops and moves in GPS-based datasets and explore their applicability to WiFi-based data. We propose possible improvements to the best-performing algorithm considering the specifics of WiFi tracking data. Full article
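The stop/move partitioning discussed above can be illustrated with a minimal distance-and-duration heuristic. This is a generic sketch in the spirit of classical GPS stop detection, not one of the three algorithms the paper compares, and the thresholds are placeholder values.

```python
import math

def detect_stops(points, max_dist=50.0, min_duration=300.0):
    """Segment a trajectory of (t, x, y) samples into stops.

    A stop is a maximal run of points that all lie within `max_dist`
    metres of the run's first point and that spans at least
    `min_duration` seconds. Returns (start, end) index pairs;
    everything outside the stops is a move.
    """
    stops, i, n = [], 0, len(points)
    while i < n:
        j = i
        t0, x0, y0 = points[i]
        # extend the candidate stop while points stay near the anchor
        while j + 1 < n:
            t, x, y = points[j + 1]
            if math.hypot(x - x0, y - y0) > max_dist:
                break
            j += 1
        if points[j][0] - t0 >= min_duration:
            stops.append((i, j))
            i = j + 1
        else:
            i += 1
    return stops
```

With sparse, irregular WiFi sightings, the same skeleton applies, but the thresholds (and the choice of anchor point) would need to be adapted to the lower positional accuracy.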

Article
Sensor Fusion for Recognition of Activities of Daily Living
Sensors 2018, 18(11), 4029; https://doi.org/10.3390/s18114029 - 19 Nov 2018
Cited by 26 | Viewed by 1817
Abstract
Activities of daily living (ADLs) are a significant predictor of the independence and functional capabilities of an individual. Measurements of ADLs help to indicate one’s health status and capacity for quality living. Currently, the most common ways to capture ADL data are far from automated, including costly 24/7 observation by a designated caregiver, laborious self-reporting by the user, or filling out a written ADL survey. Fortunately, ubiquitous sensors exist in our surroundings and on electronic devices in the Internet of Things (IoT) era. We propose the ADL Recognition System, which utilizes the sensor data from a single point of contact, such as a smartphone, and conducts time-series sensor fusion processing. Raw data are collected by the ADL Recorder App running constantly on a user’s smartphone with multiple embedded sensors, including the microphone, Wi-Fi scan module, heading orientation of the device, light proximity, step detector, accelerometer, gyroscope, magnetometer, etc. Key technologies in this research cover audio processing, Wi-Fi indoor positioning, proximity sensing localization, and time-series sensor data fusion. By merging the information of multiple sensors with a time-series error correction technique, the ADL Recognition System is able to accurately profile a person’s ADLs and discover their life patterns. This paper is particularly concerned with care for older adults who live independently. Full article
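A toy illustration of the fusion idea: given per-sensor activity labels at each time step, a majority vote yields a fused label sequence. This simple stand-in is an assumption for illustration only; the paper's time-series fusion with error correction is considerably more elaborate.

```python
from collections import Counter

def fuse_labels(streams):
    """Fuse per-sensor label streams by majority vote at each time step.

    `streams` is a list of equal-length label sequences, one per sensor;
    the result is one fused label per time step.
    """
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*streams)]
```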

Article
A Hybrid Approach to Short-Term Load Forecasting Aimed at Bad Data Detection in Secondary Substation Monitoring Equipment
Sensors 2018, 18(11), 3947; https://doi.org/10.3390/s18113947 - 14 Nov 2018
Cited by 4 | Viewed by 1255
Abstract
Bad data resulting from measurement errors in secondary substation (SS) monitoring equipment is difficult to detect and negatively affects power system state estimation performance, both by increasing the computational burden and by jeopardizing the state estimation accuracy. In this paper, a short-term load forecasting (STLF) hybrid strategy based on singular spectrum analysis (SSA) in combination with artificial neural networks (ANN) is presented. This STLF approach is aimed at detecting, identifying, and eliminating and/or correcting such bad data before it is provided to the state estimator. The approach is developed to improve the accuracy of the load forecasts, and it is tested against real power load data provided by electricity suppliers. Depending on the week considered, mean absolute percentage error (MAPE) values ranging from 1.6% to 3.4% are achieved for STLF. Different systematic errors, such as gain and offset error levels and outliers, are successfully detected with a hit rate of 98%, and the corresponding measurements are corrected before they are sent to the control center for state estimation purposes. Full article
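The MAPE figure quoted above, and a simple residual-based bad-data flag, can be sketched as follows. The 20% deviation threshold and the `flag_bad_data` helper are illustrative assumptions, not the paper's SSA+ANN detector.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def flag_bad_data(measured, forecast, threshold=0.2):
    """Flag measurements whose relative deviation from the forecast
    exceeds `threshold` (20% here, an illustrative value)."""
    return [abs(m - f) / abs(f) > threshold for m, f in zip(measured, forecast)]
```

The idea is that a trustworthy short-term forecast acts as a reference: measurements that deviate far from it are candidates for correction before state estimation.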

Article
Streaming MASSIF: Cascading Reasoning for Efficient Processing of IoT Data Streams
Sensors 2018, 18(11), 3832; https://doi.org/10.3390/s18113832 - 08 Nov 2018
Cited by 10 | Viewed by 1540
Abstract
In the Internet of Things (IoT), multiple sensors and devices generate heterogeneous streams of data. To perform meaningful analysis over multiple of these streams, stream processing needs to support expressive reasoning capabilities to infer implicit facts and temporal reasoning to capture temporal dependencies. However, current approaches cannot provide the required reasoning expressivity while detecting time dependencies over high-frequency data streams: there is still a mismatch between the complexity of processing and the rate at which data is produced in volatile domains. Therefore, we introduce Streaming MASSIF, a Cascading Reasoning approach performing expressive reasoning and complex event processing over high-velocity streams. Cascading Reasoning is a vision that addresses the problem of expressive reasoning over high-frequency streams through a hierarchical approach consisting of multiple layers, where each layer minimizes the data processed and increases the complexity of the data processing; this vision has not previously been fully realized. Streaming MASSIF is a layered approach allowing IoT services to subscribe to high-level and temporally dependent concepts in volatile data streams. We show that Streaming MASSIF is able to handle high-velocity streams of up to hundreds of events per second, in combination with expressive reasoning and complex event processing. Streaming MASSIF thus realizes the Cascading Reasoning vision, combining highly expressive reasoning with high processing throughput. Furthermore, we formalize semantically how the different layers in our Cascading Reasoning approach collaborate. Full article

Article
Real Evaluations Tractability using Continuous Goal-Directed Actions in Smart City Applications
Sensors 2018, 18(11), 3818; https://doi.org/10.3390/s18113818 - 07 Nov 2018
Viewed by 1426
Abstract
One of the most important challenges of Smart City applications is to adapt the system to interact with non-expert users. Robot imitation frameworks aim to simplify and reduce the time needed for robot programming by allowing users to program directly through action demonstrations. In classical robot imitation frameworks, actions are modelled using joint or Cartesian space trajectories. These accurately describe actions where geometrical characteristics are relevant, such as fixed trajectories from one pose to another. Other features, such as visual ones, are not always well represented with these purely geometrical approaches. Continuous Goal-Directed Actions (CGDA) is an alternative to these conventional methods, as it encodes actions as changes of any selected feature that can be extracted from the environment. As a consequence, the robot joint trajectories for execution must be fully computed to comply with this feature-agnostic encoding. This is achieved using Evolutionary Algorithms (EA), which usually require too many evaluations to perform this evolution step on the actual robot. Current strategies involve performing the evaluations in a simulated environment and transferring only the final joint trajectory to the actual robot. Smart City applications involve working in highly dynamic and complex environments, where having a precise model is not always achievable. Our goal is to study the tractability of performing these evaluations directly in a real-world scenario. Two different approaches to reduce the number of evaluations using EA are proposed and compared. In the first approach, Particle Swarm Optimization (PSO)-based methods are studied and compared within the CGDA framework: naïve PSO, Fitness Inheritance PSO (FI-PSO), and Adaptive Fuzzy Fitness Granulation with PSO (AFFG-PSO). The second approach studies the introduction of geometrical and velocity constraints within the CGDA framework. The effects of both approaches are analyzed and compared on the “wax” and “paint” actions, two commonly studied CGDA use cases. The results show an important reduction in the number of required evaluations. Full article
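For readers unfamiliar with PSO, a minimal textbook variant looks like the sketch below; this is a generic minimiser, not the FI-PSO or AFFG-PSO methods evaluated in the paper. Note that the inner loop performs one fitness evaluation per particle per iteration, and it is exactly this evaluation count that the paper's techniques try to reduce.

```python
import random

def pso(f, dim, n_particles=30, iters=100, seed=1,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Naive particle swarm optimisation (minimisation).

    Standard inertia-weight update; parameters are common textbook
    defaults, not values from the paper.
    """
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]            # per-particle best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            fx = f(xs[i])  # one fitness evaluation: the expensive step on a real robot
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f
```

Fitness inheritance skips some of those `f(xs[i])` calls by estimating a particle's fitness from its neighbours' values, which is what makes real-robot evaluation tractable.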

Article
Volunteers in the Smart City: Comparison of Contribution Strategies on Human-Centered Measures
Sensors 2018, 18(11), 3707; https://doi.org/10.3390/s18113707 - 31 Oct 2018
Cited by 3 | Viewed by 1247
Abstract
The provision of smart city services often relies on users' contributions, e.g., of data, which can be costly for the users in terms of privacy. Privacy risks, as well as unfair distribution of benefits to users, should be minimized, as they undermine user participation, which is crucial for the success of smart city applications. This paper investigates privacy, fairness, and social welfare in smart city applications by means of computer simulations grounded in real-world data, i.e., smart meter readings and participatory sensing. We generalize the use of public good theory as a model for resource management in smart city applications by proposing a design principle that is applicable across application scenarios where the provision of a service depends on user contributions. We verify its applicability by showing its implementation in two scenarios: a smart grid and a traffic congestion information system. Following this design principle, we evaluate different classes of algorithms for resource management with respect to human-centered measures, i.e., privacy, fairness, and social welfare, and identify algorithm-specific trade-offs that are scenario independent. These results could be of interest to smart city application designers in choosing a suitable algorithm given a scenario-specific set of requirements, and to users in choosing a service based on an algorithm that matches their privacy preferences. Full article

Article
Real-Time Traffic Risk Detection Model Using Smart Mobile Device
Sensors 2018, 18(11), 3686; https://doi.org/10.3390/s18113686 - 30 Oct 2018
Cited by 4 | Viewed by 1314
Abstract
Automatically recognizing dangerous situations for a vehicle and quickly sharing this information with nearby vehicles is the most essential technology for road safety. In this paper, we propose a real-time deceleration pattern-based traffic risk detection system using smart mobile devices. Our system detects a dangerous situation through machine learning on the deceleration patterns of a driver by considering the vehicle’s headway distance. In order to estimate the vehicle’s headway distance, we introduce a practical vehicle detection method that exploits the shadows on the road and the taillights of the vehicle. For deceleration pattern analysis, the proposed system leverages three machine learning models: neural network, random forest, and clustering. Based on these learning models, we propose two types of decision models to make the final decisions on dangerous situations, and suggest three types of improvements to continuously enhance the traffic risk detection model. Finally, we analyze the accuracy of the proposed model based on actual driving data collected by driving on Seoul city roadways and the Gyeongbu expressway. We also propose an optimal solution for traffic risk detection by analyzing the performance between the proposed decision models and the improvement techniques. Full article

Article
Commissioning of the Controlled and Automatized Testing Facility for Human Behavior and Control (CASITA)
Sensors 2018, 18(9), 2829; https://doi.org/10.3390/s18092829 - 27 Aug 2018
Cited by 5 | Viewed by 1207
Abstract
Human behavior is one of the most challenging aspects in the understanding of building physics. The need to evaluate it requires controlled environments and facilities in which researchers can test their methods. In this paper, we present the commissioning of the Controlled and Automatized Testing Facility for Human Behavior (CASITA). This is a controlled-space emulation of an office or flat, with more than 20 environmental sensors, 5 electrical meters, and 10 actuators. The contribution shown in this paper is the development of an infrastructure–Artificial Intelligence (AI) model pair that is fully integrated for the study of a variety of human energy use aspects. This facility will help to perform studies of human behavior in a controlled space. To verify this, we tested the emulation for 60 days, during which equipment was turned on and off, the settings of the conditioning system were modified remotely, and lighting operation mimicked real behaviors. This commissioning period generated 74.4 GB of raw data, including high-frequency measurements. This work has shown that CASITA performs beyond expectations and that its sensors and actuators can enable research in a variety of disciplines related to building physics and human behavior. We have also tested the PROPHET software, previously used in other disciplines, and found that it can be an excellent complement to CASITA for experiments that require the prediction of several pertinent variables in a given study; a further contribution is the proof that this package is an ideal “soft” addition to the infrastructure. A case study forecasting energy consumption has been performed, concluding that the facility and the PROPHET software have great potential for research and outstanding accuracy. Full article

Article
A Deep CNN-LSTM Model for Particulate Matter (PM2.5) Forecasting in Smart Cities
Sensors 2018, 18(7), 2220; https://doi.org/10.3390/s18072220 - 10 Jul 2018
Cited by 174 | Viewed by 6666
Abstract
In modern society, air pollution is an important topic, as this pollution exerts a critically bad influence on human health and the environment. Among air pollutants, Particulate Matter (PM2.5) consists of suspended particles with a diameter equal to or less than 2.5 μm. Sources of PM2.5 can be coal-fired power generation, smoke, or dust. These suspended particles in the air can damage the respiratory and cardiovascular systems of the human body, which may further lead to other diseases such as asthma, lung cancer, or cardiovascular disease. To monitor and estimate the PM2.5 concentration, a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are combined and applied to the PM2.5 forecasting system. To compare the overall performance of each algorithm, four measurement indexes, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Pearson correlation coefficient, and Index of Agreement (IA), are applied to the experiments in this paper. Compared with other machine learning methods, the experimental results show that the forecasting accuracy of the proposed CNN-LSTM model (APNet) is the highest, and its feasibility and practicability for forecasting the PM2.5 concentration are also verified. The main contribution of this paper is the development of a deep neural network model that integrates the CNN and LSTM architectures and forecasts PM2.5 concentration from historical data such as cumulated hours of rain, cumulated wind speed, and past PM2.5 concentrations. In the future, this study can also be applied to the prevention and control of PM2.5. Full article
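Sequence models such as the CNN-LSTM above are typically trained on sliding windows of historical measurements. The helper below shows that windowing step only, for a univariate series; the window length and forecast horizon are free choices here, not values from the paper.

```python
def make_windows(series, n_in, n_out=1):
    """Turn a series into (input window, target) pairs for sequence models.

    Each sample uses `n_in` consecutive past values as input and the
    following `n_out` values as the forecasting target.
    """
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return X, y
```

A multivariate version (rain, wind speed, and PM2.5 per time step) works the same way, with each window element being a feature vector instead of a scalar.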
