Special Issue "Artificial Intelligence Methods for Smart Cities"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 15 February 2023 | Viewed by 5459

Special Issue Editors

Prof. Dr. Salvatore Carta
Guest Editor
Department of Mathematics and Computer Science, University of Cagliari, Via Ospedale 72, 09124 Cagliari, Italy
Interests: artificial intelligence; deep learning; recommender systems; financial forecasting; anomaly detection
Dr. Silvio Barra
Guest Editor
Department of Electrical Engineering and Information Technology, University of Naples Federico II, Corso Umberto I, 40, 80138 Naples, Italy
Interests: pattern recognition; biometrics; image processing; financial forecasting; deep learning
Dr. Alessandro Sebastian Podda
Guest Editor
Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy
Interests: artificial intelligence; deep learning; information security; financial forecasting; blockchain; smart contracts

Special Issue Information

Dear Colleagues,

Today, smart cities are steadily becoming more prominent, promising to improve citizens' daily lives and support their habits by offering subject-oriented services. In urban scenarios, this improvement is mainly directed at people and vehicles, which may benefit from surveillance systems that provide services and guarantee safe living environments.

Novel artificial intelligence methods, techniques, and systems, particularly those based on machine/deep learning, computer vision, and the Internet of Things, are emerging to solve real-life problems ranging from video surveillance, road safety and traffic monitoring, and the prevention of accidents or critical events, to intelligent transportation and the management of public services. In fact, AI-based approaches, also leveraging IoT or cloud networks, may serve as underlying methods to tackle this wide range of problems.

This Special Issue aims to gather works proposing systems, approaches, solutions, and experimental results that contribute in an original and highly innovative way to any topic related to smart cities, in order to stimulate and increase scientific production in this rapidly growing area of research.

Prof. Dr. Salvatore Carta
Dr. Silvio Barra
Dr. Alessandro Sebastian Podda
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence methods for smart cities
  • Anomaly detection
  • People/object detection
  • Gait analysis in uncontrolled scenarios
  • Background/foreground segmentation in urban scenarios
  • Vehicle/drone tracking
  • Video surveillance in smart cities
  • Crowd analysis
  • Traffic light management
  • Smart cities databases
  • Car reidentification
  • License plate recognition
  • Object trajectory estimation/prediction
  • Biometric recognition/verification in smart environments
  • IoT architectures and applications in smart cities and smart environments
  • Intelligent transportation systems

Published Papers (7 papers)


Research


Article
Tracking Missing Person in Large Crowd Gathering Using Intelligent Video Surveillance
Sensors 2022, 22(14), 5270; https://doi.org/10.3390/s22145270 - 14 Jul 2022
Viewed by 426
Abstract
Locating a missing child or elderly person in a large gathering through face recognition in videos is still challenging because of various dynamic factors. In this paper, we present an intelligent mechanism for tracking missing persons in an unconstrained large gathering scenario of Al-Nabawi Mosque, Madinah, KSA. The proposed mechanism is unique in two aspects. First, various proposals in the literature deal with face detection and recognition in high-quality images of a large crowd, but none of them has tested the tracking of a missing person in low-resolution images of a large gathering scenario. Second, the proposed mechanism employs four phases: (a) reporting a missing person online through a web and mobile app based on spatio-temporal features; (b) geo-fence set estimation to reduce the search space; (c) face detection using a fusion of Viola-Jones cascades (LBP, CART, and Haar) to optimize the localization of face regions; and (d) face recognition to find the missing person based on the profile image of the reported missing person. The overall results of the proposed intelligent tracking mechanism suggest good performance when tested on a challenging dataset of 2208 low-resolution images of a large crowd gathering.
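
As a purely illustrative sketch of the cascade-fusion idea mentioned in this abstract (not the authors' implementation), several OpenCV Viola-Jones cascades can be run on the same frame and their detections pooled and merged; the cascade files and grouping parameters below are assumptions.

```python
# Illustrative sketch: fuse detections from several Viola-Jones cascades by
# pooling their boxes and merging overlaps. Cascade choices are assumptions;
# lbpcascade_frontalface.xml is not bundled with every OpenCV build.
import cv2

CASCADE_FILES = [
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml",  # Haar features
    cv2.data.haarcascades + "lbpcascade_frontalface.xml",           # LBP features (if present)
]

def detect_faces_fused(gray_image):
    """Pool the boxes from every available cascade and merge overlapping ones."""
    boxes = []
    for path in CASCADE_FILES:
        cascade = cv2.CascadeClassifier(path)
        if cascade.empty():          # skip cascades not shipped with this install
            continue
        found = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
        boxes.extend([list(map(int, b)) for b in found])
    # Duplicating the list lets groupRectangles keep boxes seen by only one cascade
    merged, _ = cv2.groupRectangles(boxes * 2, groupThreshold=1, eps=0.2)
    return merged
```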

Article
Fatigue Driving Detection Method Based on Combination of BP Neural Network and Time Cumulative Effect
Sensors 2022, 22(13), 4717; https://doi.org/10.3390/s22134717 - 22 Jun 2022
Cited by 1 | Viewed by 464
Abstract
Fatigue driving has always received a lot of attention, but few studies have focused on the fact that human fatigue is a cumulative process over time, and no models are available to reflect this phenomenon. Furthermore, the problem of incorrect detection due to facial expressions is still not well addressed. In this article, a model based on a BP neural network and the time cumulative effect was proposed to solve these problems. Experimental data were used to carry out this work and validate the proposed method. First, the Adaboost algorithm was applied to detect faces, and the Kalman filter algorithm was used to trace face movement. Then, a cascade regression tree-based method was used to detect the 68 facial landmarks, and an improved method combining key points and image processing was adopted to calculate the eye aspect ratio (EAR). After that, a BP neural network model was developed and trained on three characteristics: the longest period of continuous eye closure, the number of yawns, and the percentage of eye closure time (PERCLOS); the detection results without and with facial expressions were then discussed and analyzed. Finally, by introducing the Sigmoid function, a fatigue detection model considering the time accumulation effect was established, and the drivers' fatigue state was identified segment by segment through the recorded video. Compared with the traditional BP neural network model, the detection accuracies of the proposed model without and with facial expressions increased by 3.3% and 8.4%, respectively. The number of incorrect detections in the awake state also decreased markedly. The experimental results show that the proposed model can effectively filter out incorrect detections caused by facial expressions and reflect that driver fatigue is a time-accumulating process.
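
The eye aspect ratio (EAR) and PERCLOS features referenced above are commonly computed as follows; this is a minimal sketch assuming the standard 6-point eye landmark ordering and an illustrative closed-eye threshold, not the paper's exact implementation.

```python
# Minimal sketch of EAR and PERCLOS, assuming the common 6-point eye landmark
# ordering p1..p6 (p1/p4 = eye corners, p2-p6 and p3-p5 = eyelid pairs).
import numpy as np

def eye_aspect_ratio(eye_landmarks):
    """eye_landmarks: (6, 2) array of landmark coordinates; low EAR indicates a closed eye."""
    p1, p2, p3, p4, p5, p6 = np.asarray(eye_landmarks, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def perclos(ear_series, closed_threshold=0.2):
    """Fraction of frames whose EAR falls below the (assumed) closed-eye threshold."""
    ear_series = np.asarray(ear_series, dtype=float)
    return float(np.mean(ear_series < closed_threshold))
```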

Article
Real-Time Abnormal Object Detection for Video Surveillance in Smart Cities
Sensors 2022, 22(10), 3862; https://doi.org/10.3390/s22103862 - 19 May 2022
Cited by 1 | Viewed by 747
Abstract
With the adoption of video surveillance for object detection in many areas, monitoring abnormal behavior across several cameras requires constant human attention from a single camera operator, which is a tedious task. In multiview cameras, accurately detecting different types of guns and knives and distinguishing them from other video surveillance objects in real-time scenarios is difficult. Most detection cameras are resource-constrained devices with limited computational capacity. To mitigate this problem, we propose a resource-constrained, lightweight subclass detection method based on a convolutional neural network to classify, locate, and detect different types of guns and knives effectively and efficiently in a real-time environment. The detection classifier is a multiclass subclass detection convolutional neural network used to classify object frames into sub-classes such as abnormal and normal. The best state-of-the-art frameworks for detecting either a handgun or a knife achieve a mean average precision of 84.21% or 90.20% on a single camera view. After extensive experiments, the best precision obtained by the proposed method for detecting different types of guns and knives was 97.50% on the ImageNet dataset and IMFDB, 90.50% on the Open Images dataset, 93% on the Olmos dataset, and 90.7% on the multiview cameras. The resource-constrained device showed a satisfactory result, with a precision score of 85.5% for detection in a multiview camera.
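
As a hedged sketch of what a resource-constrained subclass classifier of this kind might look like (the layer sizes and the normal/handgun/knife class layout are assumptions, not the authors' architecture):

```python
# Illustrative sketch of a lightweight CNN subclass classifier (not the paper's model).
# Class layout (normal, handgun, knife) and layer sizes are assumptions.
import torch
import torch.nn as nn

class LightweightSubclassCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # keeps the head size independent of input resolution
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage: logits = LightweightSubclassCNN()(torch.randn(1, 3, 224, 224))
```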

Article
Vision-Based Pedestrian’s Crossing Risky Behavior Extraction and Analysis for Intelligent Mobility Safety System
Sensors 2022, 22(9), 3451; https://doi.org/10.3390/s22093451 - 30 Apr 2022
Viewed by 552
Abstract
Crosswalks present a major threat to pedestrians, but we lack dense behavioral data to investigate the risks they face. One breakthrough is to analyze potential risky behaviors of road users (e.g., near-miss collisions), which can provide clues for actions such as deploying additional safety infrastructure. To capture these subtle, potentially risky situations and behaviors, vision sensors make it easier to study and analyze potential traffic risks. In this study, we introduce a new approach to obtain the potential risky behaviors of vehicles and pedestrians from CCTV cameras deployed on the roads. This study makes three novel contributions: (1) recasting surveillance CCTV cameras to contribute to the study of the crossing environment; (2) creating one sequential process from partitioning video to extracting behavioral features; and (3) analyzing the extracted behavioral features and clarifying the interactive moving patterns by crossing environment. These data are the foundation for understanding road users' risky behaviors, and they further support decision makers in making efficient decisions to improve and create a safer road environment. We validate the feasibility of this model by applying it to video footage collected from crosswalks under various conditions in Osan City, Republic of Korea.
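
As an illustrative example of the kind of behavioral feature such a pipeline can extract, a simple near-miss proxy is the minimum pedestrian-vehicle gap over co-observed frames; the trajectory format and distance threshold below are assumptions, not the paper's definitions.

```python
# Illustrative sketch: a simple near-miss proxy computed from two tracked trajectories.
# Trajectory format {frame_index: (x, y)} and the 1.5 m threshold are assumptions.
import math

def min_gap(ped_track, veh_track):
    """Smallest pedestrian-vehicle distance over frames where both are observed."""
    common = set(ped_track) & set(veh_track)
    if not common:
        return float("inf")
    return min(math.dist(ped_track[f], veh_track[f]) for f in common)

def is_near_miss(ped_track, veh_track, threshold_m=1.5):
    """Flag an interaction whose minimum gap falls below the chosen threshold."""
    return min_gap(ped_track, veh_track) < threshold_m
```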

Article
Map-Matching-Based Localization Using Camera and Low-Cost GPS for Lane-Level Accuracy
Sensors 2022, 22(7), 2434; https://doi.org/10.3390/s22072434 - 22 Mar 2022
Cited by 1 | Viewed by 555
Abstract
For self-driving systems or autonomous vehicles (AVs), accurate lane-level localization is important for performing complex driving maneuvers. Classical GNSS-based methods are usually not accurate enough to provide lane-level localization that supports the AV's maneuvers. LiDAR-based methods can provide accurate localization; however, the price of LiDARs is still one of the big issues preventing this kind of solution from becoming a widespread commodity. Therefore, in this work, we propose a low-cost solution for lane-level localization that uses a vision-based system and a low-cost GPS to achieve high-precision lane-level localization. Real-world, real-time experiments demonstrate that the proposed method achieves good lane-level localization accuracy, outperforming solutions based only on GPS.
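
A minimal sketch of the map-matching idea, assuming a flat local coordinate frame, lane centerlines stored as point arrays, and a camera-measured lateral offset (none of which reflect the authors' exact formulation), could look like this:

```python
# Illustrative sketch of lane-level map matching: snap a noisy GPS fix to the
# nearest lane centerline point, then apply a camera-measured lateral offset.
# The local x/y frame and the data structures are assumptions for illustration.
import numpy as np

def match_to_lane(gps_xy, lane_centerlines, camera_lateral_offset_m=0.0):
    """lane_centerlines: list of (N_i, 2) arrays of centerline points in metres.

    Returns (lane_index, corrected_position).
    """
    gps_xy = np.asarray(gps_xy, dtype=float)
    best_lane, best_idx, best_dist = None, None, np.inf
    for lane_id, line in enumerate(lane_centerlines):
        d = np.linalg.norm(np.asarray(line, dtype=float) - gps_xy, axis=1)
        i = int(np.argmin(d))
        if d[i] < best_dist:
            best_lane, best_idx, best_dist = lane_id, i, d[i]
    lane = np.asarray(lane_centerlines[best_lane], dtype=float)
    # Local lane direction from neighbouring points and its left-hand normal
    direction = lane[min(best_idx + 1, len(lane) - 1)] - lane[max(best_idx - 1, 0)]
    direction = direction / (np.linalg.norm(direction) + 1e-9)
    normal = np.array([-direction[1], direction[0]])
    # Shift the matched point by the camera-measured lateral offset (sign convention assumed)
    return best_lane, lane[best_idx] + camera_lateral_offset_m * normal
```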

Article
Re-Orienting Smartphone-Collected Car Motion Data Using Least-Squares Estimation and Machine Learning
Sensors 2022, 22(4), 1606; https://doi.org/10.3390/s22041606 - 18 Feb 2022
Viewed by 980
Abstract
Smartphone sensors can collect data in many different contexts. They make it feasible to obtain large amounts of data at little or no cost because most people own mobile phones. In this work, we focus on collecting motion data in the car using a smartphone. Motion sensors, such as accelerometers and gyroscopes, can help obtain information about the vehicle's dynamics. However, the varying positioning of the smartphone in the car makes the sensed data difficult to interpret due to an unknown orientation, rendering the collected data useless. Thus, we propose an approach to automatically re-orient smartphone data collected in the car to a standardized orientation (i.e., with zero yaw, roll, and pitch angles with respect to the vehicle). We use a combination of a least-squares plane approximation and a machine learning model to infer the relative orientation angles. Then we populate rotation matrices and perform the data rotation. We trained the model by collecting data using a vehicle physics simulator.
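
As a hedged sketch of part of this idea, the pitch/roll component can be recovered by aligning the mean gravity vector from accelerometer samples at rest with the vehicle's vertical axis (a simplification of the paper's least-squares plane approximation; the yaw angle, which the authors estimate with a machine learning model, is left unresolved here):

```python
# Illustrative sketch: estimate pitch/roll by rotating the mean gravity vector
# (from accelerometer samples with the phone at rest) onto the vertical axis.
# The yaw angle, estimated in the paper with a learned model, is left at zero here.
import numpy as np

def gravity_alignment_rotation(accel_samples):
    """accel_samples: (N, 3) accelerometer readings at rest (m/s^2).

    Returns a 3x3 rotation matrix R such that R @ mean_gravity ~ [0, 0, g].
    """
    g = np.mean(np.asarray(accel_samples, dtype=float), axis=0)
    g = g / np.linalg.norm(g)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)
    c = float(np.dot(g, z))
    if np.linalg.norm(v) < 1e-9:                 # already aligned or exactly opposite
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues' formula for the rotation taking g onto z
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def reorient(samples, rotation):
    """Apply the rotation to every (x, y, z) motion sample."""
    return np.asarray(samples, dtype=float) @ rotation.T
```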

Review


Review
Artificial Intelligence Applications and Self-Learning 6G Networks for Smart Cities Digital Ecosystems: Taxonomy, Challenges, and Future Directions
Sensors 2022, 22(15), 5750; https://doi.org/10.3390/s22155750 - 01 Aug 2022
Viewed by 826
Abstract
The recent upsurge of smart city applications and their building blocks in terms of the Internet of Things (IoT), Artificial Intelligence (AI), federated and distributed learning, big data analytics, blockchain, and edge-cloud computing has urged the design of the upcoming 6G network generation, due to their stringent requirements in terms of quality of service (QoS), availability, and dependability to satisfy a Service-Level Agreement (SLA) for end users. Industry and academia have started to design 6G networks and propose the use of AI in their protocols and operations. Published papers on the topic discuss either the requirements of applications via a top-down approach or the network requirements in terms of agility, performance, and energy saving using a bottom-up perspective. In contrast, this paper adopts a holistic outlook, considering the applications, the middleware, the underlying technologies, and the 6G network systems towards an intelligent and integrated computing, communication, coordination, and decision-making ecosystem. In particular, we discuss the temporal evolution of the wireless network generations to capture the application, middleware, and technological requirements that led to the development of the network generation systems from 1G to AI-enabled 6G and its employed self-learning models. We provide a taxonomy of the technology-enabled smart city application systems and present insights into those systems for the realization of a trustworthy and efficient smart city ecosystem. We propose future research directions in 6G networks for smart city applications.
