Artificial Intelligence Methods for Smart Cities

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 July 2023) | Viewed by 33622

Special Issue Editors


Guest Editor
Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy
Interests: human-computer interaction; persuasive computing; recommender systems; machine learning; deep neural networks; time series

Guest Editor
Department of Electrical and Information Technologies Engineering, University of Naples, “Federico II”, Corso Umberto I, 40, 80138 Naples, Italy
Interests: pattern recognition; biometrics; image processing; financial forecasting; deep learning

Guest Editor
Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy
Interests: artificial intelligence; deep learning; information security; financial forecasting; blockchain; smart contracts

Special Issue Information

Dear Colleagues,

Today, smart cities are steadily becoming more prominent, promising to improve citizens' daily lives and support their habits by proposing subject-oriented services. In urban scenarios, this improvement is mainly directed at people and vehicles, which may benefit from surveillance systems that provide services and guarantee safe living environments.

Novel artificial intelligence methods, techniques, and systems, particularly those based on machine/deep learning, computer vision, and the Internet of Things (IoT), are emerging to solve real-life problems, ranging from video surveillance, road safety and traffic monitoring, and the prevention of accidents or critical events, to intelligent transportation and the management of public services. In fact, AI-based approaches, often leveraging IoT or cloud networks, may serve as underlying methods to tackle this wide range of problems.

This Special Issue aims to gather works proposing systems, approaches, solutions, and experimental results that contribute in an original and highly innovative way to topics related to smart cities, in order to stimulate and increase scientific production in this rapidly growing area of research.

Prof. Dr. Salvatore Carta
Dr. Silvio Barra
Dr. Alessandro Sebastian Podda
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence methods for smart cities
  • Anomaly detection
  • People/objects detection
  • Gait analysis in uncontrolled scenarios
  • Background/foreground segmentation in urban scenarios
  • Vehicle/drone tracking
  • Video surveillance in smart cities
  • Crowd analysis
  • Traffic light management
  • Smart city databases
  • Car reidentification
  • License plate recognition
  • Object trajectory estimation/prediction
  • Biometric recognition/verification in smart environments
  • IoT architectures and applications in smart cities and smart environments
  • Intelligent transportation systems


Published Papers (10 papers)


Research


20 pages, 2265 KiB  
Article
A Novel FDLSR-Based Technique for View-Independent Vehicle Make and Model Recognition
by Sobia Hayee, Fawad Hussain and Muhammad Haroon Yousaf
Sensors 2023, 23(18), 7920; https://doi.org/10.3390/s23187920 - 15 Sep 2023
Viewed by 743
Abstract
Vehicle make and model recognition (VMMR) is an important aspect of intelligent transportation systems (ITS). In VMMR systems, surveillance cameras capture vehicle images for real-time vehicle detection and recognition. These captured images pose challenges, including shadows, reflections, changes in weather and illumination, occlusions, and perspective distortion. Another significant challenge in VMMR is multiclass classification. This scenario has two main categories: (a) multiplicity and (b) ambiguity. Multiplicity concerns the issue of different forms among car models manufactured by the same company, while the ambiguity problem arises when multiple models from the same manufacturer have visually similar appearances or when vehicle models of different makes have visually comparable rear/front views. This paper introduces a novel and robust VMMR model that can address the above-mentioned issues with accuracy comparable to state-of-the-art methods. Our proposed hybrid CNN model selects the best descriptive fine-grained features with the help of Fisher Discriminative Least Squares Regression (FDLSR). These features are extracted from a deep CNN model fine-tuned on the fine-grained vehicle datasets Stanford-196 and BoxCars21k. Using ResNet-152 features, our proposed model outperformed the SVM and FC layers in accuracy by 0.5% and 4% on Stanford-196 and 0.4% and 1% on BoxCars21k, respectively. Moreover, this model is well-suited for small-scale fine-grained vehicle datasets.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)

12 pages, 2616 KiB  
Communication
Sensor Fusion Approach for Multiple Human Motion Detection for Indoor Surveillance Use-Case
by Ali Abbasi, Sandro Queirós, Nuno M. C. da Costa, Jaime C. Fonseca and João Borges
Sensors 2023, 23(8), 3993; https://doi.org/10.3390/s23083993 - 14 Apr 2023
Cited by 1 | Viewed by 1909
Abstract
Multi-human detection and tracking in indoor surveillance is a challenging task due to various factors such as occlusions, illumination changes, and complex human-human and human-object interactions. In this study, we address these challenges by exploring the benefits of a low-level sensor fusion approach that combines grayscale and neuromorphic vision sensor (NVS) data. We first generate a custom dataset using an NVS camera in an indoor environment. We then conduct a comprehensive study by experimenting with different image features and deep learning networks, followed by a multi-input fusion strategy to optimize our experiments with respect to overfitting. Our primary goal is to determine the best input feature types for multi-human motion detection using statistical analysis. We find that there is a significant difference between the input features of optimized backbones, with the best strategy depending on the amount of available data. Specifically, under a low-data regime, event-based frames seem to be the preferred input feature type, while higher data availability benefits the combined use of grayscale and optical flow features. Our results demonstrate the potential of sensor fusion and deep learning techniques for multi-human tracking in indoor surveillance, although it is acknowledged that further studies are needed to confirm our findings.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)
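A common way to feed NVS output to a CNN backbone is to accumulate events over a fixed time window into an event frame. The sketch below is a minimal illustration of that idea, assuming a `(timestamp, x, y, polarity)` event-tuple layout; the paper's exact frame-construction procedure is not specified here, and the function name is hypothetical:

```python
import numpy as np

def events_to_frame(events, height, width, dt, t0=0.0):
    """Accumulate event-camera events whose timestamps fall in
    the window [t0, t0 + dt) into a single binary event frame."""
    frame = np.zeros((height, width), dtype=np.uint8)
    for t, x, y, polarity in events:  # polarity is ignored in this sketch
        if t0 <= t < t0 + dt:
            frame[y, x] = 1
    return frame
```

A stack of such frames over consecutive windows can then be fed to a detection backbone alongside the grayscale stream.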

25 pages, 1784 KiB  
Article
Trajectory Clustering-Based Anomaly Detection in Indoor Human Movement
by Doi Thi Lan and Seokhoon Yoon
Sensors 2023, 23(6), 3318; https://doi.org/10.3390/s23063318 - 21 Mar 2023
Cited by 6 | Viewed by 2589
Abstract
Human movement anomalies in indoor spaces commonly involve urgent situations, such as security threats, accidents, and fires. This paper proposes a two-phase framework for detecting indoor human trajectory anomalies based on density-based spatial clustering of applications with noise (DBSCAN). The first phase of the framework groups datasets into clusters. In the second phase, the abnormality of a new trajectory is checked. A new metric called the longest common sub-sequence using indoor walking distance and semantic label (LCSS_IS) is proposed to calculate the similarity between trajectories, extending from the longest common sub-sequence (LCSS). Moreover, a DBSCAN cluster validity index (DCVI) is proposed to improve the trajectory clustering performance. The DCVI is used to choose the epsilon parameter for DBSCAN. The proposed method is evaluated using two real trajectory datasets: MIT Badge and sCREEN. The experimental results show that the proposed method effectively detects human trajectory anomalies in indoor spaces. With the MIT Badge dataset, the proposed method achieves 89.03% in terms of F1-score for hypothesized anomalies and above 93% for all synthesized anomalies. In the sCREEN dataset, the proposed method also achieves impressive results in F1-score on synthesized anomalies: 89.92% for rare location visit anomalies (τ = 0.5) and 93.63% for other anomalies.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)
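The classic LCSS similarity that LCSS_IS extends can be sketched in a few lines. This is a minimal illustration that uses plain Euclidean distance as the point-matching criterion, not the paper's indoor walking distance and semantic labels, and the `eps` matching threshold is an assumed parameter:

```python
import numpy as np

def lcss(traj_a, traj_b, eps=1.0):
    """Length of the longest common sub-sequence between two
    trajectories; two points 'match' when their Euclidean
    distance is below eps."""
    n, m = len(traj_a), len(traj_b)
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if np.linalg.norm(np.subtract(traj_a[i - 1], traj_b[j - 1])) < eps:
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    return int(dp[n, m])

def lcss_distance(traj_a, traj_b, eps=1.0):
    """Turn the similarity into a distance in [0, 1], suitable for
    a precomputed-distance clustering such as DBSCAN."""
    return 1.0 - lcss(traj_a, traj_b, eps) / min(len(traj_a), len(traj_b))
```

A pairwise matrix of `lcss_distance` values can then be passed to scikit-learn's `DBSCAN(metric="precomputed")`, with epsilon chosen by a validity index as the paper proposes.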

25 pages, 13341 KiB  
Article
Tracking Missing Person in Large Crowd Gathering Using Intelligent Video Surveillance
by Adnan Nadeem, Muhammad Ashraf, Nauman Qadeer, Kashif Rizwan, Amir Mehmood, Ali AlZahrani, Fazal Noor and Qammer H. Abbasi
Sensors 2022, 22(14), 5270; https://doi.org/10.3390/s22145270 - 14 Jul 2022
Cited by 5 | Viewed by 3772
Abstract
Locating a missing child or elderly person in a large gathering through face recognition in videos is still challenging because of various dynamic factors. In this paper, we present an intelligent mechanism for tracking missing persons in an unconstrained large gathering scenario of Al-Nabawi Mosque, Madinah, KSA. The proposed mechanism is unique in two aspects. First, various proposals exist in the literature that deal with face detection and recognition in high-quality images of a large crowd, but none of them has tested tracking of a missing person in low-resolution images of a large gathering scenario. Secondly, our proposed mechanism is unique in the sense that it employs four phases: (a) reporting a missing person online through web and mobile apps based on spatio-temporal features; (b) geofence set estimation for reducing the search space; (c) face detection using the fusion of the Viola-Jones cascades LBP, CART, and HAAR to optimize the localization of face regions; and (d) face recognition to find the missing person based on the profile image of the reported missing person. The overall results of our proposed intelligent tracking mechanism suggest good performance when tested on a challenging dataset of 2208 low-resolution images of a large crowd gathering.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)
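Phase (b), geofence set estimation, amounts to discarding cameras outside a radius around the last reported sighting. The sketch below illustrates the idea with the standard haversine formula and an assumed camera-record layout; the paper's actual estimation procedure may differ:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cameras_in_geofence(cameras, last_seen, radius_m):
    """Keep only cameras within radius_m of the last reported
    sighting, shrinking the face-search space."""
    lat0, lon0 = last_seen
    return [cam for cam in cameras
            if haversine_m(cam["lat"], cam["lon"], lat0, lon0) <= radius_m]
```

Face detection and recognition (phases (c) and (d)) then run only on footage from the retained cameras.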

18 pages, 2960 KiB  
Article
Fatigue Driving Detection Method Based on Combination of BP Neural Network and Time Cumulative Effect
by Jian Chen, Ming Yan, Feng Zhu, Jing Xu, Hai Li and Xiaoguang Sun
Sensors 2022, 22(13), 4717; https://doi.org/10.3390/s22134717 - 22 Jun 2022
Cited by 7 | Viewed by 2602
Abstract
Fatigue driving has always received a lot of attention, but few studies have focused on the fact that human fatigue is a cumulative process over time, and there are no models available to reflect this phenomenon. Furthermore, the problem of incorrect detection due to facial expression is still not well addressed. In this article, a model based on a BP neural network and the time cumulative effect is proposed to solve these problems. Experimental data were used to carry out this work and validate the proposed method. Firstly, the Adaboost algorithm was applied to detect faces, and the Kalman filter algorithm was used to trace the face movement. Then, a cascade regression tree-based method was used to detect the 68 facial landmarks, and an improved method combining key points and image processing was adopted to calculate the eye aspect ratio (EAR). After that, a BP neural network model was developed and trained by selecting three characteristics: the longest period of continuous eye closure, number of yawns, and percentage of eye closure time (PERCLOS), and then the detection results without and with facial expressions were discussed and analyzed. Finally, by introducing the Sigmoid function, a fatigue detection model considering the time accumulation effect was established, and the drivers’ fatigue state was identified segment by segment through the recorded video. Compared with the traditional BP neural network model, the detection accuracies of the proposed model without and with facial expressions increased by 3.3% and 8.4%, respectively. The number of incorrect detections in the awake state also decreased obviously. The experimental results show that the proposed model can effectively filter out incorrect detections caused by facial expressions and truly reflect that driver fatigue is a time-accumulating process.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)
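The eye aspect ratio mentioned above is a standard landmark-based measure. A minimal NumPy sketch on the six eye landmarks (p1..p6) of the 68-point scheme follows; the paper additionally combines key points with image processing, which this sketch omits, and the example landmark coordinates are illustrative only:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from the six eye landmarks (p1..p6)
    of the 68-point facial landmark scheme. EAR drops toward zero
    as the eye closes."""
    eye = np.asarray(eye, dtype=float)
    # vertical distances between upper and lower eyelid landmarks
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    # horizontal distance between the eye corners
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

# illustrative landmark sets: an open eye is tall relative to its
# width (high EAR), a nearly closed eye is flat (EAR near zero)
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]
```

Thresholding EAR over consecutive frames yields features such as the longest continuous eye closure and PERCLOS used to train the network.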

21 pages, 12276 KiB  
Article
Real-Time Abnormal Object Detection for Video Surveillance in Smart Cities
by Palash Yuvraj Ingle and Young-Gab Kim
Sensors 2022, 22(10), 3862; https://doi.org/10.3390/s22103862 - 19 May 2022
Cited by 27 | Viewed by 6248
Abstract
With the adaptation of video surveillance in many areas for object detection, monitoring abnormal behavior in several cameras requires constant human tracking for a single camera operative, which is a tedious task. In multiview cameras, accurately detecting different types of guns and knives and classifying them from other video surveillance objects in real-time scenarios is difficult. Most detecting cameras are resource-constrained devices with limited computational capacities. To mitigate this problem, we proposed a resource-constrained lightweight subclass detection method based on a convolutional neural network to classify, locate, and detect different types of guns and knives effectively and efficiently in a real-time environment. In this paper, the detection classifier is a multiclass subclass detection convolutional neural network used to classify object frames into different sub-classes such as abnormal and normal. The achieved mean average precision by the best state-of-the-art framework to detect either a handgun or a knife is 84.21% or 90.20% on a single camera view. After extensive experiments, the best precision obtained by the proposed method for detecting different types of guns and knives was 97.50% on the ImageNet dataset and IMFDB, 90.50% on the open-image dataset, 93% on the Olmos dataset, and 90.7% precision on the multiview cameras. This resource-constrained device has shown a satisfactory result, with a precision score of 85.5% for detection in a multiview camera.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)

24 pages, 9630 KiB  
Article
Vision-Based Pedestrian’s Crossing Risky Behavior Extraction and Analysis for Intelligent Mobility Safety System
by Byeongjoon Noh, Hansaem Park, Sungju Lee and Seung-Hee Nam
Sensors 2022, 22(9), 3451; https://doi.org/10.3390/s22093451 - 30 Apr 2022
Cited by 1 | Viewed by 2737
Abstract
Crosswalks present a major threat to pedestrians, but we lack dense behavioral data to investigate the risks they face. One of the breakthroughs is to analyze potential risky behaviors of the road users (e.g., near-miss collision), which can provide clues to take actions such as deployment of additional safety infrastructures. In order to capture these subtle potential risky situations and behaviors, the use of vision sensors makes it easier to study and analyze potential traffic risks. In this study, we introduce a new approach to obtain the potential risky behaviors of vehicles and pedestrians from CCTV cameras deployed on the roads. This study has three novel contributions: (1) recasting CCTV cameras for surveillance to contribute to the study of the crossing environment; (2) creating one sequential process from partitioning video to extracting their behavioral features; and (3) analyzing the extracted behavioral features and clarifying the interactive moving patterns by the crossing environment. These kinds of data are the foundation for understanding road users’ risky behaviors, and further support decision makers for their efficient decisions in improving and making a safer road environment. We validate the feasibility of this model by applying it to video footage collected from crosswalks in various conditions in Osan City, Republic of Korea.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)

12 pages, 6624 KiB  
Article
Map-Matching-Based Localization Using Camera and Low-Cost GPS for Lane-Level Accuracy
by Rahmad Sadli, Mohamed Afkir, Abdenour Hadid, Atika Rivenq and Abdelmalik Taleb-Ahmed
Sensors 2022, 22(7), 2434; https://doi.org/10.3390/s22072434 - 22 Mar 2022
Cited by 10 | Viewed by 2978
Abstract
For self-driving systems or autonomous vehicles (AVs), accurate lane-level localization is important for performing complex driving maneuvers. Classical GNSS-based methods are usually not accurate enough to provide lane-level localization that supports the AV’s maneuvers. LiDAR-based localization can provide accurate localization. However, the price of LiDARs is still one of the big issues preventing this kind of solution from becoming a widespread commodity. Therefore, in this work, we propose a low-cost solution for lane-level localization using a vision-based system and a low-cost GPS to achieve high-precision lane-level localization. Real-world, real-time experiments demonstrate that the proposed method achieves good lane-level localization accuracy, outperforming solutions based on GPS alone.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)

12 pages, 3418 KiB  
Article
Re-Orienting Smartphone-Collected Car Motion Data Using Least-Squares Estimation and Machine Learning
by Enrico Bassetti, Alessio Luciani and Emanuele Panizzi
Sensors 2022, 22(4), 1606; https://doi.org/10.3390/s22041606 - 18 Feb 2022
Viewed by 3032
Abstract
Smartphone sensors can collect data in many different contexts. They make it feasible to obtain large amounts of data at little or no cost because most people own mobile phones. In this work, we focus on collecting motion data in the car using a smartphone. Motion sensors, such as accelerometers and gyroscopes, can help obtain information about the vehicle’s dynamics. However, the different positioning of the smartphone in the car leads to difficulty interpreting the sensed data due to an unknown orientation, making the collection useless. Thus, we propose an approach to automatically re-orient smartphone data collected in the car to a standardized orientation (i.e., with zero yaw, roll, and pitch angles with respect to the vehicle). We use a combination of a least-square plane approximation and a Machine Learning model to infer the relative orientation angles. Then we populate rotation matrices and perform the data rotation. We trained the model by collecting data using a vehicle physics simulator.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)
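The least-squares plane step can be illustrated as follows. This is a sketch under the assumption that a plane fitted to accelerometer samples has the gravity direction as its normal, from which roll and pitch follow; yaw, as the abstract notes, requires the learned model and is not recovered here, and the function names are hypothetical:

```python
import numpy as np

def plane_normal(acc_samples):
    """Least-squares fit of the plane z = a*x + b*y + c to
    accelerometer samples; returns the unit plane normal,
    taken here as an estimate of the gravity direction."""
    pts = np.asarray(acc_samples, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    (a, b, _), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    return n / np.linalg.norm(n)

def roll_pitch_from_gravity(g):
    """Roll and pitch angles (radians) that tilt the device
    z-axis onto the gravity direction g."""
    gx, gy, gz = g
    roll = np.arctan2(gy, gz)
    pitch = np.arctan2(-gx, np.hypot(gy, gz))
    return roll, pitch
```

With roll and pitch known, the remaining yaw angle can be supplied by the learned model before populating the full rotation matrix.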

Review


30 pages, 7610 KiB  
Review
Artificial Intelligence Applications and Self-Learning 6G Networks for Smart Cities Digital Ecosystems: Taxonomy, Challenges, and Future Directions
by Leila Ismail and Rajkumar Buyya
Sensors 2022, 22(15), 5750; https://doi.org/10.3390/s22155750 - 01 Aug 2022
Cited by 17 | Viewed by 5181
Abstract
The recent upsurge of smart cities’ applications and their building blocks in terms of the Internet of Things (IoT), Artificial Intelligence (AI), federated and distributed learning, big data analytics, blockchain, and edge-cloud computing has urged the design of the upcoming 6G network generation, due to their stringent requirements in terms of the quality of services (QoS), availability, and dependability to satisfy a Service-Level Agreement (SLA) for the end users. Industries and academia have started to design 6G networks and propose the use of AI in its protocols and operations. Published papers on the topic discuss either the requirements of applications via a top-down approach or the network requirements in terms of agility, performance, and energy saving using a bottom-up perspective. In contrast, this paper adopts a holistic outlook, considering the applications, the middleware, the underlying technologies, and the 6G network systems towards an intelligent and integrated computing, communication, coordination, and decision-making ecosystem. In particular, we discuss the temporal evolution of the wireless network generations’ development to capture the applications, middleware, and technological requirements that led to the development of the network generation systems from 1G to AI-enabled 6G and its employed self-learning models. We provide a taxonomy of the technology-enabled smart city applications’ systems and present insights into those systems for the realization of a trustworthy and efficient smart city ecosystem. We propose future research directions in 6G networks for smart city applications.
(This article belongs to the Special Issue Artificial Intelligence Methods for Smart Cities)
