Special Issue "Smart Sensors and Devices in Artificial Intelligence"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 30 June 2020.

Special Issue Editors

Prof. Dr. Dan Zhang
Guest Editor
Department of Mechanical Engineering, Lassonde School of Engineering, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
Interests: robotics and mechatronics; high-performance parallel robotic machine development; sustainable/green manufacturing systems; micro/nano manipulation and MEMS devices (sensors); micro mobile robots and control of multi-robot cooperation; intelligent servo control systems for MEMS-based high-performance micro-robots; web-based remote manipulation; rehabilitation robots and rescue robots
Prof. Dr. Xuechao Duan
Guest Editor
Institute of Mechatronics, Xidian University, No. 2 Taibai Road, Xi'an 710071, China
Interests: parallel robots; mechatronics; intelligent control; design optimization

Special Issue Information

Dear Colleagues,

Sensors are the eyes and ears of intelligent systems such as UAVs, AGVs, and robots. With developments in materials, signal processing, and multidisciplinary interaction, more and more smart sensors are being proposed and fabricated to meet the growing demands of home, industrial, and military applications. Networks of sensors can enhance the ability to gather huge amounts of information (big data) and improve precision, which also mirrors the developmental trend of modern sensors. Moreover, artificial intelligence provides a novel impetus for sensors and sensor networks, enabling sensors to learn, think, and feed back more useful results.

This Special Issue welcomes new research results from academia and industry on the subject of "Smart Sensors and Networks", especially sensing technologies utilizing artificial intelligence. Topics of interest include, but are not limited to:

  • smart sensors
  • biosensors
  • sensor network
  • sensor data fusion
  • artificial intelligence
  • deep learning
  • mechatronics devices for sensors
  • applications of sensors for robotics and mechatronics devices

The Special Issue also welcomes invited extended papers from the 2018 2nd International Conference on Artificial Intelligence Applications and Technologies (AIAAT 2018) and the 2019 3rd International Conference on Artificial Intelligence Applications and Technologies (AIAAT 2019).

Prof. Dr. Dan Zhang
Prof. Dr. Xuechao Duan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart sensors
  • biosensor
  • sensor network
  • sensor data fusion
  • artificial intelligence
  • deep learning
  • robotics
  • mechatronics devices

Published Papers (10 papers)


Research

Open Access Article
Real-Time Queue Length Detection with Roadside LiDAR Data
Sensors 2020, 20(8), 2342; https://doi.org/10.3390/s20082342 - 20 Apr 2020
Abstract
Real-time queue length information is an important input for many traffic applications. This paper presents a novel method for real-time queue length detection with roadside LiDAR data. Vehicles on the road were continuously tracked with LiDAR data processing procedures (including background filtering, point clustering, object classification, lane identification, and object association). A detailed method to identify the vehicle at the end of the queue, considering occlusion and packet-loss issues, is documented in this study. The proposed method can provide real-time queue length information. Its performance was evaluated with ground-truth data collected from three sites in Reno, Nevada. Results show the proposed method can achieve an average of 98% accuracy at the investigated sites. The errors in queue length detection were also diagnosed.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
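
As a rough illustration of the queue-end step described above, the sketch below assumes the upstream LiDAR processing (background filtering, clustering, classification, lane identification, association) has already produced per-lane vehicle positions and speeds; the `Vehicle` type, speed threshold, and gap threshold are illustrative assumptions, not the paper's values.

```python
# Minimal sketch of the queue-end step, assuming upstream LiDAR processing
# already yields per-lane vehicle positions and speeds. All names and
# thresholds below are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Vehicle:
    dist_from_stopbar_m: float  # along-lane distance to the stop bar
    speed_mps: float            # tracked speed

STOP_SPEED_MPS = 0.5   # below this a vehicle is treated as queued (assumed)
MAX_GAP_M = 12.0       # max headway for a vehicle to join the queue (assumed)

def queue_length_m(lane_vehicles: list[Vehicle]) -> float:
    """Distance from the stop bar to the rear of the last queued vehicle."""
    queued_end = 0.0
    for v in sorted(lane_vehicles, key=lambda v: v.dist_from_stopbar_m):
        if v.speed_mps > STOP_SPEED_MPS:
            break  # first moving vehicle ends the queue
        if v.dist_from_stopbar_m - queued_end > MAX_GAP_M:
            break  # stopped but too far back to belong to this queue
        queued_end = v.dist_from_stopbar_m
    return queued_end

print(queue_length_m([Vehicle(5, 0.1), Vehicle(13, 0.2), Vehicle(40, 8.0)]))  # 13
```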

Open Access Article
A Clamping Force Estimation Method Based on a Joint Torque Disturbance Observer Using PSO-BPNN for Cable-Driven Surgical Robot End-Effectors
Sensors 2019, 19(23), 5291; https://doi.org/10.3390/s19235291 - 01 Dec 2019
Abstract
The ability to sense external force is an important technique for force feedback, haptics, and safe interaction control in minimally invasive surgical robots (MISRs). Moreover, this ability plays a significant role in refined surgical operations. The wrist joints of surgical robot end-effectors are usually actuated by several long-distance wire cables, with each of the two forceps actuated by two cables. The scope of force sensing includes multidimensional external force and one-dimensional clamping force. This paper focuses on a one-dimensional clamping force sensing method that does not require any force sensor integrated in the end-effector's forceps. A new clamping force estimation method is proposed based on a joint torque disturbance observer (JTDO) for a cable-driven surgical robot end-effector. The JTDO essentially considers the variation between the actual cable tension and the cable tension estimated by a Particle Swarm Optimization Back-Propagation Neural Network (PSO-BPNN) under free motion. Furthermore, a clamping force estimator is proposed based on the forceps' JTDO and their mechanical relations. According to comparative analyses in experimental studies, the detection resolutions of collision force and clamping force were both 0.11 N. The experimental results verify the feasibility and effectiveness of the proposed clamping force sensing method.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
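
The residual logic behind the JTDO can be sketched as follows: subtract the cable tension a free-motion model predicts from the measured tension and map the difference to a jaw force. A minimal sketch, with the trained PSO-BPNN stubbed by a placeholder function and illustrative geometry constants:

```python
# Sketch of the disturbance-observer idea behind the clamping-force estimator:
# the external (clamping) torque is taken as the residual between the measured
# cable tension and the tension a free-motion model (the paper uses a
# PSO-optimized BP neural network) predicts. The trained predictor is stubbed
# out here; the radius and lever arm are assumed values.
PULLEY_RADIUS_M = 0.004  # assumed forceps pulley radius
LEVER_ARM_M = 0.018      # assumed jaw lever arm to the grip point

def predicted_free_motion_tension(q: float, dq: float) -> float:
    """Stand-in for the trained PSO-BPNN: cable tension expected at joint
    position q and velocity dq when the jaws touch nothing."""
    return 5.0 + 0.8 * abs(dq)  # placeholder model

def clamping_force_estimate(q: float, dq: float, tension_meas_n: float) -> float:
    tension_hat = predicted_free_motion_tension(q, dq)
    torque_disturbance = PULLEY_RADIUS_M * (tension_meas_n - tension_hat)
    return torque_disturbance / LEVER_ARM_M  # map joint torque to jaw force

print(f"{clamping_force_estimate(0.3, 0.0, 9.5):.2f} N")  # -> 1.00 N
```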

Open Access Article
Fuzzy Functional Dependencies as a Method of Choice for Fusion of AIS and OTHR Data
Sensors 2019, 19(23), 5166; https://doi.org/10.3390/s19235166 - 26 Nov 2019
Abstract
Maritime situational awareness at over-the-horizon (OTH) distances in exclusive economic zones can be achieved by deploying networks of high-frequency OTH radars (HF-OTHR) in coastal countries along with exploiting automatic identification system (AIS) data. In some regions, the reception of AIS messages can be unreliable and subject to high latency, which leads to difficulties in properly associating AIS data with OTHR tracks. Long history records of vessels' previous whereabouts, based on both OTHR tracks and AIS data, can be maintained in order to increase the chances of fusion. If the quantity of data increases significantly, data cleaning can be performed to minimize system requirements. This process is performed prior to fusing AIS data and observed OTHR tracks. In this paper, we use fuzzy functional dependencies (FFDs) in the context of data fusion from AIS and OTHR sources. The fuzzy logic approach has been shown to be a promising tool for handling data uncertainty from different sensors. The proposed method is experimentally evaluated for fusing AIS data and the target tracks provided by the OTHR installed in the Gulf of Guinea.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
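
To illustrate the flavor of fuzzy matching between AIS reports and radar tracks, the sketch below maps attribute differences to [0, 1] memberships with triangular functions and combines them with min, the usual fuzzy conjunction; the attributes and tolerances are assumptions, not the paper's FFD formulation:

```python
# Illustrative sketch of fuzzy AIS/OTHR matching: each attribute difference
# is mapped to a [0, 1] membership with a triangular function, and the
# memberships are combined with min. Tolerances below are assumed, not tuned.
def tri_membership(diff: float, tol: float) -> float:
    """1.0 at zero difference, falling linearly to 0.0 at +/- tol."""
    return max(0.0, 1.0 - abs(diff) / tol)

def match_degree(ais: dict, track: dict) -> float:
    mu_pos = tri_membership(ais["range_km"] - track["range_km"], tol=10.0)
    mu_spd = tri_membership(ais["speed_kn"] - track["speed_kn"], tol=5.0)
    mu_crs = tri_membership(ais["course_deg"] - track["course_deg"], tol=20.0)
    return min(mu_pos, mu_spd, mu_crs)  # fuzzy conjunction

ais = {"range_km": 182.0, "speed_kn": 11.5, "course_deg": 74.0}
track = {"range_km": 185.0, "speed_kn": 12.8, "course_deg": 80.0}
print(f"fuzzy match degree: {match_degree(ais, track):.2f}")  # -> 0.70
```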

Open Access Article
An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning
Sensors 2019, 19(22), 5035; https://doi.org/10.3390/s19225035 - 18 Nov 2019
Abstract
When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, such systems cannot convey the fault self-diagnosis information and instrument-cluster information that indicate the current state of the vehicle while driving. This paper proposes an audification and visualization system (AVS) for an autonomous vehicle, based on deep learning, to solve this problem for blind and deaf passengers. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user's speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and arranges the visualized data according to the size of the vehicle's display. Experiments show that the time taken to adjust visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times shorter than in a cloud server, and the overall computation time of the AVS was approximately 2 ms less than that of the existing instrument cluster. Because the proposed AVS enables blind and deaf people to select only what they want to hear and see, it reduces transmission overhead and greatly increases vehicle safety. If the AVS is introduced in a real vehicle, it can help prevent accidents involving disabled and other passengers.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
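
The three-module decomposition can be sketched structurally as below; the method bodies are placeholders (real STT/TTS engines and OBD readers would sit behind these interfaces), and all names besides the module acronyms are illustrative:

```python
# Structural sketch of the three AVS modules as described in the abstract;
# every method body is a stub standing in for a real engine or vehicle bus.
class DataCollectionAndManagement:            # DCMM
    def collect(self) -> dict:
        return {"speed_kmh": 42, "fault_codes": []}  # stubbed vehicle data

class AudificationConversion:                 # ACM: STS + TWS submodules
    def speech_to_text(self, audio: bytes) -> str:
        return "read fault status"            # stand-in for a real STT engine
    def text_to_wave(self, text: str) -> bytes:
        return text.encode()                  # stand-in for a real TTS engine

class DataVisualization:                      # DVM
    def render(self, data: dict) -> str:
        return f"speed: {data['speed_kmh']} km/h, faults: {len(data['fault_codes'])}"

dcmm, acm, dvm = DataCollectionAndManagement(), AudificationConversion(), DataVisualization()
data = dcmm.collect()
print(dvm.render(data))                       # screen output for deaf passengers
print(acm.text_to_wave(dvm.render(data)))     # audio output for blind passengers
```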

Open Access Article
Distributed Reliable and Efficient Transmission Task Assignment for WSNs
Sensors 2019, 19(22), 5028; https://doi.org/10.3390/s19225028 - 18 Nov 2019
Cited by 1
Abstract
Task assignment is a crucial problem in wireless sensor networks (WSNs) that may affect the completion quality of sensing tasks. From the perspective of global optimization, a transmission-oriented reliable and energy-efficient task allocation (TRETA) is proposed, based on a comprehensive multi-level view of the network and an evaluation model for transmission in WSNs. To deliver better fault tolerance, TRETA adjusts dynamically in event-driven mode. To solve the reliable and efficient distributed task allocation problem in WSNs, two distributed task assignment schemes based on TRETA are proposed. In the first, the sink assigns reliability targets to all cluster heads according to the overall reliability requirement, and each cluster head performs local task allocation under its assigned target reliability constraint. Simulation results show a reduction in the communication cost and latency of task allocation compared to centralized task assignment. In the second, the global view is obtained by fetching local views from multiple sink nodes, so that multiple sinks hold a consistent comprehensive view for global optimization. Responding to local task-allocation requirements without communicating with remote nodes overcomes the disadvantages of centralized task allocation in large-scale sensor networks, namely significant communication overhead and considerable delay, and scales better.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
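
As one hedged illustration of the kind of local decision a cluster head could make under an assigned reliability target, the sketch below greedily adds redundant transmission paths until the standard parallel-redundancy bound meets the target; this is an assumption about the constraint check, not TRETA's exact procedure:

```python
# Illustrative cluster-head decision under a reliability target: choose the
# fewest redundant links whose combined success probability 1 - prod(1 - p_i)
# (the standard parallel-redundancy formula) meets the target.
from math import prod

def paths_needed(target_reliability: float, link_success: list[float]) -> list[float]:
    """Greedily add the most reliable remaining links until the target holds."""
    chosen: list[float] = []
    for p in sorted(link_success, reverse=True):
        chosen.append(p)
        if 1.0 - prod(1.0 - q for q in chosen) >= target_reliability:
            return chosen
    raise ValueError("target unreachable with available links")

print(paths_needed(0.99, [0.9, 0.8, 0.7]))  # -> [0.9, 0.8, 0.7] (combined 0.994)
```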

Open Access Article
Multiple Event-Based Simulation Scenario Generation Approach for Autonomous Vehicle Smart Sensors and Devices
Sensors 2019, 19(20), 4456; https://doi.org/10.3390/s19204456 - 14 Oct 2019
Abstract
Nowadays, deep learning methods based on virtual environments are widely applied to research and technology development for the smart sensors and devices of autonomous vehicles. Learning various driving environments in advance is important for handling unexpected situations that can arise in the real world and for continuing to drive without accident. To train the smart sensors and devices of an autonomous vehicle well, a virtual simulator should create scenarios covering the variety of possible real-world situations. To create reality-based scenarios, data on the real environment must be collected from a real driving vehicle, or a scenario analysis process must be conducted by experts. However, these two approaches increase the time and cost of scenario generation as more scenarios are created. This paper proposes a scenario generation method based on deep learning to create scenarios automatically for training autonomous vehicle smart sensors and devices. To generate various scenarios, the proposed method uses deep learning to extract multiple events from a video taken on a real road and reproduces those events in a virtual simulator. First, a Faster Region-based Convolutional Neural Network (Faster R-CNN) extracts bounding boxes for each object in a driving video. Second, high-level event bounding boxes are calculated. Third, Long-term Recurrent Convolutional Networks (LRCN) classify the type of each extracted event. Finally, all event classification results are combined into one scenario. The generated scenarios can be used in an autonomous driving simulator to teach the multiple events that occur during real-world driving. To verify the performance of the proposed scenario generation method, experiments using real driving video data and a virtual simulator were conducted. The deep learning models achieved an accuracy of 95.6%; furthermore, multiple high-level events were extracted, and various scenarios were generated in a virtual simulator for the smart sensors and devices of an autonomous vehicle.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
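
A rough sketch of the two-stage pipeline follows, using torchvision's off-the-shelf Faster R-CNN for per-frame detection and a plain LSTM standing in for the LRCN event classifier; the label set, box selection, and classifier head are placeholder assumptions:

```python
# Sketch of the detect-then-classify pipeline: per-frame object detection
# (torchvision's pretrained Faster R-CNN) followed by a sequence classifier
# over the tracked boxes. The event classifier below is untrained and the
# three event labels are assumed; the paper trains an LRCN on its own data.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

event_classifier = torch.nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
event_head = torch.nn.Linear(32, 3)  # e.g. cut-in / braking / crossing (assumed)

@torch.no_grad()
def scenario_from_frames(frames: list[torch.Tensor]) -> int:
    boxes_per_frame = []
    for frame in frames:                          # frame: 3xHxW float in [0, 1]
        det = detector([frame])[0]
        top = det["boxes"][:1]                    # keep the highest-scoring box
        boxes_per_frame.append(top[0] if len(top) else torch.zeros(4))
    seq = torch.stack(boxes_per_frame).unsqueeze(0)   # 1 x T x 4 box sequence
    out, _ = event_classifier(seq)
    return int(event_head(out[:, -1]).argmax())       # event label for the clip

frames = [torch.rand(3, 240, 320) for _ in range(8)]
print(scenario_from_frames(frames))
```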

Open Access Article
Robust Visual Tracking Using Structural Patch Response Map Fusion Based on Complementary Correlation Filter and Color Histogram
Sensors 2019, 19(19), 4178; https://doi.org/10.3390/s19194178 - 26 Sep 2019
Cited by 1
Abstract
A part-based strategy has been applied to visual tracking with demonstrated success in recent years. Unlike most existing part-based methods that employ only one type of tracking representation model, in this paper we propose an effective complementary tracker based on structural patch response fusion under correlation filter and color histogram models. The proposed method includes two component trackers with complementary merits that adaptively handle illumination variation and deformation. To identify and take full advantage of reliable patches, we present an adaptive hedge algorithm that hedges the responses of patches into a more credible one in each component tracker. In addition, we design different loss metrics for tracked patches in the two components to be applied in the proposed hedge algorithm. Finally, we selectively combine the two component trackers at the response-map level with different merging factors according to the confidence of each component tracker. Extensive experimental evaluations on the OTB2013, OTB2015, and VOT2016 datasets show the outstanding performance of the proposed algorithm compared with several state-of-the-art trackers.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
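
The hedging step can be illustrated as follows: each patch keeps a weight that decays exponentially with its instantaneous regret, and the fused response is the weighted sum of the patch response maps. The loss definition and learning rate below are placeholders, since the paper designs distinct losses for its two components:

```python
# Minimal sketch of hedging patch responses: weights decay exponentially
# with per-patch regret, and the fused map is the weighted sum. The loss
# (shortfall of each patch's peak vs. the fused peak) and eta are assumed.
import numpy as np

def hedge_fuse(patch_responses: np.ndarray, weights: np.ndarray, eta: float = 2.0):
    """patch_responses: P x H x W response maps; weights: P, summing to 1."""
    fused = np.tensordot(weights, patch_responses, axes=1)      # H x W map
    losses = fused.max() - patch_responses.max(axis=(1, 2))     # per-patch loss
    regrets = losses - np.dot(weights, losses)                  # regret vs. mean
    new_w = weights * np.exp(-eta * np.maximum(regrets, 0.0))   # penalize laggards
    return fused, new_w / new_w.sum()

rng = np.random.default_rng(0)
responses = rng.random((4, 20, 20))          # 4 patches, 20x20 response maps
weights = np.full(4, 0.25)
fused, weights = hedge_fuse(responses, weights)
print(np.round(weights, 3))                  # reliable patches gain weight
```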

Open Access Article
Design of a Purely Mechanical Sensor-Controller Integrated System for Walking Assistance on an Ankle-Foot Exoskeleton
Sensors 2019, 19(14), 3196; https://doi.org/10.3390/s19143196 - 19 Jul 2019
Abstract
Propulsion during push-off (PO) is a key factor in human locomotion. By detecting the gait stage in real time, assistance can be provided to the human body at the proper moment. In most cases, ankle-foot exoskeletons consist of electronic sensors, microprocessors, and actuators. Although these three essential elements fulfill the functions of detection, control, and energy injection, they result in a bulky system that reduces wearing comfort. To simplify the sensor-controller system and reduce the mass of the exoskeleton, we designed a smart clutch, a sensor-controller integrated system comprising a sensing part and an executing part. With a spring functioning as the actuator, the whole exoskeleton system is made up entirely of mechanical parts and has no external power source. By controlling the engagement of the actuator based on the signal acquired from the sensing part, the proposed clutch enables the ankle-foot exoskeleton (AFE) to provide additional ankle torque during PO and allows free rotation of the ankle joint during the swing phase, thus reducing the metabolic cost of the human body. The designed clutch has two striking advantages. On the one hand, it is lightweight and reliable, and resists possible shocks during walking, since there is no circuit connection or power in the system. On the other hand, gait detection relies on the contact states between the feet and the ground, so the clutch is universal and does not need to be customized for individuals.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

Open Access Article
A Hybrid CNN–LSTM Algorithm for Online Defect Recognition of CO2 Welding
Sensors 2018, 18(12), 4369; https://doi.org/10.3390/s18124369 - 10 Dec 2018
Cited by 7
Abstract
At present, realizing high-quality automatic welding through online monitoring is a research focus in engineering applications. In this paper, a CNN–LSTM algorithm is proposed that combines the advantages of convolutional neural networks (CNNs) and long short-term memory networks (LSTMs). The CNN–LSTM algorithm establishes a shallow CNN to extract the primary features of the molten pool image. The feature tensor extracted by the CNN is then transformed into a feature matrix. Finally, the rows of the feature matrix are fed into the LSTM network for feature fusion. This process realizes an implicit mapping from molten pool images to welding defects. Test results on a self-made molten pool image dataset show that the CNN contributes to the overall feasibility of the CNN–LSTM algorithm and that the LSTM network performs best in the feature fusion stage. The algorithm converges at 300 epochs, and the accuracy of defect detection in the CO2 welding molten pool is 94%. The processing time for a single image is 0.067 ms, which fully meets the requirement of real-time monitoring based on molten pool images. Experimental results on the MNIST and Fashion-MNIST datasets show that the algorithm is universal and can be used for similar image recognition and classification tasks.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
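
The CNN-to-LSTM handoff described above (feature-map rows fed as a sequence) can be sketched in PyTorch as below; channel counts, input size, and the number of defect classes are assumptions:

```python
# Sketch of the CNN-LSTM layout from the abstract: a shallow CNN produces a
# feature map, the map's rows are treated as a sequence, and an LSTM fuses
# them before classification. All sizes below are assumed, not the paper's.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(                 # shallow feature extractor
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(input_size=16 * 16, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                         # x: B x 1 x 64 x 64
        f = self.cnn(x)                           # B x 16ch x 16 x 16
        rows = f.permute(0, 2, 1, 3).flatten(2)   # B x 16 rows x (16 * 16)
        out, _ = self.lstm(rows)                  # fuse rows as a sequence
        return self.fc(out[:, -1])                # logits per defect class

model = CNNLSTM()
print(model(torch.rand(2, 1, 64, 64)).shape)      # torch.Size([2, 4])
```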

Open Access Article
A MEMS IMU De-Noising Method Using Long Short Term Memory Recurrent Neural Networks (LSTM-RNN)
Sensors 2018, 18(10), 3470; https://doi.org/10.3390/s18103470 - 15 Oct 2018
Cited by 15
Abstract
A Microelectromechanical Systems (MEMS) Inertial Measurement Unit (IMU), containing three orthogonal gyroscopes and three orthogonal accelerometers, has been widely utilized in positioning and navigation due to its gradually improved accuracy, small size, and low cost. However, the errors of a standalone MEMS-IMU-based Inertial Navigation System (INS) diverge dramatically over time, since the MEMS IMU measurements contain various nonlinear errors. Therefore, a MEMS INS is usually integrated with a Global Positioning System (GPS) to provide reliable navigation solutions. The GPS receiver is able to generate stable and precise position and time information in open-sky environments. However, under signal-challenged conditions, for instance in dense forests, city canyons, or mountain valleys, where the GPS signal is weak or even blocked, the GPS receiver fails to output reliable positioning information, and the integrated system degrades to a standalone INS. A number of efforts have been devoted to improving the accuracy of INS, and de-noising or modelling the random errors contained in the MEMS IMU has been demonstrated to be an effective way of improving MEMS INS performance. In this paper, an Artificial Intelligence (AI) method is proposed to de-noise the MEMS IMU output signals; specifically, a popular variant of the Recurrent Neural Network (RNN), the Long Short-Term Memory (LSTM) RNN, is employed to filter the MEMS gyroscope outputs, in which the signals are treated as time series. A MEMS IMU (MSI3200, manufactured by MT Microsystems Company, Shijiazhuang, China) was employed to test the proposed method; 2 min of raw gyroscope data sampled at 400 Hz were collected and used in the test. The results show that the standard deviation (STD) of the gyroscope data decreased by 60.3%, 37%, and 44.6% on the three axes compared with the raw signals, and the three-axis attitude errors decreased by 15.8%, 18.3%, and 51.3%, respectively. Furthermore, compared with an Auto-Regressive and Moving Average (ARMA) model with fixed parameters, the STD of the three-axis gyroscope outputs decreased by 42.4%, 21.4%, and 21.4%, and the attitude errors decreased by 47.6%, 42.3%, and 52.0%. The results indicate that the de-noising scheme is effective for improving MEMS INS accuracy, and that the proposed LSTM-RNN method is preferable in this application.
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
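
A minimal sketch of an LSTM de-noiser of this kind follows, trained here on a synthetic constant-rate gyro signal with additive white noise; the window length, layer sizes, and training signal are assumptions (the paper trains on real MSI3200 data at 400 Hz):

```python
# Sketch of the LSTM de-noising setup from the abstract: gyroscope output is
# windowed into short sequences and an LSTM regresses a cleaned sample.
# The synthetic training signal and all sizes below are assumed.
import torch
import torch.nn as nn

class GyroDenoiser(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):              # x: B x T x 1 noisy window
        h, _ = self.lstm(x)
        return self.out(h[:, -1])      # de-noised estimate of the last sample

model = GyroDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# toy training loop on a synthetic constant-rate signal with white noise
true_rate = 0.1                        # rad/s, assumed
for step in range(200):
    clean = torch.full((16, 40, 1), true_rate)
    noisy = clean + 0.05 * torch.randn_like(clean)
    loss = loss_fn(model(noisy), clean[:, -1])
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final MSE: {loss.item():.5f}")
```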
