
Multi-Sensor Systems for Object Tracking—2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: 20 June 2025 | Viewed by 12880

Special Issue Editor


Dr. Tomasz Hachaj
Guest Editor
Department of Applied Computer Science, AGH University of Science and Technology, 30-059 Kraków, Poland
Interests: pattern recognition; signal processing; computer vision

Special Issue Information

Dear Colleagues,

Homogeneous and heterogeneous multi-sensor systems are among the most popular and affordable solutions for object tracking. Sensor-based object tracking can be applied not only to individuals (motion capture, wearable sensors) and autonomous vehicles (self-driving cars and robots), but also to the monitoring of personnel and traffic flow in flats, buildings, or even whole cities. Depending on the application, these sensors may include vision-based sensors, inertial measurement units (IMUs), LIDARs, and many others.

This Special Issue aims to present the latest advances in multi-sensor systems for object tracking. We welcome contributions in all fields of sensor-based object tracking, including new systems, signal processing algorithms, and new applications. Topics include, but are not limited to, the following:

  • Simultaneous localization and mapping (SLAM);
  • Motion capture;
  • Autonomous vehicles;
  • Ubiquitous sensors;
  • Wearable sensors;
  • Computer vision;
  • Inertial measurement units (IMUs);
  • High-energy particle detection with CCD cameras.

Dr. Tomasz Hachaj
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • object tracking
  • simultaneous localization and mapping (SLAM)
  • motion capture
  • autonomous vehicles
  • ubiquitous sensors
  • wearable sensors
  • computer vision
  • inertial measurement units (IMUs)
  • high-energy particles
  • LIDAR

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

16 pages, 3828 KiB  
Article
DynamicVLN: Incorporating Dynamics into Vision-and-Language Navigation Scenarios
by Yanjun Sun, Yue Qiu and Yoshimitsu Aoki
Sensors 2025, 25(2), 364; https://doi.org/10.3390/s25020364 - 9 Jan 2025
Viewed by 1112
Abstract
Traditional Vision-and-Language Navigation (VLN) tasks require an agent to navigate static environments using natural language instructions. However, real-world road conditions such as vehicle movements, traffic signal fluctuations, pedestrian activity, and weather variations are dynamic and continually changing. These factors significantly impact an agent’s decision-making ability, underscoring the limitations of current VLN models, which do not accurately reflect the complexities of real-world navigation. To bridge this gap, we propose a novel task called Dynamic Vision-and-Language Navigation (DynamicVLN), incorporating various dynamic scenarios to enhance the agent’s decision-making abilities and adaptability. By redefining the VLN task, we emphasize that a robust and generalizable agent should not rely solely on predefined instructions but must also demonstrate reasoning skills and adaptability to unforeseen events. Specifically, we have designed ten scenarios that simulate the challenges of dynamic navigation and developed a dedicated dataset of 11,261 instances using the CARLA simulator (ver. 0.9.13) and a large language model to provide realistic training conditions. Additionally, we introduce a baseline model that integrates advanced perception and decision-making modules, enabling effective navigation and interpretation of the complexities of dynamic road conditions. This model showcases the ability to follow natural language instructions while dynamically adapting to environmental cues. Our approach establishes a benchmark for developing agents capable of functioning in real-world, dynamic environments and extending beyond the limitations of static VLN tasks to more practical and versatile applications. Full article
(This article belongs to the Special Issue Multi-Sensor Systems for Object Tracking—2nd Edition)

24 pages, 13862 KiB  
Article
Depth Video-Based Secondary Action Recognition in Vehicles via Convolutional Neural Network and Bidirectional Long Short-Term Memory with Spatial Enhanced Attention Mechanism
by Weirong Shao, Mondher Bouazizi and Tomoaki Ohtsuki
Sensors 2024, 24(20), 6604; https://doi.org/10.3390/s24206604 - 13 Oct 2024
Viewed by 1534
Abstract
Secondary actions in vehicles are activities that drivers engage in while driving that are not directly related to the primary task of operating the vehicle. Secondary Action Recognition (SAR) in drivers is vital for enhancing road safety and minimizing accidents related to distracted driving. It also plays an important part in modern car driving systems such as Advanced Driving Assistance Systems (ADASs), as it helps identify distractions and predict the driver’s intent. Traditional methods of action recognition in vehicles mostly rely on RGB videos, which can be significantly impacted by external conditions such as low light levels. In this research, we introduce a novel method for SAR. Our approach utilizes depth-video data obtained from a depth sensor located in a vehicle. Our methodology leverages the Convolutional Neural Network (CNN), which is enhanced by the Spatial Enhanced Attention Mechanism (SEAM) and combined with Bidirectional Long Short-Term Memory (Bi-LSTM) networks. This method significantly enhances action recognition ability in depth videos by improving both the spatial and temporal aspects. We conduct experiments using K-fold cross validation, and the experimental results show that on the public benchmark dataset Drive&Act, our proposed method shows significant improvement in SAR compared to the state-of-the-art methods, reaching an accuracy of about 84% in SAR in depth videos. Full article
(This article belongs to the Special Issue Multi-Sensor Systems for Object Tracking—2nd Edition)
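The K-fold cross-validation protocol used in the evaluation above can be sketched as follows. This is a minimal illustration only: the classifier, feature shapes, and class count are stand-ins, not the authors' CNN/SEAM/Bi-LSTM pipeline.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in clip features and 4 hypothetical action-class labels; in the
# paper these would come from depth-video clips of the Drive&Act dataset
X = rng.normal(size=(100, 32))
y = rng.integers(0, 4, size=100)

# Each fold trains on 4/5 of the data and tests on the held-out 1/5,
# so every sample is used for testing exactly once
accs = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression(max_iter=500).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))
mean_acc = float(np.mean(accs))  # fold-averaged accuracy
```

Averaging over folds gives a more stable accuracy estimate than a single train/test split, which matters when comparing against state-of-the-art baselines.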

20 pages, 8443 KiB  
Article
A Rapid Localization Method Based on Super Resolution Magnetic Array Information for Unknown Number Magnetic Sources
by Linliang Miao, Tianyi Zhang, Chao Zuo, Zijie Chen, Xiaofei Yang and Jun Ouyang
Sensors 2024, 24(10), 3226; https://doi.org/10.3390/s24103226 - 19 May 2024
Cited by 1 | Viewed by 1551
Abstract
A rapid method that uses super-resolution magnetic array data is proposed to localize an unknown number of magnets in a magnetic array. A magnetic data super-resolution (SR) neural network was developed to improve the resolution of a magnetic sensor array. The approximate 3D positions of multiple targets were then obtained based on the normalized source strength (NSS) and magnetic gradient tensor (MGT) inversion. Finally, refined inversion of the position and magnetic moment was performed using a trust region reflective algorithm (TRR). The effectiveness of the proposed method was examined using experimental field data collected from a magnetic sensor array. The experimental results showed that all the targets were successfully captured in multiple trials with three to five targets with an average positioning error of less than 3 mm and an average time of less than 300 ms. Full article
(This article belongs to the Special Issue Multi-Sensor Systems for Object Tracking—2nd Edition)

22 pages, 1237 KiB  
Article
On the Search for Potentially Anomalous Traces of Cosmic Ray Particles in Images Acquired by CMOS Detectors for a Continuous Stream of Emerging Observational Data
by Marcin Piekarczyk and Tomasz Hachaj
Sensors 2024, 24(6), 1835; https://doi.org/10.3390/s24061835 - 13 Mar 2024
Cited by 2 | Viewed by 1499
Abstract
In this paper, we propose a method for detecting potentially anomalous cosmic ray particle tracks in a big data image set acquired by Complementary Metal-Oxide-Semiconductor (CMOS) sensors. These sensors are part of the scientific infrastructure of the Cosmic Ray Extremely Distributed Observatory (CREDO). The use of Incremental PCA (Principal Component Analysis) allows the loadings to be approximated and updated at runtime. Incremental PCA with the Sequential Karhunen-Loeve Transform produces an embedding almost identical to that of basic PCA. Depending on the image preprocessing method, the weighted distance between the coordinate frame and its approximation was between 0.01 and 0.02 radians for batches of 10,000 images. This significantly reduces the required memory, so that our method can be applied to big data. The anomaly detection algorithm, based on object density in the embedding space, uses intuitive parameters, which makes the method easy to apply. The sets of anomalies returned by our algorithm do not contain any typical morphologies of particle track shapes. One can therefore conclude that the proposed method effectively filters out typical (in terms of analysis of variance) track shapes by searching for those that differ significantly from the others in the dataset. We also propose a method for finding similar objects, which gives it the potential to be used, for example, in minimal-distance-based classification and CREDO image database querying. The proposed algorithm was tested on more than half a million (570,000+) images containing various morphologies of cosmic particle tracks. To our knowledge, this is the first study of this kind based on data collected using a distributed network of CMOS sensors embedded in the cell phones of participants collaborating within the citizen science paradigm. Full article
(This article belongs to the Special Issue Multi-Sensor Systems for Object Tracking—2nd Edition)
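The batch-wise embedding and density-based anomaly scoring described above can be sketched with scikit-learn's IncrementalPCA on random stand-in data. The batch sizes, component count, and neighbour count are assumptions for illustration; the paper's pipeline operates on flattened CMOS track images.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Stand-in for flattened track images: 3 batches of 200 samples, 64 features
batches = [rng.normal(size=(200, 64)) for _ in range(3)]

# Loadings are updated batch by batch, so the full dataset never has to
# fit in memory -- this is what makes the approach viable for big data
ipca = IncrementalPCA(n_components=8)
for batch in batches:
    ipca.partial_fit(batch)

X = np.vstack(batches)
emb = ipca.transform(X)            # low-dimensional embedding of all samples

# Density-based anomaly score: mean distance to the k nearest neighbours
# in embedding space; large values indicate sparse (atypical) regions
nn = NearestNeighbors(n_neighbors=6).fit(emb)
dist, _ = nn.kneighbors(emb)
score = dist[:, 1:].mean(axis=1)   # drop the self-distance in column 0
anomalies = np.argsort(score)[-10:]  # the ten least typical samples
```

The same neighbour structure can also serve the similarity-search use case mentioned in the abstract: querying the nearest embeddings of a given track instead of ranking by sparsity.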

22 pages, 628 KiB  
Article
Road Risk-Index Analysis Using Satellite Products
by Bogdan-Cristian Firuți, Răzvan-Ștefan Păduraru, Cătălin Negru, Alina Petrescu-Niţă, Octavian Bădescu and Florin Pop
Sensors 2023, 23(5), 2751; https://doi.org/10.3390/s23052751 - 2 Mar 2023
Cited by 1 | Viewed by 2578
Abstract
This paper proposes a service called intelligent routing using satellite products (IRUS) that can be used to analyze risks to road infrastructure during bad weather conditions, such as heavy rainfall, storms, or floods. By reducing movement risk, rescuers can arrive safely at their destination. To analyze these routes, the application uses both data provided by Sentinel satellites from the Copernicus program and meteorological data from local weather stations. The application also uses algorithms to determine night driving time. From this analysis, we obtain a risk index for each road provided by the Google Maps API, and we then present the path alongside the risk index in a user-friendly graphical interface. To obtain an accurate risk index, the application analyzes both recent and past data (up to 12 months). Full article
(This article belongs to the Special Issue Multi-Sensor Systems for Object Tracking—2nd Edition)

15 pages, 29614 KiB  
Article
Potential Obstacle Detection Using RGB to Depth Image Encoder–Decoder Network: Application to Unmanned Aerial Vehicles
by Tomasz Hachaj
Sensors 2022, 22(17), 6703; https://doi.org/10.3390/s22176703 - 5 Sep 2022
Cited by 3 | Viewed by 3428
Abstract
In this work, a new method is proposed that allows the use of a single RGB camera for the real-time detection of objects that could be potential collision sources for Unmanned Aerial Vehicles. For this purpose, a new network with an encoder–decoder architecture has been developed, which allows rapid distance estimation from a single image by performing RGB to depth mapping. Based on a comparison with other existing RGB to depth mapping methods, the proposed network achieved a satisfactory trade-off between complexity and accuracy. With only 6.3 million parameters, it achieved efficiency close to models with more than five times the number of parameters. This allows the proposed network to operate in real time. A special algorithm makes use of the distance predictions made by the network, compensating for measurement inaccuracies. The entire solution has been implemented and tested in practice in an indoor environment using a micro-drone equipped with a front-facing RGB camera. All data, source code, and pretrained network weights are available to download. Thus, one can easily reproduce the results, and the resulting solution can be tested and quickly deployed in practice. Full article
(This article belongs to the Special Issue Multi-Sensor Systems for Object Tracking—2nd Edition)
