Sensors and Advanced Sensing Techniques for Computer Vision Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 30 April 2024 | Viewed by 9987

Special Issue Editors


Prof. Dr. Christos Nikolaos E. Anagnostopoulos
Guest Editor
Cultural Technology and Communication Department, University of the Aegean, Mytilene, Greece
Interests: 3D modeling; 3D visualization; digital culture; face recognition; mixed reality; multimedia

Dr. Stelios Krinidis
Co-Guest Editor
Department of Management Science and Technology, International Hellenic University, 65404 Kavala, Greece
Interests: signal and image processing; computer vision; artificial intelligence; information analysis; data mining; big data; data and visual analytics

Special Issue Information

Dear Colleagues,

The Special Issue on “Sensors and Advanced Sensing Techniques for Computer Vision Applications” addresses all topics related to the challenging problems of computer vision and pattern recognition in conjunction with the emerging field of deep learning. Technologies related to computational intelligence, including deep learning, neural networks, and soft computing, will be considered from both the theoretical and technological points of view, together with advanced 2D/3D computer vision and visualization infrastructures.

Classical computer vision systems use visible-light 2D cameras, reconstructing a 3D scene through photogrammetry, while 3D vision systems use more sophisticated acquisition sensors, such as structured-light 3D scanners, thermographic cameras, hyperspectral imagers, and lidar scanners. As a consequence, the classical tasks of computer vision are now handled in 3D space (point clouds, meshes, 3D objects), and artificial intelligence finds a new area in which to thrive, using deep learning for the comparative analysis of huge amounts of data.

The topics of this Special Issue on “Sensors and Advanced Sensing Techniques for Computer Vision Applications” include (but are not limited to) the following aspects of computer vision and pattern recognition:

  • Deep learning for 2D/3D object recognition and classification
  • Reinforcement learning and robotic agents
  • Data augmentation in computer vision
  • Digital twins  
  • Multidisciplinary applications of deep learning, pattern recognition and computer vision

Multidisciplinary applications can be reported in numerous scientific fields addressing everyday life problems, including engineering, architecture, energy, robotics, medicine, cultural heritage, mixed reality and creative media/entertainment.

Prof. Dr. Christos Nikolaos E. Anagnostopoulos
Dr. Stelios Krinidis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)


Research


18 pages, 38627 KiB  
Article
Infrared and Visual Image Fusion Based on a Local-Extrema-Driven Image Filter
by Wenhao Xiang, Jianjun Shen, Li Zhang and Yu Zhang
Sensors 2024, 24(7), 2271; https://doi.org/10.3390/s24072271 - 02 Apr 2024
Viewed by 402
Abstract
The objective of infrared and visual image fusion is to amalgamate the salient and complementary features of the infrared and visual images into a single informative image. To accomplish this, we introduce a novel local-extrema-driven image filter designed to effectively smooth images by reconstructing pixel intensities based on their local extrema. This filter is iteratively applied to the input infrared and visual images, extracting multiple scales of bright and dark feature maps from the differences between successively filtered images. Subsequently, the bright and dark feature maps of the infrared and visual images at each scale are fused using elementwise-maximum and elementwise-minimum strategies, respectively. The two base images, representing the final-scale smoothed images of the infrared and visual images, are fused using a novel structural similarity- and intensity-based strategy. Finally, the fusion image is straightforwardly produced by combining the fused bright feature maps, dark feature maps, and base image. Rigorous experimentation on the widely used TNO dataset underscores the superiority of our method in fusing infrared and visual images. Our approach consistently performs on par with or surpasses eleven state-of-the-art image-fusion methods, showing compelling results in both qualitative and quantitative assessments.
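To make the described pipeline concrete, here is a rough NumPy/SciPy sketch of the multi-scale decomposition and the elementwise-maximum/minimum fusion of bright and dark feature maps. The local-extrema-driven filter is approximated here by averaging local minimum and maximum envelopes, and the base images are merged by plain averaging instead of the paper's structural similarity- and intensity-based strategy; all function names are illustrative.

```python
import numpy as np
from scipy import ndimage

def local_extrema_smooth(img, size=5):
    # Crude stand-in for the local-extrema-driven filter: rebuild each
    # pixel from the mean of its local minimum and maximum envelopes.
    lo = ndimage.minimum_filter(img, size=size)
    hi = ndimage.maximum_filter(img, size=size)
    return 0.5 * (lo + hi)

def fuse(ir, vis, scales=3):
    ir, vis = ir.astype(np.float64), vis.astype(np.float64)
    bright = np.zeros_like(ir)   # accumulated fused bright feature maps
    dark = np.zeros_like(ir)     # accumulated fused dark feature maps
    for _ in range(scales):
        ir_s, vis_s = local_extrema_smooth(ir), local_extrema_smooth(vis)
        d_ir, d_vis = ir - ir_s, vis - vis_s
        # Bright features (positive residuals): elementwise-maximum fusion.
        bright += np.maximum(np.maximum(d_ir, 0.0), np.maximum(d_vis, 0.0))
        # Dark features (negative residuals): elementwise-minimum fusion.
        dark += np.minimum(np.minimum(d_ir, 0.0), np.minimum(d_vis, 0.0))
        ir, vis = ir_s, vis_s
    base = 0.5 * (ir + vis)  # placeholder for the SSIM/intensity-based rule
    return np.clip(base + bright + dark, 0, 255)
```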

21 pages, 7693 KiB  
Article
The Potential of Diffusion-Based Near-Infrared Image Colorization
by Ayk Borstelmann, Timm Haucke and Volker Steinhage
Sensors 2024, 24(5), 1565; https://doi.org/10.3390/s24051565 - 28 Feb 2024
Viewed by 573
Abstract
Camera traps, an invaluable tool for biodiversity monitoring, capture wildlife activities day and night. In low-light conditions, near-infrared (NIR) imaging is commonly employed to capture images without disturbing animals. However, the reflection properties of NIR light differ from those of visible light in terms of chrominance and luminance, creating a notable gap in human perception. Thus, the objective is to enrich near-infrared images with colors, thereby bridging this domain gap. Conventional colorization techniques are ineffective due to the difference between NIR and visible light. Moreover, regular supervised learning methods cannot be applied because paired training data are rare. Solutions to such unpaired image-to-image translation problems currently tend to involve generative adversarial networks (GANs), but recently, diffusion models have gained attention for their superior performance in various tasks. In response to this, we present a novel framework utilizing diffusion models for the colorization of NIR images. This framework allows efficient implementation of various methods for colorizing NIR images. We show that NIR colorization is primarily controlled by the translation of the near-infrared intensities to those of visible light. The experimental evaluation of three implementations with increasing complexity shows that even a simple implementation inspired by visible-near-infrared (VIS-NIR) fusion rivals GANs. Moreover, we show that the third implementation is capable of outperforming GANs. With our study, we introduce an intersection field joining the research areas of diffusion models, NIR colorization, and VIS-NIR fusion.
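The control flow of such a framework can be pictured with a minimal DDIM-style sampler that denoises random RGB noise conditioned on the NIR channel. The sketch below is a shape-level illustration only: TinyDenoiser is an untrained placeholder for the actual denoising network, and the noise schedule is a generic linear one, not the authors' configuration.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    # Placeholder network: predicts the noise in the RGB channels given
    # the noisy RGB image concatenated with the NIR condition.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, x_noisy, nir, t):
        return self.net(torch.cat([x_noisy, nir], dim=1))

@torch.no_grad()
def colorize(nir, model, steps=50):
    # nir: (B, 1, H, W) tensor; returns a (B, 3, H, W) colorization.
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = torch.cumprod(1.0 - betas, dim=0)
    x = torch.randn(nir.shape[0], 3, *nir.shape[2:])
    for t in reversed(range(steps)):
        eps = model(x, nir, t)
        a = alphas[t]
        a_prev = alphas[t - 1] if t > 0 else torch.tensor(1.0)
        x0 = (x - (1 - a).sqrt() * eps) / a.sqrt()          # predicted clean RGB
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # deterministic DDIM step
    return x

# e.g. rgb = colorize(torch.rand(1, 1, 64, 64), TinyDenoiser())
```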

16 pages, 1302 KiB  
Article
Acceleration of Hyperspectral Skin Cancer Image Classification through Parallel Machine-Learning Methods
by Bernardo Petracchi, Emanuele Torti, Elisa Marenzi and Francesco Leporati
Sensors 2024, 24(5), 1399; https://doi.org/10.3390/s24051399 - 21 Feb 2024
Viewed by 824
Abstract
Hyperspectral imaging (HSI) has become a very compelling technique in different scientific areas; indeed, many researchers use it in the fields of remote sensing, agriculture, forensics, and medicine. In the latter, HSI plays a crucial role as a diagnostic support and for surgery guidance. However, the computational effort in elaborating hyperspectral data is not trivial. Furthermore, the demand for detecting diseases in a short time is undeniable. In this paper, we take up this challenge by parallelizing three of the most intensively used machine-learning methods, namely the Support Vector Machine (SVM), Random Forest (RF), and eXtreme Gradient Boosting (XGB) algorithms, using the Compute Unified Device Architecture (CUDA) to accelerate the classification of hyperspectral skin cancer images. All three have shown good performance in HS image classification, particularly when the dataset size is limited, as demonstrated in the literature. We illustrate the parallelization techniques adopted for each approach, highlighting the suitability of Graphics Processing Units (GPUs) to this aim. Experimental results show that the parallel SVM and XGB algorithms significantly improve the classification times in comparison with their serial counterparts.
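For readers who want a starting point without writing CUDA kernels directly, XGBoost already ships a GPU-backed training path; a hedged sketch of per-pixel classification of a hyperspectral cube is shown below. The data are random stand-ins, and this does not reproduce the paper's own parallel implementations.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Hypothetical hyperspectral cube: H x W pixels, B spectral bands,
# with a per-pixel class label map.
H, W, B = 128, 128, 100
cube = np.random.rand(H, W, B).astype(np.float32)
labels = np.random.randint(0, 3, size=(H, W))

X = cube.reshape(-1, B)  # one sample per pixel, one feature per band
y = labels.ravel()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# tree_method="gpu_hist" selects the CUDA histogram implementation
# (XGBoost >= 2.0 prefers tree_method="hist" with device="cuda").
clf = xgb.XGBClassifier(n_estimators=200, tree_method="gpu_hist")
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```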

15 pages, 13756 KiB  
Article
A Comparative Analysis of UAV Photogrammetric Software Performance for Forest 3D Modeling: A Case Study Using AgiSoft Photoscan, PIX4DMapper, and DJI Terra
by Sina Jarahizadeh and Bahram Salehi
Sensors 2024, 24(1), 286; https://doi.org/10.3390/s24010286 - 03 Jan 2024
Viewed by 1865
Abstract
Three-dimensional (3D) modeling of trees has many applications in various areas, such as forest and urban planning, forest health monitoring, and carbon sequestration, to name a few. Unmanned Aerial Vehicle (UAV) photogrammetry has recently emerged as a low-cost, rapid, and accurate method for 3D modeling of urban and forest trees, replacing costly traditional methods such as plot measurements and surveying. There are numerous commercial and open-source software programs available, each processing UAV data differently to generate forest 3D models and photogrammetric products, including point clouds, Digital Surface Models (DSMs), Canopy Height Models (CHMs), and orthophotos of forest areas. The objective of this study is to compare three widely used commercial software packages, namely AgiSoft Photoscan (Metashape) V 1.7.3, PIX4DMapper (Pix4D) V 4.4.12, and DJI Terra V 3.7.6, for processing UAV data over forest areas from three perspectives: point cloud density and reconstruction quality, computational time, and DSM assessment for height (z) accuracy and the ability to detect trees on the DSM. Three datasets, captured by UAVs on the same day at three different flight altitudes, were used in this study. The first, second, and third datasets were collected at altitudes of 60 m, 100 m, and 120 m, respectively, over a forested area in Tully, New York. While the first and third datasets were captured at nadir, the second dataset was captured 20 degrees off-nadir to investigate the impact of oblique images. Results show that Pix4D and AgiSoft generate point clouds 2.5 times denser than DJI Terra's. However, reconstruction quality evaluation using the Iterative Closest Point (ICP) method shows that DJI Terra has fewer gaps in the point cloud and performed better than AgiSoft and Pix4D in generating a point cloud of trees, power lines, and poles, despite producing fewer points. In other words, superior keypoint detection and an improved matching algorithm are key factors in generating improved final products. The computational time comparison demonstrates that the processing time for AgiSoft and DJI Terra is roughly half that of Pix4D. Furthermore, DSM elevation profiles demonstrate that the estimated height variations between the three software packages range from 0.5 m to 2.5 m, with DJI Terra's estimated heights generally greater than those of AgiSoft and Pix4D. DJI Terra also outperforms AgiSoft and Pix4D in modeling the height contours of trees, buildings, power lines, and poles. Finally, in terms of tree detection, DJI Terra generates the most comprehensive DSM as a result of fewer gaps in the point cloud and consequently stands out as the preferred choice for tree detection applications. The results of this paper can help users of 3D models gauge the reliability of the generated models by understanding the accuracy of the employed software.
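A minimal sketch of this kind of comparison, assuming each package exports its point cloud as a PLY file (the file names below are hypothetical), can be put together with Open3D: count points for density, and run ICP to compare reconstruction agreement.

```python
import open3d as o3d

def density_and_fit(path_a, path_b, voxel=0.5):
    # Compare two photogrammetric point clouds: raw point counts plus
    # ICP fitness/RMSE of cloud A registered onto cloud B.
    a = o3d.io.read_point_cloud(path_a)
    b = o3d.io.read_point_cloud(path_b)
    a_d, b_d = a.voxel_down_sample(voxel), b.voxel_down_sample(voxel)
    reg = o3d.pipelines.registration.registration_icp(
        a_d, b_d, max_correspondence_distance=2 * voxel)
    return len(a.points), len(b.points), reg.fitness, reg.inlier_rmse

# e.g. density_and_fit("agisoft.ply", "terra.ply")  # hypothetical exports
```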

22 pages, 16581 KiB  
Article
Analytic Design Technique for 2D FIR Circular Filter Banks and Their Efficient Implementation Using Polyphase Approach
by Radu Matei and Doru Florin Chiper
Sensors 2023, 23(24), 9851; https://doi.org/10.3390/s23249851 - 15 Dec 2023
Cited by 1 | Viewed by 573
Abstract
This paper proposes an analytical design procedure for 2D FIR circular filter banks, as well as a novel, computationally efficient implementation of the designed filter bank based on a polyphase structure and a block filtering approach. The component filters of the bank are designed in the frequency domain using a specific frequency transformation applied to low-pass, band-pass, and high-pass 1D prototypes with a specified Gaussian shape and imposed specifications (peak frequency, bandwidth). The 1D prototype filter frequency response is derived in closed form as a trigonometric polynomial of specified order using Fourier series, and is then factored. Since the design starts from a 1D prototype with a factored transfer function, the frequency response of the designed 2D filter bank components also results directly in factored form. The designed filters have an accurate shape, with negligible distortions at a relatively low order. We present the design of two types of circular filter banks: uniform and non-uniform (dyadic). An example of image analysis with the uniform filter bank is also provided, showing that the original image can be accurately reconstructed from the sub-band images. The proposed implementation is presented for a simpler case, namely a smaller filter kernel and input image. Using the polyphase and block filtering approach, a convenient system-level implementation of the designed 2D FIR filter is obtained, with relatively low computational complexity.
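The core idea, mapping a 1D prototype onto circular 2D contours via the frequency transformation w -> sqrt(w1^2 + w2^2), can be sketched numerically as below. Here the Gaussian prototype is evaluated directly on a sampled frequency grid and inverted with an FFT, rather than derived analytically as a factored trigonometric polynomial as in the paper.

```python
import numpy as np

def circular_bandpass_kernel(size, wc, bw):
    # 2D FIR circular band-pass: apply the 1D Gaussian prototype
    # G(w) = exp(-(w - wc)^2 / (2 bw^2)) to the radius sqrt(w1^2 + w2^2).
    w = np.fft.fftfreq(size) * 2 * np.pi
    w1, w2 = np.meshgrid(w, w)
    radius = np.sqrt(w1**2 + w2**2)
    H = np.exp(-((radius - wc) ** 2) / (2 * bw**2))
    return np.fft.fftshift(np.real(np.fft.ifft2(H)))  # centered FIR kernel

# Dyadic (non-uniform) bank: the peak frequency halves from one sub-band to the next.
bank = [circular_bandpass_kernel(64, wc=np.pi / 2**k, bw=0.3) for k in range(4)]
```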

19 pages, 4445 KiB  
Article
A Novel Fuzzy-Based Remote Sensing Image Segmentation Method
by Barbara Cardone, Ferdinando Di Martino and Vittorio Miraglia
Sensors 2023, 23(24), 9641; https://doi.org/10.3390/s23249641 - 05 Dec 2023
Cited by 1 | Viewed by 800
Abstract
Image segmentation is a well-known image processing task that consists of partitioning an image into homogeneous areas. It is applied to remotely sensed imagery for many problems, such as land use classification and landscape change detection. Recently, several hybrid remote sensing image segmentation techniques have been proposed that include metaheuristic approaches in order to increase the segmentation accuracy; however, the critical point of these approaches is their high computational complexity, which affects time and memory consumption. To overcome this criticality, we propose a fuzzy-based image segmentation framework implemented in a GIS-based platform for remotely sensed images; furthermore, the proposed model allows us to evaluate the reliability of the segmentation. The Fast Generalized Fuzzy c-means algorithm is implemented to segment images, taking into account local spatial relations between pixels, and the Triple Center Relation validity index is used to find the optimal number of clusters. The framework computes the composite index to be analyzed, starting from multiband remotely sensed images. For each cluster, a segmented image is obtained in which the pixel value, transformed into gray levels, represents the degree of membership to the cluster. A final thematic map is built in which each pixel is assigned to the cluster to which it belongs with the highest membership degree. In addition, the reliability of the classification is estimated by associating each class with the average of the membership degrees of the pixels assigned to it. The method was tested on a study area consisting of the south-western districts of the city of Naples (Italy) for the segmentation of composite index maps derived from multiband remote sensing images. The segmentation results are consistent with segmentations of the study area by morphological and urban characteristics carried out by domain experts. The high computational speed of the proposed image segmentation method allows it to be applied to massive high-resolution remote sensing images.
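As a hedged illustration of the clustering core, the sketch below implements plain fuzzy c-means on per-pixel feature vectors, then hardens the memberships into a thematic map and computes the per-class reliability described above. The paper's Fast Generalized FCM additionally exploits local spatial relations between pixels, which this sketch omits.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    # X: (n_pixels, n_features) array; returns hardened labels and
    # the mean membership degree of the pixels assigned to each class.
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))  # fuzzy membership matrix
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / (d**p * (d**-p).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    labels = U.argmax(axis=1)  # thematic map: highest membership degree
    reliability = np.array([U[labels == k, k].mean() for k in range(c)])
    return labels, reliability
```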

14 pages, 3710 KiB  
Article
Non-Contact Face Temperature Measurement by Thermopile-Based Data Fusion
by Faraz Bhatti, Grischan Engel, Joachim Hampel, Chaimae Khalil, Andreas Reber, Stefan Kray and Thomas Greiner
Sensors 2023, 23(18), 7680; https://doi.org/10.3390/s23187680 - 06 Sep 2023
Viewed by 985
Abstract
Thermal imaging cameras and infrared (IR) temperature measurement devices are the state-of-the-art techniques for non-contact temperature determination of the skin surface. The former is in many cases too cost-intensive for widespread application, while the latter requires manual alignment to the measuring point. Against this background, this paper proposes a new method for automated, non-contact, and area-specific temperature measurement of the facial skin surface. It is based on the combined use of a low-cost thermopile sensor matrix and a 2D image sensor. The temperature values and the 2D image data are fused using a parametric affine transformation. Based on face recognition, this allows temperature values to be assigned to selected facial regions and used specifically to determine the skin surface temperature. The advantages of the proposed method are described. A participant study demonstrates that the absolute temperature values, which are obtained automatically and without manual alignment, are comparable to those of a commercially available IR-based forehead thermometer.
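A minimal sketch of the fusion step, assuming three calibration landmarks visible to both sensors (all coordinates below are hypothetical), is as follows: estimate the 2x3 affine matrix once, then warp each thermopile frame into camera coordinates and average the temperatures inside the face region returned by a detector.

```python
import cv2
import numpy as np

# Calibration: pixel coordinates of the same three landmarks as seen by
# the thermopile matrix and by the 2D image sensor (hypothetical values).
thermo_pts = np.float32([[4, 3], [27, 3], [15, 20]])
camera_pts = np.float32([[112, 80], [512, 86], [310, 400]])
A = cv2.getAffineTransform(thermo_pts, camera_pts)  # 2x3 affine matrix

def face_temperature(thermo_frame, camera_shape, roi):
    # Warp the low-resolution temperature map into camera coordinates and
    # average it over a face ROI (x, y, w, h) supplied by a face detector.
    warped = cv2.warpAffine(thermo_frame.astype(np.float32), A,
                            (camera_shape[1], camera_shape[0]))
    x, y, w, h = roi
    return float(warped[y:y + h, x:x + w].mean())
```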

20 pages, 12701 KiB  
Article
Real-Time Embedded Eye Image Defocus Estimation for Iris Biometrics
by Camilo A. Ruiz-Beltrán, Adrián Romero-Garcés, Martín González-García, Rebeca Marfil and Antonio Bandera
Sensors 2023, 23(17), 7491; https://doi.org/10.3390/s23177491 - 29 Aug 2023
Cited by 1 | Viewed by 753
Abstract
One of the main challenges faced by iris recognition systems is working with people in motion, with the sensor at an increasing distance (more than 1 m) from the person. The ultimate goal is to make the system less and less intrusive and to require less cooperation from the person. When this scenario is implemented using a single static sensor, the sensor must have a wide field of view and the system must process a large number of frames per second (fps). In such a scenario, many of the captured eye images will not have adequate quality (contrast or resolution). This paper describes the implementation in an MPSoC (multiprocessor system-on-chip) of an eye image detection system that integrates, in the programmable logic (PL) part, a functional block to evaluate the level of defocus blur of the captured images. In this way, the system is able to discard images that do not have the required focus quality for the subsequent processing steps. The proposals were successfully designed using Vitis High-Level Synthesis (VHLS) and integrated into an eye detection framework capable of processing over 57 fps working with a 16 Mpixel sensor. Using an extended version of the CASIA-Iris-Distance V4 database for validation, the experimental evaluation shows that the proposed framework is able to successfully discard unfocused eye images. More relevantly, in a real implementation, this proposal allows discarding up to 97% of out-of-focus eye images, which then do not have to be processed by the segmentation and normalised iris pattern extraction blocks.
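The paper evaluates defocus blur inside the programmable logic; in software, a commonly used proxy for such a focus score is the variance of the Laplacian, sketched below (the threshold is arbitrary and would have to be tuned to the sensor and optics).

```python
import cv2

def is_focused(gray_eye, threshold=120.0):
    # Variance-of-Laplacian sharpness score: low variance indicates
    # defocus blur, so the frame can be skipped before iris segmentation.
    score = cv2.Laplacian(gray_eye, cv2.CV_64F).var()
    return score >= threshold, score
```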

12 pages, 6152 KiB  
Article
Automated Identification of Hidden Corrosion Based on the D-Sight Technique: A Case Study on a Military Helicopter
by Andrzej Katunin, Piotr Synaszko and Krzysztof Dragan
Sensors 2023, 23(16), 7131; https://doi.org/10.3390/s23167131 - 11 Aug 2023
Viewed by 583
Abstract
Hidden corrosion remains a significant problem during aircraft service, primarily because of difficulties in its detection and assessment. The non-destructive D-Sight testing technique is characterized by high sensitivity to this type of damage and is an effective sensing tool for qualitative assessments of hidden corrosion in aircraft structures used by numerous ground service entities. In this paper, the authors demonstrate a new approach to the automatic quantification of hidden corrosion based on the image processing of D-Sight images acquired during periodic inspections. The performance of the developed processing algorithm was demonstrated on the results of the inspection of a Mi-family military helicopter. The nondimensional quantitative measure introduced in this study confirmed the effectiveness of this evaluation of corrosion progression, in agreement with the results of the qualitative analysis of D-Sight images performed by inspectors. This allows for the automation of the inspection process and supports inspectors in evaluating the extent and progression of hidden corrosion.
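The exact processing algorithm is the paper's own, but the flavor of a nondimensional corrosion measure can be sketched as below, assuming two co-registered 8-bit grayscale D-Sight images from successive inspections: threshold the change between them and report the affected-area fraction.

```python
import cv2
import numpy as np

def corrosion_fraction(dsight_now, dsight_ref):
    # Inputs: co-registered 8-bit grayscale D-Sight images.
    diff = cv2.absdiff(cv2.GaussianBlur(dsight_now, (5, 5), 0),
                       cv2.GaussianBlur(dsight_ref, (5, 5), 0))
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return float(mask.mean() / 255.0)  # nondimensional affected-area fraction
```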

Review


51 pages, 4846 KiB  
Review
Object Detection, Recognition, and Tracking Algorithms for ADASs—A Study on Recent Trends
by Vinay Malligere Shivanna and Jiun-In Guo
Sensors 2024, 24(1), 249; https://doi.org/10.3390/s24010249 - 31 Dec 2023
Viewed by 1662
Abstract
Advanced driver assistance systems (ADASs) are becoming increasingly common in modern-day vehicles, as they not only improve safety and reduce accidents but also aid in smoother and easier driving. ADASs rely on a variety of sensors, such as cameras, radars, and lidars, often in combination, to perceive their surroundings and identify and track objects on the road. The key components of ADASs are object detection, recognition, and tracking algorithms that allow vehicles to identify and track other objects on the road, such as other vehicles, pedestrians, cyclists, obstacles, traffic signs, and traffic lights. This information is then used to warn the driver of potential hazards or by the ADAS itself to take corrective action to avoid an accident. This paper provides a review of prominent state-of-the-art object detection, recognition, and tracking algorithms used in different functionalities of ADASs. The paper begins by introducing the history and fundamentals of ADASs, followed by a review of recent trends in various ADAS algorithms and their functionalities, along with the datasets employed. The paper concludes by discussing the future of object detection, recognition, and tracking algorithms for ADASs, including the need for more research on detection, recognition, and tracking in challenging environments, such as those with low visibility or high traffic density.
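As a toy illustration of the association step that most tracking-by-detection pipelines in this space share, the sketch below greedily matches existing tracks to new detections by intersection-over-union; real ADAS trackers add motion models, appearance features, and track management on top of this.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(prev_tracks, detections, thr=0.3):
    # prev_tracks: {track_id: box}; detections: list of boxes this frame.
    tracks, used = {}, set()
    for tid, box in prev_tracks.items():
        best = max(((iou(box, d), i) for i, d in enumerate(detections)
                    if i not in used), default=(0.0, -1))
        if best[0] >= thr:
            tracks[tid] = detections[best[1]]
            used.add(best[1])
    next_id = max(prev_tracks, default=-1) + 1
    for i, d in enumerate(detections):
        if i not in used:  # an unmatched detection starts a new track
            tracks[next_id] = d
            next_id += 1
    return tracks
```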
