Special Issue "Imaging: Sensors and Technologies"

A special issue of Sensors (ISSN 1424-8220).

Deadline for manuscript submissions: closed (30 September 2016)

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor

Guest Editor
Prof. Dr. Gonzalo Pajares Martinsanz

Department of Software Engineering and Artificial Intelligence, Faculty of Informatics, Complutense University of Madrid, 28040 Madrid, Spain
Phone: +34.1.3947546
Interests: computer vision; image processing; pattern recognition; 3D image reconstruction; spatio-temporal image change detection and track movement; fusion and registering from imaging sensors; super-resolution from low-resolution image sensors

Special Issue Information

Dear Colleagues,

Imaging sensors are now used in a wide range of areas. Actively or passively, these sensors capture electromagnetic radiation or acoustic echoes across the whole spectrum which, conveniently arranged into images, allows the extraction of useful information.

Among others, medicine, biology, industry, agriculture, surveillance, security, visual inspection, monitoring, target tracking, photogrammetry, robotics, and navigation aids in manned or unmanned vehicles are areas where advances in imaging sensors and technologies play an important role.

Methods and procedures designed to make imaging devices operational and profitable enable the processing of the relevant information and improve the efficiency of such systems.

The following is a list of the main topics covered by this Special Issue, with emphasis on the sensors, devices, and technologies oriented toward specific image processing applications. The Special Issue is not, however, limited to these topics:

• Active or passive sensors and technologies based on physical designs, including CCD, EMCCD, CMOS, NMOS, and photodiodes.

• Mono-, multi-, and hyper-spectral sensors for spectral analysis: ultraviolet, visible, infrared, thermal, laser, or X-ray.

• Sensors and technologies for radiography, tomography, magnetic resonance, neuroimaging, and microscopy.

• Sensors and technologies for 3D recovery: stereoscopy, ToF, and laser.

• Multiple and temporal imaging sensors and technologies: video, panoramic.

• Radar and SAR devices.

• Acoustic sensors and devices: ultrasound, sonar.

• Image acquisition and formation: physical and geometric sensory arrangement, optical systems, and spectral filters.

Prof. Dr. Gonzalo Pajares Martinsanz
Guest Editor

Related Journal: Journal of Imaging

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (37 papers)


Research


Open Access Article: Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras
Sensors 2017, 17(1), 92; doi:10.3390/s17010092
Received: 2 September 2016 / Revised: 7 December 2016 / Accepted: 9 December 2016 / Published: 5 January 2017
Cited by 2 | PDF Full-text (5045 KB) | HTML Full-text | XML Full-text
Abstract
Time-of-Flight (ToF) cameras, a technology which has developed rapidly in recent years, are 3D imaging sensors providing a depth image as well as an amplitude image with a high frame rate. As a ToF camera is limited by the imaging conditions and external environment, its captured data are always subject to certain errors. This paper analyzes the influence of typical external distractions including material, color, distance, lighting, etc. on the depth error of ToF cameras. Our experiments indicated that factors such as lighting, color, material, and distance could cause different influences on the depth error of ToF cameras. However, since the forms of errors are uncertain, it’s difficult to summarize them in a unified law. To further improve the measurement accuracy, this paper proposes an error correction method based on Particle Filter-Support Vector Machine (PF-SVM). Moreover, the experiment results showed that this method can effectively reduce the depth error of ToF cameras to 4.6 mm within its full measurement range (0.5–5 m).
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available
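The abstract above gives no further details of the PF-SVM model, but the shape of the correction step can be illustrated with a stand-in: fit a smooth model of the systematic depth error against the measured distance on calibration data, then subtract the predicted error at run time. Everything below — the synthetic data, the error shape, and the use of a plain polynomial fit instead of PF-SVM — is an assumption for illustration, not the paper's method.

```python
import numpy as np

# Hypothetical calibration data: true vs. measured depth (metres) over
# the camera's 0.5-5 m range; values are synthetic, for illustration only.
rng = np.random.default_rng(0)
true_depth = np.linspace(0.5, 5.0, 200)
# Simulated systematic ToF bias (smooth wiggle plus a distance-dependent
# term) and a small amount of measurement noise.
error = 0.03 * np.sin(2 * np.pi * true_depth / 5.0) + 0.01 * true_depth
measured = true_depth + error + rng.normal(0.0, 0.002, true_depth.size)

# Model the error as a function of the *measured* depth, since that is
# all that is available at run time.
coeffs = np.polyfit(measured, measured - true_depth, deg=5)

def correct(d):
    """Subtract the predicted systematic error from a raw depth reading."""
    return d - np.polyval(coeffs, d)

raw_rmse = np.sqrt(np.mean((measured - true_depth) ** 2))
cor_rmse = np.sqrt(np.mean((correct(measured) - true_depth) ** 2))
```

After the correction, the residual error is dominated by the sensor noise rather than the systematic bias, which is the effect the paper reports (down to 4.6 mm over the full range).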

Open Access Article: Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor
Sensors 2017, 17(1), 56; doi:10.3390/s17010056
Received: 14 September 2016 / Revised: 23 December 2016 / Accepted: 26 December 2016 / Published: 29 December 2016
PDF Full-text (5273 KB) | HTML Full-text | XML Full-text
Abstract
When detecting a target over the diurnal cycle, a conventional infrared thermal sensor might lose the target due to the thermal crossover, which could happen at any time throughout the day when the infrared image contrast between target and background in a scene is indistinguishable due to the temperature variation. In this paper, the benefits of using a multispectral-based infrared sensor over the diurnal cycle have been shown. Firstly, a brief theoretical analysis on how the thermal crossover influences a conventional thermal sensor, within the conditions where the thermal crossover would happen and why the mid-infrared (3~5 μm) multispectral technology is effective, is presented. Furthermore, the effectiveness of this technology is also described and we describe how the prototype design and multispectral technology is employed to help solve the thermal crossover detection problem. Thirdly, several targets are set up outside and imaged in the field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively solve the failure of the conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Open Access Article: Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics
Sensors 2016, 16(12), 1994; doi:10.3390/s16121994
Received: 28 September 2016 / Revised: 17 November 2016 / Accepted: 18 November 2016 / Published: 25 November 2016
Cited by 1 | PDF Full-text (5099 KB) | HTML Full-text | XML Full-text
Abstract
For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we want to examine the feasibility and practicability of a well-known “extended DoF” (EDoF) technique, or “wavefront coding,” by building real-time long-range iris recognition and performing large-scale iris recognition. The key to the success of long-range iris recognition includes long DoF and image quality invariance toward various object distance, which is strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Open Access Article: Expanding the Detection of Traversable Area with RealSense for the Visually Impaired
Sensors 2016, 16(11), 1954; doi:10.3390/s16111954
Received: 13 September 2016 / Revised: 4 November 2016 / Accepted: 8 November 2016 / Published: 21 November 2016
Cited by 2 | PDF Full-text (18233 KB) | HTML Full-text | XML Full-text
Abstract
The introduction of RGB-Depth (RGB-D) sensors into the visually impaired people (VIP)-assisting area has stirred great interest of many researchers. However, the detection range of RGB-D sensors is limited by narrow depth field angle and sparse depth map in the distance, which hampers broader and longer traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on a RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of RealSense is enhanced with IR image large-scale matching and RGB image-guided filtering. Traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation, preliminarily. A seeded growing region algorithm, combining the depth image and RGB image, enlarges the preliminary traversable area greatly. This is critical not only for avoiding close obstacles, but also for allowing superior path planning on navigation. The proposed approach has been tested on a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system, which consists of a wearable prototype and an audio interface. Furthermore, the presented approach has been proved to be useful and reliable by a field test with eight visually impaired volunteers.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available
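The RANSAC ground-plane segmentation step mentioned in the abstract above can be sketched in a few lines. The following is a generic plane-fit RANSAC on a synthetic point cloud, not the authors' implementation; the function name, iteration count, and inlier threshold are all assumptions.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Fit a plane to a 3-D point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0 with the
    largest inlier count; `tol` is the inlier distance threshold (metres).
    """
    rng = rng or np.random.default_rng(0)
    best_count, best_mask = -1, None
    for _ in range(iters):
        # Hypothesise a plane from three random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        mask = np.abs(points @ n + d) < tol
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
            best_n, best_d = n, d
    return best_n, best_d, best_mask

# Synthetic scene: a horizontal floor (z ~ 0) plus scattered obstacles.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-2, 2, 500),
                         rng.uniform(0, 4, 500),
                         rng.normal(0.0, 0.005, 500)])
clutter = rng.uniform([-2, 0, 0.2], [2, 4, 2.0], (150, 3))
cloud = np.vstack([floor, clutter])

n, d, inliers = ransac_plane(cloud)
```

In the paper this plane hypothesis is then refined with surface-normal estimation and enlarged by seeded region growing over the RGB and depth images; the sketch covers only the preliminary segmentation.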

Open Access Article: Full-Field Optical Coherence Tomography Using Galvo Filter-Based Wavelength Swept Laser
Sensors 2016, 16(11), 1933; doi:10.3390/s16111933
Received: 30 September 2016 / Revised: 15 November 2016 / Accepted: 15 November 2016 / Published: 17 November 2016
Cited by 3 | PDF Full-text (3190 KB) | HTML Full-text | XML Full-text
Abstract
We report a wavelength swept laser-based full-field optical coherence tomography for measuring the surfaces and thicknesses of refractive and reflective samples. The system consists of a galvo filter–based wavelength swept laser and a simple Michelson interferometer. Combinations of the reflective and refractive samples are used to demonstrate the performance of the system. By synchronizing the camera with the source, the cross-sectional information of the samples can be seen after each sweep of the swept source. This system can be effective for the thickness measurement of optical thin films as well as for the depth investigation of samples in industrial applications. A resolution target with a glass cover slip and a step height standard target are imaged, showing the cross-sectional and topographical information of the samples.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Open Access Article: A Selective Change Driven System for High-Speed Motion Analysis
Sensors 2016, 16(11), 1875; doi:10.3390/s16111875
Received: 29 July 2016 / Revised: 28 October 2016 / Accepted: 3 November 2016 / Published: 8 November 2016
PDF Full-text (5522 KB) | HTML Full-text | XML Full-text
Abstract
Vision-based sensing algorithms are computationally-demanding tasks due to the large amount of data acquired and processed. Visual sensors deliver much information, even if data are redundant, and do not give any additional information. A Selective Change Driven (SCD) sensing system is based on a sensor that delivers, ordered by the magnitude of its change, only those pixels that have changed most since the last read-out. This allows the information stream to be adjusted to the computation capabilities. Following this strategy, a new SCD processing architecture for high-speed motion analysis, based on processing pixels instead of full frames, has been developed and implemented into a Field Programmable Gate-Array (FPGA). The programmable device controls the data stream, delivering a new object distance calculation for every new pixel. The acquisition, processing and delivery of a new object distance takes just 1.7 μs. Obtaining a similar result using a conventional frame-based camera would require a device working at roughly 500 Kfps, which is far from being practical or even feasible. This system, built with the recently-developed 64 × 64 CMOS SCD sensor, shows the potential of the SCD approach when combined with a hardware processing system.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available
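The frame-rate comparison in the abstract above is simple arithmetic on the quoted 1.7 μs per-pixel latency; a quick back-of-the-envelope check (the 64 × 64 resolution is taken from the abstract, the rest is plain arithmetic):

```python
# Per-pixel latency of the SCD pipeline, as quoted in the abstract.
t_pixel = 1.7e-6            # seconds per delivered pixel/distance update

# A frame-based camera must deliver a whole frame in the time the SCD
# system handles one changed pixel to match its update latency:
equivalent_fps = 1.0 / t_pixel          # ~588,000 frames per second

# At the sensor's 64 x 64 resolution, that frame rate corresponds to a
# raw pixel throughput of:
pixel_throughput = equivalent_fps * 64 * 64    # ~2.4e9 pixels per second
```

The ~588 kfps figure is consistent with the "roughly 500 Kfps" order of magnitude stated in the abstract, and the multi-gigapixel-per-second throughput shows why a conventional frame-based device is impractical here.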

Open Access Article: A 3D Optical Surface Profilometer Using a Dual-Frequency Liquid Crystal-Based Dynamic Fringe Pattern Generator
Sensors 2016, 16(11), 1794; doi:10.3390/s16111794
Received: 23 September 2016 / Revised: 22 October 2016 / Accepted: 24 October 2016 / Published: 27 October 2016
PDF Full-text (7116 KB) | HTML Full-text | XML Full-text
Abstract
We propose a liquid crystal (LC)-based 3D optical surface profilometer that can utilize multiple fringe patterns to extract an enhanced 3D surface depth profile. To avoid the optical phase ambiguity and enhance the 3D depth extraction, 16 interference patterns were generated by the LC-based dynamic fringe pattern generator (DFPG) using four-step phase shifting and four-step spatial frequency varying schemes. The DFPG had one common slit with an electrically controllable birefringence (ECB) LC mode and four switching slits with a twisted nematic LC mode. The spatial frequency of the projected fringe pattern could be controlled by selecting one of the switching slits. In addition, moving fringe patterns were obtainable by applying voltages to the ECB LC layer, which varied the phase difference between the common and the selected switching slits. Notably, the DFPG switching time required to project 16 fringe patterns was minimized by utilizing the dual-frequency modulation of the driving waveform to switch the LC layers. We calculated the phase modulation of the DFPG and reconstructed the depth profile of 3D objects using a discrete Fourier transform method and geometric optical parameters.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available
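The four-step phase-shifting scheme mentioned above recovers the wrapped fringe phase from four images shifted by 90°. A minimal numpy sketch on synthetic fringes (not the paper's code; the image names and the one-row test pattern are assumptions):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover the wrapped phase from four fringe images shifted by
    0, 90, 180 and 270 degrees: I_k = A + B*cos(phi + k*pi/2).
    With these shifts, I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi)."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringes over one row of a hypothetical projected pattern.
x = np.linspace(0, 4 * np.pi, 256)     # true phase ramp
A, B = 0.5, 0.4                        # background and modulation depth
frames = [A + B * np.cos(x + k * np.pi / 2) for k in range(4)]

phi = four_step_phase(*frames)         # wrapped to (-pi, pi]
unwrapped = np.unwrap(phi)             # continuous phase, equals x here
```

Note that the arctangent cancels both the background A and the modulation B, which is why the scheme is robust to uneven illumination; the remaining 2π ambiguity is what the paper's four-step spatial-frequency variation helps resolve.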

Open Access Article: Geometric Calibration and Validation of Kompsat-3A AEISS-A Camera
Sensors 2016, 16(10), 1776; doi:10.3390/s16101776
Received: 26 September 2016 / Revised: 19 October 2016 / Accepted: 20 October 2016 / Published: 24 October 2016
Cited by 3 | PDF Full-text (4480 KB) | HTML Full-text | XML Full-text
Abstract
Kompsat-3A, which was launched on 25 March 2015, is a sister spacecraft of the Kompsat-3 developed by the Korea Aerospace Research Institute (KARI). Kompsat-3A’s AEISS-A (Advanced Electronic Image Scanning System-A) camera is similar to Kompsat-3’s AEISS but it was designed to provide PAN (Panchromatic) resolution of 0.55 m, MS (multispectral) resolution of 2.20 m, and TIR (thermal infrared) at 5.5 m resolution. In this paper we present the geometric calibration and validation work of Kompsat-3A that was completed last year. A set of images over the test sites was taken for two months and was utilized for the work. The workflow includes the boresight calibration, CCDs (charge-coupled devices) alignment and focal length determination, the merge of two CCD lines, and the band-to-band registration. Then, the positional accuracies without any GCPs (ground control points) were validated for hundreds of test sites across the world using various image acquisition modes. In addition, we checked the planimetric accuracy by bundle adjustments with GCPs.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Open Access Article: Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models
Sensors 2016, 16(10), 1740; doi:10.3390/s16101740
Received: 9 August 2016 / Revised: 9 October 2016 / Accepted: 10 October 2016 / Published: 19 October 2016
PDF Full-text (15699 KB) | HTML Full-text | XML Full-text
Abstract
Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Open Access Article: Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging
Sensors 2016, 16(10), 1671; doi:10.3390/s16101671
Received: 3 May 2016 / Revised: 29 September 2016 / Accepted: 8 October 2016 / Published: 11 October 2016
Cited by 2 | PDF Full-text (7759 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of MEMS sensors was performed, and the beam pattern of a module, based on an 8 × 8 planar array and of several clusters of modules, was obtained. A flexible framework, formed by an FPGA, an embedded processor, a computer desktop, and a graphic processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person are presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available
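Wideband delay-and-sum beamforming via FFT, as named in the abstract above, can be sketched for a simple linear microphone array: each channel's spectrum is phase-shifted per frequency bin toward a candidate direction and summed, and the direction with the highest output power is the estimated arrival angle. The array geometry, sample rate, and all names below are illustrative assumptions, not the authors' framework.

```python
import numpy as np

def fft_beamform(signals, mic_x, angles, fs, c=343.0):
    """Wideband delay-and-sum beamforming in the frequency domain.

    signals: (M, N) array, one row per microphone; mic_x: (M,) positions
    along a line (m); angles: candidate directions (rad, 0 = broadside).
    Returns the output power for each candidate angle.
    """
    M, N = signals.shape
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(N, 1.0 / fs)
    power = np.empty(len(angles))
    for i, th in enumerate(angles):
        delays = mic_x * np.sin(th) / c            # arrival delays (s)
        # Phase-align every channel toward direction `th`, then sum.
        steer = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        y = (spectra * steer).sum(axis=0)
        power[i] = np.sum(np.abs(y) ** 2)
    return power

# Synthetic test: a wideband source at 20 degrees, 8 mics, 2 cm pitch.
fs, N, c = 16000, 1024, 343.0
mic_x = np.arange(8) * 0.02
rng = np.random.default_rng(2)
s = rng.normal(size=2 * N)                          # wideband source
theta0 = np.deg2rad(20.0)
t = np.arange(N) / fs
sig = np.empty((8, N))
for m, x in enumerate(mic_x):                       # fractional delays
    sig[m] = np.interp(t - x * np.sin(theta0) / c,
                       np.arange(2 * N) / fs, s)
angles = np.deg2rad(np.linspace(-60, 60, 121))
p = fft_beamform(sig, mic_x, angles, fs)
best = np.rad2deg(angles[np.argmax(p)])             # close to 20 degrees
```

Scanning `angles` over a 2-D grid with a planar 8 × 8 array instead of a line gives an acoustic image, which is the operation the paper accelerates across FPGA, CPU, and GPU subsystems.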

Open Access Article: Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera
Sensors 2016, 16(10), 1649; doi:10.3390/s16101649
Received: 5 September 2016 / Accepted: 3 October 2016 / Published: 6 October 2016
Cited by 7 | PDF Full-text (2398 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Open Access Article: Design of a Sub-Picosecond Jitter with Adjustable-Range CMOS Delay-Locked Loop for High-Speed and Low-Power Applications
Sensors 2016, 16(10), 1593; doi:10.3390/s16101593
Received: 28 July 2016 / Revised: 5 September 2016 / Accepted: 5 September 2016 / Published: 28 September 2016
Cited by 1 | PDF Full-text (6168 KB) | HTML Full-text | XML Full-text
Abstract
A Delay-Locked Loop (DLL) with a modified charge pump circuit is proposed for generating high-resolution linear delay steps with sub-picosecond jitter performance and adjustable delay range. The small-signal model of the modified charge pump circuit is analyzed to bring forth the relationship between the DLL’s internal control voltage and output time delay. Circuit post-layout simulation shows that a 0.97 ps delay step within a 69 ps delay range with 0.26 ps Root-Mean Square (RMS) jitter performance is achievable using a standard 0.13 µm Complementary Metal-Oxide Semiconductor (CMOS) process. The post-layout simulation results show that the power consumption of the proposed DLL architecture’s circuit is 0.1 mW when the DLL is operated at 2 GHz.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Open Access Article: A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor
Sensors 2016, 16(10), 1572; doi:10.3390/s16101572
Received: 13 July 2016 / Revised: 18 September 2016 / Accepted: 19 September 2016 / Published: 23 September 2016
Cited by 1 | PDF Full-text (6315 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, an accumulation technique suitable for digital domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the rate of imaging. In terms of the slight variations of quantization codes among different pixel exposures towards the same object, the pixel array is divided into two groups: one is for coarse quantization of high bits only, and the other one is for fine quantization of low bits. Then, the complete quantization codes are composed of both results from the coarse-and-fine quantization. The equivalent operation comparably reduces the total required bit numbers of the quantization. In the 0.18 µm CMOS process, two versions of 16-stage digital domain CMOS TDI image sensor chains based on a 10-bit successive approximate register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumption of slices of the two versions are 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively. Meanwhile, the linearity of the two versions are 99.74% and 99.99%, respectively.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available
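The composition of the complete code from the coarse (high-bit) and fine (low-bit) quantization results described above is a plain bit-level merge. A toy sketch for the 10-bit case follows; the 5/5 bit split and all names are assumptions for illustration, since the abstract does not state how the bits are divided.

```python
# Assumed split of the 10-bit SAR code: 5 coarse (high) + 5 fine (low) bits.
FINE_BITS = 5

def compose(coarse_code, fine_code):
    """Merge a coarse (high-bit) and a fine (low-bit) result into one
    complete quantization code."""
    return (coarse_code << FINE_BITS) | fine_code

def decompose(code):
    """Split a complete code back into its coarse and fine parts."""
    return code >> FINE_BITS, code & ((1 << FINE_BITS) - 1)

sample = 0b10110_01101            # a full 10-bit code (717 decimal)
hi, lo = decompose(sample)        # hi = 0b10110, lo = 0b01101
recombined = compose(hi, lo)      # round-trips back to `sample`
```

The power saving in the paper comes from each pixel group resolving only its half of the bits, so the per-conversion switching activity is roughly halved while the composed code keeps the full 10-bit resolution.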

Open Access Article: Long-Term Continuous Double Station Observation of Faint Meteor Showers
Sensors 2016, 16(9), 1493; doi:10.3390/s16091493
Received: 13 July 2016 / Revised: 1 September 2016 / Accepted: 7 September 2016 / Published: 14 September 2016
Cited by 1 | PDF Full-text (10042 KB) | HTML Full-text | XML Full-text
Abstract
Meteor detection and analysis is an essential topic in the field of astronomy. In this paper, a high-sensitivity and high-time-resolution imaging device for the detection of faint meteoric events is presented. The instrument is based on a fast CCD camera and an image intensifier. Two such instruments form a double-station observation network. The MAIA (Meteor Automatic Imager and Analyzer) system has been in continuous operation since 2013 and has successfully captured hundreds of meteors belonging to different meteor showers, as well as sporadic meteors. A data processing pipeline for the efficient processing and evaluation of the massive amount of video sequences is also introduced in this paper.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Open Access Article: A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology
Sensors 2016, 16(9), 1364; doi:10.3390/s16091364
Received: 6 July 2016 / Revised: 17 August 2016 / Accepted: 23 August 2016 / Published: 25 August 2016
PDF Full-text (13846 KB) | HTML Full-text | XML Full-text
Abstract
Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, low speed and low accuracy. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40–50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production.
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Open Access Article: Are We Ready to Build a System for Assisting Blind People in Tactile Exploration of Bas-Reliefs?
Sensors 2016, 16(9), 1361; doi:10.3390/s16091361
Received: 6 July 2016 / Revised: 10 August 2016 / Accepted: 18 August 2016 / Published: 24 August 2016
Abstract
Nowadays, the creation of methodologies and tools for facilitating the 3D reproduction of artworks and, at the same time, making their exploration possible and more meaningful for blind users is becoming increasingly relevant to society. Accordingly, the creation of integrated systems including both tactile media (e.g., bas-reliefs) and interfaces capable of providing users with an experience cognitively comparable to the one originally envisioned by the artist may be considered the next step in enhancing artwork exploration. In light of this, the present work describes a first-attempt system designed to aid blind people (BP) in the tactile exploration of bas-reliefs. In detail, a consistent hardware layout, comprising a hand-tracking system based on a Kinect® sensor and an audio device, is proposed together with a number of methodologies, algorithms, and information related to physical design. Moreover, based on experimental tests of the developed system concerning the device position, some design alternatives are suggested and their pros and cons discussed. Full article
Open Access Article: Substrate and Passivation Techniques for Flexible Amorphous Silicon-Based X-ray Detectors
Sensors 2016, 16(8), 1162; doi:10.3390/s16081162
Received: 23 May 2016 / Revised: 19 July 2016 / Accepted: 19 July 2016 / Published: 26 July 2016
Cited by 1
Abstract
Flexible active matrix display technology has been adapted to create new flexible photo-sensing electronic devices, including flexible X-ray detectors. Monolithic integration of amorphous silicon (a-Si) PIN photodiodes on a flexible substrate poses significant challenges associated with the intrinsic film stress of amorphous silicon. This paper examines how altering device structuring and diode passivation layers can greatly improve the electrical performance and the mechanical reliability of the device, thereby eliminating one of the major weaknesses of a-Si PIN diodes in comparison to alternative photodetector technology, such as organic bulk heterojunction photodiodes and amorphous selenium. A dark current of 0.5 pA/mm² and photodiode quantum efficiency of 74% are possible with a pixelated diode structure with a silicon nitride/SU-8 bilayer passivation structure on a 20 µm-thick polyimide substrate. Full article
Open Access Article: Uncertainty Comparison of Visual Sensing in Adverse Weather Conditions
Sensors 2016, 16(7), 1125; doi:10.3390/s16071125
Received: 28 April 2016 / Revised: 5 July 2016 / Accepted: 15 July 2016 / Published: 20 July 2016
Abstract
This paper focuses on flood-region detection using monitoring images. However, adverse weather affects the outcome of image segmentation methods. In this paper, we present an experimental comparison of an outdoor visual sensing system using region-growing methods with two different growing rules, namely GrowCut and RegGro. For each growing rule, several tests on adverse-weather and lens-stained scenes were performed, and the influence of different weather conditions on the outdoor visual sensing system was analyzed for each rule. Furthermore, the experimental errors and uncertainties obtained with the growing rules were compared. The segmentation accuracy of flood regions yielded by the GrowCut, RegGro, and hybrid methods was 75%, 85%, and 87.7%, respectively. Full article
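GrowCut and RegGro are specific growing rules whose details are not given in the abstract. A minimal, generic region-growing routine conveys the underlying mechanism they share; the seed handling and intensity tolerance here are illustrative assumptions, not either paper's rule:

```python
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` = (x, y), adding 4-connected pixels whose
    intensity lies within `tol` of the seed intensity (generic sketch).
    `img` is a list of rows; returns the set of (x, y) region pixels."""
    h, w = len(img), len(img[0])
    sx, sy = seed
    ref = img[sy][sx]
    region, frontier = {seed}, deque([seed])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < w and 0 <= ny < h and (nx, ny) not in region
                    and abs(img[ny][nx] - ref) <= tol):
                region.add((nx, ny))
                frontier.append((nx, ny))
    return region
```

Seeded inside a flood region of roughly uniform intensity, the routine expands until it hits pixels outside the tolerance; rain, fog, or lens stains perturb exactly this homogeneity assumption, which is what the paper's comparison measures.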
Open Access Article: A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity
Sensors 2016, 16(7), 999; doi:10.3390/s16070999
Received: 3 April 2016 / Revised: 19 June 2016 / Accepted: 22 June 2016 / Published: 29 June 2016
Abstract
In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/(8.5 × 10⁷) when illuminated by a 405-nm diode laser and 1/(1.4 × 10⁴) when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena. Full article
Open Access Article: Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System
Sensors 2016, 16(7), 982; doi:10.3390/s16070982
Received: 3 May 2016 / Revised: 16 June 2016 / Accepted: 23 June 2016 / Published: 25 June 2016
Cited by 1
Abstract
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. Full article
Open Access Article: Evaluation of a Wobbling Method Applied to Correcting Defective Pixels of CZT Detectors in SPECT Imaging
Sensors 2016, 16(6), 772; doi:10.3390/s16060772
Received: 17 February 2016 / Revised: 20 April 2016 / Accepted: 16 May 2016 / Published: 27 May 2016
Cited by 1
Abstract
In this paper, we propose a wobbling method to correct bad pixels in cadmium zinc telluride (CZT) detectors, using information from related images. We build an automated device that realizes the wobbling correction for small animal Single Photon Emission Computed Tomography (SPECT) imaging. The wobbling correction method is applied to various constellations of defective pixels. The corrected images are compared with the results of the conventional interpolation method, and the correction effectiveness is evaluated quantitatively using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In summary, the proposed wobbling method, equipped with the automatic mechanical system, provides a better image quality for correcting defective pixels, which could be used for all pixelated detectors for molecular imaging. Full article
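Of the two quality metrics used above, PSNR has a compact standard definition (SSIM is considerably more involved and is omitted here). A minimal sketch, treating images as flat sequences of pixel values:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images,
    given as flat sequences of pixel values: 10*log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

A corrected image that is closer to the defect-free reference has a lower MSE and therefore a higher PSNR, which is how the wobbling and interpolation results can be ranked quantitatively.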
Open Access Article: Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition
Sensors 2016, 16(5), 719; doi:10.3390/s16050719
Received: 22 February 2016 / Revised: 12 May 2016 / Accepted: 13 May 2016 / Published: 18 May 2016
Cited by 3
Abstract
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. Full article
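The paper's spectral estimation is sensor-specific, but the core decomposition idea of subtracting an estimated NIR contribution from each RGB channel can be sketched in simplified form; the per-channel coefficients below are illustrative assumptions, not the paper's values:

```python
def restore_rgb(r, g, b, n, k=(0.6, 0.5, 0.4)):
    """Remove an estimated NIR contribution k_c * n from each colour
    channel, clamping to the valid 8-bit range (illustrative sketch;
    real coefficients come from the sensor's spectral characteristics)."""
    clamp = lambda v: max(0.0, min(255.0, v))
    return tuple(clamp(c - kc * n) for c, kc in zip((r, g, b), k))
```

Because NIR leaks into the three channels by different amounts, per-channel coefficients (rather than one global subtraction) are what restores hue as well as saturation.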
Open Access Article: Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors
Sensors 2016, 16(5), 700; doi:10.3390/s16050700
Received: 8 March 2016 / Revised: 3 May 2016 / Accepted: 10 May 2016 / Published: 14 May 2016
Cited by 1
Abstract
Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates the information necessary for autonomous grasping of objects, without the need to provide a model of the object’s shape. A local map of the work-zone is generated using depth information, where object candidates are extracted by detecting areas that differ from our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object’s centroid and dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features, and that the floor shape of the work-zone area is known. We used low-cost cameras to create the depth information; although they produce noisy point clouds, our method has proved robust enough to process this data and return accurate results. Full article
Open Access Article: Penetration Depth Measurement of Near-Infrared Hyperspectral Imaging Light for Milk Powder
Sensors 2016, 16(4), 441; doi:10.3390/s16040441
Received: 1 September 2015 / Revised: 21 March 2016 / Accepted: 22 March 2016 / Published: 25 March 2016
Cited by 2
Abstract
The increasingly common application of the near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying the penetration depth of NIR hyperspectral imaging light for milk powder. Hyperspectral NIR reflectance images were collected for eight different milk powder products, comprising five brands of non-fat milk powder and three brands of whole milk powder. For each milk powder, five different powder depths ranging from 1 mm to 5 mm were prepared on top of a base layer of melamine, to test spectral-based detection of the melamine through the milk. The relationship between the NIR reflectance spectra (937.5–1653.7 nm) and the penetration depth was investigated by means of the partial least squares-discriminant analysis (PLS-DA) technique, used to classify pixels as being milk-only or a mixture of milk and melamine. With increasing milk depth, classification model accuracy gradually decreased. The results from the 1-mm, 2-mm and 3-mm models showed that the average classification accuracy of the validation set for milk-melamine samples fell from 99.86% to 94.93% as the milk depth increased from 1 mm to 3 mm. As the milk depth increased to 4 mm and 5 mm, model performance deteriorated further, to accuracies as low as 81.83% and 58.26%, respectively. The results suggest that a 2-mm sample depth is recommended for the screening/evaluation of milk powders using an online NIR hyperspectral imaging system similar to that used in this study. Full article
Open Access Article: A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI
Sensors 2016, 16(3), 410; doi:10.3390/s16030410
Received: 24 January 2016 / Revised: 15 March 2016 / Accepted: 16 March 2016 / Published: 19 March 2016
Cited by 6
Abstract
Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multiple path interferences, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity; as a result, real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data as well as KWNN in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm could significantly improve accuracy, stability, and applicability of positioning. Full article
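The K Weighted Nearest Neighbor (KWNN) coarse-estimation step can be sketched as an inverse-distance-weighted average of the positions of the K fingerprints nearest in RSSI space; the database layout and weighting function below are assumptions for illustration:

```python
import math

def kwnn_locate(rssi, fingerprints, k=3, eps=1e-6):
    """Coarse position estimate: inverse-distance-weighted average of the
    positions of the k fingerprints nearest in RSSI space (a sketch).
    `fingerprints` maps (x, y) positions to stored RSSI vectors."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    nearest = sorted(fingerprints.items(), key=lambda kv: dist(rssi, kv[1]))[:k]
    weights = [1.0 / (dist(rssi, v) + eps) for _, v in nearest]
    total = sum(weights)
    x = sum(w * p[0] for (p, _), w in zip(nearest, weights)) / total
    y = sum(w * p[1] for (p, _), w in zip(nearest, weights)) / total
    return x, y
```

The closer a stored fingerprint is to the measured RSSI vector, the more its position dominates the average, which is what makes this a useful coarse prior before the image-based refinement described in the abstract.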
Open Access Article: Underwater Imaging Using a 1 × 16 CMUT Linear Array
Sensors 2016, 16(3), 312; doi:10.3390/s16030312
Received: 25 January 2016 / Revised: 20 February 2016 / Accepted: 25 February 2016 / Published: 1 March 2016
Cited by 2
Abstract
A 1 × 16 capacitive micro-machined ultrasonic transducer linear array was designed, fabricated, and tested for underwater imaging in the low frequency range. The linear array was fabricated using Si-SOI bonding techniques. Underwater transmission performance was tested in a water tank; the array has a resonant frequency of 700 kHz and a pressure amplitude of 182 dB (μPa·m/V) at 1 m. The −3 dB main beam width of the designed dense linear array is approximately 5 degrees. The synthetic aperture focusing technique was applied to improve the resolution of reconstructed images, with promising results. Thus, the proposed array was shown to be suitable for underwater imaging applications. Full article
Open Access Article: A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs
Sensors 2016, 16(1), 27; doi:10.3390/s16010027
Received: 17 November 2015 / Revised: 14 December 2015 / Accepted: 22 December 2015 / Published: 26 December 2015
Abstract
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. Full article
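The quoted noise behavior is the standard averaging result: the random noise of the mean of N independent samples falls as 1/√N. A quick simulation of that statistical relationship (of the averaging principle only, not of the chip):

```python
import random
import statistics

def noise_after_averaging(sigma, n_samples, trials=20000, seed=1):
    """Standard deviation of the mean of n_samples Gaussian readings of
    noise sigma; by the averaging rule this approaches sigma / sqrt(n)."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_samples))
             for _ in range(trials)]
    return statistics.stdev(means)
```

With sigma = 1 and four samplings the simulated noise comes out near 0.5, i.e. the 1/√N reduction the abstract invokes; the chip's measured 848.3 μV to 270.4 μV drop reflects the same principle at its chosen number of samplings.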
Open Access Article: Parallax-Robust Surveillance Video Stitching
Sensors 2016, 16(1), 7; doi:10.3390/s16010007
Received: 7 October 2015 / Revised: 30 November 2015 / Accepted: 17 December 2015 / Published: 25 December 2015
Abstract
This paper presents a parallax-robust video stitching technique for temporally synchronized surveillance video. An efficient two-stage video stitching procedure is proposed to build wide Field-of-View (FOV) videos for surveillance applications. In the stitching model calculation stage, we develop a location-dependent layered warping algorithm to align the background scenes, which proved more robust to parallax than traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection-based optimal seam selection approach to avert ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide FOV video output without ghosting or noticeable seams. Full article
Open Access Article: Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays
Sensors 2015, 15(12), 29938-29949; doi:10.3390/s151229779
Received: 24 September 2015 / Revised: 2 November 2015 / Accepted: 9 November 2015 / Published: 30 November 2015
Abstract
A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array. Full article
Open Access Article: An Indoor Obstacle Detection System Using Depth Information and Region Growth
Sensors 2015, 15(10), 27116-27141; doi:10.3390/s151027116
Received: 16 June 2015 / Revised: 14 September 2015 / Accepted: 9 October 2015 / Published: 23 October 2015
Cited by 6
Abstract
This study proposes an obstacle detection method that uses depth information to allow the visually impaired to avoid obstacles when they move through an unfamiliar environment. The system is composed of three parts: scene detection, obstacle detection, and vocal announcement. A new ground-plane removal method is proposed that overcomes the over-segmentation problem by removing edges, while the initial seed position problem of the region growth method is addressed using the Connected Component Method (CCM). The system can detect both static and dynamic obstacles. The experimental results show that the proposed system is simple, robust, efficient, and convenient. Full article
Open Access Article: Time-Resolved Synchronous Fluorescence for Biomedical Diagnosis
Sensors 2015, 15(9), 21746-21759; doi:10.3390/s150921746
Received: 28 July 2015 / Revised: 24 August 2015 / Accepted: 26 August 2015 / Published: 31 August 2015
Cited by 4
Abstract
This article presents our most recent advances in synchronous fluorescence (SF) methodology for biomedical diagnostics. The SF method is characterized by simultaneously scanning both the excitation and emission wavelengths while keeping a constant wavelength interval between them. Compared to conventional fluorescence spectroscopy, the SF method simplifies the emission spectrum while enabling greater selectivity, and has been successfully used to detect subtle differences in the fluorescence emission signatures of biochemical species in cells and tissues. The SF method can be used in imaging to analyze dysplastic cells in vitro and tissue in vivo. Based on the SF method, here we demonstrate the feasibility of a time-resolved synchronous fluorescence (TRSF) method, which incorporates the intrinsic fluorescent decay characteristics of the fluorophores. Our prototype TRSF system has clearly shown its advantage in spectro-temporal separation of the fluorophores that were otherwise difficult to spectrally separate in SF spectroscopy. We envision that our previously-tested SF imaging and the newly-developed TRSF methods will combine their proven diagnostic potentials in cancer diagnosis to further improve the efficacy of SF-based biomedical diagnostics. Full article
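The SF scan described above, which keeps a constant wavelength interval between excitation and emission, amounts to reading an excitation-emission matrix (EEM) along the line λ_em = λ_ex + Δλ. A minimal sketch, assuming (purely for illustration) that the EEM is stored as a mapping from (λ_ex, λ_em) pairs to intensities:

```python
def synchronous_spectrum(eem, delta, ex_wavelengths):
    """Extract a synchronous fluorescence spectrum from an
    excitation-emission matrix: for each excitation wavelength, take the
    intensity at emission wavelength l_ex + delta (missing points -> 0.0).
    Returns a list of (l_ex, intensity) pairs."""
    return [(l_ex, eem.get((l_ex, l_ex + delta), 0.0))
            for l_ex in ex_wavelengths]
```

Choosing Δλ selects which fluorophores contribute, which is the source of the selectivity the method claims over conventional emission scans; the TRSF extension adds a decay-time axis on top of this.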
Open Access Article: Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps
Sensors 2015, 15(8), 20894-20924; doi:10.3390/s150820894
Received: 23 June 2015 / Revised: 7 August 2015 / Accepted: 17 August 2015 / Published: 21 August 2015
Cited by 5
Abstract
Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in richly textured regions and at object boundaries, where the depth sensor often fails. We fuse stereo matching and the depth sensor, using their complementary characteristics to improve depth estimation. Here, texture information is incorporated as a constraint to restrict each pixel’s scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation because information obtained from the depth sensor is treated as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the 3.27% average error rate of previous state-of-the-art methods, our method achieves an average error rate of 2.61% on the Middlebury datasets, performing almost 20% better than other “fused” algorithms in terms of accuracy. Full article
Open Access Article: Nonintrusive Finger-Vein Recognition System Using NIR Image Sensor and Accuracy Analyses According to Various Factors
Sensors 2015, 15(7), 16866-16894; doi:10.3390/s150716866
Received: 1 May 2015 / Revised: 8 July 2015 / Accepted: 9 July 2015 / Published: 13 July 2015
Cited by 4
Abstract
Biometrics is a technology that enables an individual to be identified on the basis of physiological and behavioral characteristics. Among biometric technologies, face recognition has been widely used because of its convenience and non-contact operation. However, its performance is affected by factors such as variation in illumination, facial expression, and head pose. Fingerprint and iris recognition are therefore preferred alternatives. However, the performance of the former can be adversely affected by the skin condition, including scarring and dryness, while the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracy according to various factors has received little attention. We therefore propose a nonintrusive finger-vein recognition system using a near-infrared (NIR) image sensor and analyze its accuracy with respect to various factors. Experimental results on three databases showed that our system can operate in real applications with high accuracy, and that the dissimilarity between the finger-veins of different people is larger than that between finger types or hands. Full article
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available
Open Access Article A High Performance Banknote Recognition System Based on a One-Dimensional Visible Light Line Sensor
Sensors 2015, 15(6), 14093-14115; doi:10.3390/s150614093
Received: 16 April 2015 / Accepted: 8 June 2015 / Published: 15 June 2015
Cited by 5 | PDF Full-text (10031 KB) | HTML Full-text | XML Full-text
Abstract
An algorithm for recognizing banknotes is required in many fields, such as banknote-counting machines and automatic teller machines (ATMs). Due to the size and cost limitations of banknote-counting machines and ATMs, the banknote image is usually captured by a one-dimensional (line) sensor instead of a conventional two-dimensional (area) sensor. Because the image is captured while the banknote moves at high speed through the rollers inside the machine, misalignment, geometric distortion, and non-uniform illumination of the captured images frequently occur, degrading recognition accuracy. To overcome these problems, we propose a new method for recognizing banknotes. Experimental results using two-fold cross-validation on 61,240 United States dollar (USD) images show a pre-classification error rate of 0% and an average error rate of 0.114% for the final recognition of the USD banknotes. Full article
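The headline figures come from two-fold cross-validation; a minimal, generic sketch of that protocol (the `train_and_eval` callback is a hypothetical stand-in for the paper's actual banknote classifier, not part of the published method):

```python
import random

def two_fold_error_rate(samples, train_and_eval, seed=0):
    """Two-fold cross-validation: split the data in half, train on one
    half and evaluate on the other, then swap the roles of the halves.
    `train_and_eval(train, test)` must return the number of
    misclassified items in `test`; it stands in for the real classifier."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)      # deterministic shuffle for reproducibility
    half = len(samples) // 2
    folds = [samples[:half], samples[half:]]
    errors = sum(train_and_eval(folds[1 - i], folds[i]) for i in (0, 1))
    return errors / len(samples)
```

With 61,240 images, each fold would hold 30,620; the reported 0.114% average error rate corresponds to roughly 70 misclassified notes across both folds.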
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available
Open Access Article Monocular-Vision-Based Autonomous Hovering for a Miniature Flying Ball
Sensors 2015, 15(6), 13270-13287; doi:10.3390/s150613270
Received: 14 April 2015 / Revised: 31 May 2015 / Accepted: 1 June 2015 / Published: 5 June 2015
Cited by 1 | PDF Full-text (2827 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a method for detecting and controlling the autonomous hovering of a miniature flying ball (MFB) based on monocular vision. A camera is employed to estimate the three-dimensional position of the vehicle relative to the ground without auxiliary sensors, such as inertial measurement units (IMUs). An image of the ground captured by the camera mounted directly under the miniature flying ball is set as a reference. The position variations between subsequent frames and the reference image are calculated by comparing their corresponding points. A Kalman filter is used to predict the position of the miniature flying ball to handle situations such as a lost or corrupted frame. Finally, a PID controller is designed, and the performance of the entire system is tested experimentally. The results show that the proposed method can keep the aircraft in a stable hover. Full article
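The loop described above pairs a position predictor with a PID controller. A minimal one-axis sketch of that pairing; the constant-velocity state model and all gains are illustrative assumptions on our part, not the authors' parameters:

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one position axis (illustrative).
    predict() propagates the state (usable when a frame is lost);
    update(z) folds in a new vision-based position measurement."""
    def __init__(self, q=1e-3, r=1e-2, dt=0.05):
        self.x = [0.0, 0.0]                 # state: [position, velocity]
        self.p = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r, self.dt = q, r, dt

    def predict(self):
        dt, (x, v), p = self.dt, self.x, self.p
        self.x = [x + v * dt, v]
        # P <- F P F^T + Q for F = [[1, dt], [0, 1]]
        self.p = [[p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + self.q,
                   p[0][1] + dt * p[1][1]],
                  [p[1][0] + dt * p[1][1], p[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        s = self.p[0][0] + self.r           # innovation covariance (position-only measurement)
        k0, k1 = self.p[0][0] / s, self.p[1][0] / s
        y = z - self.x[0]                   # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p = self.p
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        return self.x[0]

class PID:
    """Textbook PID controller acting on the filtered position estimate."""
    def __init__(self, kp, ki, kd, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Per frame the loop would call `predict()`, then `update(z)` when the vision measurement `z` is available (skipping the update on a lost frame), and feed the estimate to `PID.step` to produce the hover command.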
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Review

Open Access Review Driver Distraction Using Visual-Based Sensors and Algorithms
Sensors 2016, 16(11), 1805; doi:10.3390/s16111805
Received: 14 July 2016 / Revised: 21 October 2016 / Accepted: 24 October 2016 / Published: 28 October 2016
Cited by 2 | PDF Full-text (1124 KB) | HTML Full-text | XML Full-text
Abstract
Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend toward increasing use of in-vehicle information systems is critical, because such systems induce visual, biomechanical, and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have proved attractive to both drivers and researchers. Biomechanical, visual, and cognitive distraction are the types most commonly detected by video-based algorithms. Many distraction detection systems use only a single visual cue and may therefore be easily disturbed by occlusion or illumination changes. The combination of these visual cues is thus a key and challenging aspect of developing robust distraction detection systems. These cues can be extracted mainly by face monitoring systems, but they should be complemented with additional cues (e.g., hand or body information) or even with distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should run on an embedded device or system inside the car. This is not a trivial task, and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility, and short time-to-market. This paper reviews the role of computer vision technology in the development of monitoring systems to detect distraction, including the key points for the development and implementation of the sensors involved. Some open challenges and directions for future work will also be addressed. Full article
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available

Other

Open Access Technical Note Forward-Looking Infrared Cameras for Micrometeorological Applications within Vineyards
Sensors 2016, 16(9), 1518; doi:10.3390/s16091518
Received: 30 June 2016 / Revised: 19 August 2016 / Accepted: 13 September 2016 / Published: 18 September 2016
PDF Full-text (3335 KB) | HTML Full-text | XML Full-text
Abstract
We apply the principles of atmospheric surface layer dynamics within a vineyard canopy to demonstrate the use of forward-looking infrared cameras measuring surface brightness temperature (spectral bandwidth of 7.5 to 14 μm) at a relatively high temporal rate of 10 s. The temporal surface brightness signal over a few hours of the stable nighttime boundary layer, intermittently interrupted by periods of turbulent heat flux surges, was shown to be related to the meteorological measurements from an in situ eddy-covariance system, and reflected the above-canopy wind variability. The infrared raster images were collected, and the resulting self-organized spatial clusters provided the meteorological context when compared to the in situ data. The spatial brightness temperature pattern was explained in terms of the presence or absence of nighttime cloud cover, the down-welling of long-wave radiation, and the canopy turbulent heat flux. Time sequential thermography, as demonstrated in this research, provides positive evidence for the application of thermal infrared cameras in micrometeorology and can enhance our spatial understanding of turbulent eddy interactions with the surface. Full article
(This article belongs to the Special Issue Imaging: Sensors and Technologies) Printed Edition available