Search Results (120)

Search Parameters:
Keywords = optical camera communications

12 pages, 2500 KiB  
Article
Deep Learning-Based Optical Camera Communication with a 2D MIMO-OOK Scheme for IoT Networks
by Huy Nguyen and Yeong Min Jang
Electronics 2025, 14(15), 3011; https://doi.org/10.3390/electronics14153011 - 29 Jul 2025
Viewed by 256
Abstract
Radio frequency (RF)-based wireless systems are broadly used in communication systems such as mobile networks, satellite links, and monitoring applications. These systems offer outstanding advantages over wired systems, particularly in terms of ease of installation. However, researchers are looking for safer alternatives because of concerns about possible health effects associated with high-frequency RF transmission. Using the visible light spectrum is one promising approach; three cutting-edge technologies are emerging in this regard: Optical Camera Communication (OCC), Light Fidelity (Li-Fi), and Visible Light Communication (VLC). In this paper, we propose a Multiple-Input Multiple-Output (MIMO) modulation technology for Internet of Things (IoT) applications, utilizing an LED array and time-domain on-off keying (OOK). The proposed system is compatible with both rolling shutter and global shutter cameras, including commercially available models such as CCTV cameras, webcams, and smart cameras commonly deployed in buildings and industrial environments. Despite the compact size of the LED array, we demonstrate that, by optimizing parameters such as exposure time, camera focal length, and channel coding, our system can achieve up to 20 communication links over a 20 m distance with a low bit error rate. Full article
(This article belongs to the Special Issue Advances in Optical Communications and Optical Networks)
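
As a concrete illustration of the 2D MIMO-OOK idea, the sketch below packs a bit stream into on/off LED-array frames and unpacks it at the receiver. The 8 × 8 array size and the absence of channel coding are assumptions for illustration, not the parameters used in the paper.

```python
import numpy as np

ROWS, COLS = 8, 8  # hypothetical LED-array geometry, not the paper's layout

def bits_to_led_frames(bits: np.ndarray) -> np.ndarray:
    """Pack a flat OOK bit stream into a sequence of 2D on/off LED frames.

    Each frame carries ROWS*COLS bits; the stream is zero-padded to fill the
    last frame. Returns an array of shape (n_frames, ROWS, COLS).
    """
    per_frame = ROWS * COLS
    pad = (-len(bits)) % per_frame
    padded = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
    return padded.reshape(-1, ROWS, COLS)

def led_frames_to_bits(frames: np.ndarray) -> np.ndarray:
    """Inverse mapping at the camera side, after per-LED intensity thresholding."""
    return frames.reshape(frames.shape[0], -1).ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tx_bits = rng.integers(0, 2, 256, dtype=np.uint8)
    frames = bits_to_led_frames(tx_bits)
    rx_bits = led_frames_to_bits(frames)[: len(tx_bits)]
    print("bit errors:", int(np.sum(tx_bits != rx_bits)))
```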

22 pages, 5418 KiB  
Article
TickRS: A High-Speed Gapless Signal Sampling Method for Rolling-Shutter Optical Camera Communication
by Yongfeng Hong, Xiangting Xie and Xingfa Shen
Photonics 2025, 12(7), 720; https://doi.org/10.3390/photonics12070720 - 16 Jul 2025
Viewed by 148
Abstract
Using the rolling-shutter mechanism to enhance the signal sampling frequency of Optical Camera Communication (OCC) is a low-cost solution, but its periodic sampling interruptions may cause signal loss, and existing solutions often compromise communication rate and distance. To address this, this paper proposes NoGap-RS, a no-gap sampling method that theoretically addresses the signal-loss issue at longer distances from the perspective of CMOS exposure timing. Experiments show that NoGap-OOK, an OCC system based on NoGap-RS and On-Off Keying modulation, can achieve a communication rate of 6.41 Kbps at a distance of 3 m, with a BER of 10−5 under indoor artificial light. This paper further proposes TickRS, a time-slot division method that innovatively addresses the overlap occurring during consecutive-row exposures to further enhance the communication rate. Experiments show that TickRS-CSK, an OCC system based on TickRS and Color-Shift Keying, can achieve a communication rate of 20.09 Kbps at a distance of 3.6 m, with a BER of 10−2 under indoor natural light. Full article
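
To make the rolling-shutter sampling model concrete, here is a minimal sketch of the conventional row-wise OOK decoding that gapless methods such as TickRS build on: each exposed row is treated as one temporal sample of the LED. The moving-average threshold and the samples-per-bit figure are illustrative assumptions, not part of the paper's method.

```python
import numpy as np

def rows_to_ook_bits(frame: np.ndarray, samples_per_bit: int, win: int = 31) -> np.ndarray:
    """Decode OOK stripes from one rolling-shutter frame.

    `frame` is a grayscale image (H x W); row means form the received
    waveform. A moving-average threshold compensates for slow brightness
    gradients across the image.
    """
    signal = frame.astype(float).mean(axis=1)              # one sample per row
    kernel = np.ones(win) / win
    threshold = np.convolve(signal, kernel, mode="same")   # local adaptive threshold
    levels = (signal > threshold).astype(np.uint8)
    # Majority vote over the rows that make up each bit period.
    n_bits = len(levels) // samples_per_bit
    chunks = levels[: n_bits * samples_per_bit].reshape(n_bits, samples_per_bit)
    return (chunks.mean(axis=1) > 0.5).astype(np.uint8)
```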

13 pages, 2180 KiB  
Article
Wide Field-of-View Air-to-Water Rolling Shutter-Based Optical Camera Communication (OCC) Using CUDA Deep-Neural-Network Long-Short-Term-Memory (CuDNNLSTM)
by Yung-Jie Chen, Yu-Han Lin, Guo-Liang Shih, Chi-Wai Chow and Chien-Hung Yeh
Appl. Sci. 2025, 15(11), 5971; https://doi.org/10.3390/app15115971 - 26 May 2025
Viewed by 407
Abstract
Nowadays, underwater activities are becoming more and more important. As the number of underwater sensing devices grows rapidly, the required bandwidth also increases quickly. Apart from underwater communication, direct communication across the water–air interface is also highly desirable. Air-to-water wireless transmission is crucial for sending control information or instructions from unmanned aerial vehicles (UAVs) or ground stations above the sea surface to autonomous underwater vehicles (AUVs). On the other hand, water-to-air wireless transmission is also required to transmit real-time information from AUVs or underwater sensor nodes to UAVs above the water surface. Previously, we successfully demonstrated a water-to-air camera-based optical wireless communication (OWC) system, also known as optical camera communication (OCC). However, the reverse transmission (i.e., air-to-water) using OCC has not been analyzed. It is worth noting that in the water-to-air OCC system, since the camera is located in the air, the image of the light source is magnified due to diffraction; hence, the pixel-per-symbol (PPS) decoding of the OCC pattern is easier. In the proposed air-to-water OCC system reported here, since the camera is located in the water, the image of the light source in the air is diminished in size due to diffraction; hence, the PPS decoding of the OCC pattern becomes more difficult. In this work, we propose and experimentally demonstrate a wide field-of-view (FOV) air-to-water OCC system using CUDA Deep-Neural-Network Long Short-Term Memory (CuDNNLSTM). Because water turbulence and air turbulence affect the AUV and UAV, a precise line-of-sight (LOS) link between the AUV and the UAV is difficult to achieve; OCC can provide a wide FOV without the need for precise optical alignment. Results revealed that the proposed air-to-water OCC system can support a transmission rate of 7.2 kbit/s through a still water surface and 6.6 kbit/s through a wavy water surface, satisfying the hard-decision forward error correction (HD-FEC) bit-error-rate (BER) requirement. Full article
(This article belongs to the Special Issue Screen-Based Visible Light Communication)
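
For readers unfamiliar with the decoder architecture, the following minimal sketch shows how an LSTM-based bit decoder for camera intensity traces can be assembled in tf.keras, which dispatches to the fused cuDNN kernel on GPU (the "CuDNNLSTM" path of TF 1.x). The window length, layer sizes, and the synthetic placeholder data are assumptions, not the network reported in the paper.

```python
import numpy as np
import tensorflow as tf

WINDOW = 32  # samples of the row/column intensity trace per bit decision (assumed)

def build_decoder() -> tf.keras.Model:
    # With default arguments, tf.keras.layers.LSTM uses the cuDNN kernel on GPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # bit decision
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=(512, WINDOW, 1)).astype("float32")  # placeholder traces
    y = rng.integers(0, 2, size=(512, 1)).astype("float32")  # placeholder bits
    build_decoder().fit(x, y, epochs=1, batch_size=64, verbose=0)
```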

17 pages, 5978 KiB  
Article
Control and Real-Time Monitoring of Autonomous Underwater Vehicle Through Underwater Wireless Optical Communication
by Dongwook Jung, Rouchen Zhang, Hyunjoon Cho, Daehyeong Ji, Seunghyen Kim and Hyeungsik Choi
Appl. Sci. 2025, 15(11), 5910; https://doi.org/10.3390/app15115910 - 24 May 2025
Viewed by 533
Abstract
Real-time command and data transfer are essential for autonomous underwater vehicle (AUV) motion control in underwater missions. Due to the limitations of underwater acoustic communication, which has a low data rate, this paper introduces a new control structure using underwater wireless optical communication (UWOC) to enable effective real-time command and data transfer. In this control structure, control inputs for the AUV attitude are transferred from outside the water to the AUV for motion control, while its orientation data and visual images from the AUV camera are sent to the control station outside the water via the UWOC system. To demonstrate the control and data-monitoring performance, an AUV was built with the constructed UWOC system, two vertical thrusters, and two horizontal thrusters. For attitude control of the AUV, an attitude heading reference system (AHRS) and a depth sensor are installed. Bi-directional communication in the UWOC system is achieved using a return-to-zero (RZ) modulation scheme for faster, longer-range data transfer. A signal processor recovers the sensor data from the transmitted signal. Finally, the hovering control performance of the AUV equipped with the UWOC system was experimentally evaluated in a water tank, achieving average root mean square errors (RMSEs) of 4.82° in roll, 2.49° in pitch, and 1.99 mm in depth, while simultaneously transmitting real-time motion data at 21.2 FPS with VGA-resolution images (640 × 480 pixels) at a communication rate of 1 Mbps. Full article
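
As a reference for the modulation format, the sketch below encodes and decodes return-to-zero OOK in NumPy. The samples-per-bit and duty-cycle values are illustrative assumptions rather than the transceiver settings used in the paper.

```python
import numpy as np

def rz_ook_encode(bits, samples_per_bit=10, duty=0.5):
    """Return-to-zero OOK: a '1' is high for the first `duty` fraction of the
    bit slot and then returns to zero; a '0' stays low for the whole slot."""
    high = int(samples_per_bit * duty)
    wave = np.zeros(len(bits) * samples_per_bit)
    for i, b in enumerate(bits):
        if b:
            wave[i * samples_per_bit : i * samples_per_bit + high] = 1.0
    return wave

def rz_ook_decode(wave, samples_per_bit=10, duty=0.5):
    """Integrate over the first part of each slot and threshold at mid-level."""
    high = int(samples_per_bit * duty)
    slots = wave.reshape(-1, samples_per_bit)[:, :high]
    return (slots.mean(axis=1) > 0.5).astype(int)
```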

24 pages, 27231 KiB  
Article
Bentayga-I: Development of a Low-Cost and Open-Source Multispectral CubeSat for Marine Environment Monitoring and Prevention
by Adrián Rodríguez-Molina, Alejandro Santana, Felipe Machado, Yubal Barrios, Emma Hernández-Suárez, Ámbar Pérez-García, María Díaz, Raúl Santana, Antonio J. Sánchez and José F. López
Sensors 2024, 24(23), 7648; https://doi.org/10.3390/s24237648 - 29 Nov 2024
Viewed by 1908
Abstract
CubeSats have emerged as a promising alternative to satellite missions for studying remote areas where satellite data are scarce and insufficient, such as coastal and marine environments. However, their standard size and weight limitations make integrating remote sensing optical instruments challenging. This work presents the development of Bentayga-I, a CubeSat designed to validate PANDORA, a self-made, lightweight, cost-effective multispectral camera with interchangeable spectral optical filters, in near-space conditions. Its four selected spectral bands are relevant for ocean studies. Alongside the camera, Bentayga-I integrates a power system for short-duration operation; a thermal subsystem to maintain battery function; environmental sensors to monitor the CubeSat's internal and external conditions; and a communication subsystem to transmit acquired data to a ground station. The first helium balloon launch with B2Space proved that the Bentayga-I electronics worked correctly in near-space environments. During this launch, the spectral capabilities of PANDORA were validated against a hyperspectral camera. Its scientific applicability was also tested by capturing images of coastal areas. A second launch is planned to further validate the multispectral camera in a real-world scenario. The integration of Bentayga-I and PANDORA presents promising results for future low-cost CubeSat missions. Full article

16 pages, 5620 KiB  
Article
Online Optical Axis Parallelism Measurement Method for Continuous Zoom Camera Based on High-Precision Spot Center Positioning Algorithm
by Chanchan Kang, Yao Fang, Huawei Wang, Feng Zhou, Zeyue Ren and Feixiang Han
Photonics 2024, 11(11), 1017; https://doi.org/10.3390/photonics11111017 - 29 Oct 2024
Viewed by 973
Abstract
Ensuring precise alignment of the optical axis is critical for achieving high-quality imaging in continuous zoom cameras. However, existing methods for measuring optical axis parallelism often lack accuracy and fail to assess parallelism across the entire focal range. This study introduces an online measurement method designed to address these limitations by incorporating two enhancements. First, image processing methodologies enable sub-pixel-level extraction of the spot center, achieved through improved morphological processing and the incorporation of an edge tracing algorithm. Second, measurement software developed using Qt Creator can output real-time data on optical axis parallelism across the full focal range post-measurement. This software features a multi-threaded architecture that facilitates the concurrent execution of image acquisition, data processing, and serial communication. Experimental results derived from simulations and real data indicate that the maximum average error in extracting the center of the spot is 0.13 pixels. The proposed system provides critical data for optical axis calibration during camera adjustment and inspection. Full article
(This article belongs to the Special Issue Advancements in Optical Measurement Techniques and Applications)
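
For orientation, a minimal sub-pixel spot-centre estimator in OpenCV is sketched below using Otsu thresholding, the largest connected component, and intensity-weighted moments. It is a simplified stand-in for the paper's improved morphological processing and edge-tracing algorithm, not a reproduction of it.

```python
import numpy as np
import cv2

def spot_center_subpixel(gray: np.ndarray) -> tuple[float, float]:
    """Sub-pixel spot centre via an intensity-weighted centroid.

    `gray` is an 8-bit grayscale image containing one bright spot.
    Threshold with Otsu, keep the largest blob, then take image moments of
    the grey levels inside it. Returns (cx, cy) in pixel coordinates.
    """
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n < 2:
        raise ValueError("no spot found")
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    weights = gray.astype(np.float32) * (labels == largest)
    m = cv2.moments(weights)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```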

30 pages, 10580 KiB  
Review
Display Field Communication: Enabling Seamless Data Exchange in Screen–Camera Environments
by Pankaj Singh, Yu-Jeong Kim, Byung Wook Kim and Sung-Yoon Jung
Photonics 2024, 11(11), 1000; https://doi.org/10.3390/photonics11111000 - 24 Oct 2024
Viewed by 1473
Abstract
Display field communication (DFC) is an emerging technology that enables seamless communication between electronic displays and cameras. It utilizes the frequency-domain characteristics of image frames to embed and transmit data, which are then decoded and interpreted by a camera. DFC offers a novel solution for screen-to-camera data communication, leveraging existing displays and camera infrastructures. This makes it a cost-effective and easily deployable solution. DFC can be applied in various fields, including secure data transfer, mobile payments, and interactive advertising, where data can be exchanged by simply pointing a camera at a screen. This article provides a comprehensive survey of DFC, highlighting significant milestones achieved in recent years and discussing future challenges in establishing a fully functional DFC system. We begin by introducing the broader topic of screen–camera communication (SCC), classifying it into visible and hidden SCC. DFC, a type of spectral-domain hidden SCC, is then explored in detail. Various DFC variants are introduced, with a focus on the physical layer. Finally, we present promising experimental results from our lab and outline further research directions and challenges. Full article
(This article belongs to the Special Issue Novel Advances in Optical Communications)
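
To make the spectral-domain embedding idea concrete, the toy sketch below writes a few bits into mid-frequency DFT coefficients of a display frame and reads them back differentially against an unmodulated reference frame. The carrier bins, gain, and differential decoding are assumptions for illustration only, not a DFC implementation.

```python
import numpy as np

CARRIERS = [(5, 7), (9, 4), (6, 11), (12, 3)]  # hypothetical (u, v) frequency bins
GAIN = 2000.0                                  # hypothetical embedding strength

def embed(frame: np.ndarray, bits) -> np.ndarray:
    """Perturb selected DFT coefficients of a grayscale frame with the data bits."""
    F = np.fft.fft2(frame.astype(float))
    for (u, v), b in zip(CARRIERS, bits):
        delta = GAIN if b else -GAIN
        F[u, v] += delta
        F[-u, -v] += delta          # keep conjugate symmetry so the image stays real
    return np.real(np.fft.ifft2(F))

def extract(rx_frame: np.ndarray, ref_frame: np.ndarray):
    """Differential decoding against the unmodulated reference frame."""
    d = np.fft.fft2(rx_frame.astype(float)) - np.fft.fft2(ref_frame.astype(float))
    return [int(np.real(d[u, v]) > 0) for (u, v) in CARRIERS]
```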

13 pages, 6160 KiB  
Article
Robust License Plate Recognition in OCC-Based Vehicle Networks Using Image Reconstruction
by Dingfa Zhang, Ziwei Liu, Weiye Zhu, Jie Zheng, Yimao Sun, Chen Chen and Yanbing Yang
Sensors 2024, 24(20), 6568; https://doi.org/10.3390/s24206568 - 12 Oct 2024
Viewed by 1400
Abstract
With the help of traffic lights and street cameras, optical camera communication (OCC) can be adopted in Internet of Vehicles (IoV) applications to realize communication between vehicles and roadside units. However, the encoded light emitted by these OCC transmitters (LED infrastructure on the roadside and/or LED-based headlamps embedded in cars) generates stripe patterns in the image frames captured by existing license-plate recognition systems, which seriously degrades recognition accuracy. To this end, we propose and experimentally demonstrate a method that can reduce the interference of OCC stripes in the image frames captured by the license-plate recognition system. We introduce an innovative pipeline with an end-to-end image reconstruction module. This module learns the distribution of images without OCC stripes and provides high-quality license-plate images for recognition under OCC conditions. To address the problem of insufficient data, we model the OCC stripes as multiplicative noise and propose a method to synthesize a pairwise dataset under OCC from an existing license-plate dataset. Moreover, we also build a prototype to simulate real scenes of OCC-based vehicle networks and collect data in such scenes. Overall, the proposed method achieves a recognition performance of 81.58% and 79.35% on the synthesized dataset and the dataset captured from real scenes, respectively, improvements of about 31.18% and 24.26% over the conventional method. Full article
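
As an illustration of the multiplicative-noise model used to synthesize training pairs, the sketch below applies a square-wave, row-wise gain to a clean license-plate image. The stripe period, duty cycle, and depth are hypothetical values, not the settings used in the paper.

```python
import numpy as np

def add_occ_stripes(img: np.ndarray, period_rows: int = 12, depth: float = 0.6,
                    duty: float = 0.5, phase: int = 0) -> np.ndarray:
    """Synthesize rolling-shutter OCC stripes as multiplicative noise.

    A square-wave gain alternates along the row axis (the rolling-shutter
    direction): rows captured while the LED was off are attenuated by `depth`.
    Works for grayscale (H, W) or colour (H, W, 3) uint8 images.
    """
    rows = np.arange(img.shape[0])
    on = ((rows + phase) % period_rows) < duty * period_rows
    gain = np.where(on, 1.0, 1.0 - depth).astype(np.float32)
    shape = (-1,) + (1,) * (img.ndim - 1)          # broadcast gain over columns/channels
    striped = img.astype(np.float32) * gain.reshape(shape)
    return np.clip(striped, 0, 255).astype(img.dtype)
```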

18 pages, 3527 KiB  
Article
ZEROES: Robust Derivative-Based Demodulation Method for Optical Camera Communication
by Maugan De Murcia, Hervé Boeglen and Anne Julien-Vergonjanne
Photonics 2024, 11(10), 949; https://doi.org/10.3390/photonics11100949 - 9 Oct 2024
Viewed by 1246
Abstract
Most Optical Camera Communication (OCC) systems benefit from the rolling-shutter mechanism of Complementary Metal-Oxide-Semiconductor (CMOS) cameras to record the brightness evolution of the Light-Emitting Diode (LED) as dark and bright strips within images. While this technique enhances the maximum achievable data rate, the main difficulty lies in demodulating the signal extracted from the images, which is subject to the blooming effect. Two main approaches have been proposed to deal with this issue: adaptive thresholds whose value evolves with amplitude changes, and detection of signal variations with the first-order derivative. As the second method is more robust, a new demodulation method based on the detection of the zeros of the first-order derivative of the extracted signal is proposed in this paper. The obtained results clearly show an improvement in the demodulation of the extracted signal compared to other methods, achieving a raw Bit Error Rate (BER) of 10−3 at around 50 cm in a Line-Of-Sight scenario and increasing the maximum communication distance by 43.5%, reaching 330 cm in the case of a Non-Line-Of-Sight transmission. Full article
(This article belongs to the Special Issue Optical Wireless Communications (OWC) for Internet-of-Things (IoT))
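
The core idea, locating stripe centres where the first-order derivative of the extracted signal crosses zero, can be sketched as follows. The smoothing window and the maximum/minimum labelling rule are illustrative assumptions, not the exact ZEROES pipeline.

```python
import numpy as np

def zeros_of_derivative_demod(signal: np.ndarray, smooth: int = 5):
    """Locate stripe centres as zeros of the first-order derivative.

    Smooth the column-averaged signal, find sign changes of its gradient,
    and label each extremum as a bright (1) or dark (0) stripe from the
    direction of the slope change. Returns (indices, symbols).
    """
    s = np.convolve(signal.astype(float), np.ones(smooth) / smooth, mode="same")
    d = np.gradient(s)
    sign = np.sign(d)
    crossings = np.where(np.diff(sign) != 0)[0]           # zeros of the derivative
    symbols = (np.diff(sign)[crossings] < 0).astype(int)  # + -> - slope: local max = bright
    return crossings, symbols
```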

17 pages, 3459 KiB  
Article
Performance Analysis of a Color-Code-Based Optical Camera Communication System
by Hasan Ziya Dinc and Yavuz Erol
Appl. Sci. 2024, 14(19), 9102; https://doi.org/10.3390/app14199102 - 8 Oct 2024
Viewed by 1159
Abstract
In this study, we present a visible light communication (VLC) system that analyzes the performance of an optical camera communication (OCC) system, utilizing a mobile phone camera as the receiver and a computer monitor as the transmitter. By creating color channels in the form of a 4 × 4 matrix within a frame, we determine the parameters that affect the successful transmission of data packets. Factors such as the brightness or darkness of the test room, the light color of the lamp in the illuminated environment, the effects of daylight when the monitor is positioned in front of a window, and issues related to dead pixels and light bleed originating from the monitor’s production process have been considered to ensure accurate data transmission. In this context, we utilized the PyCharm, Pydroid, Python, Tkinter, and OpenCV platforms for programming the transmitter and receiver units. Through the application of image processing techniques, we mitigated the effects of daylight on communication performance, thereby proposing a superior system compared to standard VLC systems that incorporate photodiodes. Additionally, considering objectives such as the maximum number of channels and the maximum distance, we regulated the sizes of the channels, the distances between the channels, and the number of channels. The NumPy library, compatible with Python–Tkinter, was employed to determine the color levels and dimensions of the channels. We investigate the effects of RGB and HSV color spaces on the data transmission rate and communication distance. Furthermore, the impact of the distance between color channels on color detection performance is discussed in detail. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
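
A toy version of a 4 × 4 colour-channel frame helps illustrate the scheme: each cell carries a 2-bit symbol drawn from a small palette and is decoded by nearest-colour matching at the cell centre. The palette, grid size, and cell geometry below are assumptions, not the parameters used in the study.

```python
import numpy as np

PALETTE = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 255]],
                   dtype=np.uint8)   # hypothetical 2-bit colour alphabet
GRID, CELL = 4, 60                   # 4x4 cells of 60x60 px (assumed)

def encode(symbols):
    """Render 16 symbols (values 0..3) as a 4x4 colour-block frame."""
    frame = np.zeros((GRID * CELL, GRID * CELL, 3), dtype=np.uint8)
    for i, s in enumerate(symbols):
        r, c = divmod(i, GRID)
        frame[r * CELL:(r + 1) * CELL, c * CELL:(c + 1) * CELL] = PALETTE[s]
    return frame

def decode(frame):
    """Sample each cell centre and match it to the nearest palette colour."""
    symbols = []
    for i in range(GRID * GRID):
        r, c = divmod(i, GRID)
        px = frame[r * CELL + CELL // 2, c * CELL + CELL // 2].astype(int)
        symbols.append(int(np.argmin(np.linalg.norm(PALETTE.astype(int) - px, axis=1))))
    return symbols
```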

19 pages, 16985 KiB  
Article
Farm Monitoring System with Drones and Optical Camera Communication
by Shinnosuke Kondo, Naoto Yoshimoto and Yu Nakayama
Sensors 2024, 24(18), 6146; https://doi.org/10.3390/s24186146 - 23 Sep 2024
Cited by 2 | Viewed by 3357
Abstract
Drones have been attracting significant attention in the field of agriculture. They can be used for various tasks such as spraying pesticides, monitoring pests, and assessing crop growth. Sensors are also widely used in agriculture to monitor environmental parameters such as soil moisture and temperature. Due to the high cost of communication infrastructure and radio-wave modules, the adoption of high-density sensing systems in agriculture is limited. To address this issue, we propose an agricultural sensor network system using drones and Optical Camera Communication (OCC). The idea is to transmit sensor data from LED panels mounted on sensor nodes and receive the data using a drone-mounted camera. This enables high-density sensing at low cost and can be deployed in areas with underdeveloped infrastructure and radio silence. We propose a trajectory control algorithm for the receiving drone to efficiently collect the sensor data. From computer simulations, we confirmed that the proposed algorithm reduces total flight time by 30% compared to a shortest-path algorithm. We also conducted a preliminary experiment at a leaf mustard farm in Kamitonda-cho, Wakayama, Japan, to demonstrate the effectiveness of the proposed system. We collected 5178 images of LED panels with a drone-mounted camera to train YOLOv5 for object detection. With simple On–Off Keying (OOK) modulation, we achieved sufficiently low bit error rates (BERs) under 10−3 in the real-world environment. The experimental results show that the proposed system is applicable for drone-based sensor data collection in agriculture. Full article
(This article belongs to the Section Internet of Things)
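
For context on the trajectory-planning step, the sketch below shows a greedy nearest-neighbour ordering of sensor-panel waypoints. It is only a simple baseline of the kind such planners are benchmarked against and is not the paper's trajectory control algorithm or its shortest-path reference.

```python
import numpy as np

def nearest_neighbor_route(nodes: np.ndarray, start: np.ndarray) -> list[int]:
    """Greedy waypoint ordering for visiting sensor panels.

    `nodes` is an (N, 2) array of panel positions in metres and `start` the
    drone's take-off position; returns the visit order as node indices.
    """
    remaining = list(range(len(nodes)))
    order, pos = [], start.astype(float)
    while remaining:
        dists = np.linalg.norm(nodes[remaining] - pos, axis=1)
        nxt = remaining.pop(int(np.argmin(dists)))
        order.append(nxt)
        pos = nodes[nxt]
    return order
```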

15 pages, 5588 KiB  
Article
Rolling Shutter-Based Underwater Optical Camera Communication (UWOCC) with Side Glow Optical Fiber (SGOF)
by Jia-Fu Li, Yun-Han Chang, Yung-Jie Chen and Chi-Wai Chow
Appl. Sci. 2024, 14(17), 7840; https://doi.org/10.3390/app14177840 - 4 Sep 2024
Cited by 1 | Viewed by 1341
Abstract
Nowadays, a variety of underwater activities, such as underwater surveillance and marine monitoring, are becoming crucial worldwide. Underwater sensors and autonomous underwater vehicles (AUVs) are widely adopted for underwater exploration. Underwater communication via radio frequency (RF) or acoustic waves suffers from high transmission loss and limited bandwidth. In this work, we present and demonstrate a rolling shutter (RS)-based underwater optical camera communication (UWOCC) system utilizing a long short-term memory neural network (LSTM-NN) with side glow optical fiber (SGOF). The SGOF is made of poly-methyl methacrylate (PMMA); it is lightweight and flexibly bendable. Most importantly, SGOF is water resistant; hence, it can be installed in an underwater environment to provide 360° "omni-directional" uniform radial light emission around its circumference. This large FOV facilitates optical detection in turbulent underwater environments. The proposed LSTM-NN has time-memorizing characteristics that enhance UWOCC signal decoding. The proposed LSTM-NN is also compared with other decoding methods in the literature, such as the PPB-NN. The experimental results demonstrate that the proposed LSTM-NN outperforms the PPB-NN in the UWOCC system. A data rate of 2.7 kbit/s can be achieved in UWOCC, satisfying the pre-forward error correction (FEC) condition (i.e., bit error rate, BER ≤ 3.8 × 10−3). We also found that the thin fiber allows spatial multiplexing to further enhance the transmission capacity. Full article
(This article belongs to the Section Optics and Lasers)

20 pages, 10124 KiB  
Article
3D Positioning of Drones through Images
by Jianxing Yang, Enhui Zheng, Jiqi Fan and Yuwen Yao
Sensors 2024, 24(17), 5491; https://doi.org/10.3390/s24175491 - 24 Aug 2024
Viewed by 2837
Abstract
Drones traditionally rely on satellite signals for positioning and altitude determination. However, in a signal-denied environment, satellite communication is interrupted, and traditional positioning and height-determination methods face challenges. We built a dataset at heights of 80–200 m and propose a multi-scale input network. The positioning index RDS reached 76.3 points, and the positioning accuracy within 20 m was 81.7%. This paper also proposes a method to estimate height from images alone, without support from other sensor data; one height estimate can be made per image. Based on UAV-image-to-satellite-image matching for positioning, the actual ground area represented by the UAV image is calculated and, combined with the fixed parameters of the optical camera, used to compute the actual flight height of the UAV over the 80–200 m range, with a relative height error of 18.1%. Full article
(This article belongs to the Section Electronic Sensors)
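
Once the ground footprint of an image is known from image matching, the height estimate reduces to pinhole-camera arithmetic. The function and the example numbers below are illustrative assumptions, not the camera parameters used in the paper.

```python
def altitude_from_footprint(ground_width_m: float, focal_mm: float,
                            sensor_width_mm: float) -> float:
    """Pinhole-camera altitude estimate from the ground footprint of one image.

    With the camera pointing straight down, similar triangles give
        ground_width / altitude = sensor_width / focal_length,
    so altitude = focal_length * ground_width / sensor_width.
    """
    return focal_mm * ground_width_m / sensor_width_mm

# Hypothetical numbers: a 13.2 mm-wide sensor behind an 8.8 mm lens seeing a
# 180 m-wide ground footprint gives 8.8 * 180 / 13.2 = 120 m of altitude.
print(altitude_from_footprint(180.0, 8.8, 13.2))
```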

14 pages, 3111 KiB  
Article
Cost-Effective Optical Wireless Sensor Networks: Enhancing Detection of Sub-Pixel Transmitters in Camera-Based Communications
by Idaira Rodríguez-Yánez, Víctor Guerra, José Rabadán and Rafael Pérez-Jiménez
Sensors 2024, 24(10), 3249; https://doi.org/10.3390/s24103249 - 20 May 2024
Cited by 2 | Viewed by 1299
Abstract
In the domain of the Internet of Things (IoT), Optical Camera Communication (OCC) has garnered significant attention. This wireless technology employs solid-state lamps as transmitters and image sensors as receivers, offering a promising avenue for reducing energy costs and simplifying electronics. Moreover, image sensors are prevalent in various applications today, enabling dual functionality: recording and communication. However, a challenge arises when optical transmitters are not in close proximity to the camera, leading to sub-pixel projections on the image sensor and introducing strong channel dependence. Previous approaches, such as modifying the camera optics or adjusting image sensor parameters, not only limited the camera's utility for purposes beyond communication but also made it challenging to accommodate multiple transmitters. In this paper, a novel sub-pixel optical transmitter discovery algorithm that overcomes these limitations is presented. This algorithm enables the use of OCC in scenarios with static transmitters and receivers without the need for camera modifications, which allows the number of transmitters in a given scenario to be increased and alleviates the proximity and size limitations of the transmitters. Implemented in Python with multiprocessing programming schemes for efficiency, the algorithm achieved a 100% detection rate in nighttime scenarios, an 89% detection rate indoors, and a 72% rate outdoors during daylight. Detection rates were strongly influenced by transmitter type and lighting conditions. False positives remained minimal, and processing times were consistently under 1 s. With these results, the algorithm is considered suitable for export as a web service or as an intermediary component for data conversion into other network technologies. Full article
(This article belongs to the Special Issue Lighting Up Wireless Communication, Sensing and Power Delivery)
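
One plausible way to flag sub-pixel transmitters from a static camera, given here only as an illustration and not as the paper's discovery algorithm, is to look for pixels whose temporal spectrum peaks at the expected blink frequency.

```python
import numpy as np

def candidate_transmitter_pixels(stack: np.ndarray, mod_bin: int, k: float = 6.0):
    """Flag pixels whose temporal spectrum peaks at the modulation frequency.

    `stack` is a (T, H, W) array of grayscale frames from a static camera and
    `mod_bin` the FFT bin of the expected blink rate. A pixel is a candidate
    if its energy at that bin exceeds k times the median energy over the image.
    """
    spectrum = np.abs(np.fft.rfft(stack.astype(np.float32), axis=0))
    energy = spectrum[mod_bin]                 # (H, W) energy at the carrier bin
    return energy > k * np.median(energy)
```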

16 pages, 3382 KiB  
Article
Neural Network-Based Detection of OCC Signals in Lighting-Constrained Environments: A Museum Use Case
by Saray Rufo, Lidia Aguiar-Castillo, Julio Rufo and Rafael Perez-Jimenez
Electronics 2024, 13(10), 1828; https://doi.org/10.3390/electronics13101828 - 8 May 2024
Cited by 1 | Viewed by 2053
Abstract
This research presents a novel approach by applying convolutional neural networks (CNNs) to enhance optical camera communication (OCC) signal detection under challenging indoor lighting conditions. The study utilizes a smartphone app to capture images of an LED lamp that emits 25 unique optical codes at distances of up to four meters. The developed CNN model demonstrates superior accuracy and outperforms traditional methodologies, which often struggle under variable illumination. This advancement provides a robust solution for reliable OCC detection where previous methods underperform, particularly in the tourism industry, where it can be used to create a virtual museum on the Unity platform. This innovation showcases the potential of integrating the application with a virtual environment to enhance tourist experiences. It also establishes a comprehensive visible light positioning (VLP) system, marking a significant advance in using CNN for OCC technology in various lighting conditions. The findings underscore the effectiveness of CNNs in overcoming ambient lighting challenges, paving the way for new applications in museums and similar environments and laying the foundation for future OCC system improvements. Full article
(This article belongs to the Special Issue Next-Generation Indoor Wireless Communication)
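
For readers who want a starting point, a minimal CNN classifier for the 25 optical codes might look like the tf.keras sketch below. The input size, layer widths, and training setup are assumptions, not the model developed in the study.

```python
import tensorflow as tf

NUM_CODES = 25  # one class per optical code, as described in the abstract

def build_occ_cnn(input_shape=(64, 64, 1)) -> tf.keras.Model:
    """Small convolutional classifier for cropped LED-lamp image patches."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CODES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```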
