Article

Low Latency and Multi-Target Camera-Based Safety System for Optical Wireless Power Transmission

Laboratory of Future Interdisciplinary Research of Science and Technology (FIRST), Institute of Integrated Research (IIR), Institute of Science Tokyo, Tokyo 152-8550, Japan
*
Author to whom correspondence should be addressed.
Photonics 2026, 13(2), 156; https://doi.org/10.3390/photonics13020156
Submission received: 3 January 2026 / Revised: 28 January 2026 / Accepted: 29 January 2026 / Published: 6 February 2026

Abstract

Optical Wireless Power Transmission (OWPT) holds a significant position for enabling cable-free energy delivery in long-distance, high-energy, and mobile scenarios. However, ensuring human and equipment safety under high-power laser exposure remains a critical challenge. This study reports a vision-based OWPT safety system that implements the principle of automatic emission control (AEC)—dynamically modulating laser emission in real time to prevent hazardous exposure. While camera-based OWPT safety systems have been proposed in concept, working implementations to date remain extremely limited. Moreover, existing systems struggle with response speed and single-object assumptions. To address these gaps, this research presents a low-latency safety architecture based on a customized deep learning-based object detection framework, a dedicated OWPT dataset, and a multi-threaded control stack. The research also introduces a real-time risk factor (RF) metric that evaluates proximity and velocity for each detected intrusion object (IO), enabling dynamic prioritization among multiple threats. The system achieves a minimum response latency of 14 ms (average 29 ms) and maintains reliable performance in complex multi-object scenarios. This work establishes a new benchmark for OWPT safety systems and contributes a scalable reference for future development.

1. Introduction

1.1. Research Background: Safety in OWPT

Optical Wireless Power Transmission (OWPT) is increasingly recognized as a transformative technology for efficiently transferring power without physical connections, leveraging laser beams or high-intensity light-emitting diodes (LEDs) to transmit energy over distances ranging from meters to over tens of kilometers [1,2,3,4]. The ability of OWPT systems to deliver significant power levels wirelessly opens numerous applications, such as powering remote sensors in Internet-of-Things (IoT) networks, drones, and automated guided vehicles (AGVs), thereby eliminating the constraints imposed by traditional wiring methods [5,6,7,8,9,10,11,12,13]. However, the high-intensity optical beams utilized in OWPT inherently pose certain safety risks, including potential damage to human eyes, skin, or sensitive equipment upon unintended exposure. The International Electrotechnical Commission (IEC), through the IEC 60825 standard, has established strict laser safety guidelines, classifying lasers according to power levels and associated exposure hazards [14]. Within this regulatory framework, ensuring absolute safety in operational OWPT environments is both a critical challenge and a necessary prerequisite for broader acceptance and deployment.
Safety technologies for OWPT have been examined in only a small number of cases, since only a few OWPT systems themselves have been constructed so far. As Table 1 shows, each safety approach was tailored to particular OWPT system configurations, operational contexts, and risk profiles. For example, resonant-cavity-based techniques (e.g., SWIPT by Tongji University) excel in fast shutdown (nanosecond level) for highly controlled and static setups, while using LEDs or eye-safe wavelengths addresses broader safety requirements through alternative means. Power Light’s light-curtain systems were also developed as an important safety measure for high-power OWPT; they use a secondary harmless beam surrounding the main OWPT beam to detect foreign objects and trigger safety control [15,16,17]. However, there is no “one-size-fits-all” solution; the suitability of each method depends on factors such as required power, deployment scale, regulatory constraints, and the complexity of the operating environment. Thus, the design of safety strategies in OWPT must be context-dependent.

1.2. Problem Statement

Specifically, camera-based OWPT safety systems have gained attention due to their adaptability, broad coverage, and relatively compact and cost-effective nature. A fundamental camera-based OWPT safety system, previously reported by the authors, employs a depth camera and computer vision-based control logic to realize the concept of emission control; the overall system structure is shown in Figure 1 [18]. This approach enabled the system to detect a moving object, calculate the real-time distance between beam and object using depth data, and dynamically adjust a “Safety-Distance” threshold based on object velocity, effectively preventing beam contact in controlled indoor settings; the overall logic is shown in Figure 2. While effective in controlled indoor environments, this initial approach was limited by relatively long latency (100–180 ms) and could only handle single-object intrusion scenarios reliably. Such limitations significantly constrain real-world applicability, where rapid responses and simultaneous multi-object handling are critical.
To address these identified gaps, this paper reports on the development of an improved OWPT safety system that integrates deep learning-based object detection with a tailored OWPT-specific training dataset, capable of recognizing diverse objects. It focuses on typical OWPT objects such as photovoltaic (PV) cells, AGVs, and human intruders under varying lighting and environmental conditions. The set of trained image classes is continuously expanding; unmanned aerial vehicles (UAVs) and further OWPT objects will also be included [19]. The system utilizes a multi-threaded architecture, leveraging GPU acceleration for image processing and CPU-based parallel computation, thereby achieving reduced latency in real-time laser emission control. Crucially, this new system introduces a risk factor (RF) model for multi-object intrusion scenarios, quantitatively assessing each detected object’s threat level based on its proximity to the beam path, its velocity, and its object class, enabling effective prioritization and laser shutdown management.
The contributions of this paper are threefold: first, it achieves a substantial reduction in latency (minimum 14 ms, average 29 ms), representing nearly an order-of-magnitude improvement compared to prior art. Second, it robustly handles multiple simultaneous intrusions through a prioritized, risk-based automatic emission control (AEC) model, thereby fully adhering to stringent OWPT safety guidelines. Third, through extensive experimental validation and comprehensive failure case analysis, it highlights practical reliability and clearly outlines current limitations and directions for future enhancements, including potential extension to outdoor and complex operational environments.
The remainder of this paper is structured as follows: Section 2 provides a comprehensive review of related work and details the methodology and system design of the proposed advanced OWPT safety system; Section 3 describes the setups for on-field experiments and presents extensive results and detailed performance analyses; and Section 4 discusses implications, limitations, and future research opportunities, and concludes with a summary of key findings and their broader significance. The basic concept and initial experiments were presented at the OWPT2025 conference [20]; this paper expands on that work with a detailed explanation, additional implemented technology, and a discussion of detailed characteristics.

2. Methodology, System Design, and Experiment

The proposed OWPT safety system is designed to robustly address the complexities of dynamic operational environments involving multiple objects, each potentially moving unpredictably. To effectively manage the risks of high-intensity optical beams, the system integrates object detection and automated risk assessment frameworks, all operating in real time (>30 fps). It is conceived as a layered architecture that turns raw visual data into deterministic beam (laser/LED) emission commands within a few tens of milliseconds.

2.1. Automatic Emission Control (AEC)

The conventional approach to laser safety is anchored in the concept of Maximum Permissible Exposure (MPE), a set of threshold values defined by the international standard IEC 60825 [14]. The MPE represents the highest level of laser radiation to which a person may be exposed without hazardous effects or adverse biological changes to the eye or skin. These values are derived from extensive biophysical research and depend on static parameters including the laser’s wavelength, the exposure duration, and whether the emission is continuous-wave or pulsed. While MPE is an indispensable metric for defining absolute hazard levels and forms the basis for laser classification, its application to dynamic OWPT systems reveals significant limitations. As the left figure in Figure 3 shows, the current MPE standard of IEC 60825 is fundamentally static and reactive; it presupposes that exposure may occur and seeks only to limit its magnitude. This leads to the definition of fixed hazard zones, such as the Nominal Ocular Hazard Distance (NOHD), which dictates a “keep-out” area around the laser source. For dynamic applications where the operational environment is unstructured and may involve frequent human–robot interaction, this static approach often necessitates overly conservative system designs that limit transmitted power or restrict the operational range, thereby hindering the technology’s practical utility.
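For context, the NOHD mentioned above is conventionally derived from the source power, the beam divergence, and the applicable MPE. A standard form from the laser-safety literature (given here as background; it is not derived in this paper) is:

```latex
\mathrm{NOHD} \;=\; \frac{1}{\varphi}\left(\sqrt{\frac{4P}{\pi \cdot \mathrm{MPE}}} \;-\; a\right)
```

where P is the emitted power (W), MPE the applicable irradiance limit (W/m²), φ the full-angle beam divergence (rad), and a the exit-aperture diameter (m). Under AEC, by contrast, the goal is to prevent the exposure event itself rather than to stand off beyond this distance.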
In stark contrast, this work proposes the paradigm of AEC as a more suitable framework for ensuring safety in dynamic environments. As the centerpiece of the OWPT safety architecture, AEC does not focus on controlling the parameters of the high-power optical source; instead, it treats the source as a dynamically gated device whose emission state is contingent, on a frame-by-frame basis, upon a verified safe condition within its operational environment. The primary objective of an AEC system is not merely to operate below the MPE, but to prevent unintended hazardous exposure entirely, thereby achieving a state of “zero effective exposure” for any intrusion object. As the right graph in Figure 3 (also illustrated in the conceptual diagram reported in the OWPT2025 conference paper [20]) shows, if the control system performs with perfect reliability, no hazardous irradiation of an intrusion object ever occurs. In such a scenario, the MPE value for an intrusion event becomes effectively zero, rendering the static hazard calculation for such events moot. This paradigm shift allows the system to be engineered for “Class-1 equivalence” through intelligent, real-time control, rather than through inherent and often restrictive power or parameter limitations. AEC is currently undergoing a standardization review at the IEC.

2.2. Multiple Intrusion Objects in OWPT

To effectively establish a robust safety system that provides AEC functionality within an OWPT environment involving multiple objects, including both designated power-receiving targets and potential IOs, careful analysis of detection and safety control mechanisms must be carried out. This subsection systematically investigates the complexities that arise across various scenarios with multiple moving objects and presents the corresponding solutions proposed to maintain OWPT safety.
Real-world operational environments, such as factory floors, warehouses, or collaborative workspaces, are rarely clean or predictable single-object scenarios. They are often cluttered with both static and dynamic elements, as Figure 4 shows. Simplified models are created and analyzed to clarify the challenges for the safety system in various scenarios:
(a)
Dynamic Target, Static Intrusions: This scenario involves a mobile power-receiving target (e.g., an AGV) moving within an environment containing stationary objects (e.g., shelving, machinery, and parked vehicles). The primary challenge here is one of robust detection and classification to avoid false positives, where a benign static object near the beam path is incorrectly identified as a hazardous intrusion, leading to unnecessary shutdowns and reduced system efficiency.
(b)
Dynamic Target, Dynamic Intrusions: This represents a significantly more complex case, characterized by the simultaneous movement of both the intended power receiver and one or more potential IOs. It requires the system to perform simultaneous tracking and continuous risk assessment for all moving entities and movement directions within its field of view, accurately calculating their parameters even in the most complicated scenario, in which IOs overlap one another and the parameters needed to determine safe operation are lost.
As initial research, this work mainly focuses on the scenario in which a static powering target is beamed for OWPT while dynamic intrusion objects are present in the scene. The beam remains static, and it is assumed that multiple different IOs will pass through the OWPT operation field following the varied movement schemes described above.

2.3. The Risk Factor Model for Deterministic Threat Prioritization

In the fundamental safety system, the “Dynamic Safety-Distance” model was proposed; the core logic of the model was to calculate the required safety distance, denoted as d safetyDistance , based on the following two key real-time parameters: the object’s velocity ( v object ) and an estimated overall system latency ( t latency ) [18]. The velocity was estimated by tracking the displacement of the detected object’s centroid between consecutive frames. Using the 3D world coordinates derived from the depth camera, the system calculated the Euclidean distance the object traveled and divided it by the known time interval between frames (e.g., 1/30th of a second for a 30 fps camera). The system latency was a pre-characterized value representing the total delay from image capture to the final light-off command being executed by the hardware. The dynamic safety distance was then calculated using Function (1):
d_safetyDistance = k × v_object × t_latency, where k = 3.1579 × v_object^(−0.33375)
The derivation of the equation was introduced in previous research; in every frame, k is a coefficient fitted by referencing the experimental results for the fixed safety distance. A quadratic function was chosen for the fit, modeling a trend in which the k value gradually decreases as velocity increases, with the rate of decrease also slowing at higher velocities, thus exactly suiting the needs of the safety distance-changing scheme. The system would compare this calculated d_safetyDistance to the actual measured shortest distance between the object and the beam path (d_i), as shown in Figure 5. If the object was closer than the calculated safety distance, the system would immediately issue a command to shut off the light. The primary purpose of this model was to balance safety and efficiency as follows: for fast-moving objects, the system would demand a larger safety buffer, while for slow-moving objects, it would allow a closer approach before shutting down the beam, thereby maximizing the potential uptime of the power transmission.
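The per-frame check described above can be sketched as follows (the exact functional form of k is reconstructed from the constants quoted above and should be treated as an assumption; all names are illustrative):

```python
def safety_distance(v_object: float, t_latency: float) -> float:
    """Dynamic safety distance d = k * v * t, with k a fitted,
    velocity-dependent coefficient (constants from the paper's fit;
    the exponent's form is an assumption). Requires v_object > 0."""
    k = 3.1579 * v_object ** -0.33375  # k shrinks as velocity grows
    return k * v_object * t_latency

def beam_should_stop(d_beam_io: float, v_object: float,
                     t_latency: float = 0.03) -> bool:
    # Shut the beam off when the measured beam-to-IO distance falls
    # below the velocity-dependent safety distance for this frame.
    return d_beam_io < safety_distance(v_object, t_latency)
```

For a slow object the threshold tightens, letting it approach closer before the light-off command is issued, which matches the uptime-maximizing behavior described above.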
The central problem in any multi-object scenario is then the judgement of the most immediate threat among the intrusion objects. Using only the previous safety distance model could cause the controller to oscillate between different objects, toggle the beam spuriously and inefficiently, or, in the worst case, fail to act on the most critical threat in time, resulting in a safety failure (recorded as a False Negative). To resolve this critical issue, this work introduces the risk factor (RF) model, a quantitative mechanism for real-time, deterministic threat prioritization. The RF for each detected object is computed in every processing cycle using the following weighted Function (2):
RF = W_d × (1/D) + W_v × V
Each component of this formulation is designed to capture a critical aspect of the potential hazard: distance (D) represents the depth distance measured by the camera. The incorporation of distance into the risk formulation through the reciprocal term (1/D) is essential, as it strongly penalizes proximity. This is critical due to the triangular field-of-view (FOV) geometry inherent in single-camera systems. As objects move closer to the camera, the lateral clearance space shrinks dramatically, significantly reducing both available processing time and system redundancy. Thus, even minor positional errors at short distances can rapidly escalate into unsafe scenarios. Although velocity alone could theoretically be managed by designing for the maximum credible closing speed, the unique geometric constraints of the camera’s FOV necessitate explicitly emphasizing proximity through the 1/D term. Velocity (V) represents the object’s relative velocity component directed toward the beam path, assigning higher risk to rapidly approaching objects even if they are currently distant. The adjustable weighting factors ( W d and W v ) provide tunability, allowing the RF model to be optimized for specific operational contexts. For instance, in environments with structured, high-speed paths such as AGV lanes, W v may be increased to prioritize speed-related risks. Conversely, in cluttered office settings with unpredictable, slower-moving human targets, W d can be increased to emphasize proximity, thus ensuring a safer buffer around nearby individuals. This flexibility underscores the RF model’s adaptability and practical utility in diverse application scenarios.
The operational logic of the RF model is that, in each processing cycle, the system calculates the RF value for all detected intrusion objects. As Figure 6 shows, it then identifies the object with the maximum RF score and designates it as the single “priority IO” for that cycle. The system’s full safety logic—including the calculation of the dynamic safety distance, depth data averaging, pixel filtering and contour closing, selection of the IO’s edge pixel points, and the subsequent decision to issue an AEC command—is then applied exclusively to this one priority IO. By selecting a single priority target in this way, the system avoids processing huge amounts of depth and RGB data for detailed parameter measurement of every IO, sparing unnecessary computation that could add significant latency and breach the critical 33 ms safety threshold.
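The RF computation and priority-IO selection per cycle can be sketched as follows (the weights and object parameters are illustrative, not values from the paper):

```python
from dataclasses import dataclass

@dataclass
class IntrusionObject:
    name: str
    depth_m: float        # D: depth distance from the camera (m)
    v_toward_beam: float  # V: velocity component toward the beam (m/s)

def risk_factor(io: IntrusionObject, w_d: float = 1.0,
                w_v: float = 1.0) -> float:
    # RF = W_d * (1/D) + W_v * V: the reciprocal distance term
    # strongly penalizes proximity, per Function (2).
    return w_d * (1.0 / io.depth_m) + w_v * io.v_toward_beam

def priority_io(objects, w_d: float = 1.0, w_v: float = 1.0):
    # One "priority IO" per cycle: the object with the maximum RF.
    return max(objects, key=lambda io: risk_factor(io, w_d, w_v))
```

Raising W_v would favor fast AGV-lane intruders; raising W_d would favor nearby, slow-moving people, matching the tuning guidance above.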

2.4. System Architecture

The safety system is realized through a tightly integrated combination of off-the-shelf and custom-developed hardware and software components, selected and configured to optimize low-latency performance.

2.4.1. Hardware Architecture and Devices

Figure 7 shows the laboratory environment and the placement of the hardware. The sensing and computation front-end is built around an Intel RealSense D455 RGB-D camera (manufactured by Intel, Serial No. K83122-110) that delivers synchronized color and depth images at 848 × 480 pixels at 30/60 fps for both channels. Setting the global shutter speed to a fixed value shorter than 1/125 s guarantees that the optical sampling grid does not affect the data sampling speed [21]. To maximize depth accuracy, the on-board infrared projector is driven at its nominal maximum of 360 mW Class-1-equivalent laser power, and the factory-calibrated depth scale (≈0.001) is queried at start-up so that all range calculations remain metric-correct in subsequent code paths. These settings, together with the D455’s factory baseline (≈95 mm) and wide optical field (87° × 58°), furnish a usable depth span of 0.3–10 m, which covers typical indoor OWPT operation. The experiment field is inside an approximately 8 m (w) × 5 m (l) × 4 m (h) room with sufficient lighting.
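A minimal configuration sketch of the camera settings described above, assuming the `pyrealsense2` Python bindings (an illustrative fragment, not the authors' code):

```python
import pyrealsense2 as rs

# Configure synchronized color + depth streams at 848x480, 30 fps.
pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
profile = pipeline.start(cfg)

# Query the factory-calibrated depth scale (~0.001 m per unit) so that
# all later range calculations stay metric-correct.
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()

# Drive the IR projector at its nominal maximum (360 mW on the D455).
depth_sensor.set_option(rs.option.laser_power, 360)

# Align depth to the color stream for pixel-wise fusion downstream.
align = rs.align(rs.stream.color)
```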
The camera streams are ingested over USB 3.2 into a workstation PC equipped with an NVIDIA GeForce RTX 3060 (12 GB) GPU. A multi-threaded pipeline assigns color frames to CUDA for object detection while the CPU handles depth-map conditioning. The light beam in the experiment is a highly directional LED beam carried by a stage-light gimbal (model ADJ Vizi BSW 300; an LED is used for experimental simplicity). As the beam is static in this work, the gimbal remains stationary during the experiment, and the beam can be pointed to the designated location when testing under different experimental conditions.

2.4.2. Software Architecture and Performance

The core of the software perception stack is built around a You Only Look Once (YOLOv8) object recognition model. The pre-trained YOLOv8 model was not employed directly. Instead, a fine-tuned network was trained on a domain-specific dataset for OWPT applications. As Figure 8 partly demonstrates, this dataset comprises over ten thousand annotated images containing designated power-receiving targets (e.g., photovoltaic cells), common intrusion objects (e.g., humans, body parts), and specialized entities such as AGVs, small drones, and cars used in other OWPT research. These images were post-processed with diverse lighting conditions and environmental contexts, significantly contributing to the robustness and accuracy of the resulting detection model. During training, augmentation techniques were utilized, including simulated optical glare, varied illumination conditions, artificial occlusions, and partial obscurations representative of realistic operational scenarios. Additionally, standard augmentation methods such as mosaic and mix-up were applied to further enhance model generalization and robustness against overfitting.
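As an illustration of the glare and illumination augmentations mentioned above, a simple NumPy sketch might look as follows (the function names and parameters are hypothetical, not the authors' pipeline):

```python
import numpy as np

def add_glare(img: np.ndarray, center, radius: float,
              strength: float = 0.8) -> np.ndarray:
    """Overlay a soft radial highlight to mimic optical glare:
    a Gaussian-falloff brightness bump centered at `center`."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    mask = np.exp(-dist2 / (2.0 * radius ** 2))  # 1.0 at the center
    out = img.astype(np.float32) + strength * 255.0 * mask[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)

def jitter_brightness(img: np.ndarray, gain: float) -> np.ndarray:
    # Simple illumination variation: multiplicative gain with clipping.
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```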
Figure 9 presents the training and validation statistics of the customized YOLOv8 model after 200 epochs. Training used an image size of 840 × 640 and a batch size of 32. All three training losses—box, classification, and distribution-focal (dfl)—drop monotonically (box: 1.40 → 0.60, −57%; cls: 1.47 → 0.40, −73%; and dfl: 1.50 → 0.98, −35%), and the corresponding validation losses stabilize around 1.10–1.30 without late-stage divergence, indicating that the network converges cleanly and exhibits no signs of overfitting. The precision curve saturates at P ≈ 0.98 and the recall curve plateaus at R ≈ 0.91 when the confidence threshold is swept from 0 to 1.0, yielding a best-F1 operating point near conf ≈ 0.55. Aggregate detection quality reaches mAP50 = 0.933 and mAP50-95 ≈ 0.78. Per-class results confirm that AGV, UAV (drone), PV panel, and pidcar targets all achieve mAP50 > 0.99. The person class attains AP@0.5 ≈ 0.683. This reduction stems from greater pose/scale variation, frequent occlusions, and domain mismatch introduced by incorporating >20k human/part instances from COCO; at high confidence thresholds, person recall drops more quickly than that of the engineered classes. From the validation samples shown in Figure 10, pidcar targets with non-reflective surfaces still maintain stable contour shapes under low illumination, indicating that the model is robust in low-light/contrast-deficient scenarios; this is consistent with the exposure, noise, and color perturbations added during training. For the person category, it is recommended to supplement nighttime and backlit samples within the project domain in subsequent data collection and to adopt domain-specific fine-tuning or category-adaptive thresholding for further enhancement.
Figure 11 demonstrates the real-time perception-to-emission loop. First, the aligned depth and color frames are fused into a hybrid map, then passed through a rolling-window average and an IQR-based depth filter. This removes speckle noise and makes the subsequent contour operations numerically stable. Next, morphological closing identifies complete object silhouettes; a bounding contour based on filtered depth pixels encloses them, and pixel-level post-processing yields a clean “object region” mask that is streamed to two parallel calculators. (1) Velocity estimator: frame-to-frame centroid displacement, re-projected into metric space with the camera intrinsics, gives an instantaneous 3-D velocity for every IO. (2) Beam-IO distance: the same mask is intersected with the real-time beam path, reconstructed from gimbal telemetry, to obtain the shortest perpendicular distance to the beam. Third, the measured velocity feeds a velocity-depth weighted safety distance model, producing a dynamic threshold that tightens for fast IOs. A comparison between the safety distance and the beam-IO distance updates a shared beam-status flag in real time. Finally, the flag is polled by the micro-controller that drives the laser/LED. If the threshold is violated, the module asserts laser-off within the same frame; otherwise, the beam remains enabled. The new beam position is echoed back to the distance calculator, closing the control loop.
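The rolling-window average and IQR-based depth filtering steps above can be sketched as follows (the window size and the 1.5 × IQR fence are illustrative assumptions):

```python
import numpy as np

def iqr_depth_filter(depth_roi: np.ndarray) -> np.ndarray:
    """Suppress speckle outliers in a depth region of interest using
    the interquartile range; zero readings are treated as invalid."""
    valid = depth_roi[depth_roi > 0]
    if valid.size == 0:
        return depth_roi
    q1, q3 = np.percentile(valid, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    out = depth_roi.copy()
    out[(depth_roi < lo) | (depth_roi > hi)] = 0  # mark outliers invalid
    return out

class RollingDepthAverage:
    """Temporal rolling-window average over the last n depth frames."""
    def __init__(self, n: int = 5):
        self.n, self.frames = n, []
    def update(self, frame: np.ndarray) -> np.ndarray:
        self.frames.append(frame.astype(np.float32))
        self.frames = self.frames[-self.n:]
        return np.mean(self.frames, axis=0)
```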

3. Experiment, Result, and Analysis

To empirically validate the performance and reliability of the advanced OWPT safety system, a series of controlled laboratory experiments was carried out and rigorously characterized the system’s end-to-end latency and its ability to detect and handle complex, multi-object intrusion scenarios. This section details the physical testbed and boundary conditions under which the trials were executed and presents visual evidence confirming that the system operates as intended within its designed operational domain, thereby establishing a baseline for the subsequent discussion of inherent limitations and future research directions.

3.1. OWPT Safety System Operation and Conditions

The experiments were conducted under conditions designed to systematically evaluate the system’s performance against its key safety objectives.
(a)
Operational Environment: All experiments were conducted in a controlled indoor laboratory space configured to represent realistic operational conditions. The testing area was intentionally designed to simulate a cluttered environment, populated with both stationary obstacles and dynamic IOs, to rigorously test the OWPT system’s perception accuracy, latency characteristics, and overall control reliability. The camera-based safety system operated steadily at frame rates of 30 fps, providing a consistent baseline for latency measurements and ensuring precise timing of the AEC responses.
(b)
Test Scenarios and Intrusion Objects: The primary objective of the experimental design was to validate the system’s capability to manage multiple simultaneous intrusions, an area previously highlighted as a major limitation in earlier implementations. Experiments included systematically varied scenarios involving human actors as well as the AGV car and pidcar used in the laboratory as intrusion objects. Each scenario was rigorously repeated 100 times, ensuring statistical reliability and robustness of the obtained data.
  • One moving human with two static AGVs.
  • One moving human with one moving AGV and one static AGV.
  • One moving human with two moving AGVs.
These scenarios were designed to test the robustness of the RF prioritization model and the overall system latency under increasing cognitive load, thereby providing a comprehensive assessment of its suitability for real-world deployment.
A representative sequence of the system’s real-time operation captured in consecutive frames is presented in Figure 12. These frames were selectively extracted from an 8 s continuous operation, with redundant and identical frames omitted to clearly illustrate the critical moments when significant changes occurred.
Frames 1–4 (first row): A pidcar enters the OWPT operational area, immediately recognized as the priority IO due to its proximity to the beam’s central axis. Upon detecting the priority IO crossing the predefined safety distance threshold, the emission control subsystem immediately triggers the safety mechanism, turning the simulated laser (“Laser OFF”).
Frames 5–8 (second row): The pidcar continues traversing the OWPT region, keeping the safety mechanism activated throughout its trajectory. Once the pidcar clears the safety threshold, the system dynamically turns the beam emission back on and searches for a new priority IO, which is the human actor who has entered and is approaching the OWPT field.
Frames 9–12 (third row): The human actor is detected as the new priority IO, triggering the emission control subsystem once again due to their closer proximity and higher calculated risk factor. The safety mechanism (“Laser OFF”) remains engaged as the human continues moving within the critical safety distance.
Frames 13–16 (fourth row): Finally, both the pidcar and the human actor move completely out of the OWPT region and are no longer able to trigger the safety mechanism, prompting the system to restore the beam operation (“Laser ON”) and confirming that the system promptly reverts to normal operational conditions upon clearance of all detected intrusions.
Throughout the experiment, the OWPT beam was maintained statically at the frame center position, ensuring consistency and clarity in evaluating system responses solely based on IO interactions. These experiments confirm the reliability of the developed OWPT safety system, clearly demonstrating real-time object recognition, accurate prioritization, immediate emission control responses, and quick recovery to normal operation without manual intervention.

3.2. Detailed System Latency Analysis

A thorough analysis of the latency results from the current system’s experiment reveals the specific performance characteristics of its new architecture and identifies the bottlenecks for future optimization. The total end-to-end latency is composed of the following two main stages: the detection stage and the process/control stage.
Plotting the results of the experiment as a latency histogram, as shown in Figure 13, the blue columns describe the detection stage of the customized YOLOv8 network, which exhibits a mean latency of 6.86 ms with a standard deviation of 2.03 ms (μ ± σ = 6.86 ± 2.03 ms; min–max 4.00–14.32 ms). The distribution is right-skewed (skew = 0.94) and bimodal, with modes at ≈4–5 ms and ≈7–8 ms; the 95th and 99th percentiles are 9.90 ms and 13.02 ms, respectively. The orange columns depict the aggregate process and control stage, which covers multi-object tracking, risk factor computation, safety distance evaluation, and gimbal/laser command issuance. Values cluster tightly between 17 ms and 25 ms. The result shows μ ± σ = 21.66 ± 4.87 ms (min–max 10.60–34.81 ms) and a 95th percentile of 30.75 ms; 82.7% of frames complete within 25 ms, leaving sufficient headroom for the safety controller to react before the next video frame arrives. Summing both stages yields an end-to-end response of μ ± σ = 28.52 ± 6.85 ms (min–max 14.60–49.14 ms), with 80.3% of frames ≤ 33 ms (a single 30 fps period) and a 95th percentile of 40.65 ms; worst-observed outliers near 49.1 ms occurred fewer than 10 times in >1000 frames. Detection and process/control times are strongly correlated, indicating shared frame complexity and occasional resource contention; thus, perception is not the dominant bottleneck, and tail latencies are primarily introduced by frames containing three or more moving IOs and rich background texture, where additional GPU ↔ CPU transfers (depth map fusion, etc.) temporarily stall the pipeline.
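Summary statistics of the kind quoted above can be reproduced from per-frame timing logs with a few NumPy calls (the sample below is synthetic, not the experiment's data):

```python
import numpy as np

def latency_summary(samples_ms: np.ndarray) -> dict:
    # Mean, standard deviation, extremes, tail percentiles, and the
    # fraction of frames completing within one 30 fps period (33 ms).
    return {
        "mean": float(np.mean(samples_ms)),
        "std": float(np.std(samples_ms)),
        "min": float(np.min(samples_ms)),
        "max": float(np.max(samples_ms)),
        "p95": float(np.percentile(samples_ms, 95)),
        "p99": float(np.percentile(samples_ms, 99)),
        "within_33ms": float(np.mean(samples_ms <= 33.0)),
    }

# Synthetic example: four end-to-end frame latencies in milliseconds.
stats = latency_summary(np.array([10.0, 20.0, 30.0, 40.0]))
```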
During development, an initial version of the neural network-equipped camera-based OWPT safety prototype reported total latencies in the 40–60 ms range and was limited to single-object scenarios. The version of the safety system in the current research not only lowers the average latency by ~40% but also introduces a multi-object priority scheme that remains deterministic under load, setting a new benchmark for OWPT safety systems. In addition, at a representative camera-target spacing of 1 m, the dynamic safety distance allows a permissible system delay of 30 ms for objects moving at up to 11.65 m/s. The measured average of 28.5 ms therefore satisfies the tightest part of this upper limit. Because the allowable delay increases with depth distance, the current implementation is safe for all objects beyond a 1 m depth distance even when they move at 15 m/s, providing a scalable margin for fast-moving intrusion objects once they are recognized.
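Under the simple linear model implied here, the permissible system delay at a given depth is the available safety buffer divided by the IO speed. The ~0.35 m buffer in the example below is inferred from the quoted figures (30 ms at 11.65 m/s) and is an assumption for illustration only:

```python
def permissible_delay_ms(buffer_m: float, speed_mps: float) -> float:
    """Maximum tolerable end-to-end delay (ms) before an IO moving at
    speed_mps crosses the available safety buffer (linear model d = v * t)."""
    return 1000.0 * buffer_m / speed_mps

# Example (assumed ~0.35 m buffer at 1 m depth): an 11.65 m/s IO yields
# roughly the 30 ms permissible delay quoted in the text; a larger buffer
# at greater depth scales the allowance proportionally.
```

Because the buffer grows with depth distance, the permissible delay scales with it, which is why the measured 28.5 ms average leaves margin for faster IOs at longer range.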

3.3. Benchmark Comparison with Previous Research

Table 2 substantiates the latency advancement achieved in this work by providing a direct benchmark against the fixed safety distance (fixed-SD) safety system reported in Ref. [18]. In Ref. [18], the safety decision principle was defined as follows: an RGB-D camera is used to estimate IO motion and beam proximity; the safety distance is computed from IO velocity and system delay; and emission shutoff is triggered when the shortest beam-to-IO distance violates this safety distance criterion. As listed in Table 2, the reproduced fixed-SD baseline relies on static thresholds and largely sequential control logic, resulting in a high end-to-end response time with an average of 57.8 ms and peak delays up to 99.2 ms. With the segmented measurement, the baseline can be decomposed into a detection latency of 32.4 ms on average (19.7–61 ms) and a process/control latency of 25.4 ms on average (9.4–38.2 ms), indicating that both stages contribute substantially to the overall latency. In contrast, the proposed system achieves an average end-to-end latency of 28.5 ms across three experimental groups (Table 2), corresponding to a reduction of more than 49% relative to the fixed-SD baseline. This end-to-end latency is decomposed into a 6.9 ms detection latency on average (4–14.3 ms) and a 21.7 ms process/control latency on average (10.6–34.8 ms).
Notably, the most significant reduction is achieved in the detection stage, since the perception module is replaced with a GPU-accelerated, OWPT-domain fine-tuned YOLOv8 detector. Meanwhile, the process/control latency does not exhibit a similarly large decrease. This result should be interpreted together with the fact that the present work is not a direct incremental extension of Ref. [18], but a newly designed system architecture that integrates substantially richer functionality. Compared with the baseline system of Ref. [18], the present work introduces multiple post-processing and decision functions beyond simple single-object shutoff, including multi-object intrusion handling, deterministic priority selection across simultaneous intrusions, risk factor evaluation and ranking, and additional geometric processing for more precise safety evaluation in each cycle, as summarized in Table 3. Under such an expanded workload, the process/control stage would normally increase in latency if implemented in a purely sequential manner. The fact that the process/control latency remains within a comparable range (21.7 ms on average in the current system; Table 2) indicates that the new system architecture, particularly the multi-threaded pipeline that overlaps GPU inference with CPU-side depth conditioning and control, effectively mitigates the added computational burden and prevents an unfavorable increase in post-processing time. In this sense, the process/control result is not only a latency metric but also evidence that the proposed architecture sustains real-time feasibility while integrating more powerful safety functions.
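The overlap effect described here, running GPU-side detection concurrently with CPU-side processing/control so that the expanded workload does not serialize, can be sketched as a two-stage producer-consumer pipeline. The function names, queue depth, and stage contents below are illustrative assumptions, not the paper's implementation:

```python
import queue
import threading

def run_pipeline(frames, detect, decide):
    """Two-stage pipeline: the detection stage (e.g. GPU inference) overlaps
    with the process/control stage via a bounded hand-off queue, so frame N+1
    can be detected while frame N is still being risk-evaluated."""
    q = queue.Queue(maxsize=2)   # small bound limits queuing latency
    decisions = []

    def detector():
        for f in frames:
            q.put(detect(f))     # stage 1: perception
        q.put(None)              # sentinel: no more frames

    def controller():
        while (det := q.get()) is not None:
            decisions.append(decide(det))  # stage 2: RF ranking + emission control

    t_det = threading.Thread(target=detector)
    t_ctl = threading.Thread(target=controller)
    t_det.start(); t_ctl.start()
    t_det.join(); t_ctl.join()
    return decisions
```

With one producer and one consumer on a FIFO queue, frame order is preserved while the two stages execute concurrently, which is the property that keeps the process/control latency from growing despite the added safety functions.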

3.4. Detection Capability

Another key objective of the advanced safety system was to overcome the single-object limitation of its predecessor. To validate this, the system's detection capability was categorized using the following standard metrics: true positives (TPs), false positives (FPs), and false negatives (FNs). Across all 300 intrusion passes in the entire experiment, the system recorded zero false positives. The results are summarized as a success-rate distribution in Figure 14, which indicates the high accuracy of the custom-trained YOLOv8 model and the effectiveness of the overall safety method. An FP-free system is essential for industrial applications, as it will not cause unnecessary and costly interruptions to the power transmission process.
However, in both of the more complex scenarios tested, “1 Moving Human, 1 Moving Car, 1 Static Car” and “1 Moving Human, 2 Moving Cars”, a considerable number of false negatives were recorded. An FN occurs when the system fails to disable the laser before an object crosses the safety threshold. A nuanced analysis reveals that these failures were not random: they occurred specifically in instances where the high computational load of tracking three dynamic objects simultaneously caused the system’s processing latency to spike beyond the ~33 ms real-time threshold. In these rare cases, the system correctly identified the threats, but its reaction was delayed, resulting in a safety failure. The existence of FNs under high load reveals the system’s current limitations and demonstrates the trade-off between capability (handling more objects) and reliability (maintaining low latency). This insight is essential for defining safe operational constraints for deployment (e.g., certifying the system for a maximum number of dynamic objects) and for prioritizing future work on optimizing the processing pipeline to eliminate these failure modes.
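For reference, the metrics used in this evaluation follow directly from the TP/FP/FN counts. The 290/10 split in the usage example is illustrative (the paper reports per-scenario distributions in Figure 14); only the zero-FP total matches the reported result:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision and recall from TP/FP/FN counts.

    Zero false positives gives precision 1.0 (no spurious shutdowns);
    false negatives reduce recall (missed emission-control deadlines).
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}
```

An FP-free run maximizes precision regardless of the FN count, which is why the FP and FN axes map to the two distinct deployment concerns discussed above: costly interruptions versus safety failures.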

4. Discussion

The results presented in Section 3 demonstrate the successful implementation of a closed-loop safety system for OWPT applications with multiple IOs. The system exhibits the capability to perceive multiple intrusion objects within a defined workspace and initiate appropriate actions based on their presence and location. This research serves as a valuable and foundational proof-of-concept, confirming the potential for leveraging modern computer vision techniques to enhance the operational safety of OWPT technology.

4.1. Performance Analysis in the Context of System Latency and Intrusion Velocity

A critical aspect of any active safety system is its temporal performance limitation, that is, its ability to react faster than a potential hazard can cause harm. To visualize this capability, a framework similar to that used by PowerLight Technologies to evaluate their safety systems was adopted [15,16]. Figure 15 maps the required safety buffer distance against the velocity of an IO. This log-log plot provides an intuitive map of the system’s operational domain, where diagonal lines represent the performance threshold for a given end-to-end system latency. For the system to be effective, it must detect, decide, and actuate a safety response before an object traveling at a given speed can cross the available safety distance. Any object-speed combination that falls below the system’s latency line represents a potential failure case.
From the experimental results, the system has a verified latency of approximately 20–30 milliseconds (0.02–0.03 s). This performance makes the system capable of handling many common industrial scenarios: it can comfortably react in time to prevent contact with a human moving at typical speeds (e.g., walking or reaching) and can handle incursions from most commercial drones or other flying objects. However, the analysis also clearly delineates the system’s limitations. It operates at the margin for faster-moving threats, such as a bird (over 10 m/s), and would be insufficient for high-speed IOs without a significantly larger mandated safety zone.
The main component of latency is the safety processing time rather than the image processing time; this may be further shortened in the future by optimizing the system pipeline. Furthermore, faster camera frame rates are also available, enabling shorter operation times, i.e., faster-moving objects, and the use of shorter safety distances. Conversely, lower-cost PCs than the one used in this study would incur longer process/control latency. In such cases, extending the safety distance can broaden the usable conditions: while this shortens the effective operation time of OWPT, it also means that operation can be adapted to various system configurations.
It is instructive to compare this performance with that of similar commercial systems. PowerLight’s enhanced light curtain (ELC), mentioned previously, is a safety mechanism that achieves reaction times as low as 1 millisecond by surrounding the main beam with harmless sub-beams that passively trigger light control when interrupted by foreign objects [15]. This speed allows for a much smaller safety distance or, conversely, protection against extremely high-velocity intrusions. This research, in contrast, explores an alternative camera-based architecture that offers the potential for more flexible, three-dimensional, and context-aware safety zones. By validating the safety system and quantifying its ~30 ms performance level, it establishes a baseline for this class of sensor-driven safety systems and points out a clear path for future work focused on reducing the latency of vision-based systems to approach the ideal performance.

4.2. The Static Beam Assumption and the Challenge of Dynamic Beam Tracking

A primary constraint of this research is that the system is designed and validated for a static beam. The system’s logic, as demonstrated in the operational snapshots in Figure 7 and Figure 12, is based on monitoring and calculating a dynamic safety region around the beam. This approach is effective for stationary applications but falls short for the next generation of OWPT systems, which mainly focus on dynamic beam tracking to power moving targets like drones or mobile robots [23,24]. In a dynamic tracking scenario, the hazardous zone is no longer a relatively fixed area but becomes a mobile, invisible corridor that follows the beam’s path. The core issue is that the safety system must know the precise location of this mobile beam in real time. Although the infrared power beam itself is visible to the IR camera, a 3D trajectory profile cannot be generated from the beam’s 2D projection in the frame; moreover, the beam pattern cannot be recognized reliably during movement or under varied irradiation on different backgrounds. A safety protocol based solely on visual perception of the workspace is therefore insufficient for a dynamic beam scenario.

5. Conclusions

This paper presented a low-latency camera-based safety system for OWPT, designed to mitigate safety risks associated with high-intensity laser emissions. A major contribution of this study is the introduction and validation of the AEC mechanism in the safety system control stack; this approach dynamically modulates optical emission based on the newly developed risk factor model and a deep learning-based computer vision algorithm, enabling the deterministic prioritization of multiple intrusion objects based on their proximity and velocity relative to the optical beam. This model has proven effective and practical in managing complex scenarios involving simultaneous multi-object intrusions, a crucial capability lacking in previous OWPT safety systems. The entire safety system achieves a low average latency of 28.5 ms with a minimum latency of 14 ms, representing a substantial 3× improvement over previous work. Overall, this work provides solid progress toward practical OWPT deployment, defining clear directions for further enhancement and standardization efforts.

Author Contributions

Conceptualization, C.Z. and T.M.; software, C.Z.; validation, C.Z.; writing—original draft preparation, C.Z.; writing—review and editing, T.M.; supervision, T.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by JST SPRING, Japan Grant Number JPMJSP2180.

Data Availability Statement

All data can be found in the authors’ GitHub repository, Ref. [22].

Acknowledgments

The authors thank the Department of Electrical and Electronic Engineering of the Institute of Science Tokyo for providing the experimental resources; the authors also thank the academic supervisors and the lab members for their support in the proposal, execution, discussion, and revision of this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Miyamoto, T. Optical wireless power transmission using VCSELs. In Semiconductor Lasers and Laser Dynamics VIII, Proceedings of the SPIE 10682, Strasbourg, France, 22–26 April 2018; The International Society for Optics and Photonics: Bellingham, WA, USA, 2018; p. 1068204. [Google Scholar]
  2. Ahmadi, K.; Serdijn, W.A. Advancements in laser and LED-based optical wireless power transfer for IoT applications: A comprehensive review. IEEE Internet Things J. 2025, 12, 18887–18907. [Google Scholar] [CrossRef]
  3. Zheng, Y.; Zhang, G.; Huan, Z.; Zhang, Y.; Yuan, G.; Li, Q.; Ding, G.; Lv, Z.; Ni, W.; Shao, Y.; et al. Wireless laser power transmission: Recent progress and future challenges. Space Sol. Power Wirel. Transm. 2024, 1, 17–26. [Google Scholar] [CrossRef]
  4. Hou, X.; Zhou, L.; Dong, S.; Shi, D. The high-power electricity generation and WPT demonstration mission—Proposed first step to develop space solar power. Space Sol. Power Wirel. Transm. 2025, 2, 73–80. [Google Scholar] [CrossRef]
  5. Li, X.; Huang, G.; Wang, Z.; Zhao, B. Optics-driven drone. Sci. China Inf. Sci. 2024, 67, 124201. [Google Scholar] [CrossRef]
  6. Jaffe, P.; Wilcoski, E.; DePuma, C.; Wagner, E.; Chen, D.; Baughman, J. The first demonstration of laser power beaming in orbit. In Proceedings of the 6th Optical Wireless and Fiber Power Transmission Conference (OWPT 2024), Proc. OWPT-3-01, Yokohama, Japan, 23–26 January 2024. [Google Scholar]
  7. Gou, Y.; Wang, J.; Chen, Y.; Mou, Z.; Feng, D. High-performance laser power converters for simultaneous wireless information and power transfer. In Proceedings of the SPIE 13361, Physics, Simulation, and Photonic Engineering of Photovoltaic Devices XIV, San Francisco, CA, USA, 28–30 January 2025; p. 1336105. [Google Scholar]
  8. Gou, Y.; Wang, H.; Wang, J.; Chen, Y.; Mou, Z.; Chen, Y.; Yang, H.; Deng, G. High-performance laser power converters for wireless information transmission applications. Opt. Express 2023, 31, 34937–34945. [Google Scholar] [CrossRef] [PubMed]
  9. Bai, Y.; Liu, Q.; Chen, R.; Zhang, Q.; Wang, W. Long-range optical wireless information and power transfer. IEEE Internet Things J. 2023, 10, 1617–1627. [Google Scholar] [CrossRef]
  10. Deng, H.; Zhao, S.; Liu, M.; Liu, Q. Joint sensing and power transfer via distributed coupled-cavity lasers. IEEE Internet Things J. 2024, 11, 1122–1135. [Google Scholar] [CrossRef]
  11. Zhao, M.; Miyamoto, T. Efficient LED-array optical wireless power transmission system for portable power supply and its compact modularization. Photonics 2023, 10, 824. [Google Scholar] [CrossRef]
  12. Watamura, T.; Nagasaka, T.; Kikuchi, Y.; Miyamoto, T. Flying a micro-drone by dynamic charging for vertical direction using optical wireless power transmission. Energies 2025, 18, 351. [Google Scholar] [CrossRef]
  13. Kawakami, M.; Miyamoto, T. Dynamic optical wireless power transmission infrastructure configuration for EVs. Energies 2025, 18, 2264. [Google Scholar] [CrossRef]
  14. IEC 60825:2024; Safety of Laser Products. International Electrotechnical Commission: Geneva, Switzerland, 2024.
  15. Nugent, T., Jr. Low-latency enhanced light curtain for safe laser power beaming. In Proceedings of the SPIE 13359, Free-Space Laser Communications XXXVII, San Francisco, CA, USA, 25–30 January 2025; p. 1335906. [Google Scholar]
  16. Nugent, T. Improving metrics for laser power beaming. In Proceedings of the 2024 IEEE Wireless Power Technology Conference and Expo (WPTCE), Kyoto, Japan, 8–11 May 2024; pp. 486–488. [Google Scholar]
  17. Xia, S.; Liu, Q.; Liu, M.; Fang, W.; Xiong, M.; Bai, Y.; Li, X. Auto-Protection for resonant beam SWIPT in portable applications. IEEE Internet Things J. 2024, 11, 4127–4138. [Google Scholar] [CrossRef]
  18. Zuo, C.; Miyamoto, T. Camera-based safety system for optical wireless power transmission using dynamic safety-distance. Photonics 2024, 11, 500. [Google Scholar] [CrossRef]
  19. Jocher, G.; Qiu, J.; Chaurasia, A. Ultralytics YOLO, version 8.0.0; Computer Software; Ultralytics: Frederick, MA, USA, 2023.
  20. Zuo, C.; Miyamoto, T. Advanced OWPT safety system: Improved metrics and optimization for multiple intrusion objects. In Proceedings of the 7th Optical Wireless and Fiber Power Transmission Conference (OWPT 2025), Yokohama, Japan, 22–25 April 2025. [Google Scholar]
  21. Keselman, L.; Woodfill, J.I.; Grunnet-Jepsen, A.; Bhowmik, A. Intel RealSense stereoscopic depth cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2017), Honolulu, HI, USA, 21–26 July 2017; pp. 1267–1276. [Google Scholar]
  22. Zuo, C. OWPT-Safety-System-2.0 [Source Code]. GitHub. Available online: https://github.com/realzuoc/OWPT-Safety-System-2.0 (accessed on 18 August 2025).
  23. Zhao, M.; Miyamoto, T. LED-based optical wireless power transmission for automatic tracking and powering mobile object in real time. IEEE Access 2025, 13, 33643–33654. [Google Scholar] [CrossRef]
  24. Schäfer, C.A. Continuous adaptive beam pointing and tracking for laser power transmission. Opt. Express 2010, 18, 13451–13467. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The configuration of the camera-based OWPT safety system.
Figure 2. The logic of the OWPT safety system: outside normal operation, it detects IOs and reacts.
Figure 3. The automatic emission control concept and MPE regulation.
Figure 4. Schematic scene of multiple intrusion objects and the priority IO during OWPT.
Figure 5. Schematic of the actual shortest beam-IO distance during OWPT operation.
Figure 6. The RF process logic selects the highest-priority IO for safety control, while the parameters of the other IOs are stored concurrently in preparation.
Figure 7. The experimental devices and system components in the laboratory.
Figure 8. Some of the trained targets that appear during OWPT operation in the authors’ lab.
Figure 9. Training dynamics of the custom OWPT model (YOLOv8). (a) Training and validation losses—box (localization), cls (classification), and dfl (distribution-focal)—versus epoch. Solid lines denote training losses; dashed lines denote validation losses. All training losses decrease monotonically, and the validation losses converge without late-stage divergence, indicating stable optimization and the absence of overfitting. (b) Validation metrics versus epoch. The ordinate is a normalized score in [0, 1]. Shown are precision P, recall R, mAP@0.5 [0.5:0.95], and the derived F1 = 2PR/(P + R). Circular markers highlight the best epoch of each metric; numerical values and epochs are summarized in the on-plot text block (starting at epoch ≈ 70). The curves rise smoothly and plateau near the best-F1 region, demonstrating a high precision–recall balance suitable for safety-critical OWPT emission gating.
Figure 10. Sample results from validation processes during model training, showing that pidcar targets are recognized well under different lighting conditions.
Figure 11. The detailed logic of IO post-processing and core parameter calculation to ensure the beam is safely controlled.
Figure 12. Consecutive frames demonstrate real-time OWPT safety system operation during multiple-object intrusion testing. Initially, a pidcar enters the safety zone, triggering immediate emission control. Subsequently, priority shifts dynamically to a human actor as they enter the zone, sustaining emission control. Once both intrusion objects exit the operational boundary, the system promptly resumes normal beam operation. This sequence validates the system’s ability to manage complex, multi-object scenarios effectively and safely. The video can also be downloaded from the authors’ GitHub repository, Ref. [22].
Figure 13. Latency performance of the three groups of experiments. (Detection 6.86 ± 2.03 ms; process/control 21.66 ± 4.87 ms; end-to-end 28.52 ± 6.85 ms; 80.3% ≤ 33 ms; the average latency of the highest-frequency data cluster is tagged.)
Figure 14. The distribution of the robustness and success rate of the safety system.
Figure 15. The relationship between the dynamic safety distance and the IO’s size and velocity (modified from Figure 4 of Ref. [16]).
Table 1. Representative safety systems for OWPT and laser WPT.

| Safety System | Affiliation | Mechanism | Usage |
| --- | --- | --- | --- |
| Camera-based OWPT safety system | Science Tokyo | Depth camera | Indoor/industry |
| Light curtain | PowerLight 1 | Sub-beam and sensor | km-distance WPT |
| AirCord | Wi-Charge 2 | Low power and sensor | IoT device powering |
| LED-OWPT | Science Tokyo | Eye-safe wavelength and low-intensity light | IoT device powering |
| SWIPT | Tongji University | Resonant beam 3 | Information and power transfer |

1 The light curtain uses multiple harmless sub-beams around the main power transmission beam to achieve extremely low latency and robust safety against any foreign-object collision [15]. 2 The internal design of Wi-Charge’s AirCord system has not been disclosed; however, as a commercial product it complies with the IEC 60825 MPE regulation, and the system has a bi-lateral communication mechanism. 3 Foreign objects stop the oscillation condition of the resonant beam; the latency is at the nanosecond level, and the beam poses no threat in low-power operation [17].
Table 2. Performance metrics (latency) of the fundamental vs. current OWPT safety systems.

Fundamental Safety System Latency Performance 1

| Stage | Min. (ms) | Max. (ms) | Average (ms) |
| --- | --- | --- | --- |
| Detection latency | 19.7 | 61 | 32.4 |
| Process and control | 9.4 | 38.2 | 25.4 |
| Overall | 31.8 | 99.2 | 57.8 |

Current Research Safety System Latency Performance

| Stage | Min. (ms) | Max. (ms) | Average (ms) |
| --- | --- | --- | --- |
| Detection latency | 4 | 14.3 | 6.9 |
| Process and control | 10.6 | 34.8 | 21.7 |
| Overall | 14.6 | 49.1 | 28.5 |

1 A reproduced result based on the safety system proposed by the authors in 2024, using identical experimental conditions as in this research [18].
Table 3. Architectural differences between the fundamental and current OWPT safety systems.

| System | Detection Method | Processing Architecture | Safety Decision |
| --- | --- | --- | --- |
| Ref. [18] | Classical OpenCV detection | Sequential pipeline | Fixed safety distance |
| This work | YOLOv8-based detection (GPU inference) | Multi-threaded pipeline | Dynamic safety distance based on a priority IO |