Article

Low-Cost Optical Wireless Communication for Underwater IoT: LED and Photodiode System Design and Characterization

by Kidsanapong Puntsri 1,* and Wannaree Wongtrairat 2
1 Department of Electronics and Telecommunication Engineering, Faculty of Engineering, Rajamangala University of Technology Isan, Khon Kaen Campus, Khon Kaen 40000, Thailand
2 Department of Electronics Engineering, Faculty of Engineering and Technology, Rajamangala University of Technology Isan, Nakornratchasrima 30000, Thailand
* Author to whom correspondence should be addressed.
Telecom 2025, 6(4), 95; https://doi.org/10.3390/telecom6040095
Submission received: 22 October 2025 / Revised: 26 November 2025 / Accepted: 1 December 2025 / Published: 10 December 2025

Abstract

Underwater marine and freshwater environments are vast and still largely unexplored, and our ability to monitor them is limited by the inflexibility and inconvenience of existing monitoring systems. To address this problem, this work presents a proof-of-concept deployment of a real-time Internet of Underwater Things (IoUT) based on blue light-emitting-diode (LED) visible light communication (VLC). Four-level pulse-amplitude modulation (PAM-4) is employed. To relax the focusing requirement and increase the received power, four avalanche photodiodes (APDs) are adopted. To reduce the error rate, a convolutional code with constraint length 7 is used, which is simple to implement; encoding and decoding are implemented on field-programmable gate arrays (FPGAs). The results are verified by experimental demonstration at a baud rate of 9600 over a 2 m water tank. System performance improves as the number of APDs increases; configurations with up to four APDs are investigated, and bit-error-free data transmission is achieved. The proposed method makes underwater monitoring convenient and dependable and enables low-cost real-time monitoring, with data displayed on the Grafana dashboard tool.

1. Introduction

Underwater exploration and real-time monitoring are essential for understanding and managing both freshwater and marine environments. Despite technological progress, more than 90% of the ocean remains unexplored [1]. Direct measurements by humans or robotic platforms are often impractical or hazardous, particularly in chemically contaminated areas. Conventional radio frequency (RF) communication, which performs well in terrestrial Internet of Things (IoT) applications [2,3], faces severe attenuation in underwater environments [4,5], limiting transmission range and reliability. Although laser-based underwater optical wireless communication (UWOC) can achieve high data rates and long transmission distances, it requires precise beam alignment and a small receiver aperture, resulting in a narrow communication beam that is difficult to maintain in dynamic underwater conditions.
In contrast, light-emitting diode (LED)-based visible light communication (VLC) systems provide a wider optical beam [6,7], relaxing alignment requirements and simplifying receiver design. However, the relatively low optical power of LEDs limits the achievable communication distance. This trade-off between simplicity and range remains a key challenge for realizing practical Internet of Underwater Things (IoUT) systems [8].

1.1. Motivation

The IoUT concept extends IoT connectivity into underwater environments by linking underwater sensor nodes, autonomous underwater vehicles (AUVs), surface vessels, and terrestrial communication platforms such as 5G and optical networks. Reliable underwater communication can enable autonomous environmental monitoring, pollution detection, and marine ecosystem management using machine learning (ML) and artificial intelligence (AI) while minimizing human exposure to hazardous environments. Existing wired approaches for underwater monitoring are inflexible and prone to signal degradation due to cable noise, especially when multiple sensor nodes are deployed across a wide area. A wireless optical communication approach offers higher flexibility, scalability, and lower maintenance requirements.
For underwater VLC systems, intensity modulation with direct detection (IM/DD) is typically adopted due to its simplicity. Among IM/DD schemes, pulse-amplitude modulation with four levels (PAM-4) offers a higher data rate and better signal-to-noise ratio (SNR) compared to on–off keying (OOK) while maintaining lower complexity than orthogonal frequency division multiplexing (OFDM) [7,8,9,10,11]. These characteristics make PAM-4 a suitable candidate for practical IoUT systems.

1.2. Contributions

In this paper, we present a prototype and experimental demonstration of a real-time underwater monitoring system based on LED-based VLC within the IoUT framework. The key contributions are summarized as follows:
- A real-time underwater monitoring system is designed and implemented using IM/DD PAM-4 modulation with constraint-7 convolutional coding. The optical transmitter employs a 405 nm, 10 W blue LED, while the receiver utilizes four avalanche photodiodes (APDs) to improve optical sensitivity and relax beam alignment requirements.
- The convolutional encoder and decoder are implemented on AMD-Xilinx field-programmable gate arrays (FPGAs), providing a low-complexity and hardware-efficient solution. Although advanced codes such as low-density parity-check (LDPC) codes [12] and polar codes (PCs) [13] could be employed, they require complex computations; Bose–Chaudhuri–Hocquenghem (BCH) codes, recently proposed for underwater optical wireless communication [14], have low complexity but typically provide weaker error-correction performance. Additionally, frame synchronization is developed to ensure robust data recovery under varying optical conditions.
- A Grafana dashboard [15] is employed to display sensor data and system performance metrics in real time, demonstrating the practical applicability of the proposed system for underwater monitoring.
- Experimental results obtained using a 2 m water tank confirm error-free data transmission at a bit rate of 9.6 kbps, verifying the feasibility of short-range underwater communication using LED-based VLC with multiple APDs.
In summary, the proposed system provides a simple, reliable, and cost-effective solution for short-range underwater wireless communication, paving the way for scalable IoUT deployments in real-world environments.

2. IoUT System Model

In this section, the proposed IoUT using a VLC system is presented; the details are shown in Figure 1. The transmitter (Tx) is located underwater and is equipped with sensors that measure the underwater environment. VLC with PAM-4 is used as the communication system. To reduce the bit error rate (BER), a convolutional code with constraint length 7 is employed, and decoding is performed at the receiver end, which is located above the water's surface. APDs convert the light signal into an electrical signal; to increase the received power and relax the focusing requirement, four APDs are used. The analog electrical signal is sampled and converted to the digital domain by an AD9220 ADC (12 bits, 10 Msps) from Analog Devices. Next, frame synchronization is performed, as detailed in Section 3.1, followed by convolutional decoding implemented on an FPGA, as detailed in Section 3.2. Finally, the decoded data is uploaded to the server using an ESP8266, which simplifies data management and online display on the dashboard. The main parameters used in the experiment are shown in Table 1.
The communication system is detailed below. The received signal combined from the four APDs, denoted by $y(t)$, is expressed as

$$y(t) = \sum_{l=0}^{3} \rho_l \eta_l \left[ x(t) \ast h_l(t) \right] + z_l(t), \qquad (1)$$

where $\ast$ denotes linear convolution, $\rho$ is the electrical-to-optical conversion coefficient, and $\eta$ is the photodetector sensitivity. $x(t)$ is the transmitted coded PAM-4 signal and $h(t)$ is the underwater channel. $z(t)$ is a noise component modeled as additive white Gaussian noise (AWGN) with zero mean and variance $\sigma_z^2 = N_0/2$, where $N_0 = 4KTB/R$, $K$ is the Boltzmann constant, $T$ is the temperature in Kelvin, $B$ is the electrical signal bandwidth, and $R$ is the receiver resistance. For simplicity, the subscript $l$ is omitted hereafter, and all signals are in the time domain. If lossless light conversion is assumed and only AWGN is considered, the theoretical BER of uncoded PAM-$M$ is given by [16]

$$P_e^{\mathrm{PAM}} = \frac{2(M-1)}{M \log_2 M}\, Q\!\left(\sqrt{\frac{6 \log_2 M}{M^2 - 1}\,\gamma_s}\right), \qquad (2)$$

where $P_e^{\mathrm{PAM}}$ is the PAM-$M$ bit error rate (BER), $Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-z^2/2}\, dz$, and $\gamma_s = E_b/N_0$ is the signal-to-noise ratio (SNR) per bit, with $E_b$ the energy per bit. From (2), the BER after decoding can be obtained by [17]

$$P_e = \frac{1}{n} \sum_{j=t+1}^{n} j \binom{n}{j} \left(P_e^{\mathrm{PAM}}\right)^j \left(1 - P_e^{\mathrm{PAM}}\right)^{n-j}, \qquad (3)$$

where $P_e$ is the bit error probability after decoding, $n$ is the total number of transmitted bits in each block, $t$ is the error-correcting capability of the convolutional code, and $j$ is the number of bit errors occurring in a block. The theoretical error probability from Equation (3) is plotted in Figure 2, where the message length $n$ is set to 20. As the error-correcting capability $t$ increases, the BER drops sharply with increasing SNR: greater bit correction leads to a lower BER, confirming that coding can significantly improve the system's performance.
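For reference, a minimal numerical sketch of Equations (2) and (3) is given below; the SNR range and the values of the correcting capability t are chosen only for illustration, so it reproduces the shape of the curves in Figure 2 rather than the exact published plot.

```python
# Minimal sketch of Eqs. (2)-(3): uncoded PAM-M BER over AWGN and the residual
# BER after a code that corrects up to t bit errors per n-bit block (n = 20, as
# in the paper). The SNR range and the values of t are assumptions for illustration.
from math import comb, erfc, log2, sqrt

def q_func(x):
    return 0.5 * erfc(x / sqrt(2.0))

def pam_ber(M, gamma_s):
    """Eq. (2): uncoded PAM-M bit error rate at gamma_s = Eb/N0 (linear)."""
    return 2 * (M - 1) / (M * log2(M)) * q_func(sqrt(6 * log2(M) / (M**2 - 1) * gamma_s))

def decoded_ber(p, n, t):
    """Eq. (3): bit error probability after correcting up to t errors per block."""
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1)) / n

for snr_db in range(0, 21, 2):
    p = pam_ber(4, 10 ** (snr_db / 10))
    print(snr_db, [decoded_ber(p, 20, t) for t in (1, 2, 3)])
```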

3. FPGA Implementation

In this section, the FPGA implementation of the IoUT using VLC is discussed and prototyped. The most important processes are frame synchronization and convolutional coding. At the transmitter, an FPGA Cmod S6 board (Spartan-6 XC6SLX4-2CPG196 from AMD Xilinx, Santa Clara, CA, USA) is used for the convolutional encoder. The receiver performs frame synchronization, Viterbi decoding, and PAM-4 demapping on a Nexys 4 DDR board with an Artix-7 XC7A100T chip, also from AMD Xilinx. A larger board is used at the receiver because its processing is more complex than that of the Tx.

3.1. Frame Synchronization

Synchronization is very important, as it is used to enable and reset the decoding process. Five full-scale samples are prepended to the encoded PAM-4 frame, as shown in Figure 3A. $d_{in}(n)$ is the received electrical PAM-4 input data, and $th$ is the threshold at which synchronization is detected. The synchronization algorithm sums three consecutive samples of $d_{in}(n)$ and repeats this for every new sample; if the sum exceeds $th$, the frame is detected. The synchronization rule is therefore expressed as

$$EN(n) = \left[\sum_{l=0}^{2} d_{in}(n-l)\right] > th, \qquad (4)$$

where $EN(n)$ is the enable signal. A random access memory (RAM) is used to store the input data, which is written only while $EN(n) = 1$. The RAM is then read and fed to the Viterbi decoder, as detailed in the next section.
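A minimal software model of this rule, assuming an ideal noiseless preamble and an illustrative threshold and frame length, is sketched below; in the FPGA the same comparison is evaluated sample by sample in hardware.

```python
# Minimal model of the frame-sync rule in Eq. (4): a sliding sum of the three most
# recent samples is compared against a threshold th, and the samples following the
# 5-sample preamble are taken as the frame (written to RAM in the FPGA).
# The preamble amplitude, threshold, and frame length below are assumptions.
import numpy as np

def frame_sync(d_in, th, frame_len, preamble_len=5):
    for n in range(2, len(d_in)):
        en = d_in[n] + d_in[n - 1] + d_in[n - 2] > th      # Eq. (4): enable signal
        if en:
            start = n + (preamble_len - 2)                 # skip the rest of the preamble
            return np.asarray(d_in[start:start + frame_len])
    return None                                            # no frame detected

rng = np.random.default_rng(1)
preamble = np.full(5, 3.0)                                 # five full-scale samples (Figure 3A)
payload = rng.choice([-3.0, -1.0, 1.0, 3.0], size=40)
rx = np.concatenate([np.zeros(10), preamble, payload])
frame = frame_sync(rx, th=7.0, frame_len=40)               # recovers the payload samples
```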

3.2. Convolution Code

The convolutional code is simple to implement, especially on an FPGA, which offers great flexibility [17,18], and its resource consumption is light. Research on convolutional codes is ongoing to improve their performance, and new applications are reported in [18,19]; the underlying theory can be found in [20]. In this section, only the implementation algorithm is presented, following the algorithms in [21,22]. The error-correction capability depends on the constraint length, denoted by $k$: the larger $k$ is, the more bit errors can be corrected, but the complexity also increases. As a compromise between error-correction capability and complexity, a constraint length of $k = 7$ with a coding rate of 1/2 is used.
The details of the convolutional encoder are shown in Figure 4A. The encoder uses six shift registers (SRs), each denoted by $Z^{-1}$, and the two output bits are encoded by

$$c_0(n) = d_{in}(n) \oplus d_{in}(n-2) \oplus d_{in}(n-3) \oplus d_{in}(n-5) \oplus d_{in}(n-6), \qquad (5)$$

and

$$c_1(n) = d_{in}(n) \oplus d_{in}(n-1) \oplus d_{in}(n-2) \oplus d_{in}(n-3) \oplus d_{in}(n-6), \qquad (6)$$

where $d_{in}(n)$ is the input data and $\oplus$ denotes the XOR operation.
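For clarity, a bit-level software model of the encoder in Equations (5) and (6) is sketched below; the example message is arbitrary, and the hardware in Figure 4A uses shift registers and XOR gates rather than software lists.

```python
# Minimal sketch of the rate-1/2, constraint-length-7 encoder of Eqs. (5)-(6):
# six shift registers hold the input history, and c0/c1 are XOR combinations of taps.
def conv_encode(bits):
    sr = [0] * 6                          # six shift registers, initially zero (Figure 4A)
    c0, c1 = [], []
    for b in bits:
        d = [b] + sr                      # d[0] = d_in(n), d[1] = d_in(n-1), ..., d[6] = d_in(n-6)
        c0.append(d[0] ^ d[2] ^ d[3] ^ d[5] ^ d[6])   # Eq. (5)
        c1.append(d[0] ^ d[1] ^ d[2] ^ d[3] ^ d[6])   # Eq. (6)
        sr = d[:6]                        # shift the registers by one position
    return c0, c1

msg = [1, 0, 1, 1, 0, 0, 1, 0]            # arbitrary example message
c0, c1 = conv_encode(msg)
```

The tap positions above correspond to the standard generator polynomials of the widely used constraint-length-7, rate-1/2 code.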
The decoder processing unit is shown in Figure 4B. The decoding procedure employs two quantities: the Hamming distance (HD) and the path metric (PM), where the PM is denoted by $\beta$. Both are indexed in two dimensions: the state, denoted by $s$, and the look-back time, denoted by $t$. In this case, $s = 0, \ldots, 2^{k} - 1 = 0, \ldots, 127$, and $t = 0, \ldots, 19$ since only 20 messages are sent in this work. The HD measures the distance between the received symbol, denoted by $y(s,t)$, and the reference, denoted by $r_f(s,t)$, i.e., $hd(s,t) = \sqrt{\left(y(s,t) - r_f(s,t)\right)^2}$. For practical implementation and simplicity, however, the square and square root can be omitted and only the absolute value (ABS) is needed. Therefore, $hd(s,t)$ is modified and implemented as

$$hd_m(s,t) = \mathrm{abs}\big(y(s,t) - r_f(s,t)\big), \qquad (7)$$

where $hd_m(s,t)$ is the modified HD. The absolute-value operator is very simple to implement, requiring only a two's-complement operation. Next, $hd_m(s,t)$ is accumulated along every path by

$$hd_{acm}(s,t) = hd_{acm}(s,t-1) + hd_m(s,t), \qquad (8)$$

where $hd_{acm}(s,t)$ is the accumulated modified HD of each path; there are 128 paths in total. In each state, two path candidates arrive, called the survivor candidates, and the path metric is selected by

$$\beta(\tilde{s},t) = \min\big(hd_{acm}(s,t),\, hd_{acm}(s+1,t)\big), \qquad (9)$$

where $\tilde{s} = 0, \ldots, 2^{k}/2 - 1 = 0, \ldots, 63$; see Figure 4B for more details. A dashed line is decoded as bit '1', while a solid line is decoded as bit '0'. At each time $t$, $\beta(\tilde{s},t)$ and the selected path number are saved into RAM for use by the look-back module in the following process; the RAM depth is 20 × 64 states, and only the minimum accumulated Hamming distance and the selected branch of each PM are stored. The final unit of the Viterbi decoder is the look-back process. First, the minimum modified Hamming distance at the last time instant, $t = 19$, is located in the RAM bank; its location indicates the starting state of the look-back. Each stage keeps the lower Hamming distance of its two incoming candidates and has its own specific path, so storing the next look-back state with the lowest Hamming distance into RAM is straightforward, with the RAM address acting as the indicator of the look-back time, $(t-1)$. The output read from a given address is mapped to '0' when it is less than 31; otherwise, it is decoded as '1'.
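The following sketch mirrors this decoding procedure in software, assuming a PAM-4 mapping of the coded pair (c1, c0) to the levels {−3, −1, 1, 3} (an illustrative assumption) and using the equivalent 64-survivor-state trellis of the k = 7 code; the FPGA instead stores the per-state decisions and metrics in RAM as described above.

```python
# Minimal sketch of a Viterbi decoder built from Eqs. (7)-(9): the branch metric is
# the absolute difference between the received PAM-4 sample and the reference level
# of each trellis branch, metrics are accumulated per state, the survivor is kept,
# and a look-back (traceback) recovers the message bits.
import numpy as np

K = 7                                   # constraint length
NSTATES = 2 ** (K - 1)                  # 64 survivor states
LEVELS = (-3.0, -1.0, 1.0, 3.0)         # assumed PAM-4 reference levels

def branch(state, bit):
    """Reference PAM-4 level and next state for input `bit` leaving `state`."""
    d = [bit] + [(state >> i) & 1 for i in range(K - 1)]   # d[0]=d_in(n), d[1]=d_in(n-1), ...
    c0 = d[0] ^ d[2] ^ d[3] ^ d[5] ^ d[6]                  # Eq. (5)
    c1 = d[0] ^ d[1] ^ d[2] ^ d[3] ^ d[6]                  # Eq. (6)
    return LEVELS[2 * c1 + c0], ((state << 1) | bit) & (NSTATES - 1)

def viterbi(rx):
    n = len(rx)
    pm = np.full(NSTATES, np.inf); pm[0] = 0.0             # start from the all-zero state
    dec = np.zeros((n, NSTATES), dtype=np.uint8)           # decided input bit per survivor
    prev = np.zeros((n, NSTATES), dtype=np.int32)          # predecessor state per survivor
    for t in range(n):
        new_pm = np.full(NSTATES, np.inf)
        for s in range(NSTATES):
            if not np.isfinite(pm[s]):
                continue
            for bit in (0, 1):
                ref, ns = branch(s, bit)
                m = pm[s] + abs(rx[t] - ref)               # Eqs. (7)-(8): accumulate |y - rf|
                if m < new_pm[ns]:                         # Eq. (9): keep the survivor path
                    new_pm[ns], dec[t, ns], prev[t, ns] = m, bit, s
        pm = new_pm
    s = int(np.argmin(pm))                                 # look-back from the best final state
    bits = []
    for t in range(n - 1, -1, -1):
        bits.append(int(dec[t, s])); s = int(prev[t, s])
    return bits[::-1]
```

Encoding a 20-bit message with the encoder sketch above, mapping each coded pair to LEVELS[2*c1 + c0], adding mild AWGN, and passing the samples to viterbi() recovers the original bits, mirroring the hardware chain.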

3.3. PAM-4 Generator

A PAM-4 generator converts two digital bit streams, $c_0$ and $c_1$, into a four-level pulse-amplitude-modulated (PAM-4) signal using a summing circuit. The summing circuit [22] is implemented with an operational amplifier (Op-Amp); in this work, the LM741 is used. To generate PAM-4, the FPGA output voltages must first be halved by voltage dividers: four resistors $R_1, R_2, R_3, R_4 = 100\ \Omega$, connected between the FPGA outputs and the summing Op-Amp, are chosen to be equal so that each divider reduces the FPGA output voltage by a factor of 2. The inputs to the summing circuit are applied through resistors $R_5 = 680\ \Omega$ and $R_6 = 1\ \mathrm{k}\Omega$, with a feedback resistor $R_f = 600\ \Omega$. The gain of the PAM-4 signal is adjusted manually together with the bias of the 10 W blue LED to achieve a symmetrical four-level PAM-4 waveform; adjusting the current through the LED changes the voltage drop across it, which in turn biases the Op-Amp in the summing circuit. The details of the received PAM-4 signal are discussed in Section 5.
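A behavioral sketch of this mapping is given below: the two coded bit streams are weighted roughly 2:1 and summed, producing four evenly spaced levels. The supply voltage and weights are assumptions used only to illustrate the idea; in the real circuit the spacing is set by the resistor values and LED bias described above.

```python
# Behavioral sketch of the PAM-4 mapping performed by the summing circuit in
# Figure 6: c1 carries twice the weight of c0, so the four (c1, c0) combinations
# yield four distinct, evenly spaced amplitudes. v_high and the 2:1 weighting are
# assumptions for illustration, not the measured circuit gains.
def pam4_level(c0, c1, v_high=3.3):
    return v_high * (2 * c1 + c0) / 3.0

levels = [pam4_level(c0, c1) for c1 in (0, 1) for c0 in (0, 1)]
# -> approximately 0.0, 1.1, 2.2, 3.3 V: a symmetrical four-level PAM-4 constellation
```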

4. Experimental Setup

The experimental setup is shown in Figure 5. An Arduino read the values from a pH sensor and a temperature sensor. The measured values were then fed to the transmitter FPGA, a Cmod S6 with a Spartan-6 XC6SLX4-2CPG196 from AMD-Xilinx, which encoded the data according to Equations (5) and (6). The $c_0(n)$ and $c_1(n)$ outputs of the encoder were combined to form a PAM-4 signal with a baud rate of 9600. Nonlinearity in the PAM-4 signal was avoided by tuning the gain of the LM741 op-amp, as shown in Figure 6. The electrical PAM-4 signal drove a 10 W blue LED, converting it into an optical signal. A single transistor (NSS1C201LT1G) was used to drive the LED, since the signaling speed is low; its maximum collector current is 2 A with a maximum $V_{CE}$ of 100 V DC. The light was transmitted through a freshwater tank of 50 × 200 × 70 cm. The attenuation of the water tank is about 14 dB/m at a room temperature of 30 °C, as observed in the experimental results.
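As a rough illustration of the link budget implied by this attenuation figure, the received power in dB falls linearly with distance; the transmit power used below is an assumption, not a measured value.

```python
# Rough link-budget sketch using the stated tank attenuation of about 14 dB/m:
# received power (in dBm) decreases linearly with distance. The 30 dBm transmit
# level is an assumption for illustration only.
def received_power_dbm(tx_power_dbm, distance_m, atten_db_per_m=14.0):
    return tx_power_dbm - atten_db_per_m * distance_m

for d in (1.0, 1.5, 2.0):
    print(f"{d} m -> {received_power_dbm(30.0, d):.1f} dBm")
```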
At the receiver, a convex lens was used to focus the light. The received light was converted into an electrical signal by avalanche photodetectors (APDs), model S1223-1 from Hamamatsu; four APDs were used to relax the focusing requirement and increase the collected light power. The electrical signal was amplified by an OPA-2380AIDGKT from Texas Instruments with a voltage gain of 35 dB, using $R_f = 10\ \mathrm{k}\Omega$. The amplified signal was fed to an AD9220 ADC from Analog Devices, which converted it into a digital signal at 10 Msps with 12-bit resolution. The digital signal was fed to the receiver FPGA, a Nexys 4 DDR board with an Artix-7 XC7A100T chip from AMD-Xilinx, which performed frame synchronization and convolutional decoding. The received signal amplitude was controlled in the digital domain inside the FPGA: the received signal was multiplied by a control gain that was set manually by observing the received amplitude.
The decoded signal was fed to the ESP-8266 Wi-Fi to upload the measured data to a MySQL database. The data was then displayed on the Grafana dashboard tool. The monitoring measurement process was performed in real time, and no offline processing was required.

5. Experimental Results

In this section, the experimental results are discussed in detail. The bit error rate (BER) is used as the main performance metric. To measure the BER, 20 sensor messages are transmitted in 3 s, and a total of 1,000,000 bits is counted for each test. The received bit data after FPGA decoding is fed to the Arduino, saved to a computer, and used to calculate the BER offline. The LED light beam is adjusted to be broad enough to cover the four APDs and their lens, in order to achieve flexible focusing. The tank used in this experiment is 2 m long.
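The offline BER computation itself is a direct bit-by-bit comparison, as sketched below; the randomly generated bits merely stand in for the logged data, whose file format is not specified here.

```python
# Minimal sketch of the offline BER computation: the decoded bits logged via the
# Arduino are compared bit-by-bit with the known transmitted sequence over a
# 1,000,000-bit test. The random data below only stands in for the logged bits.
import numpy as np

def measure_ber(tx_bits, rx_bits):
    tx, rx = np.asarray(tx_bits), np.asarray(rx_bits)
    errors = int(np.count_nonzero(tx != rx))
    return errors, errors / tx.size

rng = np.random.default_rng(2)
tx = rng.integers(0, 2, size=1_000_000)
rx = tx.copy()                              # error-free case, as observed at 1 m
print(measure_ber(tx, rx))                  # -> (0, 0.0)
```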
First, the received waveforms after amplification are observed while the number of APDs is varied. The results are shown in Figure 7. The amplitudes at a distance of 1 m for one, two, and four APDs are shown in Figure 7A–C: the voltage is 0.74 V (1.36 dBm) for one APD, 1.72 V (17.71 dBm) for two APDs, and 2.5 V (21.02 dBm) for four APDs. At 1.8 m, the amplitudes for one, two, and four APDs are depicted in Figure 7D–F, respectively; in this case, the voltage is reduced to only 0.13 V (−13.74 dBm) for one APD, 0.38 V (−4.42 dBm) for two APDs, and 0.49 V (−2.21 dBm) for four APDs. The waveform is very clear at 1 m for all numbers of APDs, so error-free transmission can be achieved, and the amplitude increases with the number of APDs. At the longer distance, however, some noise is introduced.
Next, the BER versus distance for various numbers of APDs is investigated, as shown in Figure 8. The BER for two APDs is lower than that for one APD because the received light power is higher, as evidenced in the figure. At 1 m, there are no errors for any number of APDs, and with four APDs the transmission is error-free at all tested distances. All results were recorded continuously for more than 3 h.
A dashboard is used to display the measurement data from the system; it is implemented with the Grafana dashboard tool, as shown in Figure 9. It shows the sensor values in real time and can be viewed from anywhere, which makes the system very flexible. The results from the dashboard confirm that the system works well and does not require continuous human monitoring. The results also suggest that the communication distance of a four-APD receiver could be extended beyond the 2 m tested here, for example to 4 m or 6 m, which would be a significant improvement over previous systems and could be very helpful in the future.

6. Conclusions

In this paper, we presented an LED-based IoUT system using VLC with PAM-4 signaling for underwater monitoring. The system employs a convolutional code with constraint length 7 to reduce the BER at the receiver; both the encoder and decoder are implemented on FPGAs. The system also includes a wireless communication module for uploading the decoded data. Real-time system performance was evaluated by experimental demonstration. Four APDs were employed to relax the focusing requirement and increase the received power, and the results show that error-free communication can be achieved. This makes the system convenient for farmers and officials who need to monitor objects underwater. The collected data can be aggregated into large datasets and later managed using AI and ML; for example, AI can be used to identify patterns in the data, and ML can be used to predict future events, allowing more efficient and effective monitoring of underwater environments. These advancements mean that real-time monitoring of underwater environments is no longer limited by human availability, so data can be collected 24/7 to provide a more comprehensive view of the underwater environment. In addition, the use of four APDs to increase the received power and relax the focusing requirement offers the reliability and data rate needed to support real-time applications in the future.

Author Contributions

Conceptualization, K.P.; software, K.P.; formal analysis, K.P. and W.W.; writing—original draft preparation, K.P. and W.W.; writing—review and editing, K.P. and W.W.; verification, K.P. and W.W.; project administration, K.P.; funding acquisition, K.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research project was supported by the Science Research and Innovation Fund (Contract No. FF66-P1-041).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank Tanatip Bubpawan, Bussakorn Bunsri, and Yaowarat Pittayong from Rajamangala University of Technology Isan, Khon Kaen campus, for preparing the experimental set.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Available online: https://education.nationalgeographic.org/resource/ocean/ (accessed on 3 June 2024).
  2. Gubbi, J.; Buyya, R.; Marusic, S.; Palaniswami, M. Internet of Things (IoT): A vision, architectural elements, and future directions. Future Gener. Comput. Syst. 2013, 29, 1645–1660. [Google Scholar] [CrossRef]
  3. Asghari, P.; Rahmani, A.M.; Javadi, H.H.S. Internet of Things applications: A systematic review. Comput. Netw. 2019, 148, 241–261. [Google Scholar] [CrossRef]
  4. Takizawa, K.; Suga, R.; Matsuda, T.; Kojima, F. Experiment on MIMO Communications in Seawater by RF Signals. In Proceedings of the 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 25–28 April 2021; pp. 1–5. [Google Scholar] [CrossRef]
  5. Kaushal, H.; Kaddoum, G. Underwater Optical Wireless Communication. IEEE Access 2016, 4, 1518–1547. [Google Scholar] [CrossRef]
  6. Al-Kinani, A.; Wang, C.-X.; Zhou, L.; Zhang, W. Optical Wireless Communication Channel Measurements and Models. IEEE Commun. Surv. Tutor. 2018, 20, 1939–1962. [Google Scholar] [CrossRef]
  7. Ali, M.F.; Jayakody, D.N.K.; Li, Y. Recent Trends in Underwater Visible Light Communication (UVLC) Systems. IEEE Access 2022, 10, 22169–22225. [Google Scholar] [CrossRef]
  8. Jahanbakht, M.; Xiang, W.; Hanzo, L.; Azghadi, M.R. Internet of Underwater Things and Big Marine Data Analytics—A Comprehensive Survey. IEEE Commun. Surv. Tutor. 2021, 23, 904–956. [Google Scholar] [CrossRef]
  9. Kong, M.; Chen, Y.; Sarwar, R.; Sun, B.; Xu, Z.; Han, J.; Chen, J.; Qin, H.; Xu, J. Underwater wireless optical communication using an arrayed transmitter/receiver and optical superimposition-based PAM-4 signal. Opt. Express 2018, 26, 3087–3097. [Google Scholar] [CrossRef] [PubMed]
  10. Nakamura, K.; Mizukoshi, I.; Hanawa, M. Optical wireless transmission of 405 nm, 1.45 Gbit/s optical IM/DD-OFDM signals through a 4.8 m underwater channel. Opt. Express 2015, 23, 1558–1566. [Google Scholar] [CrossRef] [PubMed]
  11. Xu, L.; He, J.; Zhou, Z.; Xiao, Y. Underwater optical wireless communication performance enhancement using 4D 8PAM trellis-coded modulation OFDM with DFT precoding. Appl. Opt. 2022, 61, 2483–2489. [Google Scholar] [CrossRef]
  12. Li, J.; Yang, B.; Ye, D.; Wang, L.; Fu, K.; Piao, J.; Wang, Y. A Real-Time, Full-Duplex System for Underwater Wireless Optical Communication: Hardware Structure and Optical Link Model. IEEE Access 2020, 8, 109372–109387. [Google Scholar] [CrossRef]
  13. Falk, M.; Bauch, G.; Nissen, I. On Channel Codes for Short Underwater Messages. Information 2020, 11, 58. [Google Scholar] [CrossRef]
  14. Ramavath, P.; Udupi, A.; Krishnan, P. Experimental demonstration and analysis of underwater wireless optical communication link: Design, BCH coded receiver diversity over the turbid and turbulent seawater channels. Microw. Opt. Technol. Lett. 2020, 62, 2207–2216. [Google Scholar] [CrossRef]
  15. Available online: https://grafana.com/ (accessed on 15 May 2024).
  16. Forney, G.D.; Ungerboeck, G. Modulation and coding for linear Gaussian channels. IEEE Trans. Inf. Theory 1998, 44, 2384–2415. [Google Scholar] [CrossRef]
  17. Sklar, B. Digital Communications: Fundamentals and Applications, 2nd ed.; Prentice Hall: Hoboken, NJ, USA, 2001. [Google Scholar]
  18. Reeve, J.S.; Amarasinghe, K. A FPGA implementation of a parallel Viterbi decoder for block cyclic and convolution codes. In Proceedings of the IEEE International Conference on Communications (IEEE Cat. No.04CH37577), Paris, France, 20–24 June 2004; Volume 5, pp. 2596–2599. [Google Scholar] [CrossRef]
  19. Jiang, Y. Analysis of Bit Error Rate Between BCH Code and Convolutional Code in Picture Transmission. In Proceedings of the 3rd International Conference on Electronic Communication and Artificial Intelligence (IWECAI), Zhuhai, China, 14–16 January 2022; pp. 77–80. [Google Scholar] [CrossRef]
  20. Astharini, D.; Asvial, M.; Gunawan, D. Performance of signal detection with trellis code for downlink non-orthogonal multiple access visible light communication. Photonic Netw. Commun. 2022, 43, 185–192. [Google Scholar] [CrossRef]
  21. Nassar, C. Telecommunications Demystified (Demystifying Technology Series), 1st ed.; Newnes: Boston, MA, USA, 2001. [Google Scholar]
  22. AN-31 Amplifier Circuit Collection; Application Report; Revised Version; Texas Instruments: Dallas, TX, USA, 2020.
Figure 1. System model of proposed IoUT using VLC with four APDs.
Figure 2. BER comparison of error correction capability of PAM-4 in the AWGN channel.
Figure 3. Proposed frame synchronization for underwater communication: (A) frame synchronization pattern, (B) synchronization method, and (C) synchronization algorithm.
Figure 4. Convolution code: (A) encoder algorithm and (B) state diagram for K = 7 and r = 1/2.
Figure 5. Experimental setup for the proposed IoUT system.
Figure 6. Proposed PAM-4 generator using a summing circuit.
Figure 7. The received signal for (A) 1 m and one APD, (B) 1 m and two APDs, (C) 1 m and four APDs, (D) 2 m and one APD, (E) 2 m and two APDs, and (F) 2 m and four APDs.
Figure 8. BER versus distance for IoUT with coded VLC systems where one and two APDs are considered.
Figure 9. An example of a real-time Grafana dashboard for pH and temperature sensors for underwater measurement.
Table 1. The main parameters used in the experiment.

Name                  Parameter                Value
LED                   Wavelength               460 nm
                      Forward current          12 V
Photodiode            Hamamatsu S1223          BW = 20 MHz
Amplifier             OPA-2380AIDGKT           BW = 90 MHz
Water tank            Dimensions               0.5 × 2 × 1 m
                      Length                   2 m
ADC                   AD9220                   12 bits, 10 Msps
FPGA (Tx)             Cmod S6 (Spartan-6)      -
FPGA (Rx)             Nexys 4 DDR (Artix-7)    -
Visualization tool    Grafana dashboard        -
