Article

Diffractive Deep-Neural-Network-Based Classifier for Holographic Memory

Toshihiro Sakurai, Tomoyoshi Ito and Tomoyoshi Shimobaba *
Graduate School of Engineering, Chiba University, 1-33, Yayoi-cho, Inage-ku, Chiba-shi 263-8522, Chiba, Japan
* Author to whom correspondence should be addressed.
Photonics 2024, 11(2), 145; https://doi.org/10.3390/photonics11020145
Submission received: 4 January 2024 / Revised: 25 January 2024 / Accepted: 29 January 2024 / Published: 4 February 2024
(This article belongs to the Special Issue Holographic Information Processing)

Abstract

Holographic memory offers high-capacity optical storage with rapid data readout and long-term durability. Recently, read data pages have been classified using digital deep neural networks (DNNs). This approach is highly accurate, but the prediction time hinders the data readout throughput. This study presents a diffractive DNN (D2NN)-based classifier for holographic memory. D2NNs have so far attracted a great deal of attention for object identification and image transformation at the speed of light. A D2NN, consisting of trainable diffractive layers and devoid of electronic devices, facilitates high-speed data readout. Furthermore, we numerically investigated the classification performance of a D2NN-based classifier. The classification accuracy of the D2NN was 99.7% on 4-bit symbols, exceeding that of the hard decision method.

1. Introduction

As both hot and cold data have accumulated over the years, the demand for high-capacity data storage with long-term records beyond hard disks and solid-state drives has increased enormously. Optical data storage is one such candidate; among optical approaches, holographic memories offer large capacity, fast data readout, and long-term records [1]. Holographic memories are expected to dramatically increase data storage capacity using the unique holographic properties that allow multiple data pages to be recorded in the same area and in the depth direction of the recording medium. Recently, it was demonstrated that high-density 8K video data can be recorded in holographic media [2]. Data recorded in holographic memory are represented as two-dimensional data pages, which are displayed on a spatial light modulator (SLM). Data pages are recorded on a holographic medium as holograms by interference with a reference wave. The reference wave illuminates the hologram to recover the data pages from the medium. To enhance data density, several data pages are recorded in the same area using shift multiplexing with spherical waves [3,4] and random illumination [5]. Data pages are typically represented as amplitude-only data. To reduce the consumption of holographic media [1], phase-only data have been used [6,7], and complex amplitude data pages have been proposed [8,9,10,11]. Conventional holographic memory requires a two-arm interferometer to record holograms, which in turn requires sophisticated vibration isolation equipment. Collinear holographic memory reduces vibration problems owing to its coaxial setup, in which the signal and reference waves propagate along the same axis [12,13]. Computer-generated hologram-based holographic memories can also mitigate this problem [14,15]. The readout data pages can be detected using an image sensor. There are no significant problems with amplitude data page detection; however, phase and complex data pages require sophisticated detection techniques because image sensors can only detect light intensity. Phase and complex data pages can be detected using interferometric [16] and non-interferometric methods. Non-interferometric methods include iterative Fourier transform algorithms [17,18], transport-of-intensity equations (TIEs) [11,19,20], and compressive sensing [21,22]. Iterative Fourier transform algorithms and compressive sensing can recover high-quality data pages but are computationally expensive. TIEs are a one-shot phase recovery method; however, a minimum of two diffraction patterns must be captured time-sequentially or using multiple image sensors. Recently, digital deep neural networks (DNNs) have been used as classifiers and equalizers and for the defocus correction and super-resolution of readout data pages [10,11,23,24,25,26,27,28,29,30,31,32,33]. For example, Ref. [30] demonstrated the super-resolution of readout data pages, Ref. [31] applied a DNN to self-referential holographic data storage, Ref. [32] used a DNN as an equalizer, and Ref. [33] corrected the defocus error of data pages using a DNN.
DNNs can automatically acquire the mapping relationship between input data and ground-truth data. Deep-learning-based methods exhibit high classification accuracy but require high-performance computing equipment and consume considerable power, and their prediction time hinders the data readout throughput. This paper presents a diffractive DNN (D2NN)-based classifier for holographic memory. D2NNs [34] consist of multiple optical modulation layers called diffractive layers and do not contain any electronic devices, enabling ultrahigh-speed classification with low power consumption. Furthermore, the light reproduced from holographic memory has a complex amplitude, which the D2NN can process directly; thus, the D2NN approach does not require any interferometric or non-interferometric detection methods. This study numerically investigated the classification performance of a D2NN-based classifier and compared it with that of a hard decision method. The classification accuracy of the D2NN was 99.7%, and that of the hard decision method was 95.8% for the 16 symbols.
The remainder of this paper is organized as follows. Section 2 outlines the proposed method, Section 3 details the results of the proposed method, and Section 4 concludes the study.

2. Method

Figure 1 illustrates the optical system with the D2NN. The optical system consists of a phase-only SLM, a lens, a half mirror, and a D2NN. The readout data pages are fed into the D2NN and classified as symbols. Examples of data pages are shown in Figure 2. Collinear holography [12] was used in this study. Figure 2a illustrates a data page with a circular reference pattern and a signal pattern in the center of the region. Collinear holography is robust against vibrations because the signal and reference waves propagate along the same axis. A hologram is formed by the interference of the two waves on a holographic medium such as a photorefractive polymer. The hologram formation can be expressed as $H = |P_z S + R|^2$, where $P_z$ denotes the diffraction operator for a propagation distance $z$, and $S$ and $R$ indicate the signal and reference waves, respectively. In the readout, only the reference pattern is displayed on the SLM, and the reference wave illuminates the medium. The readout can be expressed as $u_1 = P_{z_1} H$, where $u_1$ denotes the complex amplitude on the input layer of the D2NN at a distance $z_1$ from the recording medium. Examples of the intensity and phase of the readout wave are shown in Figure 2b,c, respectively. In this study, each on and off cell comprised $10 \times 10$ SLM pixels. The original data pages have phase-only distributions: the on and off cells are assigned phases of $\pi$ and zero radians, respectively. All areas are phase-only modulated and therefore uniform in amplitude; that is, the white areas of the signal pattern and the circular reference pattern have $\pi$-radian phase modulation, and the other black areas have zero-radian phase modulation. Such phase-limited collinear holographic memory has been used in, e.g., [35].
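As a concrete illustration of this recording and readout model, the following Python sketch simulates $H = |P_z S + R|^2$ and $u_1 = P_{z_1} H$ with an angular spectrum diffraction operator. This is a minimal sketch, not the authors' code: the lens is omitted, the ring radii and propagation distances are illustrative assumptions, and the reference illumination is written out explicitly, whereas the paper's $u_1 = P_{z_1} H$ notation leaves it implicit.

```python
import numpy as np

WAVELENGTH, PITCH = 410e-9, 2e-6   # values from Section 3

def propagate(u, z):
    """Angular spectrum propagation of field u over distance z."""
    ny, nx = u.shape
    fx = np.fft.fftfreq(nx, d=PITCH)
    fy = np.fft.fftfreq(ny, d=PITCH)
    FX, FY = np.meshgrid(fx, fy)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / WAVELENGTH**2 - FX**2 - FY**2, 0))
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * z))

# Build a phase-only data page: a circular reference ring plus a central
# 500x500-pixel signal area of 10x10-pixel on/off cells (on = pi, off = 0).
N = 2048
yy, xx = np.mgrid[:N, :N]
r = np.hypot(yy - N / 2, xx - N / 2)
ring = (r > 700) & (r < 900)                     # ring radii are assumptions
bits = np.random.default_rng(0).integers(0, 2, (50, 50))
signal = np.zeros((N, N))
signal[774:1274, 774:1274] = np.kron(bits, np.ones((10, 10)))

S = np.exp(1j * np.pi * signal)   # signal wave from the SLM
R = np.exp(1j * np.pi * ring)     # reference wave from the circular pattern

z, z1 = 10e-3, 10e-3              # illustrative distances
Rm = propagate(R, z)                               # reference at the medium plane
H = np.abs(propagate(S, z) + Rm) ** 2              # hologram H = |P_z S + R|^2
u1 = propagate(Rm * H, z1)                         # readout field at the D2NN input
```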
We directly coded four bits of data into one symbol using 2 × 2 cells. The D2NN can classify 16 symbols by focusing the input light waves on one of the 16 target regions. Focused light can be detected using high-speed photodetectors. The D2NN comprises N phase-only diffractive layers, and the input light is modulated each time it passes through the diffractive layers, finally forming the desired light. Materials that can be used for diffractive layers include photopolymers and glasses processed by laser lithography.
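To make the symbol coding concrete, here is a small hypothetical helper (the bit-to-cell ordering is an assumption; the paper does not specify it) that maps a 4-bit symbol onto a $2 \times 2$ block of phase-only cells, each $10 \times 10$ SLM pixels:

```python
import numpy as np

def symbol_to_phase(sym: int, cell_px: int = 10) -> np.ndarray:
    """Map a 4-bit symbol (0-15) onto a 2x2 block of on/off cells,
    expanded to cell_px x cell_px pixels per cell; on = pi rad, off = 0 rad."""
    bits = [(sym >> k) & 1 for k in range(4)]          # LSB-first (assumed order)
    block = np.array(bits, dtype=float).reshape(2, 2)  # 2x2 on/off cells
    return np.pi * np.kron(block, np.ones((cell_px, cell_px)))

patch = symbol_to_phase(13)   # 20x20-pixel phase patch for symbol 13 (binary 1101)
```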
The forward process of the D2NN can be expressed as
$$u = \prod_{i=1}^{N} P_{z_i} \left[ w_i u_i \right],$$
where $u$ denotes the output complex amplitude of the D2NN, $w_i$ is the $i$-th trainable diffractive layer, and $u_i$ is the source field incident on each diffractive layer. The final layer of the D2NN concentrates the light into a targeted region. It is worth noting that the D2NN can handle complex data pages directly from holographic memory without using interferometric or non-interferometric techniques. For training the D2NN, the ground-truth data are prepared as shown in Figure 3. Figure 3a illustrates the assignment of the 16 symbols to each label region, and Figure 3b shows an example of the ground-truth image for symbol 13. Figure 3c depicts an example of the output intensity from the D2NN, focusing light on region 13. The mean squared error was used for the loss function:
$$L = \left\| \, |u|^2 - G \, \right\|_2,$$
where $G$ indicates the ground-truth image (label image), and $\| \cdot \|_2$ denotes the $\ell_2$ norm.
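A minimal PyTorch sketch of this forward model and loss follows, under the assumption of two phase-only layers and angular spectrum propagation. It is an illustration, not the authors' implementation: layer initialization is simplified, the input field is taken at the first layer, and the loss averages the squared error, matching the paper's MSE description up to normalization.

```python
import math
import torch

class D2NN(torch.nn.Module):
    def __init__(self, n_layers=2, size=256, distances=(10e-3, 10e-3),
                 wavelength=410e-9, pitch=2e-6):
        super().__init__()
        # Trainable phase phi_i of each diffractive layer, w_i = exp(j * phi_i).
        self.phases = torch.nn.ParameterList(
            [torch.nn.Parameter(torch.zeros(size, size)) for _ in range(n_layers)])
        fx = torch.fft.fftfreq(size, d=pitch)
        FX, FY = torch.meshgrid(fx, fx, indexing="xy")
        kz = 2 * math.pi * torch.sqrt(
            (1 / wavelength**2 - FX**2 - FY**2).clamp(min=0.0))
        # Precomputed angular spectrum transfer functions for each distance z_i.
        self.transfer = [torch.exp(1j * kz * z) for z in distances]

    def forward(self, u):  # u: complex field at the first layer (size x size)
        for phi, H in zip(self.phases, self.transfer):
            u = u * torch.exp(1j * phi)                 # modulation by w_i
            u = torch.fft.ifft2(torch.fft.fft2(u) * H)  # propagation P_{z_i}
        return u

def loss_fn(u, G):
    # Squared error between output intensity |u|^2 and label image G (averaged).
    return torch.mean((u.abs() ** 2 - G) ** 2)
```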
To train the D2NN, the gradient of $L$ is calculated, and the gradient-based backpropagation algorithm is applied to optimize the trainable diffractive layers $w_i$ by feeding in datasets composed of the D2NN outputs and ground-truth images. The distances between the diffractive layers significantly affect the classification accuracy. Generally, equal spacing of the diffractive layers is used; however, this study employed the tree-structured Parzen estimator (TPE), a hyperparameter tuning method, to optimize the distances. We have already utilized the TPE technique to classify orbital angular momentum beams via a D2NN [36]. TPE optimization is also effective for holographic memory, as illustrated in the next section.
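The distance optimization could be driven, for example, with the TPE sampler in Optuna; the paper names only the TPE algorithm, so the library choice, the search ranges, and the surrogate objective below (a toy stand-in for a full training run, with its optimum placed at the distances reported in Section 3) are all assumptions.

```python
import optuna

def train_and_validate(z1, z2):
    # Placeholder for a full D2NN training run that returns validation
    # accuracy; here, a toy surrogate peaked at the reported distances.
    return -((z1 - 12.4e-3) ** 2 + (z2 - 11.3e-3) ** 2)

def objective(trial):
    z1 = trial.suggest_float("z1", 5e-3, 20e-3)  # input plane -> layer 1
    z2 = trial.suggest_float("z2", 5e-3, 20e-3)  # layer 1 -> layer 2
    return train_and_validate(z1, z2)

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)
```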

3. Results

The data pages had 2048 × 2048 pixels, and the signal area had 500 × 500 pixels. The wavelength was 410 nm, and the pixel pitch was 2 μm. The resolution of the D2NN was 256 × 256 pixels. Fifty data pages were prepared for the analysis. The entire dataset consisted of 31,250 symbol images (625 symbols/data page × 50 data pages), divided into 28,125 training and 3125 validation images. D2NNs with two and three diffractive layers were compared, and no significant difference was found; therefore, two diffractive layers were used, as more layers would make development more difficult. Figure 4 depicts the structure of the D2NN and the training and validation curves for the accuracy and loss value over the epochs. In Figure 4a, the distance between the diffractive layers is 10 mm. The two curves in Figure 4c show steadily decreasing loss values with no sign of overfitting. However, the classification accuracy was 96.8%, which is relatively low.
Figure 5a shows the confusion matrix obtained using the label data shown in Figure 3b. Figure 6 shows examples of label images 1, 2, 4, and 8 and 7, 11, 13, and 14. The confusion matrix shows that label 0 had a low classification accuracy, and label 15 had an extremely low classification accuracy, with zero correct predictions. These low accuracies indicate that the D2NN was unable to focus light on the corresponding target areas. The errors occurred between labels whose Hamming distance is one: label 0 was likely to be misclassified as label 1, 2, 4, or 8, and label 15 as label 7, 11, 13, or 14. Therefore, we modified the label images to guide light to each correct area by adding 50%-brightness regions for labels 0 and 15, as shown in Figure 7: in the label images of the symbols at Hamming distance one from labels 0 and 15, the regions of labels 0 and 15 were supplemented with 50% brightness so that the D2NN could learn to focus light correctly. With the modified label images, the misclassification of labels 0 and 15 decreased significantly, as shown in Figure 5b, and the classification accuracy improved to 99.0%.
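For illustration, the label modification can be sketched as follows; the $4 \times 4$ grid of uniform target regions is an assumption standing in for the actual layout in Figure 3a.

```python
import numpy as np

def region_mask(label: int, size: int = 256, grid: int = 4) -> np.ndarray:
    """Binary mask of the target region for `label` on an assumed grid layout."""
    cell = size // grid
    r, c = divmod(label, grid)
    m = np.zeros((size, size))
    m[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = 1.0
    return m

def modified_label_image(label: int) -> np.ndarray:
    """Add a 50%-brightness region for label 0 (or 15) to the label images
    of the symbols at Hamming distance one from it, as in Figure 7."""
    G = region_mask(label)
    if label in (1, 2, 4, 8):        # Hamming distance 1 from label 0
        G += 0.5 * region_mask(0)
    elif label in (7, 11, 13, 14):   # Hamming distance 1 from label 15
        G += 0.5 * region_mask(15)
    return G
```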
To further improve the classification accuracy, TPE optimization was conducted for the distances between the layers. After optimization, as shown in Figure 8a, the distance from the input plane to the first diffractive layer was 12.4 mm, and the distance between the first and second diffractive layers was 11.3 mm. Figure 8b,c show the training and validation curves for the accuracy and loss value. The two curves in Figure 8c show steadily decreasing loss values with no sign of overfitting. The classification accuracy on the validation data was 99.7%. A hard decision method was used for comparison. The hard decision method uses only intensity images, as in Figure 2b, because an image sensor cannot directly detect phase images, as in Figure 2c. Each on and off cell in the detected data pages is classified by
$$\begin{cases} 1 & \left( \sum_{(x,y) \in \Omega} c(x,y) > T \right) \\ 0 & (\text{otherwise}), \end{cases}$$
where $\Omega$ indicates the region of each cell, $c(x,y)$ is the detected intensity at pixel $(x,y)$, and $T$ indicates a threshold value that is optimized in advance to obtain the best classification accuracy. The classification accuracy of the hard decision method was 95.8%. The D2NN classifier thus outperformed the hard decision method.
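A sketch of this baseline follows, under the assumption that each cell's intensity is summed over its region; the threshold value shown is illustrative.

```python
import numpy as np

def hard_decision(intensity: np.ndarray, cell_px: int = 10,
                  T: float = 50.0) -> np.ndarray:
    """Classify each cell as 1 if its summed intensity over Omega exceeds T."""
    ny, nx = intensity.shape
    cells = intensity.reshape(ny // cell_px, cell_px, nx // cell_px, cell_px)
    sums = cells.sum(axis=(1, 3))        # sum of c(x, y) over each cell region
    return (sums > T).astype(np.uint8)   # 1 for "on", 0 for "off"
```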
Here, the classification rates and bit error rates reported by other studies are given for reference. For example, Ref. [23] obtained a classification accuracy of 99.98%, Ref. [25] decreased the bit error rate by about 75% compared to a hard decision method, Ref. [27] achieved a bit error rate of 0.32%, and Ref. [26] presented a bit error rate of $1.61 \times 10^{-3}$. It is difficult to compare these studies with the proposed D2NN classifier because the experimental conditions differ among them. However, the classification rate of the proposed D2NN classifier is acceptable compared to these results.

4. Conclusions

In this study, we proposed a D2NN-based classifier for holographic memory. A D2NN is an all-optical neural network that can classify data pages at the speed of light; the classification speed is constrained only by the capabilities of the detectors. With the recent practical use of photodiodes in the GHz band, gigabit readout speeds can now be achieved. Labels 0 and 15 were difficult to classify, and modifying the label images allowed light to be focused on the correct target regions. In addition, the distances between the diffractive layers significantly affected the classification accuracy; optimizing them with the TPE technique yielded a final classification accuracy of 99.7%. In future work, we aim to investigate complex modulated data pages in a D2NN classifier to obtain a higher data density and accelerate the readout speed. The current D2NN only classified symbols with 2 × 2 cells, which resulted in a slow readout speed. There are two ways to increase the data read speed: one is to increase the seek speed using a single D2NN, and the other is to parallelize the D2NNs. We will develop a parallel D2NN classifier. In addition, D2NNs can be used as equalizers and for the defocus error correction of data pages; we will investigate these applications.

Author Contributions

Conceptualization, T.S. (Tomoyoshi Shimobaba); methodology, T.S. (Toshihiro Sakurai); validation, T.S. (Toshihiro Sakurai) and T.S. (Tomoyoshi Shimobaba); formal analysis, T.S. (Toshihiro Sakurai) and T.S. (Tomoyoshi Shimobaba); investigation, T.S. (Toshihiro Sakurai) and T.S. (Tomoyoshi Shimobaba); resources, T.S. (Tomoyoshi Shimobaba) and T.I.; data curation, T.S. (Toshihiro Sakurai) and T.S. (Tomoyoshi Shimobaba); writing—original draft preparation, T.S. (Tomoyoshi Shimobaba) and T.S. (Toshihiro Sakurai); writing—review and editing, T.S. (Tomoyoshi Shimobaba), T.S. (Toshihiro Sakurai), and T.I.; visualization, T.S. (Tomoyoshi Shimobaba) and T.S. (Toshihiro Sakurai); supervision, T.S. (Tomoyoshi Shimobaba); project administration, T.S. (Tomoyoshi Shimobaba); funding acquisition, T.S. (Tomoyoshi Shimobaba) and T.I. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by JSPS KAKENHI, grant numbers 22H03607 and 19H01097, and the IAAR Research Support Program, Chiba University, Japan.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, T. Shimobaba, upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Coufal, H.J.; Psaltis, D.; Sincerbox, G.T. Holographic Data Storage; Springer: Berlin/Heidelberg, Germany, 2000; Volume 8. [Google Scholar]
  2. Katano, Y.; Muroi, T.; Kinoshita, N.; Ishii, N. Prototype holographic data storage drive with wavefront compensation for playback of 8K video data. IEEE Trans. Consum. Electron. 2017, 63, 243–250. [Google Scholar] [CrossRef]
  3. Barbastathis, G.; Levene, M.; Psaltis, D. Shift multiplexing with spherical reference waves. Appl. Opt. 1996, 35, 2403–2417. [Google Scholar] [CrossRef]
  4. Yoshida, S.; Kurata, H.; Ozawa, S.; Okubo, K.; Horiuchi, S.; Ushiyama, Z.; Yamamoto, M.; Koga, S.; Tanaka, A. High-density holographic data storage using three-dimensional shift multiplexing with spherical reference wave. Jpn. J. Appl. Phys. 2013, 52, 09LD07. [Google Scholar] [CrossRef]
  5. Tsunoda, Y.; Takeda, Y. High density image-storage holograms by a random phase sampling method. Appl. Opt. 1974, 13, 2046–2051. [Google Scholar] [CrossRef]
  6. John, R.; Joseph, J.; Singh, K. Holographic digital data storage using phase-modulated pixels. Opt. Lasers Eng. 2005, 43, 183–194. [Google Scholar] [CrossRef]
  7. Saita, Y.; Nomura, T. Design method of input phase mask to improve light use efficiency and reconstructed image quality for holographic memory. Appl. Opt. 2014, 53, 4136–4140. [Google Scholar] [CrossRef]
  8. Nobukawa, T.; Nomura, T. Linear phase encoding for holographic data storage with a single phase-only spatial light modulator. Appl. Opt. 2016, 55, 2565–2573. [Google Scholar] [CrossRef]
  9. Honma, S.; Funakoshi, H. A two-step exposure method with interleaved phase pages for recording of SQAM signal in holographic memory. Jpn. J. Appl. Phys. 2019, 58, SKKD05. [Google Scholar] [CrossRef]
  10. Hao, J.; Lin, X.; Lin, Y.; Chen, M.; Chen, R.; Situ, G.; Horimai, H.; Tan, X. Lensless complex amplitude demodulation based on deep learning in holographic data storage. Opto-Electron. Adv. 2023, 6, 220157. [Google Scholar] [CrossRef]
  11. Bunsen, M.; Miwa, T. Accurate decoding of data pages in an amplitude-and phase-modulated signal beam detected by the single-shot transport of intensity equation method with convolutional neural network-based classifiers. Opt. Contin. 2023, 2, 1849–1866. [Google Scholar] [CrossRef]
  12. Horimai, H.; Tan, X.; Li, J. Collinear holography. Appl. Opt. 2005, 44, 2575–2579. [Google Scholar] [CrossRef]
  13. Shibukawa, A.; Okamoto, A.; Tomita, A.; Takabayashi, M.; Sato, K. Multilayer collinear holographic memory with movable random phase mask. Jpn. J. Appl. Phys. 2011, 50, 09ME10. [Google Scholar] [CrossRef]
  14. Nobukawa, T.; Nomura, T. Shift multiplexing with a spherical wave in holographic data storage based on a computer-generated hologram. Appl. Opt. 2017, 56, F31–F36. [Google Scholar] [CrossRef]
  15. Yoneda, N.; Nobukawa, T.; Morimoto, T.; Saita, Y.; Nomura, T. Common-path angular-multiplexing holographic data storage based on computer-generated holography. Opt. Lett. 2021, 46, 2920–2923. [Google Scholar] [CrossRef]
  16. Okamoto, A.; Kunori, K.; Takabayashi, M.; Tomita, A.; Sato, K. Holographic diversity interferometry for optical storage. Opt. Express 2011, 19, 13436–13444. [Google Scholar] [CrossRef]
  17. Chen, R.; Hao, J.; Yu, C.; Zheng, Q.; Qiu, X.; Wang, S.; Chen, Y.; Wang, K.; Lin, D.; Yang, Y.; et al. Dynamic sampling iterative phase retrieval for holographic data storage. Opt. Express 2021, 29, 6726–6736. [Google Scholar] [CrossRef]
  18. Chen, R.; Hao, J.; Wang, J.; Lin, Y.; Wang, K.; Lin, D.; Lin, X.; Tan, X. Phase retrieval in holographic data storage by expanded spectrum combined with dynamic sampling method. Sci. Rep. 2023, 13, 18912. [Google Scholar] [CrossRef]
  19. Yoneda, N.; Saita, Y.; Komuro, K.; Nobukawa, T.; Nomura, T. Transport-of-intensity holographic data storage based on a computer-generated hologram. Appl. Opt. 2018, 57, 8836–8840. [Google Scholar] [CrossRef]
  20. Bunsen, M.; Tateyama, S. Detection method for the complex amplitude of a signal beam with intensity and phase modulation using the transport of intensity equation for holographic data storage. Opt. Express 2019, 27, 24029–24042. [Google Scholar] [CrossRef]
  21. Yoneda, N.; Saita, Y.; Nomura, T. Holographic data storage based on compressive sensing. In Proceedings of the International Workshop on Holograhy and Related Technologies 2019 (IWH2019), Toyama, Japan, 5–7 November 2019; pp. 6–a–8. [Google Scholar]
  22. Liu, J.; Zhang, L.; Wu, A.; Tanaka, Y.; Shigaki, M.; Shimura, T.; Lin, X.; Tan, X. High noise margin decoding of holographic data page based on compressed sensing. Opt. Express 2020, 28, 7139–7151. [Google Scholar] [CrossRef]
  23. Shimobaba, T.; Kuwata, N.; Homma, M.; Takahashi, T.; Nagahama, Y.; Sano, M.; Hasegawa, S.; Hirayama, R.; Kakue, T.; Shiraki, A.; et al. Convolutional neural network-based data page classification for holographic memory. Appl. Opt. 2017, 56, 7327–7330. [Google Scholar] [CrossRef]
  24. Shimobaba, T.; Endo, Y.; Hirayama, R.; Nagahama, Y.; Takahashi, T.; Nishitsuji, T.; Kakue, T.; Shiraki, A.; Takada, N.; Masuda, N.; et al. Autoencoder-based holographic image restoration. Appl. Opt. 2017, 56, F27–F30. [Google Scholar] [CrossRef]
  25. Katano, Y.; Muroi, T.; Kinoshita, N.; Ishii, N.; Hayashi, N. Data demodulation using convolutional neural networks for holographic data storage. Jpn. J. Appl. Phys. 2018, 57, 09SC01. [Google Scholar] [CrossRef]
  26. Katano, Y.; Nobukawa, T.; Muroi, T.; Kinoshita, N.; Ishii, N. CNN-based demodulation for a complex amplitude modulation code in holographic data storage. Opt. Rev. 2021, 28, 662–672. [Google Scholar] [CrossRef]
  27. Hao, J.; Lin, X.; Lin, Y.; Song, H.; Chen, R.; Chen, M.; Wang, K.; Tan, X. Lensless phase retrieval based on deep learning used in holographic data storage. Opt. Lett. 2021, 46, 4168–4171. [Google Scholar] [CrossRef]
  28. Katano, Y.; Nobukawa, T.; Muroi, T.; Kinoshita, N.; Ishii, N. Efficient decoding method for holographic data storage combining convolutional neural network and spatially coupled low-density parity-check code. ITE Trans. Media Technol. Appl. 2021, 9, 161–168. [Google Scholar] [CrossRef]
  29. Kurokawa, S.; Yoshida, S. Demodulation scheme for constant-weight codes using convolutional neural network in holographic data storage. Opt. Rev. 2022, 29, 375–381. [Google Scholar] [CrossRef]
  30. Hao, J.; Lin, X.; Fujimura, R.; Hirayama, S.; Tanaka, Y.; Tan, X.; Shimura, T. Deep learning-based super-resolution holographic data storage. In Proceedings of the Optical Manipulation and Structured Materials Conference, Online, 18 October 2023; Volume 12606, pp. 118–121. [Google Scholar]
  31. Chijiwa, K.; Takabayashi, M. Deep learning-based design of additional patterns in self-referential holographic data storage. Opt. Rev. 2023, 1–13. [Google Scholar]
  32. Nguyen, T.A.; Lee, J. A Nonlinear Convolutional Neural Network-Based Equalizer for Holographic Data Storage Systems. Appl. Sci. 2023, 13, 13029. [Google Scholar] [CrossRef]
  33. Lin, Y.; Hao, J.; Ke, S.; Song, H.; Liu, H.; Lin, X.; Tan, X. Objective defocusing correction of collinear amplitude-modulated holographic data storage system based on deep learning. In Proceedings of the Optical Manipulation and Structured Materials Conference, Online, 18 October 2023; Volume 12606, pp. 3–8. [Google Scholar]
  34. Lin, X.; Rivenson, Y.; Yardimci, N.T.; Veli, M.; Luo, Y.; Jarrahi, M.; Ozcan, A. All-optical machine learning using diffractive deep neural networks. Science 2018, 361, 1004–1008. [Google Scholar] [CrossRef]
  35. Hao, J.; Ren, Y.; Zhang, Y.; Wang, K.; Li, H.; Tan, X.; Lin, X. Non-interferometric phase retrieval for collinear phase-modulated holographic data storage. Opt. Rev. 2020, 27, 419–426. [Google Scholar] [CrossRef]
  36. Watanabe, S.; Shimobaba, T.; Kakue, T.; Ito, T. Hyperparameter tuning of optical neural network classifiers for high-order Gaussian beams. Opt. Express 2022, 30, 11079–11089. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Holographic memory with a D2NN classifier.
Figure 2. Examples of (a) a data page with a reference pattern, (b) a reconstructed intensity data page, and (c) a reconstructed phase data page.
Figure 3. Assignment of 16 symbol areas (a), label image showing the 13th label as an example (b), and D2NN output intensity (c).
Figure 4. Structure of the D2NN with equal distances for the diffractive layers (a), and training and validation curves for the accuracy (b) and loss value (c).
Figure 5. Confusion matrices: (a) before modifying the label images, (b) after modifying the label images.
Figure 6. Examples of label images. The upper row corresponds to labels 1, 2, 4, and 8. The lower row corresponds to labels 7, 11, 13, and 14. The labels in the upper and lower rows are at a Hamming distance of one from labels 0 and 15, respectively.
Figure 7. Modified label images. The upper row corresponds to labels 1, 2, 4, and 8. The label 0 area was supplemented with 50% brightness. The bottom row corresponds to labels 7, 11, 13, and 14. The label 15 area was supplemented with 50% brightness.
Figure 8. Structure of the D2NN with TPE-optimized distance for the diffractive layers (a), and training and validation curves for the accuracy (b) and loss value (c).
