Sensors
  • Article
  • Open Access

4 July 2023

Three-Dimensional Image Transmission of Integral Imaging through Wireless MIMO Channel

School of ICT, Robotics and Mechanical Engineering, Institute of Information and Telecommunication Convergence (IITC), Hankyong National University, 327 Chungang-ro, Anseong 17579, Kyonggi-do, Republic of Korea
Author to whom correspondence should be addressed.
This article belongs to the Section Sensing and Imaging

Abstract

For the reconstruction of high-resolution 3D digital content in integral imaging, an efficient wireless 3D image transmission system is required to convey a large number of elemental images without a communication bottleneck. To support a high transmission rate, we herein propose a novel wireless three-dimensional (3D) image transmission and reception strategy based on the multiple-input multiple-output (MIMO) technique. By exploiting the spatial multiplexing capability, multiple elemental images are transmitted simultaneously through the wireless MIMO channel and recovered with a linear receiver such as the matched filter, zero-forcing, or minimum mean squared error combiner. Using the recovered elemental images, a 3D image can be reconstructed by volumetric computational reconstruction (VCR) with non-uniform shifting pixels. Although the received elemental images are corrupted by the wireless channel and inter-stream interference, the averaging effect of the VCR can improve the visual quality of the reconstructed 3D images. The numerical results validate that the proposed system can achieve excellent 3D reconstruction performance in terms of visual quality and peak sidelobe ratio, even though a large number of elemental images are transmitted simultaneously over the wireless MIMO channel.

1. Introduction

Recently, the transmission of three-dimensional (3D) digital content has become important in various applications such as virtual reality (VR), augmented reality (AR), 3D TV, the metaverse, and so on. Such 3D digital content can be generated by various methods such as stereoscopy, integral imaging, holography, etc. In particular, integral imaging, which was first proposed by G. Lippmann [], has been studied by many researchers. It can capture 3D images using a lenslet array or camera array, and it can display 3D images without a coherent light source, such as a laser, or a special viewing device. In addition, it passively provides 3D images with full parallax and continuous viewing points. However, it has several drawbacks: low viewing resolution, narrow viewing angle, and shallow depth of field.
Many studies have been reported to overcome the drawbacks of integral imaging [,,,,,,,,,,,]. To improve the viewing resolution of 3D images, the moving array lenslet technique (MALT) [], which exploits the afterimage effect of human eyes, was proposed. However, it may not record a dynamic 3D scene because of the mechanical movement of the lenslet array. Synthetic aperture integral imaging (SAII) [], which uses a camera array, can improve the viewing resolution of 3D images since each elemental image has the same resolution as the image sensor. In [], the SAII is further merged with an axially distributed sensing method to improve the lateral and longitudinal resolutions of 3D objects. However, it is not cost-effective, and there is a synchronization problem between the cameras. The viewing angle of 3D images in integral imaging is determined by the focal length of the lenslet and the pitch between the elemental images. To enhance the viewing angle, integral imaging with a low fill factor, which can increase the pitch between cameras, was proposed []. However, the intensity of the 3D images is reduced due to the low fill factor of the elemental images. The depth of field of 3D images in integral imaging depends on the characteristics of the lenslet, such as its focal length, the distance between the lenslet and the image sensor, and the diffraction limit. To solve this problem, depth-priority integral imaging was proposed []. It can provide the plane wave of each pixel by setting the distance between the lenslet and the image sensor to the focal length of the lenslet. However, the viewing resolution of 3D images is degraded since the spot size of 3D images in this method is the same as the lenslet size.
Volumetric computational reconstruction (VCR) [,] of integral imaging may overcome these drawbacks. Using high-resolution elemental images recorded by the SAII, the VCR can reconstruct 3D images at the desired reconstruction depth by back-projecting the elemental images through a virtual pinhole array, overlapping them with each other, and averaging them. Thus, it can provide 3D slice images at various reconstruction depths and suppress the noise of 3D images through its averaging effect. By introducing an adjustable parameter to generate a group of 3D images, a four-dimensional image structure can be generated by the computational reconstruction method in []. However, all of the aforementioned studies on integral imaging may suffer from a space limitation, because the pickup and reconstruction processes must be performed in the same place and cannot be carried out separately. In fact, wireless 3D image transmission is readily implementable by performing the SAII and the VCR at the transmitter and receiver, respectively. Nonetheless, a sophisticated system design is required to transmit a large number of high-resolution 2D elemental images through the wireless channel and to display a 3D image with high reconstruction performance while overcoming wireless impairments [,,,,].
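The back-project-and-average idea behind the VCR can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact method: the helper name `vcr_reconstruct` is hypothetical, and a single toy per-index pixel shift `shift` stands in for the depth-dependent non-uniform shifts used in the paper.

```python
import numpy as np

def vcr_reconstruct(elemental_images, shift):
    """Toy VCR sketch: back-project, overlap, and average elemental images.

    `elemental_images` maps a sensor index (i, j) to a 2D array; `shift` is
    a stand-in for the depth-dependent pixel shift per sensor index.
    """
    h, w = next(iter(elemental_images.values())).shape
    accum = np.zeros((h, w))
    count = np.zeros((h, w))
    for (i, j), img in elemental_images.items():
        # shift each elemental image according to its position in the array
        dy, dx = int(round(i * shift)), int(round(j * shift))
        accum[dy:, dx:] += img[:h - dy, :w - dx]
        count[dy:, dx:] += 1
    # averaging the overlapped copies suppresses uncorrelated noise
    return accum / np.maximum(count, 1)
```

The averaging step is what later lets the system tolerate corrupted elemental images: pixel errors that are statistically uncorrelated across sensors tend to cancel out.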
To this end, in this paper, we propose a novel 3D image wireless transmission system with the aid of the multiple-input multiple-output (MIMO) technique, which enables us to achieve a high transmission rate in wireless communication. In the proposed wireless 3D image transmission system, both the transmitter and receiver are equipped with multiple antennas to exploit the spatial multiplexing capability of a MIMO channel [,]. At the transmitter, individual 2D elemental images, picked up by multiple sensors, are converted to distinct data streams, and transmitted concurrently through the corresponding antennas. For the successful 3D image reconstruction, the receiver needs to recover multiple elemental images by overcoming the inter-stream interference caused by the simultaneous transmission. Accordingly, we develop a simple receiver architecture by applying various linear combiners, such as matched filter (MF), zero forcing (ZF), and minimum mean squared error (MMSE) []. Finally, using the recovered elemental images, the 3D image is reconstructed by applying the VCR with non-uniform shifting pixels. Based on the optical experiments, we verify that the proposed system is feasible for reconstructing 3D images and achieving sufficient peak sidelobe ratio (PSR) performance at the desired reconstruction depth. Furthermore, we find that the application of the MF combiner can achieve notable 3D reconstruction performance with practical computational complexity with the aid of the averaging effect of the VCR, despite severely poor wireless communication performance.
This paper is organized as follows. In Section 2, we briefly present the related work. Then, we describe the transmission of 3D integral imaging over a wireless MIMO channel in Section 3. To validate the proposed system, we show the experimental results in Section 4. Finally, we conclude with a summary in Section 5.
Notation: For a vector, the superscript T denotes the transpose operation. For a matrix, the superscripts H and −1 represent the complex conjugate transpose and the inverse operations, respectively. For a random variable, E[·] stands for the expectation, i.e., the statistical mean. The identity matrix of size N is denoted by I_N.

3. Wireless 3D Image Transmission System over MIMO Channel

3.1. Wireless 3D Image Transmission System

In Figure 5, we illustrate the proposed wireless 3D transmission system, in which 2D elemental images, captured by M sensors, are simultaneously transmitted to the receiver for 3D image reconstruction. For the transmission and reception of the elemental images, the transmitter and receiver are equipped with M and N antennas, respectively, where N ≥ M. Let H ∈ C^{N×M} be a Rayleigh fading channel matrix, in which each entry follows a complex Gaussian distribution with zero mean and unit variance. The fading channel is assumed to be static during a 3D image transmission, while the channel varies independently after a complete transmission. Moreover, we assume that the receiver perfectly obtains channel state information (CSI) by applying proper estimation techniques.
Figure 5. Wireless 3D transmission system, in which 2D elemental images are transmitted and received over an N × M MIMO channel.
At the transmitter, M elemental images are picked up by the multiple sensors and transmitted through the corresponding antennas. For m ∈ {1, …, M}, the mth image is converted to a binary sequence. Let Q be the constellation set of a quadrature amplitude modulation (QAM), where |Q| = Q. By applying the QAM, every log_2 Q bits in the binary sequence are mapped to a complex symbol. For each time instance, let x_m be a modulated symbol of the mth elemental image, where the average symbol energy is set to 1/M for equal power allocation over all antennas (i.e., E[|x_m|^2] = 1/M for all m). The transmitted vector is defined as x = [x_1, …, x_M]^T ∈ C^M, in which the component x_m is transmitted through the mth transmit antenna.
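The pixel-to-symbol mapping and the 1/M power normalization can be sketched as follows. The function name `pixels_to_qam` and the square-grid bit-to-point mapping are illustrative assumptions, not the paper's exact labeling; only the energy constraint E[|x_m|^2] = 1/M follows the text.

```python
import numpy as np

def pixels_to_qam(pixels, M, Q=256):
    """Map 8-bit pixel values to square 256-QAM symbols (sketch).

    Each pixel (log2(Q) = 8 bits) indexes one point on a 16 x 16 grid;
    the constellation is scaled so the average symbol energy over the
    full grid is 1/M, i.e., equal power over the M transmit antennas.
    """
    side = int(np.sqrt(Q))                     # 16 amplitude levels per I/Q axis
    levels = 2 * np.arange(side) - (side - 1)  # -15, -13, ..., 13, 15
    i_idx, q_idx = pixels // side, pixels % side
    symbols = levels[i_idx] + 1j * levels[q_idx]
    # average energy of the unscaled 256-point grid
    es = np.mean(np.abs(levels[:, None] + 1j * levels[None, :]) ** 2)
    return symbols * np.sqrt(1.0 / (M * es))
```

With `M = 100`, averaging |x|^2 over all 256 constellation points gives exactly 1/100, matching the equal power allocation assumed in the system model.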
To successfully reconstruct the 3D image, the receiver needs to reliably recover the elemental images by overcoming the inter-stream interference arising from the simultaneous transmission of M streams. For each time instance, the received vector, denoted by y = [y_1, …, y_N]^T ∈ C^N, is written as
y = Hx + v .
Here, v ∈ C^N is an additive white Gaussian noise (AWGN) vector, where each component is an independent and identically distributed Gaussian random variable with zero mean and variance σ^2. With respect to the noise variance, the operating SNR is defined as ρ = 1/σ^2. Assuming perfect CSI, the receiver uses a linear combiner, denoted by F ∈ C^{M×N}, to separate the received signal in Equation (3) into M streams. We consider three conventional combiners, namely the MF, ZF, and MMSE, which are defined as follows []:
F = (1/N) H^H, for the MF,
F = (H^H H)^{-1} H^H, for the ZF,
F = (H^H H + (1/ρ) I_M)^{-1} H^H, for the MMSE.
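A minimal NumPy sketch of the three combiners in Equation (4), assuming the channel H is stored as an (N, M) complex array; the helper name `linear_combiner` is hypothetical.

```python
import numpy as np

def linear_combiner(H, kind, rho=None):
    """Build the MF, ZF, or MMSE combining matrix F (M x N) for H (N x M).

    `rho` is the operating SNR on a linear scale, needed only for MMSE.
    """
    N, M = H.shape
    Hh = H.conj().T                # H^H, shape (M, N)
    if kind == "MF":
        return Hh / N
    if kind == "ZF":
        return np.linalg.inv(Hh @ H) @ Hh
    if kind == "MMSE":
        return np.linalg.inv(Hh @ H + (1.0 / rho) * np.eye(M)) @ Hh
    raise ValueError(f"unknown combiner: {kind}")
```

By construction, the ZF combiner satisfies F H = I_M exactly, and the MMSE matrix approaches the ZF matrix as ρ grows, consistent with the discussion around Figure 6.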
Multiplying the combining matrix F by the received vector yields
z = Gx + w ,
where the combined vector, channel, and noise are represented as z = Fy ∈ C^M, G = FH ∈ C^{M×M}, and w = Fv ∈ C^M, respectively. For m ∈ {1, …, M}, the mth component of the combined vector, denoted by z_m, is represented as
z_m = g_{m,m} x_m + Σ_{i=1, i≠m}^{M} g_{m,i} x_i + w_m,
where g_{i,j} is the entry in the ith row and jth column of G, and w_m is the mth component of w. By treating the interference term Σ_{i≠m} g_{m,i} x_i in Equation (6) as noise, applying the ML detection leads to the estimate of the transmitted symbol, denoted by x̂_m, as
x̂_m = arg min_{x ∈ Q} |z_m − g_{m,m} x|^2,
for all m ∈ {1, …, M}.
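The per-stream detection step can be vectorized over all M streams at once, as in the following sketch; `detect_symbols` is a hypothetical helper, and the interference-as-noise treatment matches the description above.

```python
import numpy as np

def detect_symbols(z, G, constellation):
    """Per-stream detection sketch for the combined model z = Gx + w.

    For each stream m, the interference term is treated as noise and the
    constellation point minimizing |z_m - g_{m,m} x|^2 (the ML metric
    under Gaussian noise) is chosen.
    """
    diag = np.diag(G)  # desired channel gains g_{m,m}
    # distance from z_m to every scaled candidate symbol, per stream
    d = np.abs(z[:, None] - diag[:, None] * constellation[None, :])
    return constellation[np.argmin(d, axis=1)]
```

In the noiseless, interference-free case (G diagonal), this recovers the transmitted symbols exactly, regardless of the per-stream gains.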
Because a binary sequence is readily obtained from the estimates x̂_m over all time instances, the receiver can recover the mth elemental image by a binary-to-image conversion. Therefore, based on the process described in Section 2.1, the receiver can finally carry out the 3D reconstruction using the M estimated elemental images transmitted over the N × M MIMO channel. Although the estimated 2D elemental images are corrupted by the simultaneous transmission, a 3D image can still be reconstructed with sufficient quality, since the application of the VCR in (2) averages out the statistically uncorrelated inter-stream interference.

3.2. Discussion on the Linear Combiners

It is worth noting that we only need a matrix multiplication for the MF combining, which requires a computational complexity of O ( M 2 ) . Meanwhile, the implementation of the ZF and MMSE combiners in Equation (4) requires a complexity of O ( M 3 ) due to the inversion of a matrix with size M. Hence, the application of the MF combiner can be a computationally efficient way to recover simultaneously transmitted 2D elemental images when a large number of sensors (or, equivalently, transmit antennas) are deployed in the wireless 3D image transmission system over a MIMO channel.
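The complexity argument can be made concrete with leading-order operation counts; the figures below are a sketch (constants and lower-order terms omitted) for the paper's M = N = 100 configuration.

```python
# Leading-order operation counts for constructing each combiner, M = N = 100.
M = N = 100
mf_cost = M * N            # MF: scale H^H once, O(MN) = O(M^2) when N ~ M
gram_cost = M * M * N      # form the Gram matrix H^H H, O(M^2 N)
inv_cost = M ** 3          # invert an M x M matrix, O(M^3)
proj_cost = M * M * N      # multiply the inverse by H^H
zf_cost = gram_cost + inv_cost + proj_cost
mmse_cost = zf_cost + M    # diagonal loading adds only O(M)

print(zf_cost // mf_cost)  # prints 300
```

So for 100 antennas, constructing the ZF or MMSE combiner costs on the order of hundreds of times more operations than the MF, which motivates the MF as the low-complexity option when many sensors are deployed.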
Figure 6 shows the magnitude plot of the combined channel G when M = N = 100. From the numerical results in Figure 6, we discuss the effect of the linear receivers by comparing a main diagonal entry with the off-diagonal entries, which correspond to the magnitudes of the desired channel and the interference channels, respectively. Because the main diagonal entry is significantly larger than the off-diagonal entries, the application of the linear combiners can mitigate inter-stream interference effectively. Specifically, the results in Figure 6a,d reveal that the MF can mitigate inter-stream interference because the combined channel G tends to be diagonalized for low and high operating SNRs, respectively [,]. Figure 6b,e show that the ZF is capable of eliminating inter-stream interference by forcing the combined channel G to be I_M, regardless of the operating SNR. Figure 6c,f indicate that the MMSE behaves similarly to the MF for a low operating SNR (ρ = 0 dB), whereas for a high operating SNR (ρ = 30 dB) the combined channel is nearly diagonalized because the MMSE combining matrix approaches that of the ZF.
Figure 6. Magnitude plot of the combined channel G in Equation (5).
In Figure 7, we evaluate the symbol error rates (SERs) of the three linear combiners based on a Monte Carlo simulation. Here, the SER is defined as the probability that an estimated symbol x̂_m is not equal to the transmitted symbol x_m. Figure 7 shows that the SERs of the three linear receivers decrease as the operating SNR ρ increases. As is well known in the area of wireless communication [,], the MMSE achieves superior performance while the MF shows poor performance. The ZF shows competitive performance, though its SER is slightly degraded with respect to that of the MMSE. In particular, it is worth noting from Figure 7 that the SER of the MF is nearly one for all operating SNRs. In other words, assuming one pixel corresponds to eight bits and is modulated to a 256-QAM symbol, most pixels composing a 2D elemental image are erroneously recovered at the receiver with high probability. The numerical analysis reveals that the MF may not be a suitable combiner for the successful transmission of 2D elemental images from the perspective of wireless communication performance. However, in the subsequent section, the experimental results show that the application of the MF combiner is capable of achieving sufficient 3D reconstruction performance for integral imaging in the wireless multisensor image system.
Figure 7. SER comparison of linear combiners over 100 × 100 MIMO systems with 256-QAM.

4. Experimental Results

In this section, we present our experimental setup and results to demonstrate the feasibility of the proposed 3D wireless transmission system. In the experiment, we used four different 3D objects placed at different positions to obtain the elemental images by the SAII. In addition, to generate 3D digital content, we used the VCR with non-uniform shifting pixels []. For the analysis over a MIMO channel, we assumed that 100 sensors capture 2D elemental images at the transmitter, and that the receiver reconstructs the 3D image. Since both the transmitter and receiver are equipped with 100 antennas (M = N = 100), we simulated the 3D image transmission and reception over a flat Rayleigh fading channel with three operating SNRs (0 dB, 15 dB, and 30 dB).

4.1. Experimental Setup

To obtain the elemental images of 3D objects, we used SAII in the pickup process as depicted in Figure 8. Four different 3D objects (white snowman, orange woman with sword, skeleton with pumpkin hat, and robot toy) are placed at different positions (352 mm, 368 mm, 420 mm, and 431 mm, respectively). The elemental image set consists of 10 (H) × 10 (V) elemental images with 1920 (H) × 1276 (V) resolution. In the SAII, the focal length of the camera (f) is 50 mm, pitch between cameras (p) is 2 mm in the x and y directions, and the sensor size is 36 mm (H) × 24 mm (V).
Figure 8. Experimental setup.
After the SAII captured 100 elemental images, they were converted to QAM symbols with a modulation order of 256 (Q = 256) and transmitted simultaneously through the corresponding transmit antennas. After combining the received vector with one of the linear combiners in Equation (4), the receiver demodulated each transmitted symbol by applying the ML detection in Equation (7). By converting the demodulated symbols back, the receiver was able to acquire the 100 elemental images.
Figure 9, Figure 10 and Figure 11 show the received 2D elemental images obtained by the different linear receivers when the operating SNR was set to 0 dB, 15 dB, and 30 dB, respectively. As shown in Figure 9, Figure 10 and Figure 11, the quality of the received elemental images depends not only on the operating SNR but also on the applied linear combiner. As expected from the results in Figure 7, the MMSE combiner shows the best detection quality for all operating SNRs, since it is the optimal linear combiner that achieves the maximum SINR. When the ZF combiner is used, the detected elemental images appear fairly noisy, in particular for a low SNR (i.e., ρ = 0 dB), because of the noise boosting effect. Furthermore, we observe that the MF combiner shows the worst detection quality, since the received elemental images are severely degraded by the remaining inter-stream interference, as anticipated from Figure 6 and Figure 7.
Figure 9. Elemental images received via different linear combiners when ρ = 0 dB.
Figure 10. Elemental images received via different linear combiners when ρ = 15 dB.
Figure 11. Elemental images received via different linear combiners when ρ = 30 dB.

4.2. Experimental Results

To reconstruct 3D images from the elemental images received over the wireless MIMO channel, we used the VCR with non-uniform shifting pixels. Although the received elemental images may be corrupted by not only the wireless channel but also the inter-stream interference, these effects are expected to be mitigated by the averaging effect of the VCR. In addition, the VCR with non-uniform shifting pixels can provide more accurate reconstruction depths. Using Equations (1) and (2), 3D images at various depths are reconstructed from the received elemental images. In Figure 12, Figure 13 and Figure 14, the 3D reconstruction results show that the targeted 3D object is in focus while the others are out of focus. The results reveal that the VCR can effectively suppress the effects of the wireless channel and the inter-stream interference inherent in the received 2D elemental images for all linear combiners. In particular, it is worth noting that the application of the MF combiner can achieve acceptable 3D reconstruction results with the aid of the VCR, in contrast to the severely degraded wireless communication performance in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11.
Figure 12. 3D images reconstructed at various depths for different linear combiners when ρ = 0 dB.
Figure 13. 3D images reconstructed at various depths for different linear combiners when ρ = 15 dB.
Figure 14. 3D images reconstructed at various depths for different linear combiners when ρ = 30 dB.
To prove the feasibility of the proposed system, we reconstructed 3D images at various reconstruction depths, implemented non-linear correlation [], and calculated the peak sidelobe ratio (PSR). The non-linear correlation is defined as
c(x, y)|_{z_r} = | F^{-1}{ |I_{z_r}(ξ, η)| |R(ξ, η)|^k e^{j(θ_I − θ_R)} } |^2
where F^{-1} denotes the inverse Fourier transform, I_{z_r}(ξ, η) is the Fourier transform of the 3D image reconstructed at z_r, R(ξ, η) is the Fourier transform of the reference object image, θ_I is the phase of I_{z_r}(ξ, η), θ_R is the phase of R(ξ, η), and k is a non-linearity factor for the correlation. In this experiment, the four object images were used as the reference object images, and the non-linearity factor was set to k = 0.7. Then, the PSR can be calculated as follows []:
PSR = (c_max − c̄) / σ_c
where c_max is the maximum value, c̄ is the average, and σ_c is the standard deviation of the correlation output. In Figure 15, Figure 16, Figure 17 and Figure 18, we numerically analyze the PSR values, using Equations (8) and (9), for various reconstruction depths. The results validate that the highest PSR value is found accurately at the position of the desired 3D object for all operating SNRs and linear combiners. Therefore, we confirm that the proposed wireless 3D transmission system can transmit 3D digital content through a wireless MIMO channel effectively. Among the three combiners, the MMSE achieves the highest PSR regardless of the operating SNR and reconstruction depth. Although the ZF combiner shows excellent performance for mid and high operating SNRs (i.e., 15 dB and 30 dB, respectively), its PSR performance is degraded with respect to that of the MF for a low operating SNR (i.e., 0 dB) due to the noise boosting effect. In particular, despite its poor wireless communication performance, the MF is also a viable combining technique, because it achieves a reasonable PSR value with the lowest computational complexity.
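The non-linear correlation and PSR metrics can be sketched directly from Equations (8) and (9). This is an illustrative implementation under one assumption: the non-linearity k is applied to the reference magnitude only, as the equation is written in the text; the function names are hypothetical.

```python
import numpy as np

def nonlinear_correlation(img, ref, k=0.7):
    """Non-linear correlation sketch per Equation (8).

    Combines the magnitude spectra of the reconstructed image and the
    reference (nonlinearity k applied to the reference magnitude, as
    written in the text) with the phase difference, then inverse-transforms.
    """
    I = np.fft.fft2(img)
    R = np.fft.fft2(ref)
    spectrum = np.abs(I) * (np.abs(R) ** k) * np.exp(1j * (np.angle(I) - np.angle(R)))
    return np.abs(np.fft.ifft2(spectrum)) ** 2

def psr(c):
    """Peak sidelobe ratio per Equation (9): (c_max - mean) / std."""
    return (c.max() - c.mean()) / c.std()
```

For a matched image/reference pair, the phase difference cancels, so the correlation peaks at zero shift and the PSR is large; a mismatched reference yields a flatter correlation plane and a lower PSR, which is exactly how the object position is located in Figures 15 to 18.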
Figure 15. Peak sidelobe ratio for the first object at 352 mm.
Figure 16. Peak sidelobe ratio for the second object at 368 mm.
Figure 17. Peak sidelobe ratio for the third object at 420 mm.
Figure 18. Peak sidelobe ratio for the fourth object at 431 mm.

5. Conclusions

In this paper, we have considered 3D image transmission of integral imaging through a wireless MIMO channel. Due to the large number of elemental images required for 3D image reconstruction in integral imaging, a 3D image transmission technique supporting a high transmission rate is needed. Accordingly, we proposed to transmit and receive a number of elemental images using the MIMO technique in order to reconstruct 3D images with reasonable visual quality via the VCR with non-uniform shifting pixels. The experimental results validated that the proposed techniques are capable of achieving excellent 3D image reconstruction performance. Specifically, the numerical analysis revealed that the MMSE combiner achieves the best performance for various operating SNRs and depths of the 3D objects, whereas the ZF combiner may be vulnerable to the AWGN because of the noise boosting effect. Despite its severely deteriorated wireless communication performance, the MF shows promising performance with the lowest computational complexity, owing to the averaging effect of the VCR. Nevertheless, as future work, it would be worthwhile to improve the performance of the proposed system with the MF combiner in order to close the performance gap with the optimal linear combiner, i.e., the MMSE. By deploying multiple antennas at the transceiver, we expect that the proposed system can efficiently convey 3D digital content for various applications such as AR, VR, the metaverse, 3D TV, etc.

Author Contributions

Conceptualization, S.-C.L. and M.C.; methodology, S.-C.L. and M.C.; software, S.-C.L. and M.C.; formal analysis, M.C.; investigation, S.-C.L.; writing—original draft preparation, S.-C.L. and M.C.; writing—review and editing, S.-C.L. and M.C.; visualization, M.C.; supervision, M.C.; funding acquisition, S.-C.L. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part under the framework of international cooperation program managed by the National Research Foundation of Korea (NRF-2022K2A9A2A08000152, FY2022), and in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1G1A1010641).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2D    Two-dimensional
3D    Three-dimensional
AR    Augmented reality
AWGN    Additive white Gaussian noise
CSI    Channel state information
MALT    Moving array lenslet technique
MF    Matched filter
MIMO    Multiple-input multiple-output
ML    Maximum likelihood
MMSE    Minimum mean squared error
PSR    Peak sidelobe ratio
QAM    Quadrature amplitude modulation
SAII    Synthetic aperture integral imaging
SINR    Signal-to-interference-plus-noise ratio
SNR    Signal-to-noise ratio
V-BLAST    Vertical Bell Laboratories layered space–time
VCR    Volumetric computational reconstruction
VR    Virtual reality
ZF    Zero forcing

References

  1. Lippmann, G. La Photographie Integrale. Comp. Rend. Acad. Sci. 1908, 146, 446–451. [Google Scholar]
  2. Arai, J.; Okano, F.; Hoshino, H.; Yuyama, I. Gradient index lens array method based on real time integral photography for three dimensional images. Appl. Opt. 1998, 37, 2034–2045. [Google Scholar] [CrossRef] [PubMed]
  3. Martínez-Corral, M.; Javidi, B. Fundamentals of 3D imaging and displays: A tutorial on integral imaging, light-field, and plenoptic systems. Adv. Opt. Photon. 2018, 10, 512–566. [Google Scholar] [CrossRef]
  4. Xiao, X.; Daneshpanah, M.; Cho, M.; Javidi, B. 3D integral imaging using sparse sensors with unknown positions. IEEE J. Display Technol. 2010, 6, 614–619. [Google Scholar] [CrossRef]
  5. Cho, M.; Javidi, B. Optimization of 3D integral imaging system parameters. IEEE J. Disp. Technol. 2012, 8, 357–360. [Google Scholar] [CrossRef]
  6. Jang, J.-S.; Javidi, B. Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics. Opt. Lett. 2002, 27, 324–326. [Google Scholar] [CrossRef]
  7. Jang, J.-S.; Javidi, B. Three-dimensional synthetic aperture integral imaging. Opt. Lett. 2002, 27, 1144–1146. [Google Scholar] [CrossRef] [PubMed]
  8. Lee, J.; Cho, M. Three-Dimensional Integral Imaging with Enhanced Lateral and Longitudinal Resolutions Using Multiple Pickup Positions. Sensors 2022, 22, 9199. [Google Scholar] [CrossRef]
  9. Jang, J.-S.; Javidi, B. Improvement of viewing angle in integral imaging by use of moving lenslet arrays with low fill factor. Appl. Opt. 2003, 42, 1996–2002. [Google Scholar] [CrossRef]
  10. Jang, J.-S.; Javidi, B. Large depth-of-focus time-multiplexed three-dimensional integral imaging by use of lenslets with nonuniform focal lengths and aperture sizes. Opt. Lett. 2003, 28, 1924–1926. [Google Scholar] [CrossRef]
  11. Cho, B.; Kopycki, P.; Martinez-Corral, M.; Cho, M. Computational volumetric reconstruction of integral imaging with improved depth resolution considering continuously non-uniform shifting pixels. Opt. Laser Eng. 2018, 111, 114–121. [Google Scholar] [CrossRef]
  12. Inoue, K.; Cho, M. Fourier focusing in integral imaging with optimum visualization pixels. Opt. Laser Eng. 2020, 127, 105952. [Google Scholar] [CrossRef]
  13. Bae, J.; Yoo, H. Image Enhancement for Computational Integral Imaging Reconstruction via Four-Dimensional Image Structure. Sensors 2020, 20, 4795. [Google Scholar] [CrossRef]
  14. Charfi, Y.; Wakamiya, N.; Murata, M. Challenging issues in visual sensor networks. IEEE Wireless Commun. 2009, 16, 44–49. [Google Scholar] [CrossRef]
  15. Yeo, C.; Ramchandran, K. Robust distributed multiview video compression for wireless camera networks. IEEE Trans. Image Process. 2010, 19, 995–1008. [Google Scholar] [PubMed]
  16. Ye, Y.; Ci, S.; Katsaggelos, A.K.; Liu, Y.; Qian, Y. Wireless video surveillance: A survey. IEEE Access 2013, 1, 646–660. [Google Scholar]
  17. Kodera, S.; Fujihashi, T.; Saruwatari, S.; Watanabe, T. Multi-view video streaming with mobile cameras. In Proceedings of the 2014 IEEE Global Communications Conference, Austin, TX, USA, 12 December 2014. [Google Scholar]
  18. Nu, T.T.; Fujihashi, T.; Watanabe, T. Power-efficient video uploading for crowdsourced multi-view video streaming. In Proceedings of the 2018 IEEE Global Communications Conference, Abu Dhabi, United Arab Emirates, 9–13 December 2018. [Google Scholar]
  19. Foschini, G.J.; Gans, M.J. On limits of wireless communications in a fading environment when using multiple antennas. Wireless Pers. Commun. 1998, 6, 311–335. [Google Scholar] [CrossRef]
  20. Telatar, E. Capacity of multi-antenna Gaussian channels. Eur. Trans. Telecommun. 1999, 10, 585–595. [Google Scholar] [CrossRef]
  21. Tse, D.; Viswanath, P. Fundamentals of Wireless Communication; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  22. Andrews, J.G.; Buzzi, S.; Choi, W.; Hanly, S.V.; Lozano, A.; Soong, A.C.K.; Zhang, J.C. What will 5G Be? IEEE J. Sel. Areas Commun. 2014, 32, 1065–1082. [Google Scholar] [CrossRef]
  23. Tataria, H.; Shafi, M.; Molisch, A.F.; Dohler, M.; Sjöland, H.; Tufvesson, F. 6G wireless systems: Vision, requirements, challenges, insights, and opportunities. Proc. IEEE 2021, 109, 1166–1199. [Google Scholar] [CrossRef]
  24. Wolniansky, P.W.; Foschini, G.J.; Golden, G.D.; Valenzuela, R.A. V-BLAST: An architecture for realizing very high data rates over the rich-scattering wireless channel. In Proceedings of the URSI International Symposium on Signals, Systems, and Electronics, Pisa, Italy, 2 October 1998. [Google Scholar]
  25. Verdú, S. Multiuser Detection; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  26. Lupas, R.; Verdú, S. Linear multiuser detectors for synchronous code-division multiple-access channels. IEEE Trans. Inform. Theory 1989, 35, 123–136. [Google Scholar] [CrossRef]
  27. Madhow, U.; Honig, M.L. MMSE interference suppression for direct-sequence spread-spectrum CDMA. IEEE Trans. Commun. 1994, 42, 3178–3188. [Google Scholar] [CrossRef]
  28. Ngo, H.Q.; Larsson, E.G.; Marzetta, T.L. Energy and spectral efficiency of very large multiuser MIMO systems. IEEE Trans. Commun. 2013, 61, 1436–1449. [Google Scholar]
  29. Lim, Y.-G.; Chae, C.-B.; Caire, G. Performance Analysis of Massive MIMO for Cell-Boundary Users. IEEE Trans. Wireless Commun. 2015, 14, 6827–6842. [Google Scholar] [CrossRef]
  30. Javidi, B. Nonlinear joint power spectrum based optical correlation. Appl. Opt. 1989, 28, 2358–2367. [Google Scholar] [CrossRef]
  31. Cho, M.; Mahalanobis, A.; Javidi, B. 3D passive photon counting automatic target recognition using advanced correlation filters. Opt. Lett. 2011, 36, 861–863. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
