Search Results (3)

Search Parameters:
Keywords = non-Euclidean Fourier transform

13 pages, 1511 KB  
Article
An Indoor Visible Light Positioning System for Multi-Cell Networks
by Roger Alexander Martínez-Ciro, Francisco Eugenio López-Giraldo, José Martín Luna-Rivera and Atziry Magaly Ramírez-Aguilera
Photonics 2022, 9(3), 146; https://doi.org/10.3390/photonics9030146 - 1 Mar 2022
Cited by 19 | Viewed by 4041
Abstract
Indoor positioning systems based on visible light communication (VLC) using white light-emitting diodes (WLEDs) have been widely studied in the literature. In this paper, we present an indoor visible-light positioning (VLP) system based on red–green–blue (RGB) LEDs and a frequency division multiplexing (FDM) scheme. This system combines the functions of an FDM scheme at the transmitters (RGB LEDs) and a received signal strength (RSS) technique to estimate the receiver position. The contribution of this work is two-fold. First, a new VLP system with RGB LEDs is proposed for a multi-cell network. Here, the RGB LEDs allow the exploitation of the chromatic space to transmit the VLP information. In addition, the VLC receiver leverages the responsivity of a single photodiode for estimating the FDM signals in RGB lighting channels. A second contribution is the derivation of an expression to calculate the optical power received by the photodiode for each incident RGB light. To this end, we consider a VLC channel model that includes both line-of-sight (LOS) and non-line-of-sight (NLOS) components. The fast Fourier transform (FFT) estimates the powers and frequencies of the received FDM signal. The receiver uses these optical signal powers in the RSS-based localization application to calculate the Euclidean distances and the frequencies for the RGB LED position. Subsequently, the receiver's location is estimated using the Euclidean distances and RGB LED positions via a trilateration algorithm. Finally, Monte Carlo simulations are performed to evaluate the error performance of the proposed VLP system in a multi-cell scenario. The results show a high positioning accuracy performance for different color points. The average positioning error for all chromatic points was less than 2.2 cm. These results suggest that the analyzed VLP system could be used in application scenarios where white light balance or luminaire color planning are also the goals.
(This article belongs to the Special Issue Visible Light Communication (VLC))
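The final localization step the abstract describes — turning RSS-derived Euclidean distances to known LED positions into a receiver estimate via trilateration — can be sketched as a linearized least-squares solve. This is a generic illustration, not the authors' implementation; the LED coordinates and distances below are invented for the example.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2-D position from anchor (LED) coordinates and Euclidean
    distances. Subtracting the last circle equation from the others removes
    the quadratic terms, leaving a linear least-squares system."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x_n, d_n = anchors[-1], d[-1]
    A = 2.0 * (anchors[:-1] - x_n)
    b = (d_n**2 - d[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(x_n**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical multi-cell layout: three RGB LEDs at known coordinates (metres).
leds = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
target = np.array([1.5, 1.0])
dists = [np.linalg.norm(target - np.array(p)) for p in leds]
print(trilaterate(leds, dists))  # close to [1.5, 1.0]
```

With noisy RSS-derived distances the same least-squares formulation still applies; the estimate then minimizes the residual of the linearized equations rather than recovering the position exactly.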

30 pages, 462 KB  
Article
On the Connection between Spherical Laplace Transform and Non-Euclidean Fourier Analysis
by Enrico De Micheli
Mathematics 2020, 8(2), 287; https://doi.org/10.3390/math8020287 - 20 Feb 2020
Cited by 2 | Viewed by 3650
Abstract
We prove that, if the coefficients of a Fourier–Legendre expansion satisfy a suitable Hausdorff-type condition, then the series converges to a function which admits a holomorphic extension to a cut-plane. Next, we introduce a Laplace-type transform (the so-called Spherical Laplace Transform) of the jump function across the cut. The main result of this paper is to establish the connection between the Spherical Laplace Transform and the Non-Euclidean Fourier Transform in the sense of Helgason. In this way, we find a connection between the unitary representation of SO(3) and the principal series of the unitary representation of SU(1,1).
(This article belongs to the Special Issue Mathematical Physics II)
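For context, a Fourier–Legendre expansion in its standard form (generic notation, not necessarily the paper's) expands a function on the sphere's polar angle in Legendre polynomials:

```latex
f(\cos\theta) \;=\; \sum_{l=0}^{\infty} a_l \, P_l(\cos\theta),
\qquad
a_l \;=\; \frac{2l+1}{2} \int_{0}^{\pi} f(\cos\theta)\, P_l(\cos\theta)\,\sin\theta \, d\theta .
```

The Hausdorff-type condition mentioned in the abstract is a growth/moment condition imposed on the coefficients a_l.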

23 pages, 1985 KB  
Article
Automated Image Analysis for the Detection of Benthic Crustaceans and Bacterial Mat Coverage Using the VENUS Undersea Cabled Network
by Jacopo Aguzzi, Corrado Costa, Katleen Robert, Marjolaine Matabos, Francesca Antonucci, S. Kim Juniper and Paolo Menesatti
Sensors 2011, 11(11), 10534-10556; https://doi.org/10.3390/s111110534 - 4 Nov 2011
Cited by 40 | Viewed by 11179
Abstract
The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. In the case of bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same time frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting combined filtering and background correction (Median and Top-Hat Filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classifications were carried out with the tools of morphometric multivariate statistics (i.e., Partial Least Squares Discriminant Analysis; PLSDA) on the Mean RGB (RGBv) value for each object and Fourier Descriptor (RGBv+FD) matrices, plus SIFT and ED. The SIFT approach returned the best results: higher percentages of images were correctly classified, and fewer misclassification errors (an animal is present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals.
Bacterial mat coverage was estimated in terms of Percent Coverage and Fractal Dimension. A constant Region of Interest (ROI) was defined and background extraction by a Gaussian Blurring Filter was performed. Image subtraction within the ROI was followed by the sum of the RGB channel matrices. Percent Coverage was calculated on the resulting image. Fractal Dimension was estimated using the box-counting method. The images were then resized to a dimension in pixels equal to a power of 2, allowing subdivision into sub-multiple quadrants. In comparisons of manual and automated Percent Coverage and Fractal Dimension estimates, the manual estimates tended to overestimate both parameters. The primary limitations on the automatic analysis of benthic images were habitat variations in sediment texture and water column turbidity. The application of filters for background corrections is a required preliminary step for the efficient recognition of animals and bacterial mat patches.
(This article belongs to the Section Physical Sensors)
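The box-counting method the abstract uses for Fractal Dimension can be sketched as follows (a minimal NumPy illustration of the general technique, not the authors' MatLab code; it assumes a square binary mask whose side is a power of 2, matching the power-of-2 resizing described above).

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a binary coverage mask: count
    occupied boxes N(s) at dyadic box sizes s, then fit the slope of
    log N(s) against log(1/s)."""
    n = mask.shape[0]  # square array, side a power of 2
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Partition into s x s blocks; a block counts if any pixel is set.
        blocks = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(np.asarray(counts, dtype=float)), 1)
    return slope

# Sanity check: a fully covered (space-filling) mask has dimension 2.
mask = np.ones((256, 256), dtype=bool)
print(box_counting_dimension(mask))
```

For a full mask, N(s) = (n/s)^2, so log N(s) is exactly linear in log(1/s) with slope 2; sparser, more fragmented mat patches yield slopes between 1 and 2.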
