Search Results (77)

Search Parameters:
Keywords = de-quantization

10 pages, 1503 KB  
Article
High Spectrum Efficiency and High Security Radio-Over-Fiber Systems with Compressive-Sensing-Based Chaotic Encryption
by Zhanhong Wang, Lu Zhang, Jiahao Zhang, Oskars Ozolins, Xiaodan Pang and Xianbin Yu
Micromachines 2026, 17(1), 80; https://doi.org/10.3390/mi17010080 - 7 Jan 2026
Viewed by 259
Abstract
With the increasing demand for high throughput and ultra-dense small-cell deployment in next-generation communication networks, spectrum resources are becoming increasingly strained. At the same time, the security risks posed by eavesdropping remain a significant concern, particularly due to the broadcast-access property of optical fronthaul networks. To address these challenges, we propose in this paper a high-security, high-spectrum-efficiency radio-over-fiber (RoF) system that leverages compressive sensing (CS)-based algorithms and chaotic encryption. An 8 Gbit/s RoF system is experimentally demonstrated, with 10 km optical fiber transmission and 20 GHz radio frequency (RF) transmission. In our experiment, spectrum efficiency is enhanced by compressing the transmitted data and reducing the quantization bit requirements, while security is maintained with minimal degradation in signal quality. The system recovers the signal correctly after dequantization with 6-bit fronthaul quantization, achieving a structural similarity index (SSIM) of 0.952 for the legitimate receiver (Bob) at a compression ratio of 0.75. In contrast, the SSIM for the unauthorized receiver (Eve) is only 0.073, highlighting the effectiveness of the proposed security approach. Full article
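
As a rough illustration of the fronthaul quantization/de-quantization step described above (the compressive-sensing and chaotic-encryption stages are not modeled, and the 6-bit uniform quantizer below is an assumption about the scheme, not the paper's exact method), a minimal sketch:

```python
import numpy as np

def quantize_dequantize(x, n_bits=6):
    """Uniformly quantize a real-valued signal to n_bits and reconstruct it."""
    levels = 2 ** n_bits
    lo, hi = float(x.min()), float(x.max())
    step = (hi - lo) / (levels - 1)           # quantization step size
    codes = np.round((x - lo) / step)         # integer codes carried over the fronthaul
    x_hat = codes * step + lo                 # de-quantized (reconstructed) signal
    return codes.astype(np.int32), x_hat

rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)            # stand-in for the baseband samples
codes, recon = quantize_dequantize(signal, n_bits=6)
print("max reconstruction error:", np.max(np.abs(signal - recon)))
```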

24 pages, 8432 KB  
Article
Noise-Resilient Masked Face Detection Using Quantized DnCNN and YOLO
by Rockhyun Choi, Hyunki Lee, Bong-seok Kim, Sangdong Kim and Min Young Kim
Electronics 2026, 15(1), 143; https://doi.org/10.3390/electronics15010143 - 29 Dec 2025
Viewed by 278
Abstract
This study presents a noise-resilient masked-face detection framework optimized for the NVIDIA Jetson AGX Orin, which improves detection precision by approximately 30% under severe Gaussian noise (variance 0.10) while reducing denoising latency by over 42% and increasing end-to-end throughput by more than 30%. The proposed system integrates a lightweight DnCNN-based denoising stage with the YOLOv11 detector, employing Quantize-Dequantize (QDQ)-based INT8 post-training quantization and a parallel CPU–GPU execution pipeline to maximize edge efficiency. The experimental results demonstrate that denoising preprocessing substantially restores detection accuracy under low signal quality. Furthermore, comparative evaluations confirm that 8-bit quantization achieves a favorable accuracy–efficiency trade-off with only minor precision degradation relative to 16-bit inference, proving the framework’s robustness and practicality for real-time, resource-constrained edge AI applications. Full article
(This article belongs to the Special Issue Artificial Intelligence, Computer Vision and 3D Display)
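
For readers unfamiliar with the Quantize-Dequantize (QDQ) pattern mentioned in the abstract, here is a minimal, framework-agnostic sketch of symmetric per-tensor INT8 fake quantization; the calibration procedure, per-channel scales, and operator placement actually used on the Jetson deployment are not shown and are not taken from the paper.

```python
import numpy as np

def qdq_int8(w):
    """Symmetric per-tensor INT8 quantize-dequantize (fake quantization)."""
    scale = np.max(np.abs(w)) / 127.0                        # map the tensor's range onto int8
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q.astype(np.float32) * scale                      # dequantized values used downstream

weights = np.random.randn(64, 3, 3, 3).astype(np.float32)    # hypothetical conv filter bank
wq = qdq_int8(weights)
print("mean abs quantization error:", np.mean(np.abs(weights - wq)))
```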

12 pages, 271 KB  
Article
Feynman Path Integral and Landau Density Matrix in Probability Representation of Quantum States
by Olga V. Man’ko
Physics 2025, 7(4), 66; https://doi.org/10.3390/physics7040066 - 12 Dec 2025
Viewed by 438
Abstract
The quantizer–dequantizer method is employed. Using the construction of probability distributions describing the density operators of quantum system states, the connection between the Feynman path integral and the time evolution of the density operator (Landau density matrix), as well as the wave function of the state, is considered. For single-mode systems with continuous variables, a tomographic propagator is introduced in the probability representation of quantum mechanics. An explicit expression for the probability in terms of the Green function of the Schrödinger equation is obtained. Equations for the Green functions defined by arbitrary integrals of motion are derived. Examples of probability distributions describing the evolution of the state of a free particle, as well as states of systems with time-dependent integrals of motion (oscillator type), are discussed. Full article
25 pages, 441 KB  
Article
A Non-Canonical Classical Mechanics
by Shi-Dong Liang
AppliedMath 2025, 5(4), 173; https://doi.org/10.3390/appliedmath5040173 - 5 Dec 2025
Viewed by 381
Abstract
Based on noncommutative relations and the Dirac canonical dequantization scheme, I generalize the canonical Poisson bracket to a deformed Poisson bracket and develop a non-canonical formulation of the Poisson, Hamilton, and Lagrange equations in the deformed Poisson and symplectic spaces. I find that both of these dynamical equations are coupled systems of differential equations. The noncommutativity induces a velocity-dependent potential. These formulations give the Noether and Virial theorems in the deformed symplectic space. I find that the Lagrangian invariance and its corresponding conserved quantity depend on the deformed parameters and on certain points in the configuration space for a continuous infinitesimal coordinate transformation. These formulations provide a non-canonical framework of classical mechanics, not only for insight into noncommutative quantum mechanics, but also for exploring mysteries and phenomena beyond those in the canonical symplectic space. Full article

29 pages, 419 KB  
Review
Modified Gravity with Nonminimal Curvature–Matter Couplings: A Framework for Gravitationally Induced Particle Creation
by Francisco S. N. Lobo, Tiberiu Harko and Miguel A. S. Pinto
Universe 2025, 11(11), 356; https://doi.org/10.3390/universe11110356 - 28 Oct 2025
Viewed by 1535
Abstract
Modified gravity theories with a nonminimal coupling between curvature and matter offer a compelling alternative to dark energy and dark matter by introducing an explicit interaction between matter and curvature invariants. Two of the main consequences of such an interaction are the emergence of an additional force and the non-conservation of the energy–momentum tensor, which can be interpreted as an energy exchange between matter and geometry. By adopting this interpretation, one can draw on several different approaches to investigate the phenomenon of gravitationally induced particle creation. One of these approaches relies on the so-called irreversible thermodynamics of open systems formalism. By considering the scalar–tensor formulation of one of these theories, we derive the corresponding particle creation rate, creation pressure, and entropy production, demonstrating that irreversible particle creation can drive a late-time de Sitter acceleration through a negative creation pressure, providing a natural alternative to the cosmological constant. Furthermore, we demonstrate that the generalized second law of thermodynamics holds: the total entropy, from both the apparent horizon and the enclosed matter, increases monotonically and saturates in the de Sitter phase, imposing constraints on the allowed particle production dynamics. In addition, we present brief reviews of other theoretical descriptions of matter creation processes. Specifically, we consider approaches based on the Boltzmann equation and quantum-based aspects, and discuss the generalization of the Klein–Gordon equation, as well as the problem of its quantization in time-varying gravitational fields. Hence, gravitational theories with nonminimal curvature–matter couplings present a unified and testable framework, connecting high-energy gravitational physics with cosmological evolution and, possibly, quantum gravity, while remaining consistent with local tests through suitable coupling functions and screening mechanisms. Full article
24 pages, 1135 KB  
Article
Birth of an Isotropic and Homogeneous Universe with a Running Cosmological Constant
by A. Oliveira Castro Júnior, A. Corrêa Diniz, G. Oliveira-Neto and G. A. Monerat
Universe 2025, 11(9), 310; https://doi.org/10.3390/universe11090310 - 11 Sep 2025
Viewed by 623
Abstract
The present work discusses the birth of the Universe via quantum tunneling through a potential barrier, based on quantum cosmology and taking a running cosmological constant into account. We consider the Friedmann–Lemaître–Robertson–Walker (FLRW) metric with positively curved spatial sections (k = 1), and the matter content is a dust perfect fluid. The model was quantized via the Dirac formalism, leading to a Wheeler–DeWitt equation. We solve that equation both numerically and using a WKB approximation. We study the behavior of the tunneling probabilities TP_WKB and TP_int by varying the energy E of the dust perfect fluid, the phenomenological parameter ν, the present value of the Hubble function H_0, and the constant energy density ρ_Λ0, with the last three parameters all being associated with the running cosmological constant. We observe that both tunneling probabilities, TP_WKB and TP_int, decrease as ν increases. We also note that TP_WKB and TP_int grow as E increases, indicating that the Universe is more likely to be born with higher dust energy E. The same is observed for the parameter ρ_Λ0, that is, TP_WKB and TP_int are larger for higher values of ρ_Λ0. Finally, the tunneling probabilities decrease as H_0 increases. Therefore, the best conditions for the Universe to be born, in the present model, would be the highest possible values of E and Λ and the lowest possible values of ν and H_0. Full article
(This article belongs to the Section Cosmology)
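
For orientation, the WKB estimate referred to above has the standard barrier-penetration form, shown here only schematically; the specific potential V(a), the units, and the prefactors of the running-Λ model are not reproduced from the paper:

```latex
TP_{\mathrm{WKB}} \;\approx\; \exp\!\left( -2 \int_{a_1}^{a_2} \sqrt{2\,\bigl(V(a) - E\bigr)}\;\mathrm{d}a \right),
```

where a_1 and a_2 are the classical turning points, i.e., the points where V(a) = E; the exact factors depend on the chosen variables and factor ordering.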

14 pages, 3378 KB  
Article
The pcGR Within the Hořava-Lifshitz Gravity and the Wheeler-deWitt Quantization
by Peter O. Hess, César A. Zen Vasconcellos and Dimiter Hadjimichef
Galaxies 2025, 13(4), 85; https://doi.org/10.3390/galaxies13040085 - 1 Aug 2025
Cited by 1 | Viewed by 1392
Abstract
We investigate pseudo-complex General Relativity (pcGR)—a coordinate-extended formulation of General Relativity (GR)—within the framework of Hořava-Lifshitz gravity, a regularized theory featuring anisotropic scaling. The pcGR framework bridges GR with modified gravitational theories through the introduction of a minimal length scale. Focusing on Schwarzschild black holes, we derive the Wheeler-deWitt equation, obtaining a quantized description of pcGR. Using perturbative methods and semi-classical approximations, we analyze the solutions of the equations and their physical implications. A key finding is the avoidance of the central singularity due to nonlinear interaction terms in the Hořava-Lifshitz action. Notably, extrinsic curvature (kinetic energy) contributions prove essential for singularity resolution, even in standard GR. Furthermore, the theory offers new perspectives on dark energy, proposing an alternative mechanism for its accumulation. Full article
(This article belongs to the Special Issue Cosmology and the Quantum Vacuum—2nd Edition)

17 pages, 1214 KB  
Article
EECNet: An Efficient Edge Computing Network for Transmission Line Ice Thickness Recognition
by Yu Zhang, Yangyang Jiao, Yinke Dou, Liangliang Zhao, Qiang Liu and Yang Liu
Processes 2025, 13(7), 2033; https://doi.org/10.3390/pr13072033 - 26 Jun 2025
Cited by 2 | Viewed by 680
Abstract
The recognition of ice thickness on transmission lines serves as a prerequisite for controlling de-icing robots to carry out precise de-icing operations. To address the issue that existing edge computing terminals fail to meet the demands of ice thickness recognition algorithms, this paper introduces an Efficient Edge Computing Network (EECNet) specifically designed for identifying ice thickness on transmission lines. Firstly, pruning is applied to the Efficient Neural Network (ENet), removing redundant components within the encoder to decrease both the computational complexity and the number of parameters in the model. Secondly, a Dilated Asymmetric Bottleneck Module (DABM) is proposed. By integrating different types of convolutions, this module effectively strengthens the model’s capability to extract features from ice-covered transmission lines. Then, an Efficient Partial Conv Module (EPCM) is designed, introducing an adaptive partial convolution selection mechanism that innovatively combines attention mechanisms with partial convolutions. This design enhances the model’s ability to select important feature channels. The method involves segmenting ice-covered images to obtain iced regions and then calculating the ice thickness using the iced area and known cable parameters. Experimental validation on an ice-covered transmission line dataset shows that EECNet achieves a segmentation accuracy of 92.7% in terms of the Mean Intersection over Union (mIoU) and an F1-Score of 96.2%, with an ice thickness recognition error below 3.4%. Compared to ENet, the model’s parameter count is reduced by 41.7%, and the detection speed on OrangePi 5 Pro is improved by 27.3%. After INT8 quantization, the detection speed is increased by 26.3%. These results demonstrate that EECNet not only enhances the recognition speed on edge equipment but also maintains high-precision ice thickness recognition. Full article
(This article belongs to the Section Energy Systems)

18 pages, 3916 KB  
Article
TinyML-Based Real-Time Drift Compensation for Gas Sensors Using Spectral–Temporal Neural Networks
by Adir Krayden, M. Avraham, H. Ashkar, T. Blank, S. Stolyarova and Yael Nemirovsky
Chemosensors 2025, 13(7), 223; https://doi.org/10.3390/chemosensors13070223 - 20 Jun 2025
Cited by 3 | Viewed by 4129
Abstract
The implementation of low-cost, sensitive, and selective gas sensors for monitoring fruit ripening and quality strongly depends on their long-term stability. Gas sensor drift undermines the long-term reliability of low-cost sensing platforms, particularly in precision agriculture. We present a real-time drift compensation framework based on a lightweight Temporal Convolutional Neural Network (TCNN) combined with a Hadamard spectral transform. The model operates causally on incoming sensor data, achieving a mean absolute error below 1 mV on long-term recordings (equivalent to <1 part per million (ppm) gas concentration). Through quantization, we compress the model by over 70% without sacrificing accuracy. Demonstrated on a combustion-type gas sensor system (dubbed GMOS) for ethylene monitoring, our approach enables continuous, drift-corrected operation without the need for recalibration or dependence on cloud-based services, offering a generalizable solution for embedded environmental sensing in food transportation containers, cold storage facilities, de-greening rooms, and directly in the field. Full article
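
A minimal sketch of the spectral front-end idea named in the abstract, projecting a window of recent sensor samples onto Walsh–Hadamard basis functions; the window length, normalization, and the TCNN that consumes the coefficients are assumptions here, not details taken from the paper.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_features(window):
    """Walsh-Hadamard coefficients of a window of sensor samples."""
    n = len(window)                        # must be a power of two
    H = hadamard(n)                        # n x n matrix of +/-1 entries
    return H @ window / np.sqrt(n)         # orthonormal Hadamard transform

samples = np.random.randn(64)              # stand-in for 64 recent sensor readings
coeffs = hadamard_features(samples)
print(coeffs[:4])
```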

17 pages, 2052 KB  
Article
Linear Continuous-Time Regression and Dequantizer for Lithium-Ion Battery Cells with Compromised Measurement Quality
by Zoltan Mark Pinter, Mattia Marinelli, M. Scott Trimboli and Gregory L. Plett
World Electr. Veh. J. 2025, 16(3), 116; https://doi.org/10.3390/wevj16030116 - 20 Feb 2025
Cited by 1 | Viewed by 983
Abstract
Battery parameter identification is a key challenge for battery management systems, as parameterizing lithium-ion batteries is resource-intensive. Electrical circuit models (ECMs) provide an alternative, but their parameters change with physical conditions and battery age, necessitating regular parameter identification. This paper presents two modular algorithms to improve data quality and enable fast, robust parameter identification. First, the dequantizer algorithm restores the time series generating the noisy, quantized data using the inverse normal distribution function. Then, the Linear Continuous-Time Regression (LCTR) algorithm extracts exponential parameters from first-order or overdamped second-order systems, deducing ECM parameters and guaranteeing optimality with respect to the root-mean-square error (RMSE). Because they are continuous-time, the parameters have low sensitivity to measurement noise. Sensitivity analyses confirm the algorithms' suitability for battery management across various Gaussian measurement noise levels, accuracies, time constants, and states of charge (SoC), using evaluation metrics such as RMSE (<2 mV), relative time-constant errors, and steady-state error. If the coarseness of rounding is not extreme, the steady state is restored to within a fraction of a millivolt. While a slight overestimation of the lower time constants occurs for overdamped systems, the algorithms outperform the conventional benchmark for first-order systems. Their robustness is further validated in real-life applications, highlighting their potential to enhance commercial battery management systems. Full article
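
To make the inverse-normal dequantization idea concrete, here is a toy sketch under assumed conditions: a known Gaussian noise level, a single quantizer threshold, and hypothetical voltage values that are illustrative only; the actual dequantizer and LCTR algorithms in the paper are more elaborate.

```python
import numpy as np
from scipy.stats import norm

def dequantize_mean(readings, threshold, sigma):
    """Estimate the underlying value from coarsely quantized, noisy readings.

    With Gaussian noise of std sigma, the fraction of samples falling above a
    quantizer threshold pins down the true value via the inverse normal CDF.
    """
    p_above = np.clip(np.mean(readings >= threshold), 1e-3, 1 - 1e-3)
    return threshold + sigma * norm.ppf(p_above)

rng = np.random.default_rng(1)
true_v, sigma, lsb = 3.3012, 0.002, 0.005                 # hypothetical cell voltage, noise std, ADC step
quantized = np.round((true_v + sigma * rng.standard_normal(5000)) / lsb) * lsb
levels, counts = np.unique(quantized, return_counts=True)
upper_threshold = levels[np.argmax(counts)] + lsb / 2     # bin boundary above the most frequent code
print(dequantize_mean(quantized, upper_threshold, sigma)) # close to 3.3012 despite the 5 mV step
```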

27 pages, 400 KB  
Article
Extending Solutions and the Equations of Quantum Gravity Past the Big Bang Singularity
by Claus Gerhardt
Symmetry 2025, 17(2), 262; https://doi.org/10.3390/sym17020262 - 9 Feb 2025
Viewed by 1171
Abstract
We recently proved that in our model of quantum gravity, the solutions to the quantized version of the full Einstein equations or to the Wheeler–DeWitt equation could be expressed as products of spatial and temporal eigenfunctions, or eigendistributions, of self-adjoint operators acting in corresponding separable Hilbert spaces. Moreover, near the big bang singularity, we derived sharp asymptotic estimates for the temporal eigenfunctions. In this paper, we show that, by using these estimates, there exists a complete sequence of unitarily equivalent eigenfunctions which can be extended past the singularity by even or odd mirroring as sufficiently smooth functions such that the extended functions are solutions of the appropriately extended equations valid in R in the classical sense. We also use this phenomenon to explain the missing antimatter. Full article
(This article belongs to the Section Physics)
34 pages, 1240 KB  
Article
Towards a Unitary Formulation of Quantum Field Theory in Curved Spacetime: The Case of de Sitter Spacetime
by K. Sravan Kumar and João Marto
Symmetry 2025, 17(1), 29; https://doi.org/10.3390/sym17010029 - 27 Dec 2024
Cited by 11 | Viewed by 2852
Abstract
Before we ask what the quantum gravity theory is, there is a legitimate quest to formulate a robust quantum field theory in curved spacetime (QFTCS). Several conceptual problems, especially unitarity loss (pure states evolving into mixed states), have raised concerns over several decades. In this paper, acknowledging the fact that time is a parameter in quantum theory, which is different from its status in the context of General Relativity (GR), we start with a “quantum first approach” and propose a new formulation for QFTCS based on the discrete spacetime transformations which offer a way to achieve unitarity. We rewrite the QFT in Minkowski spacetime with a direct-sum Fock space structure based on the discrete spacetime transformations and geometric superselection rules. Applying this framework to QFTCS, in the context of de Sitter (dS) spacetime, we elucidate how this approach to quantization complies with unitarity and the observer complementarity principle. We then comment on understanding the scattering of states in de Sitter spacetime. Furthermore, we discuss briefly the implications of our QFTCS approach to future research in quantum gravity. Full article
(This article belongs to the Special Issue Quantum Gravity and Cosmology: Exploring the Astroparticle Interface)

19 pages, 3253 KB  
Article
Federated Collaborative Learning with Sparse Gradients for Heterogeneous Data on Resource-Constrained Devices
by Mengmeng Li, Xin He and Jinhua Chen
Entropy 2024, 26(12), 1099; https://doi.org/10.3390/e26121099 - 16 Dec 2024
Cited by 1 | Viewed by 1959
Abstract
Federated learning enables devices to train models collaboratively while protecting data privacy. However, the computing power, memory, and communication capabilities of IoT devices are limited, making it difficult to train large-scale models on these devices. To train large models on resource-constrained devices, federated split learning allows for parallel training across multiple devices by partitioning the model among them. However, under this framework, the client is heavily dependent on the server's computing resources, and a large number of model parameters must be transmitted during communication, which leads to low training efficiency. In addition, due to the heterogeneous data distribution among clients, it is difficult for the trained global model to fit all clients. To address these challenges, this paper designs a sparse-gradient collaborative federated learning model for heterogeneous data on resource-constrained devices. First, a sparse gradient strategy is designed by introducing a position mask to reduce communication traffic. To minimize accuracy loss, a dequantization strategy is applied to restore the original dense gradient tensor. Second, the influence of each client on the global model is measured by Euclidean distance, and based on this, an aggregation weight is assigned to each client, yielding an adaptive weighting strategy. Finally, the sparse gradient quantization method is combined with the adaptive weighting strategy, and a collaborative federated learning algorithm is designed for heterogeneous data distributions. Extensive experiments demonstrate that the proposed algorithm achieves high classification efficiency, effectively addressing the challenges posed by data heterogeneity. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
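
As a rough sketch of the sparse-gradient-plus-mask mechanism described above; the top-k selection rule, the quantization of the kept values, and the adaptive aggregation weights are assumptions or omissions here, not taken from the paper.

```python
import numpy as np

def sparsify(grad, keep_ratio=0.05):
    """Keep only the largest-magnitude gradient entries plus their position mask."""
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # positions of the kept entries (the "mask")
    return flat[idx], idx

def densify(values, idx, shape):
    """Restore a dense gradient tensor from the transmitted values and position mask."""
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)

g = np.random.randn(256, 128).astype(np.float32)   # hypothetical layer gradient
vals, mask = sparsify(g)
g_restored = densify(vals, mask, g.shape)          # dense tensor the aggregator works with
```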

23 pages, 106560 KB  
Article
RLUNet: Overexposure-Content-Recovery-Based Single HDR Image Reconstruction with the Imaging Pipeline Principle
by Yiru Zheng, Wei Wang, Xiao Wang and Xin Yuan
Appl. Sci. 2024, 14(23), 11289; https://doi.org/10.3390/app142311289 - 3 Dec 2024
Cited by 1 | Viewed by 2829
Abstract
With the popularity of High Dynamic Range (HDR) display technology, consumer demand for HDR images is increasing. Since HDR cameras are expensive, reconstructing HDR images from traditional Low Dynamic Range (LDR) images is crucial. However, existing HDR image reconstruction algorithms often fail to recover fine details and do not adequately address the fundamental principles of the LDR imaging pipeline. To overcome these limitations, the Reversing Lossy UNet (RLUNet) is proposed, aiming to effectively balance dynamic range expansion and the recovery of overexposed areas through a deeper understanding of LDR imaging pipeline principles. The RLUNet model comprises the Reverse Lossy Network, which is designed according to the LDR–HDR framework and focuses on reconstructing HDR images by recovering overexposed regions, dequantizing, linearizing the mapping, and suppressing compression artifacts. This framework, grounded in the principles of the LDR imaging pipeline, is designed to reverse the operations involved in lossy image formation. Furthermore, the integration of the Texture Filling Module (TFM) block with the Recovery of Overexposed Regions (ROR) module in the RLUNet model enhances the visual quality and texture detail of the overexposed areas in the reconstructed HDR image. Experiments demonstrate that the proposed RLUNet model outperforms various state-of-the-art methods on different test sets. Full article
(This article belongs to the Special Issue Applications in Computer Vision and Image Processing)
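
A toy illustration of the "dequantize and linearize" steps of an inverse LDR pipeline; RLUNet learns these inversions jointly with overexposure recovery, so the fixed gamma curve and 8-bit model below are simplifying assumptions, not the paper's method.

```python
import numpy as np

def ldr_to_linear(ldr_u8, gamma=2.2):
    """Undo the final stages of a simple LDR imaging pipeline."""
    dequantized = ldr_u8.astype(np.float32) / 255.0   # undo 8-bit quantization to [0, 1]
    return np.power(dequantized, gamma)               # undo an assumed gamma-style camera response

ldr = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in LDR image
linear = ldr_to_linear(ldr)                                         # approximate scene-linear radiance
```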

28 pages, 1121 KB  
Article
Comparing Analytic and Numerical Studies of Tensor Perturbations in Loop Quantum Cosmology
by Guillermo A. Mena Marugán, Antonio Vicente-Becerril and Jesús Yébana Carrilero
Universe 2024, 10(9), 365; https://doi.org/10.3390/universe10090365 - 11 Sep 2024
Cited by 5 | Viewed by 1840
Abstract
We investigate the implications of different quantization approaches in Loop Quantum Cosmology for the primordial power spectrum of tensor modes. Specifically, we consider the hybrid and dressed metric approaches to derive the effective mass that governs the evolution of the tensor modes. Our study comprehensively examines the two resulting effective masses and how to estimate them in order to obtain approximated analytic solutions to the tensor perturbation equations. Since Loop Quantum Cosmology incorporates preinflationary effects in the dynamics of the perturbations, we do not have at our disposal a standard choice of privileged vacuum, like the Bunch–Davies state in quasi-de Sitter inflation. We then select the vacuum state by a recently proposed criterion which removes unwanted oscillations in the power spectrum and guarantees an asymptotic diagonalization of the Hamiltonian in the ultraviolet. This vacuum is usually called the NO-AHD (from the initials of Non-Oscillating with Asymptotic Hamiltonian Diagonalization) vacuum. Consequently, we compute the power spectrum by using our analytic approximations and by introducing a suitable numerical procedure, adopting in both cases an NO-AHD vacuum. With this information, we compare the different spectra obtained from the hybrid and the dressed metric approaches, as well as from the analytic and numerical procedures. In particular, this proves the remarkable accuracy of our approximations. Full article
