Search Results (79)

Search Parameters:
Keywords = Planck’s law

17 pages, 386 KiB  
Article
A Horizon-as-Apparatus Model That Reproduces Black Hole Thermodynamics
by Daegene Song
Entropy 2025, 27(8), 859; https://doi.org/10.3390/e27080859 - 14 Aug 2025
Viewed by 248
Abstract
We present a measurement-driven model in which the black hole horizon functions as a classical apparatus, with Planck-scale patches acting as detectors for quantum field modes. This approach reproduces the Bekenstein–Hawking area law S_BH = A/(4 ℓ_P²) and provides a concrete statistical interpretation of the 1/4 factor, while adhering to established principles rather than deriving the entropy anew from first principles. Each patch generates a thermal ensemble (∼0.25 nat per mode), and summing over area-scaling patches yields the total entropy. Quantum simulations incorporating a realistic Hawking spectrum produce S_k = 0.257 nat (3% above 0.25 nat), and we outline testable predictions for analogue systems. Our main contribution is the horizon-as-apparatus mechanism and its information-theoretic bookkeeping. Full article
(This article belongs to the Special Issue Coarse and Fine-Grained Aspects of Gravitational Entropy)
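As a reading aid, the bookkeeping claimed in the abstract can be restated in one line: if each Planck-area patch of the horizon contributes roughly 0.25 nat, summing over the roughly A/ℓ_P² patches reproduces the area law. This is an illustrative restatement under that assumption, not the paper's derivation.

```latex
% Illustrative bookkeeping (assumption: one detected mode per Planck-area patch)
\[
S_{\mathrm{BH}} = \frac{A}{4\,\ell_P^{2}}\ \text{nat},
\qquad
N_{\text{patches}} \approx \frac{A}{\ell_P^{2}},
\qquad
S \approx N_{\text{patches}} \times 0.25\ \text{nat} = \frac{A}{4\,\ell_P^{2}}\ \text{nat}.
\]
```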

11 pages, 2547 KiB  
Article
Simultaneous Remote Non-Invasive Blood Glucose and Lactate Measurements by Mid-Infrared Passive Spectroscopic Imaging
by Ruka Kobashi, Daichi Anabuki, Hibiki Yano, Yuto Mukaihara, Akira Nishiyama, Kenji Wada, Akiko Nishimura and Ichiro Ishimaru
Sensors 2025, 25(15), 4537; https://doi.org/10.3390/s25154537 - 22 Jul 2025
Viewed by 447
Abstract
Mid-infrared passive spectroscopic imaging is a novel non-invasive and remote sensing method based on Planck’s law. It enables the acquisition of component-specific information from the human body by measuring naturally emitted thermal radiation in the mid-infrared region. Unlike active methods that require an external light source, our passive approach harnesses the body’s own emission, thereby enabling safe, long-term monitoring. In this study, we successfully demonstrated the simultaneous, non-invasive measurements of blood glucose and lactate levels of the human body using this method. The measurements, conducted over approximately 80 min, provided emittance data derived from mid-infrared passive spectroscopy that showed a temporal correlation with values obtained using conventional blood collection sensors. Furthermore, to evaluate localized metabolic changes, we performed k-means clustering analysis of the spectral data obtained from the upper arm. This enabled visualization of time-dependent lactate responses with spatial resolution. These results demonstrate the feasibility of multi-component monitoring without physical contact or biological sampling. The proposed technique holds promise for translation to medical diagnostics, continuous health monitoring, and sports medicine, in addition to facilitating the development of next-generation healthcare technologies. Full article
(This article belongs to the Special Issue Feature Papers in Sensing and Imaging 2025)
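For the clustering step mentioned in the abstract, a minimal sketch of k-means applied to per-pixel spectra is given below; the cube dimensions, the number of clusters, and the synthetic data are placeholders and do not reproduce the authors' processing pipeline.

```python
# Minimal sketch: k-means clustering of per-pixel mid-infrared spectra.
# Assumptions (not from the paper): a (height x width x n_wavelengths) cube,
# 4 clusters, standard-scaled spectral channels.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

h, w, n_wl = 64, 64, 128                      # hypothetical cube dimensions
cube = np.random.rand(h, w, n_wl)             # stand-in for measured emittance spectra

X = cube.reshape(-1, n_wl)                    # one row per pixel
X = StandardScaler().fit_transform(X)         # normalize each spectral channel

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
cluster_map = labels.reshape(h, w)            # spatial map of spectral clusters
print(cluster_map.shape, np.bincount(labels))
```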

32 pages, 735 KiB  
Article
Dynamic Balance: A Thermodynamic Principle for the Emergence of the Golden Ratio in Open Non-Equilibrium Steady States
by Alejandro Ruiz
Entropy 2025, 27(7), 745; https://doi.org/10.3390/e27070745 - 11 Jul 2025
Viewed by 656
Abstract
We develop a symmetry-based variational theory showing that the coarse-grained balance of work inflow to heat outflow in a driven, dissipative system relaxes to the golden ratio. Two order-2 Möbius transformations—a self-dual flip and a self-similar shift—generate a discrete non-abelian subgroup of PGL(2, ℚ(√5)). Requiring any smooth, strictly convex Lyapunov functional to be invariant under both maps enforces a single non-equilibrium fixed point: the golden mean. We confirm this result by (i) a gradient-flow partial differential equation, (ii) a birth–death Markov chain whose continuum limit is Fokker–Planck, (iii) a Martin–Siggia–Rose field theory, and (iv) exact Ward identities that protect the fixed point against noise. Microscopic kinetics merely set the approach rate; three parameter-free invariants emerge: a 62%:38% split between entropy production and useful power, an RG-invariant diffusion coefficient linking relaxation time and correlation length, D_α = ξ^z/τ, and a ϑ = 45° eigen-angle that maps to the golden logarithmic spiral. The same dual symmetry underlies scaling laws in rotating turbulence, plant phyllotaxis, cortical avalanches, quantum critical metals, and even de Sitter cosmology, providing a falsifiable, unifying principle for pattern formation far from equilibrium. Full article
(This article belongs to the Section Entropy and Biology)
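As a purely illustrative aside (not the paper's Möbius pair), the way a self-similar map can pin the golden mean is already visible in the simplest example: the positive fixed point of x ↦ 1 + 1/x.

```latex
% Illustrative only: the positive fixed point of the self-similar map x -> 1 + 1/x
\[
x = 1 + \frac{1}{x}
\;\Longrightarrow\;
x^{2} - x - 1 = 0
\;\Longrightarrow\;
x = \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618 .
\]
```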

19 pages, 1419 KiB  
Article
Revisiting the Relationship Between the Scale Factor (a(t)) and Cosmic Time (t) Using Numerical Analysis
by Artur Chudzik
Mathematics 2025, 13(14), 2233; https://doi.org/10.3390/math13142233 - 9 Jul 2025
Viewed by 654
Abstract
Background: Current cosmological fits typically assume a direct relation between cosmic time (t) and the scale factor (a(t)), yet this ansatz remains largely untested across diverse observations. Objectives: We (i) test whether a single power-law scaling (a(t) ∝ t^α) can reproduce late- and early-time cosmological data and (ii) explore whether a dynamically evolving α(t), modeled as a scalar–tensor field, naturally induces directional asymmetry in cosmic evolution. Methods: We fit a constant-α model to four independent datasets: 1701 Pantheon+SH0ES supernovae, 162 gamma-ray bursts, 32 cosmic chronometers, and the Planck 2018 TT spectrum (2507 points). The CMB angular spectrum is mapped onto a logarithmic distance-like scale (μ = log₁₀ D), allowing for unified likelihood analysis. Each dataset yields slightly different preferred values for H₀ and α; therefore, we also perform a global combined fit. For scalar–tensor dynamics, we integrate α(t) under three potentials—quadratic, cosine, and parity breaking (α³ sin α)—and quantify directionality via forward/backward evolution and Lyapunov exponents. Results: (1) The constant-α model achieves good fits across all datasets. In combined analysis, it yields H₀ ≈ 70 km s⁻¹ Mpc⁻¹ and α ≈ 1.06, outperforming ΛCDM globally (ΔAIC ≈ 401254), though ΛCDM remains favored for some low-redshift chronometer data. High-redshift GRB and CMB data drive the improved fit. Numerical likelihood evaluations are approximately three times faster than for ΛCDM. (2) Dynamical α(t) models exhibit time-directional behavior: under asymmetric potentials, forward evolution displays finite Lyapunov exponents (λ_L ∼ 10⁻³), while backward trajectories remain confined (λ_L < 0), realizing classical arrow-of-time emergence without entropy or quantum input. Limitations: This study addresses only homogeneous background evolution; perturbations and physical derivations of potentials remain open questions. Conclusions: The time-scaling approach offers a computationally efficient control scenario in cosmological model testing. Scalar–tensor extensions naturally introduce classical time asymmetry that is numerically accessible and observationally testable within current datasets. Code and full data are available. Full article
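To make the constant-α background model concrete, a minimal sketch of the distance modulus it predicts is shown below, assuming spatial flatness so that H(z) = H₀(1+z)^(1/α); the integration scheme and parameter values are illustrative and do not reproduce the paper's likelihood pipeline or datasets.

```python
# Minimal sketch: distance modulus for a power-law scale factor a(t) ∝ t^alpha.
# Assumptions (mine, not the paper's pipeline): spatial flatness, so that
# H(z) = H0 * (1+z)**(1/alpha), and a simple trapezoidal integration.
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def distance_modulus(z, alpha=1.06, H0=70.0, n_grid=2048):
    """mu(z) for a flat universe with a(t) ∝ t^alpha (H0 in km/s/Mpc)."""
    zp = np.linspace(0.0, z, n_grid)
    integrand = (1.0 + zp) ** (-1.0 / alpha)                  # c/H(z') up to c/H0
    trapezoid = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zp))
    d_c = (C_KM_S / H0) * trapezoid                           # comoving distance [Mpc]
    d_l = (1.0 + z) * d_c                                     # luminosity distance [Mpc]
    return 5.0 * np.log10(d_l) + 25.0                         # distance modulus

for z in (0.1, 0.5, 1.0):
    print(z, round(distance_modulus(z), 3))
```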

30 pages, 10022 KiB  
Article
A Camera Calibration Method for Temperature Measurements of Incandescent Objects Based on Quantum Efficiency Estimation
by Vittorio Sala, Ambra Vandone, Michele Banfi, Federico Mazzucato, Stefano Baraldo and Anna Valente
Sensors 2025, 25(10), 3094; https://doi.org/10.3390/s25103094 - 14 May 2025
Viewed by 740
Abstract
High-temperature thermal images enable monitoring and controlling processes in metal, semiconductor, and ceramic manufacturing, and also support monitoring volcanic activity and fighting wildfires. Infrared thermal cameras require knowledge of the emissivity coefficient, while multispectral pyrometers provide fast and accurate temperature measurements with limited spatial resolution. Bayer-pattern cameras offer a compromise by capturing multiple spectral bands with high spatial resolution. However, temperature estimation from color remains challenging due to spectral overlaps among the color filters in the Bayer pattern, and a widely accepted calibration method is still missing. In this paper, the quantum efficiency of an imaging system including the camera sensor, lens, and filters is inferred from a sequence of images acquired by looking at a black-body source between 700 °C and 1100 °C. The physical model of the camera, based on Planck's law and the optimized quantum efficiency, allows the calculation of the Planckian locus in the color space of the camera. A regression neural network, trained on a synthetic dataset representing the Planckian locus, predicts temperature pixel by pixel in the 700 °C to 3500 °C range from live images. Experiments performed with a color camera, a multispectral camera, and a furnace for heat treatment of metals as ground truth show that our calibration procedure leads to temperature predictions with an accuracy and precision of a few tens of degrees Celsius within the calibration temperature range. Tests on a temperature-calibrated halogen bulb demonstrate good generalization to a wider temperature range while remaining robust to noise. Full article
(This article belongs to the Section Sensing and Imaging)
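A rough sketch of the underlying idea, pushing Planck's law through a camera's quantum efficiency to trace a Planckian locus in the camera's color space, is given below; the Gaussian quantum-efficiency curves are hypothetical placeholders, not the calibrated curves estimated in the paper.

```python
# Minimal sketch: Planck spectral radiance weighted by assumed camera
# quantum-efficiency curves to trace a Planckian locus in chromaticity space.
# The Gaussian QE curves and band centers are placeholders (assumptions).
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_radiance(lam_m, T):
    """Spectral radiance B(lambda, T), W·m^-3·sr^-1 (lambda in metres)."""
    return (2 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * KB * T))

lam = np.linspace(380e-9, 780e-9, 400)           # visible band

def gaussian_qe(center_nm, width_nm=50.0):        # hypothetical QE curve
    return np.exp(-0.5 * ((lam * 1e9 - center_nm) / width_nm) ** 2)

qe = {"r": gaussian_qe(610), "g": gaussian_qe(540), "b": gaussian_qe(465)}

def camera_chromaticity(T):
    signals = {ch: np.sum(planck_radiance(lam, T) * q) for ch, q in qe.items()}
    total = sum(signals.values())
    return signals["r"] / total, signals["g"] / total   # (r, g) chromaticity

for T in (1000, 2000, 3000):                      # points along the locus
    print(T, tuple(round(v, 3) for v in camera_chromaticity(T)))
```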

34 pages, 397 KiB  
Article
Hilbert Bundles and Holographic Space–Time Models
by Tom Banks
Astronomy 2025, 4(2), 7; https://doi.org/10.3390/astronomy4020007 - 22 Apr 2025
Viewed by 799
Abstract
We reformulate holographic space–time models in terms of Hilbert bundles over the space of time-like geodesics in a Lorentzian manifold. This reformulation resolves the issue of the action of non-compact isometry groups on finite-dimensional Hilbert spaces. Following Jacobson, I view the background geometry as a hydrodynamic flow, whose connection to an underlying quantum system follows from the Bekenstein–Hawking relation between area and entropy, generalized to arbitrary causal diamonds. The time-like geodesics are equivalent to nested sequences of causal diamonds, and the area of the holoscreen (the maximal (d−2)-volume (“area”) leaf of a null foliation of the diamond boundary; I use the term area to refer to its volume) encodes the entropy of a certain density matrix on a finite-dimensional Hilbert space. I review arguments that the modular Hamiltonian of a diamond is a cutoff version of the Virasoro generator L_0 of a 1+1-dimensional CFT with large central charge, living on an interval in the longitudinal coordinate on the diamond boundary. The cutoff is chosen so that the von Neumann entropy is ln D, up to subleading corrections, in the limit of a large-dimension diamond Hilbert space. I also connect those arguments to the derivation of the ’t Hooft commutation relations for horizon fluctuations. I present a tentative connection between the ’t Hooft relations and U(1) currents in the CFTs on the past and future diamond boundaries. The ’t Hooft relations are related to the Schwinger term in the commutator of the vector and axial currents. The paper can be read as evidence that the near-horizon dynamics for causal diamonds much larger than the Planck scale is equivalent to a topological field theory of the ’t Hooft commutation relations plus small fluctuations in the transverse geometry. Connes’ demonstration that Riemannian geometry is encoded in the Dirac operator leads to a completely finite theory of transverse geometry fluctuations, in which the variables are fermionic generators of a superalgebra, namely the expansion coefficients of the sections of the spinor bundle in Dirac eigenfunctions. A finite cutoff on the Dirac spectrum gives rise to the area law for entropy and makes the geometry both “fuzzy” and quantum. Following the analysis of Carlip and Solodukhin, I model the expansion coefficients as two-dimensional fermionic fields. I argue that the local excitations in the interior of a diamond are constrained states where the spinor variables vanish in regions of small area on the holoscreen. This leads to an argument that quantum gravity in asymptotically flat space must be exactly supersymmetric. Full article

16 pages, 4328 KiB  
Article
Laser Annealing of Si Wafers Based on a Pulsed CO2 Laser
by Ziming Wang, Guochang Wang, Mingkun Liu, Sicheng Li, Zhenzhen Xie, Liemao Hu, Hui Li, Fangjin Ning, Wanli Zhao, Changjun Ke, Zhiyong Li and Rongqing Tan
Photonics 2025, 12(4), 359; https://doi.org/10.3390/photonics12040359 - 10 Apr 2025
Viewed by 1123
Abstract
Laser annealing plays a significant role in the fabrication of scaled-down semiconductor devices by activating dopant ions and rearranging silicon atoms in ion-implanted silicon wafers, thereby improving material properties. Precise temperature control is crucial in wafer annealing, particularly for repeated processes where repeatability affects uniformity. In this study, we employ a three-dimensional time-dependent thermal simulation model to numerically analyze multiple static laser annealing processes based on a CO2 laser with a center wavelength of 9.3 μm and a pulse repetition rate of 10 kHz. The heat transfer equation is solved using a multiphysics coupling approach to accurately simulate the effects of different numbers of CO2 laser pulses on wafer temperature rise and repeatability. Additionally, a pyrometer is used to collect the thermal radiation from the wafer surface, and the radiation intensity is converted to temperature via Planck's law for real-time monitoring. Post-processing is performed to fit the measured temperature and the actual temperature to a linear relationship, aiding in obtaining the actual temperature under small beam spots. According to the simulation conditions, a wafer annealing device using a CO2 laser as the light source was independently built for verification, and a stable and uniform annealing effect was realized. Full article
(This article belongs to the Section Lasers, Light Sources and Sensors)
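One common way to convert a measured radiance to temperature via Planck's law is the single-wavelength brightness-temperature inversion sketched below; the chosen wavelength, unit emissivity, and synthetic radiance are illustrative assumptions rather than the paper's pyrometer model.

```python
# Minimal sketch: invert Planck's law at a single wavelength to obtain a
# brightness temperature from a measured spectral radiance.
# The 1.55 µm wavelength and unit emissivity are assumptions for illustration.
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_radiance(lam_m, T):
    """B(lambda, T) in W·m^-3·sr^-1."""
    return (2 * H * C**2 / lam_m**5) / math.expm1(H * C / (lam_m * KB * T))

def brightness_temperature(radiance, lam_m):
    """Solve B(lambda, T) = radiance for T (assumes emissivity = 1)."""
    c1 = 2 * H * C**2 / lam_m**5
    c2 = H * C / (lam_m * KB)
    return c2 / math.log1p(c1 / radiance)

lam = 1.55e-6                                     # assumed detection wavelength
L = planck_radiance(lam, 1300.0)                  # synthetic "measured" radiance
print(round(brightness_temperature(L, lam), 2))   # recovers ≈ 1300 K
```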

12 pages, 10013 KiB  
Article
Transient Thermal Energy Harvesting at a Single Temperature Using Nonlinearity
by Tamzeed B. Amin, James M. Mangum, Md R. Kabir, Syed M. Rahman, Ashaduzzaman, Pradeep Kumar, Luis L. Bonilla and Paul M. Thibado
Entropy 2025, 27(4), 374; https://doi.org/10.3390/e27040374 - 31 Mar 2025
Viewed by 407
Abstract
The authors present an in-depth theoretical study of two nonlinear circuits capable of transient thermal energy harvesting at one temperature. The first circuit has a storage capacitor and diode connected in series. The second circuit has three storage capacitors and two diodes arranged for full-wave rectification. The authors solve both Ito–Langevin and Fokker–Planck equations for both circuits using a large parameter space including capacitance values and diode quality. Surprisingly, using diodes one can harvest thermal energy at a single temperature by charging capacitors. However, this is a transient phenomenon. In equilibrium, the capacitor charge is zero, and this solution alone satisfies the second law of thermodynamics. The authors found that higher-quality diodes provide more stored charge and longer lifetimes. Harvesting thermal energy from the ambient environment using diode nonlinearity requires capacitors to be charged but then disconnected from the circuit before they have time to discharge. Full article
(This article belongs to the Section Thermodynamics)
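The kind of Ito–Langevin integration involved can be illustrated with the toy Euler–Maruyama loop below; the Shockley diode law, the Johnson-like noise term, and every parameter value are placeholders chosen for illustration and do not reproduce the circuits analyzed in the paper.

```python
# Toy sketch: Euler–Maruyama integration of a diode–capacitor Langevin equation.
# The diode law, the additive Johnson-like noise term, and all parameter values
# are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

KB, T = 1.380649e-23, 300.0          # Boltzmann constant, temperature [K]
C_cap = 1e-12                        # storage capacitance [F] (assumed)
I_s, n, V_T = 1e-12, 1.5, 0.02585    # diode saturation current, ideality, kT/q
R_eff = 1e6                          # effective noise resistance [ohm] (assumed)

def diode_current(v):
    """Shockley diode law (illustrative)."""
    return I_s * np.expm1(v / (n * V_T))

dt, n_steps = 1e-9, 100_000
v = 0.0
trace = np.empty(n_steps)
for k in range(n_steps):
    drift = -diode_current(v) / C_cap                        # deterministic discharge
    kick = np.sqrt(2 * KB * T / (R_eff * C_cap**2) * dt)     # Johnson-like noise (assumed)
    v += drift * dt + kick * rng.standard_normal()
    trace[k] = v

print("mean V:", trace.mean(), "  RMS V:", np.sqrt((trace**2).mean()))
```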

15 pages, 4840 KiB  
Article
Detailed Modeling of Surface-Plasmon Resonance Spectrometer Response for Accurate Correction
by Ricardo David Araguillin-López, Angel Dickerson Méndez-Cevallos and César Costa-Vera
Sensors 2025, 25(3), 894; https://doi.org/10.3390/s25030894 - 1 Feb 2025
Cited by 2 | Viewed by 1762
Abstract
This work identifies and models the inline devices in an experimental surface-plasmon resonance spectroscopy setup to determine the system’s transfer function. This allows for the comparison of theoretical and experimental responses and the analysis of the dynamics of the components of an analyte placed on the sensor at the nanometer scale. The transfer functions of individual components, including the light source, polarizers, spectrometer, optical fibers, and the SPR sensor, were determined experimentally and theoretically. The theoretical model employed Planck’s law for the light source, manufacturer specifications for some components, and experimental characterization for others, such as the polarizers and optical fibers. The SPR sensor was modeled using characteristic matrix theory, incorporating the optical constants of the prism, gold film, chromium adhesive layer, and analyte. The combined transfer functions created a comprehensive model of the entire experimental system. This model successfully reproduced the experimental SPR spectrum with a similarity greater than 95%. The system’s operational range was also extended, constrained by the signal-to-noise ratio at the spectrum’s edges. The detailed model allows for the accurate correction of the measured spectra, which will be essential for the further analysis of nanosuspensions and molecules dissolved in liquids. Full article
(This article belongs to the Section Optical Sensors)

32 pages, 3170 KiB  
Article
Inequality in the Distribution of Wealth and Income as a Natural Consequence of the Equal Opportunity of All Members in the Economic System Represented by a Scale-Free Network
by John G. Ingersoll
Economies 2024, 12(9), 232; https://doi.org/10.3390/economies12090232 - 29 Aug 2024
Cited by 1 | Viewed by 2257
Abstract
The purpose of this work is to examine the nature of the inequality in the distribution of wealth and income in an economic system that has been historically observed and empirically described by the Pareto law. This inequality is presumed to be the result of unequal opportunity by its members. An analytical model of the economic system, consisting of a large number of actors all having equal access to its total wealth (or income), has been developed; it is formally represented by a scale-free network comprised of nodes (actors) and links (states of wealth or income). The dynamic evolution of the complex network can be mapped in turn, as is known, into a system of quantum particles (links) distributed among various energy levels (nodes) in thermodynamic equilibrium. The distribution of quantum particles (photons) at different energy levels in the physical system is then derived based on statistical thermodynamics with the attainment of maximal entropy for the system to be in a dynamic equilibrium. The resulting Planck-type distribution of the physical system mapped into a scale-free network leads naturally to the Pareto law distribution of the economic system. The conclusions of the scale-free complex network model leading to the analytical derivation of the empirical Pareto law are multifold. First, any complex economic system behaves akin to a scale-free complex network. Second, equal access or opportunity leads to unequal outcomes. Third, the optimal value for the Pareto index is obtained, which ensures the optimal, albeit unequal, outcome of wealth and income distribution. Fourth, the optimal value for the Gini coefficient can then be calculated and compared to the empirical values of that coefficient for wealth and income to ascertain how close an economic system is to its optimal distribution of income and wealth among its members. Fifth, in an economic system with equal opportunity for all its members there should be no difference between the resulting income and wealth distributions. Examination of the wealth and income distributions described by the Gini coefficient of national economies suggests that income and particularly wealth are far from their optimal values. We conclude that equality of opportunity should be the fundamental guiding principle of any economic system for the optimal distribution of wealth and income. The practical application of this conclusion is that societies ought to shift focus from policies such as taxation and payment transfers purporting to produce equal outcomes for all, a goal that is unattainable and wasteful, to policies advancing, among others, education, health care, and affordable housing for all, as well as the re-evaluation of rules and institutions such that all members of the economic system have equal opportunity for the optimal utilization of resources and the distribution of wealth and income. Future research efforts should develop the scale-free complex network model of the economy as a complement to the current standard models. Full article
(This article belongs to the Special Issue Innovation, Reallocation and Economy Growth)

11 pages, 387 KiB  
Article
On the Speed of Light as a Key Element in the Structure of Quantum Mechanics
by Tomer Shushi
Foundations 2024, 4(3), 411-421; https://doi.org/10.3390/foundations4030026 - 13 Aug 2024
Viewed by 1447
Abstract
We follow the assumption that relativistic causality is a key element in the structure of quantum mechanics and integrate the speed of light, c, into quantum mechanics through the postulate that the (reduced) Planck constant is a function of c with a leading order of the form Λ/c^p for a constant Λ > 0 and p > 1. We show how the limit c → ∞ implies classicality in quantum mechanics and explain why p has to be larger than 1. As the limit c → ∞ breaks down both relativity theory and quantum mechanics, as followed by the proposed model, they can then be understood through similar conceptual physical laws. We further show how a position-dependent speed of light gives rise to an effective curved space in quantum systems and show that a stronger gravitational field implies higher quantum uncertainties, following from the varying c. We then discuss possible ways to find experimental evidence for the proposed model using set-ups to test varying-speed-of-light models and examine analogies of the model based on electrons in semiconductor heterostructures. Full article
(This article belongs to the Section Physical Sciences)

26 pages, 12313 KiB  
Article
Simulation Analysis on the Characteristics of Aerosol Particles to Inhibit the Infrared Radiation of Exhaust Plumes
by Wei Li, Yurou Wang, Lei Zhang, Baohai Gao and Mingjian He
Materials 2024, 17(14), 3505; https://doi.org/10.3390/ma17143505 - 15 Jul 2024
Cited by 3 | Viewed by 1346
Abstract
Aerosol infrared stealth technology is a highly effective method to reduce the intensity of infrared radiation by releasing aerosol particles around the hot exhaust plume. This paper uses a Computational Fluid Dynamics (CFD) two-phase flow model to simulate the exhaust plume fields of three kinds of engine nozzles containing aerosol particles. The Planck-weighted narrow spectral band gas model and the Reverse Monte Carlo method are used for infrared radiation transfer calculations to analyze the influencing factors and laws for the suppression of the infrared radiation properties of exhaust plumes by four typical aerosol particles. The simulation calculation results show that the radiation suppression efficiency of aerosol particles on the exhaust plume reaches its maximum value at a detection angle (ϕ) of 0° and decreases with increasing ϕ, reaching its minimum value at 90°. Reducing the aerosol particle size and increasing the aerosol mass flux can enhance the suppression effect. In the exhaust plume studied in this paper, the radiation suppression effect is best when the particle size is 1 μm and the mass flux is 0.08 kg/s. In addition, the inhibition of aerosol particles varies among different materials, with graphite having the best inhibition effect, followed by H2O, MgO, and SiO2. Solid particles will increase the radiation intensity and change the spectral radiation characteristics of the exhaust plume at detection angles close to the vertical nozzle axis due to the scattering effect. Finally, this paper analyzed the suppression effects of three standard nozzle configurations under the same aerosol particle condition and found that the S-bend nozzle provides better suppression. Full article

9 pages, 2884 KiB  
Comment
Comment on Yu et al. Land Surface Temperature Retrieval from Landsat 8 TIRS—Comparison between Radiative Transfer Equation-Based Method, Split Window Algorithm and Single Channel Method. Remote Sens. 2014, 6, 9829–9852
by Almustafa Abd Elkader Ayek and Bilel Zerouali
Remote Sens. 2024, 16(14), 2514; https://doi.org/10.3390/rs16142514 - 9 Jul 2024
Viewed by 2026
Abstract
Accurate land surface temperature (LST) retrieval from satellite data is pivotal in environmental monitoring and scientific research. This study addresses the impact of variability in the effective wavelengths used for LST retrieval from the Thermal Infrared Sensor (TIRS) data of Landsat 8. We conduct a detailed analysis comparing the effective wavelengths reported by Yu et al. (2014) and those derived from data provided by the USGS. Our analysis reveals significant variability in the effective wavelengths for bands 10 and 11 of Landsat 8. By applying Planck’s Law and utilizing the K1 and K2 coefficients available in the metadata of Landsat 8 products, we derive the effective wavelengths for bands 10 and 11. We also rederive the effective wavelength by integrating the spectral response function of the TIRS1 sensor. Our findings indicate that the effective wavelength for band 10 is 10.814 μm, aligning with the USGS data, while the effective wavelength for band 11 is 12.013 μm. We discuss the implications of these corrected effective wavelengths on the accuracy of LST retrieval algorithms, particularly the single channel algorithm (SC) and the radiative transfer equation (RT) employed by Yu et al. The importance of using precise effective wavelengths in satellite-based temperature retrieval is emphasized, to ensure the reliability and consistency of results. This analysis underscores the critical role of accurate spectral calibration parameters in remote sensing studies and provides valuable insights in the field of land surface temperature retrieval from Landsat 8 TIRS data. Full article
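The single-wavelength reading of the calibration constants can be sketched as follows: if K1 and K2 are taken to come from Planck's law evaluated at one effective wavelength, that wavelength can be backed out from either constant. The numeric values below are placeholders standing in for values read from a Landsat 8 MTL metadata file, and this simple inversion is not a substitute for integrating the TIRS spectral response function as done in the comment.

```python
# Minimal sketch: back out an effective wavelength from the K1/K2 thermal
# constants, assuming they come from Planck's law at a single wavelength:
#   K2 = h*c / (lambda * kB)      ->  lambda = h*c / (kB * K2)
#   K1 = 2*h*c**2 / lambda**5     ->  lambda = (2*h*c**2 / K1) ** (1/5)
# K1 must be converted from W/(m^2·sr·µm) to W/(m^2·sr·m).
H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

K1_per_um = 774.89   # placeholder: read the real value from the MTL metadata file
K2_kelvin = 1321.08  # placeholder: read the real value from the MTL metadata file

lam_from_k2 = H * C / (KB * K2_kelvin)                    # metres
lam_from_k1 = (2 * H * C**2 / (K1_per_um * 1e6)) ** 0.2   # metres

print("lambda from K2: %.3f um" % (lam_from_k2 * 1e6))
print("lambda from K1: %.3f um" % (lam_from_k1 * 1e6))
```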

11 pages, 4599 KiB  
Communication
Emission Spectroscopy-Based Sensor System to Correlate the In-Cylinder Combustion Temperature of a Diesel Engine to NOx Emissions
by Jürgen Wultschner, Ingo Schmitz, Stephan Révidat, Johannes Ullrich and Thomas Seeger
Sensors 2024, 24(8), 2459; https://doi.org/10.3390/s24082459 - 11 Apr 2024
Viewed by 1438
Abstract
Due to the rising importance of reducing pollutants produced by conventional energy technologies, knowledge of pollutant-forming processes during combustion is of great interest. In this study, the in-cylinder temperature of a near-series diesel engine is examined with a minimally invasive emission spectroscopy sensor. The soot, nearly a black-body radiator, emits light that is spectrally detected and evaluated with a modified function of Planck's law. The results show a good correlation between the determined temperatures and the NOx concentration measured in the exhaust gas of the engine across a variety of engine operating points. A standard deviation between 25 K and 49 K was obtained for the in-cylinder temperature measurements. Full article
(This article belongs to the Special Issue Optical Spectroscopy for Sensing, Monitoring and Analysis)
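The kind of spectral evaluation described above, comparing detected soot emission against a Planck-type function, can be sketched as a gray-body least-squares fit; the wavelength window, constant-emissivity model, and synthetic data below are assumptions for illustration, not the modified Planck function used by the authors.

```python
# Minimal sketch: least-squares fit of a gray-body (scaled Planck) curve to a
# soot emission spectrum to estimate temperature. The wavelength range, the
# constant-emissivity model, and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def gray_body(lam_m, T, scale):
    """scale * Planck radiance; 'scale' absorbs emissivity and collection optics."""
    return scale * (2 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * KB * T))

lam = np.linspace(500e-9, 900e-9, 200)          # assumed detection window
true_T = 2200.0                                 # synthetic "combustion" temperature [K]
measured = gray_body(lam, true_T, 0.3)
measured *= 1 + 0.02 * np.random.default_rng(1).standard_normal(lam.size)

popt, _ = curve_fit(gray_body, lam, measured, p0=(2000.0, 0.1))
print("fitted T = %.1f K" % popt[0])
```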

12 pages, 271 KiB  
Article
Optimal Fourth-Order Methods for Multiple Zeros: Design, Convergence Analysis and Applications
by Sunil Kumar, Janak Raj Sharma and Lorentz Jäntschi
Axioms 2024, 13(3), 143; https://doi.org/10.3390/axioms13030143 - 23 Feb 2024
Cited by 2 | Viewed by 1590
Abstract
Nonlinear equations are frequently encountered in many areas of applied science and engineering, and they require efficient numerical methods to solve. To ensure quick and precise root approximation, this study presents derivative-free iterative methods for finding multiple zeros with an ideal fourth-order convergence rate. Furthermore, the study explores applications of the methods in both real-life and academic contexts. In particular, we examine the convergence of the methods by applying them to the problems, namely Van der Waals equation of state, Planck’s law of radiation, the Manning equation for isentropic supersonic flow and some academic problems. Numerical results reveal that the proposed derivative-free methods are more efficient and consistent than existing methods. Full article
(This article belongs to the Special Issue Applied Mathematics and Numerical Analysis: Theory and Applications)
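For context on the Planck's-law test problem commonly used in this root-finding literature: maximizing the spectral energy density of black-body radiation leads to the nonlinear equation x = 5(1 − e⁻ˣ), whose nonzero root fixes Wien's displacement constant. The sketch below solves it with a basic derivative-free Traub–Steffensen step; it is a generic second-order illustration, not the fourth-order multiple-zero schemes proposed in the paper.

```python
# Sketch: derivative-free Traub–Steffensen iteration on the classical Planck
# radiation-law test equation f(x) = exp(-x) + x/5 - 1 = 0, whose nonzero root
# x* ≈ 4.9651 gives Wien's displacement constant b = h*c / (x* * kB).
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def f(x):
    return math.exp(-x) + x / 5.0 - 1.0

def traub_steffensen(func, x0, beta=1.0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = func(x)
        if abs(fx) < tol:
            break
        w = x + beta * fx                        # derivative-free auxiliary point
        divided_diff = (func(w) - fx) / (w - x)  # replaces f'(x)
        x -= fx / divided_diff
    return x

x_star = traub_steffensen(f, x0=4.0)
print("root x* =", x_star)
print("Wien constant b =", H * C / (x_star * KB), "m·K")   # ≈ 2.898e-3
```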