Search Results (131)

Search Parameters:
Keywords = moment invariants

18 pages, 3979 KiB  
Article
Generation and Classification of Novel Segmented Control Charts (SCC) Based on Hu’s Invariant Moments and the K-Means Algorithm
by Roberto Baeza-Serrato
Appl. Sci. 2025, 15(15), 8550; https://doi.org/10.3390/app15158550 - 1 Aug 2025
Abstract
Control charts (CCs) are one of the most important techniques in statistical process control (SPC) used to monitor the behavior of critical variables. SPC is based on the averages of the samples taken. In this way, not every measurement is observed, and errors in measurements or out-of-control behaviors that are not shown graphically can be hidden. This research proposes a novel segmented control chart (SCC) that considers each measurement of the samples, expressed in matrix form. The vision system technique is used to segment measurements by shading and segmenting into binary values based on the control limits of SPC. Once the matrix is segmented, the seven main features of the matrix are extracted using the translation-, scale-, and rotation-invariant Hu moments of the segmented matrices. Finally, a grouping is made to classify the samples in clear and simple language as excellent, good, or regular using the k-means algorithm. The results visually display the total pattern behavior of the samples and their interpretation when they are classified intelligently. The proposal can be replicated in any production sector and strengthen the control of the sampling process.
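The feature-extraction stage this abstract describes (Hu's invariant moments computed on a segmented binary matrix, then clustered with k-means) can be illustrated with a minimal sketch. This is not the paper's implementation: the function and the sample matrices below are hypothetical, and only the first two of Hu's seven invariants are computed.

```python
import numpy as np

def hu_first_two(img):
    """First two of Hu's seven moment invariants, from normalized
    central moments of a 2-D array (translation- and scale-invariant)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):  # normalized central moment eta_pq
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# A binary "segmented" sample matrix and a translated copy of the same shape.
a = np.zeros((16, 16)); a[3:7, 2:9] = 1.0
b = np.zeros((16, 16)); b[8:12, 6:13] = 1.0
```

Because the invariants are built from central moments normalized by the zeroth moment, translating the pattern inside the matrix leaves them unchanged, which is what makes them usable as clustering features.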

33 pages, 4531 KiB  
Article
Development of the Theory of Additional Impact on the Deformation Zone from the Side of Rolling Rolls
by Valeriy Chigirinsky, Irina Volokitina, Abdrakhman Naizabekov, Sergey Lezhnev and Sergey Kuzmin
Symmetry 2025, 17(8), 1188; https://doi.org/10.3390/sym17081188 - 25 Jul 2025
Viewed by 131
Abstract
The model explicitly incorporates boundary conditions that account for the complex interplay between sections experiencing varying degrees of reduction. This interaction significantly influences the overall deformation behavior and force loading. The control effect is associated with boundary conditions determined by the unevenness of the compression, which have certain quantitative and qualitative characteristics. These include additional loading, which is less than the main load that drives the process of plastic deformation, and the ratio of control loads from the entrance and exit of the deformation site. According to this criterion, it follows from experimental data that the controlling effect on the plastic deformation site occurs at a ratio of additional to main loading in the range of 0.2–0.8. The next criterion is the coefficient of support, which determines the area of asymmetry of the force load and lies in the range of 2.00–4.155. Furthermore, the criterion of the regulating force ratio at the boundaries of the deformation center forming a longitudinal plastic shear is within the limits of 2.2–2.5 for the forces and 1.3–1.4 for the moments of these forces. In this state, stresses and deformations of the plastic medium are able to realize the effects of plastic shaping. The force effect decreases with an increase in the unevenness of the deformation. This is due to a change in the height-wise longitudinal interaction of the disparate sections of the strip. A new quality of loading appears: longitudinal plastic shear along the deformation site. The unbalanced additional force action at the entrance of the deformation source is balanced by the force source of deformation, determined by the appearance of a functional shift in the model of the stress state of the metal. The developed theory, using the generalized method of an argument of functions of a complex variable, allows us to characterize the functional shift in the deformation site using invariant Cauchy–Riemann relations and Laplace differential equations. Furthermore, the model allows for the investigation of material properties such as the yield strength and strain hardening, which influence the size and characteristics of the identified limit state zone. Future research will focus on extending the model to incorporate more complex material behaviors, including viscoelastic effects, and to account for dynamic loading conditions, more accurately reflecting real-world rolling processes. The detailed understanding gained from this model offers significant potential for optimizing mill roll designs and processes for enhanced efficiency and reduced energy consumption.
(This article belongs to the Special Issue Symmetry in Finite Element Modeling and Mechanics)

17 pages, 939 KiB  
Article
Whole-Body 3D Pose Estimation Based on Body Mass Distribution and Center of Gravity Constraints
by Fan Wei, Guanghua Xu, Qingqiang Wu, Penglin Qin, Leijun Pan and Yihua Zhao
Sensors 2025, 25(13), 3944; https://doi.org/10.3390/s25133944 - 25 Jun 2025
Viewed by 520
Abstract
Estimating the 3D pose of a human body from monocular images is crucial for computer vision applications, but the technique remains challenging due to depth ambiguity and self-occlusion. Traditional methods often suffer from insufficient prior knowledge and weak constraints, resulting in inaccurate 3D keypoint estimation. In this paper, we propose a method for whole-body 3D pose estimation based on a Transformer architecture, integrating body mass distribution and center of gravity constraints. The method maps the pose to the center of gravity position using the anatomical mass ratios of the human body and computes the segment-level centers of gravity using the moment synthesis method. A combined loss function is designed to enforce consistency between the predicted keypoints and the center of gravity position, as well as the invariance of limb length. Extensive experiments on the Human3.6M WholeBody dataset demonstrate that the proposed method achieves state-of-the-art performance, with a whole-body mean per-joint position error (MPJPE) of 44.49 mm, which is 60.4% lower than that of the previous Large Simple Baseline method. Notably, it reduces the body-part keypoints' MPJPE from 112.6 mm to 40.41 mm, showcasing enhanced robustness and effectiveness in occluded scenes. This study highlights the effectiveness of integrating physical constraints into deep learning frameworks for accurate 3D pose estimation.
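The "moment synthesis" idea in this abstract (the whole-body center of gravity as the mass-weighted mean of per-segment centers) can be sketched in a few lines. The segment mass fractions below are illustrative placeholders, not the paper's anatomical table, and the coordinates are arbitrary.

```python
import numpy as np

# Hypothetical segment mass fractions (illustrative, not the paper's values).
MASS_FRACTION = {"head": 0.08, "trunk": 0.50, "arms": 0.10, "legs": 0.32}

def whole_body_cog(segment_cog):
    """Whole-body center of gravity: sum of m_i * r_i over sum of m_i,
    where m_i is a segment's mass fraction and r_i its segment-level CoG."""
    total = sum(MASS_FRACTION[k] for k in segment_cog)
    r = sum(MASS_FRACTION[k] * np.asarray(p, float) for k, p in segment_cog.items())
    return r / total

cog = whole_body_cog({
    "head":  (0.0, 0.0, 1.7),
    "trunk": (0.0, 0.0, 1.2),
    "arms":  (0.0, 0.1, 1.2),
    "legs":  (0.0, 0.0, 0.5),
})
```

In the paper this quantity is predicted from keypoints and compared against the mass-distribution-derived value inside the loss; the sketch shows only the forward computation.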

12 pages, 304 KiB  
Article
The Well-Posedness and Ergodicity of a CIR Equation Driven by Pure Jump Noise
by Xu Liu, Xingfu Hong, Fujing Tian, Chufan Xiao and Hao Wen
Mathematics 2025, 13(12), 1938; https://doi.org/10.3390/math13121938 - 11 Jun 2025
Viewed by 274
Abstract
This paper is devoted to the dynamical properties of the stochastic Cox–Ingersoll–Ross (CIR) model with pure jump noise, which is an extension of the CIR model. Firstly, we characterize the existence and the second moment of the CIR process driven by a pure jump process. We then provide sufficient conditions on the compensated Poisson random measure under which the CIR process with a pure jump process is ergodic. Moreover, the stationary solution can be constructed from the invariant measure. Some numerical simulations are provided to visualize the theoretical results. 
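For orientation, the classical diffusion-driven CIR dynamics dX = a(b − X)dt + σ√X dW can be simulated with a full-truncation Euler scheme; the paper's jump-driven version replaces the Brownian term with compensated Poisson noise, which this sketch does not attempt. All parameter values are illustrative.

```python
import numpy as np

def cir_paths(a, b, sigma, x0, T, n_steps, n_paths, seed=0):
    """Full-truncation Euler scheme for dX = a*(b - X)dt + sigma*sqrt(X) dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        xp = np.maximum(x, 0.0)  # truncate so the square root stays real
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + a * (b - xp) * dt + sigma * np.sqrt(xp) * dw
    return x

# Started at the long-run mean b, so the sample mean of X_T should stay near b.
xs = cir_paths(a=2.0, b=1.0, sigma=0.2, x0=1.0, T=5.0, n_steps=500, n_paths=4000)
```

The mean-reverting drift toward b is exactly the structure whose ergodicity the paper establishes in the jump-noise setting.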
(This article belongs to the Section C1: Difference and Differential Equations)

15 pages, 362 KiB  
Article
Revision of the Screening Robust Estimation Method for the Retrospective Analysis of Normal Processes
by Víctor H. Morales, Carlos A. Panza and Roberto J. Herrera
Processes 2025, 13(5), 1381; https://doi.org/10.3390/pr13051381 - 30 Apr 2025
Viewed by 280
Abstract
This paper provides a comprehensive review of the Screening Robust Estimation Method (SREM) for normal processes in Phase I. Particular emphasis is placed on the exponentially weighted moving average (EWMA) control chart, which is employed in the retrospective monitoring stage. A central concern addressed in this discussion is that the EWMA chart is said to be based on the false alarm rate (FAR) design criterion, but it is not accurately implemented. It is well established that FAR-based methods for Phase I monitoring assume that the distribution of the monitoring statistic remains invariant at each sampling instance. However, the EWMA statistic inherently introduces a sequential dependence among all sampling moments, which contradicts this assumption. This paper shows that a modified EWMA chart, incorporating probability-based control limits rather than the conventional formulation utilized in the SREM, enhances the monitoring of normal processes in Phase I. Through simulations, it is established that the new design proposal of an EWMA chart for Phase I monitoring has a slightly narrower decision threshold than the other control methodologies included in the study. The new chart is as effective as the traditional X̄ charts in detecting localized increases in the target value of the mean of a normal process, and outperforms them when other types of anomalies are present in the available preliminary samples.
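The sequential dependence the abstract highlights is visible directly in the EWMA recursion Z_i = λX_i + (1 − λ)Z_{i−1}: each statistic carries all earlier observations. A minimal sketch (illustrative, not the SREM implementation) of the recursion and of the standard time-varying variance that probability-based limits are built on:

```python
def ewma(xs, lam=0.2, z0=0.0):
    """EWMA recursion Z_i = lam*X_i + (1 - lam)*Z_{i-1}; every Z_i
    depends on all previous observations, hence the sequential dependence."""
    zs, z = [], z0
    for x in xs:
        z = lam * x + (1.0 - lam) * z
        zs.append(z)
    return zs

def ewma_var(i, lam=0.2, sigma2=1.0):
    """Var(Z_i) for i.i.d. X_i with variance sigma2:
    sigma2 * lam/(2 - lam) * (1 - (1 - lam)^(2i)); time-varying
    control limits are placed at a multiple of its square root."""
    return sigma2 * lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * i))
```

Note that Var(Z_i) grows with i toward its asymptote, which is why fixed-width limits and FAR-based reasoning interact awkwardly in Phase I.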

54 pages, 1932 KiB  
Article
Fokker–Planck Model-Based Central Moment Lattice Boltzmann Method for Effective Simulations of Thermal Convective Flows
by William Schupbach and Kannan Premnath
Energies 2025, 18(8), 1890; https://doi.org/10.3390/en18081890 - 8 Apr 2025
Viewed by 418
Abstract
The Fokker–Planck (FP) equation represents the drift and diffusive processes in kinetic models. It can also be regarded as a model for the collision integral of the Boltzmann-type equation to represent thermo-hydrodynamic processes in fluids. The lattice Boltzmann method (LBM) is a drastically simplified discretization of the Boltzmann equation for simulating complex fluid motions and beyond. We construct two new FP-based LBMs, one for recovering the Navier–Stokes equations for fluid dynamics and the other for simulating the energy equation, where, in each case, the effect of collisions is represented as relaxations of different central moments to their respective attractors. Such attractors are obtained by matching the changes in various discrete central moments due to collision with the continuous central moments prescribed by the FP model. As such, the resulting central moment attractors depend on the lower-order moments and the diffusion tensor parameters, and differ significantly from those based on the Maxwell distribution. The diffusion tensor parameters for evolving higher moments in simulating fluid motions at relatively low viscosities are chosen based on a renormalization principle. Moreover, since the numbers of collision invariants of the FP-based LBMs for fluid motions and energy transport are different, the forms of the respective attractors are quite distinct. The use of such central moment formulations in modeling the collision step offers significant improvements in numerical stability, especially for simulations of thermal convective flows under a wide range of variations in the transport coefficients of the fluid. We develop new FP central moment LBMs for thermo-hydrodynamics in both two and three dimensions, and demonstrate the ability of our approach to simulate various cases involving thermal convective buoyancy-driven flows, especially at high Rayleigh numbers, with good quantitative accuracy. Moreover, we show significant improvements in the numerical stability of our FP central moment LBMs when compared to other existing central moment LBMs using the Maxwell distribution in achieving high Peclet numbers for mixed convection flows involving shear effects.
(This article belongs to the Special Issue Numerical Heat Transfer and Fluid Flow 2024)

33 pages, 6850 KiB  
Article
Microsurface Defect Recognition via Microlaser Line Projection and Affine Moment Invariants
by J. Apolinar Muñoz Rodríguez
Coatings 2025, 15(4), 385; https://doi.org/10.3390/coatings15040385 - 25 Mar 2025
Viewed by 260
Abstract
Advanced non-destructive techniques play an important role in detecting surface defects in the context of additive manufacturing, with non-destructive technologies providing surface data for the recognition of surface defects. To this end, it is necessary to implement microscope vision technology for the inspection of surface defects. This study proposes an approach for microsurface defect recognition using affine moment invariants based on microlaser line contouring, allowing for the detection of microscopic holes and scratches. For this purpose, the surface is represented by a Bezier surface to characterize microsurface defects through patterns of affine moment invariants after the surface is contoured via microlaser line projection. In this way, microholes and scratches can be recognized by computing a pattern of affine moment invariants for each region of the target surface. This technique is performed using a microscope vision system, which retrieves the surface topography via microlaser line scanning. The proposed technique allows for the recognition of holes and scratches with a surface depth greater than 20 microns, with a relative error of less than 2%. The proposed surface defect recognition approach enhances the literature on recognition techniques performed using visual technologies based on optical microscope systems. This contribution is corroborated through a discussion focused on the recognition of holes and scratches by means of various optical-microscope-based systems.
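The first affine moment invariant, I1 = (μ20·μ02 − μ11²)/μ00⁴, built from second-order central moments, is the simplest member of the family of invariants this abstract relies on. A generic sketch (not the paper's pattern computation; the test patch is hypothetical), checked against axis transposition and row flipping, which are exact special cases of affine maps on a discrete grid:

```python
import numpy as np

def first_ami(img):
    """First affine moment invariant I1 = (mu20*mu02 - mu11^2) / mu00^4,
    from the central moments of a 2-D intensity array."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu = lambda p, q: ((x - xc) ** p * (y - yc) ** q * img).sum()
    return (mu(2, 0) * mu(0, 2) - mu(1, 1) ** 2) / m00 ** 4

# A synthetic defect-like patch with a hole in it (illustrative only).
patch = np.zeros((20, 20)); patch[4:9, 3:15] = 1.0; patch[6, 5] = 0.0
```

Transposing swaps μ20 and μ02 and leaves μ11 unchanged; flipping negates μ11, which is squared. Either way I1 is preserved exactly, which is the property that makes per-region AMI patterns comparable across viewpoints.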
(This article belongs to the Special Issue Laser-Assisted Coating Techniques and Surface Modifications)

21 pages, 14388 KiB  
Article
Adaptive Matching of High-Frequency Infrared Sea Surface Images Using a Phase-Consistency Model
by Xiangyu Li, Jie Chen, Jianwei Li, Zhentao Yu and Yaxun Zhang
Sensors 2025, 25(5), 1607; https://doi.org/10.3390/s25051607 - 6 Mar 2025
Viewed by 657
Abstract
The sea surface displays dynamic characteristics, such as waves and various formations. As a result, images of the sea surface usually have few stable feature points, with a background that is often complex and variable. Moreover, the sea surface undergoes significant changes due to variations in wind speed, lighting conditions, weather, and other environmental factors, resulting in considerable discrepancies between images. These variations present challenges for identification using traditional methods. This paper introduces an algorithm based on the phase-consistency model. We utilize image data collected from a specific maritime area with a high-frame-rate surface array infrared camera. By accurately detecting corresponding points between images, we focus on the subtle texture information of the sea surface and its rotational invariance, enhancing the accuracy and robustness of the matching algorithm. We begin by constructing a nonlinear scale space using a nonlinear diffusion method. Maximum and minimum moments are generated using an odd-symmetric Log–Gabor filter within the two-dimensional phase-consistency model. Next, we identify extremum points in the anisotropic weighted moment space. We use the phase-consistency feature values as image gradient features and develop feature descriptors based on the Log–Gabor filter that are insensitive to scale and rotation. Finally, we employ Euclidean distance as the similarity measure for initial matching, align the feature descriptors, and remove false matches using the fast sample consensus (FSC) algorithm. Our findings indicate that the proposed algorithm significantly improves upon traditional feature-matching methods in overall efficacy. Specifically, the average number of matching points for long-wave infrared images is 1147, while for mid-wave infrared images it increases to 8241. Additionally, the root mean square error (RMSE) fluctuations for both image types remain stable, averaging 1.5. The proposed algorithm also enhances the rotation invariance of image matching, achieving satisfactory results even at significant rotation angles.
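The initial matching stage described here (Euclidean distance between descriptors, before FSC outlier removal) can be sketched generically. The descriptors below are tiny made-up vectors, not Log–Gabor phase-consistency features; a mutual-nearest-neighbour check stands in for the paper's alignment step.

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Initial matching: Euclidean nearest neighbours, kept only when mutual.
    desc_a, desc_b are (n, d) arrays of feature descriptors."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    ab = d.argmin(axis=1)  # best match in b for each descriptor in a
    ba = d.argmin(axis=0)  # best match in a for each descriptor in b
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]

# Illustrative descriptors: rows are feature vectors.
da = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
db = np.array([[1.1, 0.0], [0.1, 0.0], [5.0, 5.2]])
matches = mutual_nn_matches(da, db)
```

A robust estimator such as FSC (or RANSAC) would then prune the surviving pairs against a geometric model.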
(This article belongs to the Section Remote Sensors)

16 pages, 5306 KiB  
Article
On the Identification of Mobile and Stationary Zone Mass Transfer Resistances in Chromatography
by Alessandra Adrover and Gert Desmet
Separations 2025, 12(3), 59; https://doi.org/10.3390/separations12030059 - 28 Feb 2025
Cited by 1 | Viewed by 565
Abstract
A robust and elegant approach, based on the Two-Zone Moment Analysis (TZMA) method, is proposed to assess the contributions of the mobile and stationary zones, H_C,m and H_C,s, to the C-term H_C in the van Deemter equation for plate height. The TZMA method yields two formulations for H_C,m and H_C,s, both fully equivalent in terms of H_C, yet offering different decompositions of the contributions from the mobile and stationary zones. The first formulation proposes an expression for the term H_C,s that has strong similarities to, but also significant differences from, the well-known and widely used one proposed by Giddings. While it addresses the inherent limitation of Giddings' approach, namely the complete decoupling of transport phenomena in the moving and stationary zones, it introduces the drawback of a non-unique decomposition of H_C. Despite this, it proves highly valuable in highlighting the limitations and flaws of Giddings' method. In contrast, the second formulation not only properly accounts for the interaction between the moving and stationary zones, but also provides a unique and consistent decomposition of H_C into its components. Three different geometries are investigated in detail: the 2D triangular array of cylinders (pillar array columns), the 2D array of rectangular pillars (radially elongated pillar array columns) and the 3D face-centered cubic array of spheres. It is shown that Giddings' approach significantly underestimates the H_C,s term, especially for porous-shell particles. Its accuracy is limited, being reliable only when the intra-particle diffusivity (D_s) and the zone retention factor (k) are very low, or when axially invariant systems are considered.
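For context, the van Deemter equation is H(u) = A + B/u + C·u, where the C-term is the mass-transfer contribution the abstract decomposes into mobile- and stationary-zone parts. A short sketch with illustrative coefficients (not values from the paper), using the textbook optimum u* = √(B/C) and H_min = A + 2√(BC):

```python
import math

def plate_height(u, A, B, C):
    """van Deemter plate height H(u) = A + B/u + C*u; the C-term lumps
    the mobile- and stationary-zone mass-transfer resistances."""
    return A + B / u + C * u

def optimum(A, B, C):
    """Analytical minimum of H(u): u* = sqrt(B/C), H_min = A + 2*sqrt(B*C)."""
    return math.sqrt(B / C), A + 2.0 * math.sqrt(B * C)

u_opt, h_min = optimum(A=1.0, B=4.0, C=0.25)
```

The decomposition debated in the paper concerns how C splits into its two zone contributions, not the shape of H(u) itself.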
(This article belongs to the Section Chromatographic Separations)

15 pages, 607 KiB  
Article
Quadratic Forms in Random Matrices with Applications in Spectrum Sensing
by Daniel Gaetano Riviello, Giusi Alfano and Roberto Garello
Entropy 2025, 27(1), 63; https://doi.org/10.3390/e27010063 - 12 Jan 2025
Cited by 1 | Viewed by 930
Abstract
Quadratic forms with random kernel matrices are ubiquitous in applications of multivariate statistics, ranging from signal processing to time series analysis, biomedical systems design, wireless communications performance analysis, and other fields. Their statistical characterization is crucial to both design guideline formulation and efficient computation of performance indices. To this end, random matrix theory can be successfully exploited. In particular, recent advancements in the spectral characterization of finite-dimensional random matrices from the so-called polynomial ensembles allow for the analysis of several scenarios of interest in wireless communications and signal processing. In this work, we focus on the characterization of quadratic forms in unit-norm vectors, with unitarily invariant random kernel matrices, and we also provide some approximate but numerically accurate results concerning a non-unitarily invariant kernel matrix. Simulations are run with reference to a particular application scenario, so-called spectrum sensing for wireless communications. Closed-form expressions for the moment generating function of the quadratic forms of interest are provided; these will pave the way to an analytical performance analysis of some spectrum sensing schemes, and will potentially assist in the rate analysis of some multi-antenna systems.
(This article belongs to the Special Issue Random Matrix Theory and Its Innovative Applications)

19 pages, 16987 KiB  
Article
Trajectory Planning Method in Time-Variant Wind Considering Heterogeneity of Segment Flight Time Distribution
by Man Xu, Jian Wang and Qiuqi Wu
Systems 2024, 12(12), 523; https://doi.org/10.3390/systems12120523 - 25 Nov 2024
Viewed by 732
Abstract
The application of Trajectory-Based Operation (TBO) and Free-Route Airspace (FRA) can relieve air traffic congestion and reduce flight delays. However, this new operational framework has higher requirements for the reliability and efficiency of the trajectory, which will be significantly compromised if the analysis of wind uncertainty during trajectory planning is insufficient. In the literature, trajectory planning models considering wind uncertainty are developed based on the time-invariant condition (i.e., three-dimensional), which may potentially lead to a significant discrepancy between the predicted flight time and the real flight time. To address this problem, this study proposes a trajectory planning model considering time-variant wind uncertainty (i.e., four-dimensional). This study aims to optimize a reliable and efficient trajectory by minimizing the Mean-Excess Flight Time (MEFT). The model formulates wind as a discrete variable, forming the foundation of the proposed time-variant prediction method that can calculate the segment flight time accurately. To avoid the assumption of homogeneous distributions, we apply the first four moments (i.e., expectation, variance, skewness, and kurtosis) to describe the stochasticity of the distributions, rather than using the probability distribution function. We apply a two-stage algorithm to solve this problem and demonstrate its convergence in the time-variant network. The simulation results show that the optimal trajectory has 99.2% reliability and reduces flight time by approximately 9.2% compared to the current structured airspace trajectory. In addition, the solution time is only 2.3 min, which satisfies the requirements of trajectory planning.
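The four moments used to summarize segment flight-time distributions (expectation, variance, skewness, kurtosis) are standard sample statistics. A minimal sketch using population-form estimators (illustrative; the paper's estimators and data are its own):

```python
import numpy as np

def first_four_moments(x):
    """Expectation, variance, skewness, and kurtosis of a sample
    (population form: moments about the mean, normalized by sigma)."""
    x = np.asarray(x, float)
    m = x.mean()
    c = x - m                      # centered values
    var = (c ** 2).mean()          # second central moment
    skew = (c ** 3).mean() / var ** 1.5
    kurt = (c ** 4).mean() / var ** 2
    return m, var, skew, kurt
```

Describing a distribution by these four numbers rather than a fitted probability density is what lets the model stay agnostic about the distributional family of each segment.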

13 pages, 3614 KiB  
Article
Automatic Defects Recognition of Lap Joint of Unequal Thickness Based on X-Ray Image Processing
by Dazhao Chi, Ziming Wang and Haichun Liu
Materials 2024, 17(22), 5463; https://doi.org/10.3390/ma17225463 - 8 Nov 2024
Cited by 1 | Viewed by 934
Abstract
It is difficult to automatically recognize defects using digital image processing methods in X-ray radiographs of lap joints made from plates of unequal thickness. The continuous change in the wall thickness of the lap joint workpiece causes very different gray levels in an X-ray background image. Furthermore, due to the shape and fixturing of the workpiece, the distribution of the weld seam in the radiograph is not vertical, which results in an angle between the weld seam and the vertical direction. This makes automatic defect detection and localization difficult. In this paper, a method of X-ray image correction based on invariant moments is presented to solve the problem. In addition, a novel background removal method based on image processing is introduced to reduce the difficulty of defect recognition caused by variations in grayscale. At the same time, an automatic defect detection method combining image noise suppression, image segmentation, and mathematical morphology is adopted. The results show that the proposed method can effectively recognize the gas pores in an automatically welded lap joint of unequal thickness, making it suitable for automatic detection.
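The seam angle that motivates the image correction can be estimated from second-order central moments via the standard orientation formula θ = ½·atan2(2μ11, μ20 − μ02). This is a generic moment-based sketch, not the paper's exact correction method, and the synthetic "seam" image is hypothetical.

```python
import numpy as np

def moment_orientation(img):
    """Principal-axis orientation of an image region from second-order
    central moments: theta = 0.5 * atan2(2*mu11, mu20 - mu02),
    measured in array axes (x = column index, y = row index)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    mu11 = ((x - xc) * (y - yc) * img).sum()
    mu20 = ((x - xc) ** 2 * img).sum()
    mu02 = ((y - yc) ** 2 * img).sum()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# A synthetic seam along the main diagonal lies at 45 degrees to the axes.
seam = np.eye(20)
theta = moment_orientation(seam)
```

Rotating the radiograph by −θ would bring such a seam to the vertical, after which column-wise background estimation becomes straightforward.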

17 pages, 3129 KiB  
Article
A Remote Two-Point Magnetic Localization Method Based on SQUID Magnetometers and Magnetic Gradient Tensor Invariants
by Yingzi Zhang, Gaigai Liu, Chen Wang, Longqing Qiu, Hongliang Wang and Wenyi Liu
Sensors 2024, 24(18), 5917; https://doi.org/10.3390/s24185917 - 12 Sep 2024
Viewed by 3964
Abstract
In practical applications, existing two-point magnetic gradient tensor (MGT) localization methods have a maximum detection distance of only 2.5 m, and the magnetic moment vectors of the measured targets are all unknown. In order to realize remote, real-time localization, a new two-point magnetic localization method based on self-developed, ultra-sensitive superconducting quantum interference device (SQUID) magnetometers and MGT invariants is proposed. Both the magnetic moment vector and the relative position vector can be directly calculated from the linear positioning model, and a quasi-Newton optimization algorithm is adopted to further improve the interference suppression capability. The simulation results show that the detection distance of the proposed method can reach 500 m when the superconducting MGT measurement system is used. Compared with Nara's single-point tensor (NSPT) method and Xu's two-point tensor (XTPT) method, the proposed method produces the smallest relative localization error (i.e., significantly less than 1% outside the positioning blind area) without sacrificing real-time performance. The causes of and solutions to the positioning blind area are also analyzed. Equivalent experiments, conducted at a detection distance of 10 m, validate the effectiveness of the localization method, yielding a minimum relative localization error of 4.5229%.
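The magnetic gradient tensor G_ij = ∂B_i/∂x_j of a dipole source, the raw input to such invariant-based methods, is symmetric and traceless in a source-free region (curl B = 0 and div B = 0). A sketch computing it by central finite differences around a point dipole; the dipole moment and measurement point are illustrative, not values from the paper.

```python
import numpy as np

MU0_4PI = 1e-7  # mu0 / (4*pi) in SI units

def dipole_field(m, r):
    """Flux density of a point magnetic dipole m at displacement r (SI)."""
    m = np.asarray(m, float); r = np.asarray(r, float)
    rn = np.linalg.norm(r); rh = r / rn
    return MU0_4PI * (3.0 * np.dot(m, rh) * rh - m) / rn ** 3

def gradient_tensor(m, r, h=1e-3):
    """MGT G[i, j] = dB_i/dx_j via central differences; symmetric and
    traceless away from sources, which tensor-invariant methods exploit."""
    G = np.empty((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (dipole_field(m, r + e) - dipole_field(m, r - e)) / (2.0 * h)
    return G

G = gradient_tensor(m=[0.0, 0.0, 100.0], r=[30.0, 40.0, 50.0])
```

Rotation-invariant combinations of G's eigenvalues are what make the two-point positioning model linear in the unknown position.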
(This article belongs to the Special Issue Challenges and Future Trends of Magnetic Sensors)

31 pages, 2408 KiB  
Article
A Dyson Brownian Motion Model for Weak Measurements in Chaotic Quantum Systems
by Federico Gerbino, Pierre Le Doussal, Guido Giachetti and Andrea De Luca
Quantum Rep. 2024, 6(2), 200-230; https://doi.org/10.3390/quantum6020016 - 16 May 2024
Cited by 7 | Viewed by 2477
Abstract
We consider a toy model for the study of monitored dynamics in many-body quantum systems. We study the stochastic Schrödinger equation resulting from continuous monitoring, with a rate Γ, of a random Hermitian operator drawn from the Gaussian unitary ensemble (GUE) at every time t. Due to invariance under unitary transformations, the dynamics of the eigenvalues {λ_α}_{α=1}^n of the density matrix decouples from that of the eigenvectors, and is exactly described by stochastic equations that we derive. We consider two regimes: in the presence of an extra dephasing term, which can be generated by imperfect quantum measurements, the density matrix has a stationary distribution, and we show that in the limit of large size n it matches the inverse-Marchenko–Pastur distribution. In the case of perfect measurements, instead, purification eventually occurs, and we focus on finite-time dynamics. In this case, remarkably, we find an exact solution for the joint probability distribution of the λ's at each time t and for each size n. Two relevant regimes emerge: at short times tΓ = O(1), the spectrum is in a Coulomb gas regime, with a well-defined continuous spectral distribution in the n → ∞ limit. In that case, all moments of the density matrix become self-averaging and it is possible to exactly characterize the entanglement spectrum. In the limit of large times tΓ = O(n), one enters instead a regime in which the eigenvalues are exponentially separated, log(λ_α/λ_β) = O(Γt/n), but fluctuations of order O(√(Γt/n)) play an essential role. We are still able to characterize the asymptotic behaviors of the entanglement entropy in this regime.
(This article belongs to the Special Issue Exclusive Feature Papers of Quantum Reports in 2024–2025)

34 pages, 19793 KiB  
Article
Ten Traps for Non-Representational Theory in Human Geography
by Paul M. Torrens
Geographies 2024, 4(2), 253-286; https://doi.org/10.3390/geographies4020016 - 18 Apr 2024
Cited by 2 | Viewed by 2987
Abstract
Non-Representational Theory (NRT) emphasizes the significance of routine experience in shaping human geography. In doing so, the theory largely eschews traditional approaches that have offered area-based, longitudinal, and synoptic formalisms for geographic inquiry. Instead, NRT prioritizes the roles of individualized and often dynamic lived geographies as they unfold in the moment. To date, NRT has drawn significant inspiration from the synergies that it shares with philosophy, critical geography, and self-referential ethnography. These activities have been tremendous in advancing NRT as a concept, but the theory's strong ties to encounter and experience invariably call for practical exposition. Alas, applications of NRT to concrete examples at scales beyond small case studies often prove challenging, which we argue artificially constrains further development of the theory. In this paper, we examine some of the thorny problems that present in applying NRT in practical terms. Specifically, we identify ten traps that NRT can fall into when moving from theory to actuality. These traps include conundrums of small geographies, circularity in representation, cognitive traps, issues of mustering and grappling with detail, access issues, limitations with empiricism, problems of subjectivity, methodological challenges, thorny issues of translation, and the unwieldy nature of process dynamics. We briefly demonstrate a novel observational instrument that can sidestep some, but not all, of these traps.
