
Search Results (232)

Search Parameters:
Keywords = monotone mapping

26 pages, 5847 KB  
Article
Spatiotemporal Dynamics of the Alpine Treeline Ecotone in Response to Climate Warming Across the Eastern Slopes of the Canadian Rocky Mountains
by Behnia Hooshyarkhah, Dan L. Johnson, Locke Spencer, Hardeep S. Ryait and Amir Chegoonian
Climate 2026, 14(3), 69; https://doi.org/10.3390/cli14030069 (registering DOI) - 13 Mar 2026
Abstract
Mountain ecosystems are susceptible to climate change, and alpine treeline ecotones (ATEs) represent one of the significant responsive indicators of climate-driven environmental change. This study examines long-term spatiotemporal dynamics of the ATE across the Eastern Slopes of the Canadian Rocky Mountains (ESCR) from 1984 to 2023, with the objective of assessing whether regional climate warming has influenced ATE extent and elevation across different aspects and watersheds. Multi-decadal Landsat imagery, ERA5-Land temperature data, and topographic variables were integrated within a Google Earth Engine (GEE) framework to map ATEs using the Alpine Treeline Ecotone Index (ATEI), a probabilistic approach designed to capture transitional vegetation zones. Temporal trends were evaluated using non-parametric statistics, correlation analyses, and watershed- and aspect-based comparisons. Results indicate that the total alpine treeline ecotone (ATE) area in the ESCR was approximately 13.3% larger in 2023 than in 1984. However, the temporal evolution of ATE extent and elevation was non-monotonic, and linear trend analyses did not detect statistically significant increasing or decreasing trends over the full study period. ATE elevation and expansion exhibited pronounced spatial heterogeneity, with greater changes occurring on north- and northwest-facing slopes and within selected watersheds. In contrast, summer (July–September) temperatures increased significantly (+2.84 °C), exceeding global land-only warming rates, and vegetation greenness (NDVI) showed a strong, statistically significant positive relationship with temperature. These findings show that while climate warming has clearly increased vegetation productivity, elevational ATE dynamics remain spatially heterogeneous and temporally non-synchronous with summer temperature trends. Full article
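The "non-parametric statistics" used for the trend evaluation above are typically a Mann–Kendall test. A minimal sketch (not the authors' code; the extent series is invented for illustration, and the normal approximation below omits tie corrections):

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: returns (S, z).

    S > 0 suggests an increasing trend; |z| > 1.96 is significant
    at the 5% level under the normal approximation (no tie
    correction -- a simplification).
    """
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Hypothetical ATE-extent series (km^2): noisy and non-monotonic
extent = [100, 103, 101, 106, 102, 108, 104, 110, 105, 113]
s, z = mann_kendall(extent)
```

For this invented series, S = 29 and z ≈ 2.5, i.e., a significant increasing trend at the 5% level despite the visible non-monotonic wiggles — exactly the kind of distinction the abstract draws between overall change and monotonic trend.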

25 pages, 2978 KB  
Article
Process Modeling of 3D Electrodeposition Printing of Metallic Materials
by Satyaki Sinha, Saumitra Bhate and Tuhin Mukherjee
Modelling 2026, 7(2), 53; https://doi.org/10.3390/modelling7020053 - 11 Mar 2026
Abstract
3D electrodeposition printing is an emerging process for fabricating metallic parts with controllable geometry, yet the coupled influences of electrochemical kinetics, ion transport, and tool motion on layer height remain difficult to interpret. This work presents a physics-based process model that links the key process inputs (current density, electrolyte concentration, inter-electrode gap, and tool scanning speed) to the resulting layer height in 3D electrodeposition printing of nickel-based structures. The model combines species transport in the inter-electrode gap with Butler–Volmer kinetics, under carefully stated assumptions regarding current efficiency, overpotential, and lateral spreading. Model predictions are validated against experimentally reported layer heights over a range of process conditions, yielding average errors (9–15%) and root-mean-square errors (0.13–0.28 µm) that demonstrate good agreement and highlight the impact of simplifying assumptions. Systematic parametric studies reveal how each process input monotonically influences layer height in ways consistent with Faraday’s law and diffusion-controlled growth, while also quantifying the relative sensitivity to different parameters. Building on these results, we introduce a dimensionless 3D Electrodeposition Printing Index that consolidates the key process and material parameters into a single scalar describing the geometric growth regime. The index enables construction of process maps that capture how combinations of current density, scan speed, concentration, and gap affect achievable layer height within the validated operating window. The scope and limitations of the proposed modeling framework and the index, particularly regarding other materials, more complex geometries, and pulsed or strongly convective regimes, are explicitly discussed, providing a basis for future model extensions and experimental validation. Full article
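The Faraday's-law reasoning invoked in the abstract can be made concrete with a back-of-envelope layer-height estimate. All numbers below are illustrative assumptions (standard nickel constants plus a guessed current density, efficiency, and dwell time), not values from the paper:

```python
# Back-of-envelope layer-height estimate from Faraday's law for
# nickel electrodeposition. Illustrative assumptions, not the
# paper's values.
F   = 96485.0      # Faraday constant, C/mol
M   = 58.69e-3     # molar mass of Ni, kg/mol
n   = 2            # electrons transferred per Ni^2+ ion
rho = 8908.0       # density of Ni, kg/m^3

j   = 100.0        # current density, A/m^2 (= 10 mA/cm^2), assumed
eta = 0.95         # current efficiency, assumed
t   = 60.0         # local deposition (dwell) time in seconds, assumed

# Growth rate normal to the cathode, m/s
rate = j * M * eta / (n * F * rho)
layer_height = rate * t          # ~0.19 um for these numbers
```

With these numbers the growth rate is about 3.2 nm/s, i.e., roughly 0.19 µm per minute of local deposition — the same sub-micron order as the reported RMSE values, which is why layer height is so sensitive to current density and dwell (scan-speed) choices.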

21 pages, 3133 KB  
Article
Lyapunov-Based Synthesis of Self-Organizing Nonlinear Integrators for Stage Motion Control Under Parametric Uncertainty
by Raigul Tuleuova, Nurgul Shazhdekeyeva, Sharbat Nurzhanova, Aigul Myrzasheva, Saltanat Sharmukhanbet, Maxot Rakhmetov, Makhatova Valentina and Lyailya Kurmangaziyeva
Computation 2026, 14(3), 64; https://doi.org/10.3390/computation14030064 - 3 Mar 2026
Abstract
Linear integrators are traditionally used in motion control systems to compensate for static effects and suppress low-frequency disturbances. However, their use is inevitably accompanied by phase delays that limit the performance and robustness of control systems, especially in conditions of parametric uncertainty. In this regard, nonlinear integrators have been considered for several decades as a promising alternative that can weaken phase constraints and improve the quality of transients. In this paper, the concept of nonlinear integrators is reinterpreted in the context of self-organizing motion control of precision stages. In contrast to traditional approaches focused primarily on frequency analysis and the describing function method, a method is proposed for the synthesis of a self-organizing control system for nonlinear SISO objects based on catastrophe theory, namely in the class of elliptical dynamics with the property of structural stability. The control action is formed in such a way that transitions between stable modes occur due to bifurcation-conditioned self-organization, without using external switching logic. To ensure strict analytical guarantees of stability, the Lyapunov gradient-velocity vector function method is used, which guarantees aperiodic robust stability, suppression of oscillatory and chaotic modes, as well as monotonic convergence of trajectories under conditions of parameter uncertainty. The parameters of the nonlinear integrator are adapted using Self-Organizing Maps (SOM), while any parameter changes are allowed only within the regions that meet the conditions of Lyapunov stability. This approach ensures the alignment of analytical and data-oriented methods without violating the structural stability of the system.
The results of numerical experiments demonstrate the superiority of the proposed method in comparison with classical linear and adaptive regulators in problems of controlling the movement of stages, especially near bifurcation boundaries and with significant parametric uncertainty. The results obtained confirm that the integration of nonlinear integrators with catastrophe theory and self-organization mechanisms forms a promising basis for the creation of robust and high-precision motion control systems of a new generation. Full article
(This article belongs to the Section Computational Engineering)

18 pages, 1592 KB  
Article
Quantifying Degeneracy in Two-Point Statistics for Small Two-Phase Composite Structures
by Ethan R. Cluff, Ryan L. Weber, Christopher G. Nyborg, Blake A. Jensen, Sterling G. Baird and David T. Fullwood
J. Compos. Sci. 2026, 10(3), 119; https://doi.org/10.3390/jcs10030119 - 25 Feb 2026
Abstract
Volume fraction, or one-point statistics, is commonly used to homogenize composites. However, it contains no geometric information regarding the spatial distribution of the phases. The spatial distribution can be characterized using higher-order statistics. Two-point statistics (f2) quantify average relative phase positions, and the geometric features encoded in f2 influence material properties. However, just as a single volume fraction can describe multiple unique microstructures, some f2 map to multiple distinct microstructures. The existence of multiple microstructures possessing the same f2 is termed ‘degeneracy’ and is problematic for microstructure-sensitive design because unique microstructures may map to the same f2 yet exhibit different properties. This study quantifies how pervasive degeneracy is in f2 through exhaustive enumeration of all 2^36 ≈ 6.9×10^10 possible 6×6 binary microstructures, and tests other metrics as ways to uniquely characterize microstructures with degenerate f2. We determined that using nondirectional f2 (i.e., orientation-averaged f2) substantially increases degeneracy, nearly doubling the probability that a randomly selected microstructure will share the same f2 as some other symmetry-inequivalent microstructure. Notably, the fraction of nontrivially degenerate microstructures does not increase monotonically with system size—a counterintuitive finding that challenges prior theoretical predictions. Finally, for the small microstructures examined, we determined that three-point statistics will fully resolve the degeneracy at a computational cost that scales as n^4 (where n is side length), while two-point cluster functions resolve the majority of degeneracies with substantially lower computational overhead. Full article
(This article belongs to the Special Issue Feature Papers in Journal of Composites Science in 2025)
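The exhaustive-enumeration idea scales down to a small sketch: the snippet below (an illustration, not the authors' pipeline) enumerates all 2^9 = 512 periodic 3×3 binary microstructures instead of the paper's 6×6 case, computes each structure's periodic two-point autocorrelation by FFT, and counts how many f2 signatures are shared by more than one structure.

```python
import itertools

import numpy as np

N = 3  # grid side length; the paper enumerates 6x6 (2**36 structures)

def two_point(ms):
    """Periodic two-point statistics of phase 1 via FFT autocorrelation."""
    F = np.fft.fft2(ms)
    # correlation at every shift vector, normalized by cell count
    return np.real(np.fft.ifft2(F * np.conj(F))) / ms.size

signatures = {}
for bits in itertools.product((0, 1), repeat=N * N):
    ms = np.array(bits, dtype=float).reshape(N, N)
    # round so structures with mathematically equal f2 share a key
    key = tuple(np.round(two_point(ms).ravel(), 9))
    signatures.setdefault(key, []).append(bits)

n_structures = 2 ** (N * N)                     # 512 for N = 3
degenerate = [v for v in signatures.values() if len(v) > 1]
```

Because periodic f2 is shift-invariant, translated copies collapse to a single signature here, so this count mixes trivial and nontrivial degeneracy; the paper additionally factors out symmetry-equivalent structures to isolate the nontrivial cases.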

21 pages, 3678 KB  
Article
Dynamic Error Improved Model-Free Adaptive Control Method for Electro-Hydraulic Servo Actuators in Active Suspensions with Time Delay and Data Disturbances
by Hao Xiong, Dingxuan Zhao, Haiwu Zheng and Liqiang Zhao
Actuators 2026, 15(2), 130; https://doi.org/10.3390/act15020130 - 21 Feb 2026
Abstract
The Electro-Hydraulic Servo Actuator for Active Suspensions (ASEHSA) plays a decisive role in shaping the holistic performance of vehicle suspension systems through its dynamic response speed and control precision. However, achieving high-performance control of ASEHSA still faces challenges. On one hand, existing model-based control methods are highly sensitive to parameter uncertainties and unmodeled nonlinear hydraulic dynamics, which can easily lead to reduced robustness in practical applications. On the other hand, traditional model-free strategies have limited time-delay compensation capabilities and often struggle to balance overshoot and settling time under delayed and disturbed conditions. To resolve this challenge, this study proposes an improved model-free adaptive control method that incorporates the differentiation of the tracking error (DE-IMFAC). Within the framework of traditional model-free adaptive control (MFAC), this approach reconfigures the time-delay term from an explicit form in the control law to implicit management, substantially mitigating the influence of time delays on system control performance. At the same time, by refining the performance criterion function and integrating a tracking error differentiation term together with dynamic weighting factors, the dynamic performance and adjustment flexibility of the controller are significantly enhanced. Additionally, by leveraging the characteristic equation of discrete autonomous systems and compression mapping theory, the BIBO stability of the DE-IMFAC control system and the monotonic convergence of the tracking error are rigorously established through theoretical analysis. Simulation and experimental results demonstrate that, compared with PID and traditional MFAC methods, DE-IMFAC significantly reduces integral absolute error, overshoot, settling time, and maximum position tracking error, while improving disturbance rejection capability. 
This approach does not depend on an accurate mathematical model of the ASEHSA system and maintains robust dynamic performance under complex operating environments characterized by time delays and data disturbances, providing a practical solution for ASEHSA and related industrial control systems. Full article
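For context, the "traditional MFAC" baseline mentioned above is usually the compact-form dynamic-linearization scheme of Hou and Jin. A minimal sketch of that baseline (not the paper's DE-IMFAC; the toy plant, gains, and reset rule are invented for illustration):

```python
# Minimal compact-form MFAC loop (Hou & Jin style) tracking a
# constant setpoint on a toy stable SISO plant. NOT the paper's
# DE-IMFAC; plant, gains, and reset rule are invented.

def mfac_track(steps=300, y_ref=1.0,
               eta=0.5, mu=1.0, rho=0.5, lam=1.0):
    y_prev = y = 0.0          # plant outputs y(k-1), y(k)
    u_prev = u = 0.0          # control inputs
    phi = phi0 = 1.0          # pseudo-partial-derivative estimate
    for _ in range(steps):
        du, dy = u - u_prev, y - y_prev
        # projection-style update of the pseudo partial derivative
        phi += eta * du / (mu + du * du) * (dy - phi * du)
        if abs(phi) < 1e-5 or phi < 0:   # standard reset mechanism
            phi = phi0
        # control law: error-driven increment on u (no plant model)
        u_prev, u = u, u + rho * phi / (lam + phi * phi) * (y_ref - y)
        # toy plant: y(k+1) = 0.6*y(k) + u(k)
        y_prev, y = y, 0.6 * y + u
    return y

y_final = mfac_track()
```

The pseudo-partial-derivative estimate phi is identified online from input/output increments only, which is what makes the scheme model-free; the paper's contribution is to reshape the criterion function and time-delay handling around this kind of loop.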

36 pages, 776 KB  
Article
Carbon Risk Without a Stable Premium: Nonlinear and State-Dependent Evidence from European ESG Leaders
by Eleonora Salzmann
Risks 2026, 14(2), 41; https://doi.org/10.3390/risks14020041 - 20 Feb 2026
Abstract
Despite the economic relevance of climate-transition risk, firm-level carbon exposure often fails to appear as a robustly priced factor when ESG measures and sustainability shocks are conflated. This study examines whether carbon exposure is conditionally priced in European equity returns using a strongly balanced quarterly panel of 238 firms from the MSCI Europe ESG Leaders universe (2018–2024). Total greenhouse gas emissions act as a proxy for carbon exposure, mapped to within-year percentiles and standardized by sector-year. Regressions control for ESG scores and controversies and include firm and quarter fixed effects with firm-clustered, dependence-robust standard errors. The linear carbon coefficient is small and statistically indistinguishable from zero, indicating no stable return premium from within-firm changes in carbon exposure. Functional-form tests reject linearity: quadratic and quintile specifications reveal curvature and a non-monotonic pattern, with return differences concentrated in the middle of the carbon distribution. Conditioning on macro-financial stress, measured by the ECB Composite Indicator of Systemic Stress, yields limited evidence of a uniform carbon penalty. However, high-controversy states are associated with lower returns, while ESG scores show negative associations under dependence-robust inference. Overall, carbon-related pricing appears to be nonlinear and state-dependent, whereas controversy risk is the most robust sustainability predictor of returns. Full article
36 pages, 5121 KB  
Article
Peripheral Artery Disease (P.A.D.): Vascular Hemodynamic Simulation Using a Printed Circuit Board (PCB) Design
by Claudiu N. Lungu, Aurelia Romila, Aurel Nechita and Mihaela C. Mehedinti
Bioengineering 2026, 13(2), 241; https://doi.org/10.3390/bioengineering13020241 - 19 Feb 2026
Abstract
Background: Arterial stenosis produces nonlinear changes in vascular impedance that are challenging to investigate in real time using either benchtop flow phantoms or high-fidelity computational fluid dynamics (CFD) models. Objective: This study aimed to develop and evaluate a low-cost printed circuit board (PCB) analog capable of reproducing the hemodynamic effects of progressive arterial stenosis through an R–L–C mapping of vascular mechanics. Methods: A lumped-parameter (0D) electrical network was constructed in which voltage represented pressure, current represented flow, resistance modeled viscous losses, capacitance corresponded to vessel compliance, and inductance represented fluid inertance. A variable resistor simulated focal stenosis and was adjusted incrementally to represent progressive narrowing. Input (Uin), output (Uout), peak-to-peak (Vpp), and mean (Vavg) voltages were recorded at a driving frequency of 50 Hz. Physiological correspondence was established using the canonical relationships R = 8μl/(πr⁴), L = ρl/(πr²), and C = 3πr³/(2Eh), where μ is blood viscosity, ρ is density, l is segment length, r is lumen radius, E is Young’s modulus, and h is wall thickness. A calibration constant was applied to convert measured voltage differences into pressure differences. Results: As simulated stenosis increased, the circuit exhibited a monotonic rise in Uout and Vpp, with a distinct inflection beyond mid-range narrowing—consistent with the nonlinear growth in pressure loss predicted by fluid dynamic theory. Replicate measurements yielded stable, repeatable traces with no outliers under nominal test conditions. Qualitative trends matched those of surrogate 0D and CFD analyses, showing minimal changes for mild narrowing (≤25%) and a sharp increase in pressure loss for moderate to severe stenoses (≥50%).
The PCB analog uses a simplified, lumped-parameter representation driven by a fixed-frequency sinusoidal excitation and therefore does not reproduce fully characterized physiological systolic–diastolic waveforms or heart–arterial coupling. In addition, the present configuration is intended for relatively straight peripheral arterial segments and is not designed to capture the complex geometry and branching of specialized vascular beds (e.g., intracranial circulation) or strongly curved elastic vessels (e.g., the thoracic aorta). Conclusions: The PCB analog successfully reproduces the characteristic hemodynamic signatures of arterial stenosis in real time and at low cost. The model provides a valuable tool for educational and research applications, offering rapid and intuitive visualization of vascular behavior. Current accuracy reflects assumptions of Newtonian, laminar, and lumped flow; future work will refine calibration, quantify uncertainty, and benchmark results against physiological measurements and full CFD simulations. Full article
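The R–L–C correspondence stated in the abstract can be evaluated numerically. The parameter values below are generic textbook-scale numbers for a peripheral artery segment, chosen for illustration rather than taken from the paper's calibration:

```python
import math

# Illustrative (not the paper's) parameters for a ~10 cm
# peripheral artery segment.
mu  = 3.5e-3    # blood viscosity, Pa*s
rho = 1060.0    # blood density, kg/m^3
E   = 4.0e5     # wall Young's modulus, Pa
h   = 5.0e-4    # wall thickness, m
l   = 0.10      # segment length, m
r   = 3.0e-3    # lumen radius, m

def rlc(r):
    R = 8 * mu * l / (math.pi * r**4)      # viscous resistance
    L = rho * l / (math.pi * r**2)         # fluid inertance
    C = 3 * math.pi * r**3 / (2 * E * h)   # compliance (abstract's relation)
    return R, L, C

R0, L0, C0 = rlc(r)
# Roughly a 50% diameter stenosis: radius halved
R_sten, _, _ = rlc(r / 2)
ratio = R_sten / R0     # 1/r^4 scaling -> exactly 16
```

Because R scales as 1/r⁴, halving the radius multiplies the viscous resistance by 16 — precisely the nonlinear pressure-loss growth the circuit is built to reproduce with its variable stenosis resistor.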

11 pages, 3142 KB  
Article
Processing Maps and Nano-IR Diagnostics of Type I Modifications in Mid-IR Germanate-Based Optical Glass
by Paul Mathieu, Nadezhda Shchedrina, Florence De La Barrière, Guillaume Druart and Matthieu Lancry
Photonics 2026, 13(2), 197; https://doi.org/10.3390/photonics13020197 - 16 Feb 2026
Abstract
Mid-IR flat/integrated optics require low-loss, programmable phase control. We investigate femtosecond laser direct writing (FLDW) in aluminogermanate glass (Corning 9754), first mapping the processing landscape to delineate the no-modification, Type I index-increase, and spatial-broadening regimes. We then operate in a non-accumulating regime that provides a broad, stable writing window. Quantitative-phase microscopy yields Δφ and a monotonic Δn with optically limited cross-sections compatible with low loss. Transmission spectroscopy shows high transmittance (about 90% up to 4 µm) and no additional absorption bands across the near-IR and mid-IR range. FTIR reveals a redshift of the Ge–O–(Ge/Al) stretching envelope from ≈1 µJ, correlating with the onset of high Δn. s-SNOM at 925 cm⁻¹ resolves the written line as reduced near-field amplitude and decreased phase, confirming a local complex permittivity change consistent with densification-driven Type I tracks. Together, these results define practical conditions for on-demand mid-IR flat/GRIN/Fresnel optics by FLDW in this commercial mid-IR transparent glass. Full article
(This article belongs to the Special Issue Advances in Micro-Nano Optical Manufacturing)

32 pages, 18424 KB  
Article
Spatial Assessment of Urban Flood Resilience Using a GESIS-ML Framework: A Case Study of Chongqing, China
by Yunyan Li, Huanhuan Yuan, Jiaxing Dai, Binyan Wang, Xing Liu and Chenhao Fang
Sustainability 2026, 18(4), 1988; https://doi.org/10.3390/su18041988 - 14 Feb 2026
Abstract
Against the backdrop of climate change and rapid urbanization, assessing urban flood resilience requires spatially continuous and interpretable approaches capable of capturing nonlinear interactions between natural and human systems. This study proposes a high-resolution framework for mapping urban flood resilience in the built-up areas of Chongqing, China, grounded in the geography–ecology–society–infrastructure systems (GESIS) concept. A Flood Resilience Index is constructed at a 50 m grid resolution using ten core indicators and objective weighting based on combined entropy and coefficient-of-variation methods. Three machine learning models—multilayer perceptron (MLP), random forest, and XGBoost—are then trained to reproduce the resilience surface by integrating these indicators with additional historical flood-exposure variables, with SHAP used for model interpretation. The MLP model achieves the best performance (R² ≈ 0.78) and generates spatially coherent resilience patterns. Impervious surface fraction and building density exert dominant negative effects, whereas elevation and ecological connectivity contribute positively. The results reveal pronounced nonlinear thresholds in key drivers, indicating that flood resilience cannot be inferred from monotonic factor effects alone. By combining objective weighting, explainable machine learning, and historical exposure information, this framework supports both accurate prediction and policy-relevant interpretation of urban flood resilience for sustainable urban planning in mountainous megacities. Full article
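The entropy half of the combined objective weighting can be sketched as follows (toy indicator matrix, not the study's data; the coefficient-of-variation component and the prior min-max normalization step are omitted):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: rows = grid cells, columns = indicators.

    Indicators that vary more across cells carry more information
    (lower entropy) and therefore receive larger weights. Assumes
    positive, already-normalized benefit-type indicators.
    """
    n, _ = X.shape
    p = X / X.sum(axis=0, keepdims=True)
    # small epsilon guards against log(0)
    e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n)
    d = 1.0 - e                 # degree of diversification
    return d / d.sum()

# Toy matrix: indicator 0 nearly constant, indicator 1 varies widely
X = np.array([[0.50, 0.10],
              [0.51, 0.40],
              [0.50, 0.70],
              [0.49, 1.00]])
w = entropy_weights(X)
```

The near-constant indicator carries almost no information across cells and receives a weight close to zero, while the varying indicator dominates — the defining behavior of entropy weighting as an "objective" alternative to expert-assigned weights.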

18 pages, 235 KB  
Article
Solving of a Variational Inequality Problem Under the Presence of Computational Errors
by Alexander J. Zaslavski
Mathematics 2026, 14(4), 664; https://doi.org/10.3390/math14040664 - 13 Feb 2026
Abstract
W. Takahashi and M. Toyoda (2003) proved weak convergence of an iteration process for solving a variational inequality problem for an inverse strongly monotone mapping. In our recent work, we showed that, for the same iterative process, most of its exact iterates are approximate solutions of the variational inequality. In this paper, we show that the iteration process for solving a variational inequality problem for an inverse strongly monotone mapping generates an approximate solution in the presence of small computational errors. We also estimate the number of iterates needed to obtain such an approximate solution. Full article
(This article belongs to the Special Issue Variational Problems and Applications, 3rd Edition)
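The iteration in question takes, in one standard form, x_{n+1} = a_n x_n + (1 - a_n) P_C(x_n - lam * A(x_n)). A toy one-dimensional instance (my choice of A, C, and parameters, purely for illustration) where A(x) = x - b is 1-inverse strongly monotone and the VI solution is the projection of b onto C:

```python
def project(x, lo=-1.0, hi=1.0):
    """Projection onto the box C = [lo, hi]."""
    return max(lo, min(hi, x))

def takahashi_toyoda(b=3.0, x0=0.0, alpha=0.5, lam=0.5, steps=200):
    """x_{n+1} = a*x_n + (1-a)*P_C(x_n - lam*A(x_n)),
    with A(x) = x - b, a 1-inverse-strongly-monotone map
    (toy choice, not from the paper; needs lam in (0, 2))."""
    x = x0
    for _ in range(steps):
        x = alpha * x + (1 - alpha) * project(x - lam * (x - b))
    return x

x_star = takahashi_toyoda()
# For b = 3 outside C = [-1, 1], the VI solution is x = 1
```

Rounding each iterate (a crude stand-in for computational error) perturbs but does not destroy this convergence, which is the phenomenon the paper quantifies: how small the errors must be, and how many iterates are needed, for the process to still deliver an approximate solution.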
24 pages, 6240 KB  
Article
YOLO-SEW: A Lightweight Cotton Apical Bud Detection Algorithm for Complex Cotton Field Environments
by Hao Li, Yuqiang Hou, Zeyu Li, Qiao Liu, Hongwen Zhang, Liping Chen, Qinhua Xu and Zekun Zhao
Agriculture 2026, 16(3), 350; https://doi.org/10.3390/agriculture16030350 - 1 Feb 2026
Abstract
With the advancement of cotton mechanized topping technology, deep learning-based methods for detecting cotton apical buds have made significant progress in improving detection accuracy. However, existing algorithms generally suffer from complex structures, large parameter counts, and high computational costs, making them difficult to deploy in practical field environments. To address this, this paper proposes a lightweight YOLO-SEW algorithm for detecting cotton apical buds in complex cotton field environments. Based on the YOLOv8 framework, the algorithm introduces Spatial and Channel Reconstruction Convolutions (SCConv) into the C2f module of the backbone network to reduce feature redundancy; embeds an Efficient Multi-scale Attention (EMA) module in the neck network to enhance feature extraction capabilities; and replaces the bounding box loss function with a dynamic non-monotonic focusing mechanism, WIoU, to accelerate model convergence. Experimental results on cotton apical bud data collected in complex field environments show that, compared to the original YOLOv8n algorithm, the YOLO-SEW algorithm reduces parameter count by 40.63%, computational load by 25%, and model size by 33.87%, while improving precision, recall, and mean average precision (mAP) by 1.2%, 2.5%, and 1.4%, respectively. Deployed on a Jetson Orin NX edge computing device and accelerated with TensorRT, the algorithm achieves a detection speed of 48 frames per second, effectively supporting real-time recognition of cotton apical buds and mechanized topping operations. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

22 pages, 5497 KB  
Article
Numerical Study of Combustion in a Methane–Hydrogen Co-Fired W-Shaped Radiant Tube Burner
by Daun Jeong, Seongbong Ha, Jeongwon Seo, Jinyeol Ahn, Dongkyu Lee, Byeongyun Bae, Jongseo Kwon and Gwang G. Lee
Energies 2026, 19(2), 557; https://doi.org/10.3390/en19020557 - 22 Jan 2026
Abstract
Three-dimensional computational fluid dynamics (CFD) simulation was performed using the eddy-dissipation concept coupled with detailed hydrogen oxidation kinetics and a reduced two-step methane mechanism for a newly proposed W-shaped radiant tube burner (RTB). The effects of the hydrogen volume fraction (0–100%) and excess air ratio (0%, 10%, 20%) on the flame morphology, temperature distribution, and NOx emissions are systematically analyzed. The results deliver three main points. First, a flame-shape transformation was identified in which the near-injector flame changes from a triangular attached mode to a splitting mode as the mixture reactivity increases, with the transition occurring at a characteristic laminar flame speed window of about 0.33 to 0.36 m/s. Second, NOx shows non-monotonic behavior with dilution, and 10% excess air can produce higher NOx than 0% or 20% because OH radical enhancement locally promotes thermal NO pathways despite partial cooling. Third, a multi-parameter coupling strategy was established showing that hydrogen enrichment raises the maximum gas temperature by roughly 100 to 200 K from 0% to 100% H2, while higher excess air improves axial temperature uniformity and can suppress NOx if over-dilution is avoided. These findings provide a quantitative operating map for balancing stability, uniform heating, and NOx–CO trade-offs in hydrogen-enriched industrial RTBs. Full article

25 pages, 1674 KB  
Article
Relaxed Monotonic QMIX (R-QMIX): A Regularized Value Factorization Approach to Decentralized Multi-Agent Reinforcement Learning
by Liam O’Brien and Hao Xu
Robotics 2026, 15(1), 28; https://doi.org/10.3390/robotics15010028 - 21 Jan 2026
Abstract
Value factorization methods have become a standard tool for cooperative multi-agent reinforcement learning (MARL) in the centralized-training, decentralized-execution (CTDE) setting. QMIX (a monotonic mixing network for value factorization), in particular, constrains the joint action–value function to be a monotonic mixing of per-agent utilities, which guarantees consistency with individual greedy policies but can severely limit expressiveness on tasks with non-monotonic agent interactions. This work revisits this design choice and proposes Relaxed Monotonic QMIX (R-QMIX), a simple regularized variant of QMIX that encourages but does not strictly enforce the monotonicity constraint. R-QMIX removes the sign constraints on the mixing network weights and introduces a differentiable penalty on negative partial derivatives of the joint value with respect to each agent’s utility. This preserves the computational benefits of value factorization while allowing the joint value to deviate from strict monotonicity when beneficial. R-QMIX is implemented in a standard PyMARL (an open-source MARL codebase) and evaluated on the StarCraft Multi-Agent Challenge (SMAC). On a simple map (3m), R-QMIX matches the asymptotic performance of QMIX while learning substantially faster. On more challenging maps (MMM2, 6h vs. 8z, and 27m vs. 30m), R-QMIX significantly improves both sample efficiency and final win rate (WR), for example increasing the final-quarter mean win rate from 42.3% to 97.1% on MMM2, from 0.0% to 57.5% on 6h vs. 8z, and from 58.0% to 96.6% on 27m vs. 30m. These results suggest that soft monotonicity regularization is a practical way to bridge the gap between strictly monotonic value factorization and fully unconstrained joint value functions. A further comparison against QTRAN (Q-value transformation), a more expressive value factorization method, shows that R-QMIX achieves higher and more reliably convergent win rates on the challenging SMAC maps considered. Full article
(This article belongs to the Special Issue AI-Powered Robotic Systems: Learning, Perception and Decision-Making)
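The soft monotonicity penalty described in the abstract (a differentiable penalty on negative partial derivatives of the joint value with respect to each agent's utility) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the toy mixer, its weights, and the finite-difference scheme are assumptions, and a real R-QMIX-style implementation would compute the partial derivatives with autograd through the mixing network rather than by finite differences.

```python
import numpy as np

def monotonicity_penalty(mixer, utilities, eps=1e-4):
    """Soft monotonicity penalty: average hinge on negative partial
    derivatives of the joint value w.r.t. each agent's utility,
    estimated here by forward finite differences."""
    q_tot = mixer(utilities)
    penalty = 0.0
    for i in range(len(utilities)):
        bumped = utilities.copy()
        bumped[i] += eps
        d_i = (mixer(bumped) - q_tot) / eps  # approx. dQ_tot / dq_i
        penalty += max(0.0, -d_i)            # only negative slopes are penalized
    return penalty / len(utilities)

# Toy unconstrained "mixer": one weight is negative, so strict
# monotonicity is violated and the penalty is nonzero.
w = np.array([1.0, -0.5, 2.0])
mixer = lambda q: float(w @ q)

q = np.array([0.3, 0.7, 0.1])
p = monotonicity_penalty(mixer, q)  # mean of [0, 0.5, 0] = 0.5 / 3
```

Adding this term to the TD loss (weighted by a coefficient) discourages, but does not forbid, regions where the joint value decreases in an agent's utility.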
25 pages, 6614 KB  
Article
Timer-Based Digitization of Analog Sensors Using Ramp-Crossing Time Encoding
by Gabriel Bravo, Ernesto Sifuentes, Geu M. Puentes-Conde, Francisco Enríquez-Aguilera, Juan Cota-Ruiz, Jose Díaz-Roman and Arnulfo Castro
Technologies 2026, 14(1), 72; https://doi.org/10.3390/technologies14010072 - 18 Jan 2026
Abstract
This work presents a time-domain analog-to-digital conversion method in which the amplitude of a sensor signal is encoded through its crossing instants with a periodic ramp. The proposed architecture departs from conventional ADC and PWM demodulation approaches by shifting quantization entirely to the time domain, enabling waveform reconstruction using only a ramp generator, an analog comparator, and a timer capture module. A theoretical framework is developed to formalize the voltage-to-time mapping, derive expressions for resolution and error, and identify the conditions ensuring monotonicity and single-crossing behavior. Simulation results demonstrate high-fidelity reconstruction for both periodic and non-periodic signals, including real photoplethysmographic (PPG) waveforms, with errors approaching the theoretical quantization limit. A hardware implementation on a PSoC 5LP microcontroller confirms the practicality of the method under realistic operating conditions. Despite ramp nonlinearity, comparator delay, and sensor noise, the system achieves effective resolutions above 12 bits using only native mixed-signal peripherals and no conventional ADC. These results show that accurate waveform reconstruction can be obtained from purely temporal information, positioning time-encoded sensing as a viable alternative to traditional amplitude-based conversion. The minimal analog front end, low power consumption, and scalability of timer-based processing highlight the potential of the proposed approach for embedded instrumentation, distributed sensor nodes, and biomedical monitoring applications. Full article
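The voltage-to-time mapping underlying this approach can be made concrete with a small sketch. The parameters below (3.3 V reference, 1 ms ramp period, 48 MHz timer clock) are illustrative assumptions, not values from the paper; the point is only that a monotone, single-crossing linear ramp turns amplitude quantization into timer-count quantization, with an amplitude step of vref / (f_timer * T) volts.

```python
def encode_crossing(v, vref=3.3, T=1e-3, f_timer=48e6):
    """Map a sensor voltage to a timer count via its crossing instant
    with a linear ramp v_r(t) = (t / T) * vref, 0 <= t <= T.
    Monotonicity of the ramp guarantees a single crossing per period."""
    t_c = (v / vref) * T             # crossing instant
    return round(t_c * f_timer)      # timer capture count

def decode_count(n, vref=3.3, T=1e-3, f_timer=48e6):
    """Invert the voltage-to-time mapping back to an amplitude estimate."""
    return (n / f_timer) / T * vref

# Quantization step in volts: one timer tick of ramp travel.
lsb = 3.3 / (48e6 * 1e-3)   # ~69 uV, i.e. ~15.5 effective bits over full scale
```

With these assumed numbers, round-tripping any in-range voltage through encode/decode reproduces it to within one LSB; real hardware additionally contends with ramp nonlinearity, comparator delay, and noise, as the abstract notes.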
13 pages, 310 KB  
Article
A Reflected–Forward–Backward Splitting Method for Monotone Inclusions Involving Lipschitz Operators in Banach Spaces
by Changchi Huang, Jigen Peng, Liqian Qin and Yuchao Tang
Mathematics 2026, 14(2), 245; https://doi.org/10.3390/math14020245 - 8 Jan 2026
Abstract
The reflected–forward–backward splitting (RFBS) method is well-established for solving monotone inclusion problems involving Lipschitz continuous operators in Hilbert spaces, where it converges weakly under mild assumptions. Extending this method to Banach spaces presents significant challenges, primarily due to the nonlinearity of the duality mapping. In this paper, we propose and analyze an RFBS algorithm in the setting of real Banach spaces that are 2-uniformly convex and uniformly smooth. To the best of our knowledge, this work presents the first strong (R-linear) convergence result for the RFBS method in such Banach spaces, achieved under a newly adapted notion of strong monotonicity. Our results thus establish a foundational theoretical guarantee for RFBS in Banach spaces under strengthened monotonicity conditions, while highlighting the open problem of proving weak convergence for the general monotone case. Full article
(This article belongs to the Special Issue Nonlinear Functional Analysis: Theory, Methods, and Applications)
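For orientation, the Hilbert-space RFBS iteration that this paper extends reads x_{k+1} = J_{lam*A}(x_k - lam * B(2*x_k - x_{k-1})), where J_{lam*A} is the resolvent of A and B is monotone and L-Lipschitz. The following is a minimal R^2 sketch under assumed toy operators (a monotone linear B, and A the normal cone of the nonnegative orthant, whose resolvent is a projection); it is not the paper's Banach-space algorithm, which additionally involves the duality mapping. A step size lam < 1/(2L) is used as a common sufficient condition from the Hilbert-space analyses.

```python
import numpy as np

def rfbs(prox, B, x0, lam, iters=2000):
    """Reflected-forward-backward splitting in R^n:
        x_{k+1} = J_{lam A}( x_k - lam * B(2 x_k - x_{k-1}) ),
    with the resolvent J_{lam A} supplied as `prox`."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = 2.0 * x - x_prev                 # reflected point
        x_prev, x = x, prox(x - lam * B(y))  # forward step on y, backward step via prox
    return x

# Toy inclusion 0 in A(x) + B(x): B is monotone (symmetric part 0.1*I) and
# Lipschitz (L = sqrt(1.01)); A is the normal cone of [0, inf)^2, so its
# resolvent is the projection onto the nonnegative orthant.
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
b = np.array([-1.0, 1.0])
B = lambda x: M @ x + b
prox = lambda z: np.maximum(z, 0.0)

sol = rfbs(prox, B, np.zeros(2), lam=0.3)   # lam = 0.3 < 1/(2L) ~ 0.497
```

Because B here is strongly monotone, the iterates converge linearly to the interior solution of M x + b = 0, consistent with the R-linear rates the paper obtains under strengthened monotonicity.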