Search Results (63)

Search Parameters:
Keywords = trade discrepancies

19 pages, 8766 KiB  
Article
Fusion of Airborne, SLAM-Based, and iPhone LiDAR for Accurate Forest Road Mapping in Harvesting Areas
by Evangelia Siafali, Vasilis Polychronos and Petros A. Tsioras
Land 2025, 14(8), 1553; https://doi.org/10.3390/land14081553 - 28 Jul 2025
Viewed by 339
Abstract
This study examined the integration of airborne Light Detection and Ranging (LiDAR), Simultaneous Localization and Mapping (SLAM)-based handheld LiDAR, and iPhone LiDAR to inspect forest road networks following forest operations. The goal was to overcome the challenges posed by dense canopy cover and ensure accurate and efficient data collection and mapping. Airborne data were collected using the DJI Matrice 300 RTK UAV equipped with a Zenmuse L2 LiDAR sensor, which achieved a high point density of 285 points/m² at an altitude of 80 m. Ground-level data were collected using the BLK2GO handheld laser scanner (HPLS) with SLAM methods (LiDAR SLAM, Visual SLAM, Inertial Measurement Unit) and the iPhone 13 Pro Max LiDAR. Data processing included generating DEMs, DSMs, and True Digital Orthophotos (TDOMs) via DJI Terra, LiDAR360 V8, and Cyclone REGISTER 360 PLUS, with additional processing and merging using CloudCompare V2 and ArcGIS Pro 3.4.0. The pairwise comparison analysis between ALS data and each alternative method revealed notable differences in elevation, highlighting discrepancies between methods. ALS + iPhone demonstrated the smallest deviation from ALS (MAE = 0.011, RMSE = 0.011, RE = 0.003%) and HPLS the largest deviation from ALS (MAE = 0.507, RMSE = 0.542, RE = 0.123%). The findings highlight the potential of fusing point clouds from diverse platforms to enhance forest road mapping accuracy. However, the selection of technology should consider trade-offs among accuracy, cost, and operational constraints. Mobile LiDAR solutions, particularly the iPhone, offer promising low-cost alternatives for certain applications. Future research should explore real-time fusion workflows and strategies to improve the cost-effectiveness and scalability of multisensor approaches for forest road monitoring. Full article
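The MAE, RMSE, and RE figures quoted in this abstract can be reproduced for any pair of co-registered elevation arrays. The sketch below is illustrative only: the toy elevation values are invented, and defining RE as MAE relative to the mean absolute reference elevation is an assumption, not necessarily the paper's exact formula.

```python
import numpy as np

def elevation_deviation(reference, candidate):
    """Deviation metrics between two co-registered elevation arrays,
    e.g. an ALS reference versus a fused ALS + iPhone surface."""
    diff = candidate - reference
    mae = float(np.mean(np.abs(diff)))                    # mean absolute error (m)
    rmse = float(np.sqrt(np.mean(diff ** 2)))             # root mean square error (m)
    re = 100.0 * mae / float(np.mean(np.abs(reference)))  # relative error (%)
    return mae, rmse, re

# Toy elevations in metres; real inputs would be gridded DEM cells.
als = np.array([812.40, 812.55, 813.10, 813.80])
fused = np.array([812.41, 812.54, 813.12, 813.79])
mae, rmse, re = elevation_deviation(als, fused)
```

Because RMSE squares the residuals, it always weights the occasional large deviation (e.g. under dense canopy) more heavily than MAE does, which is why the two metrics diverge for the HPLS comparison above.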

16 pages, 4826 KiB  
Article
Formulation-Driven Optimization of PEG-Lipid Content in Lipid Nanoparticles for Enhanced mRNA Delivery In Vitro and In Vivo
by Wei Liu, Meihui Zhang, Huiyuan Lv and Chuanxu Yang
Pharmaceutics 2025, 17(8), 950; https://doi.org/10.3390/pharmaceutics17080950 - 22 Jul 2025
Viewed by 380
Abstract
Background: Lipid nanoparticles (LNPs) represent one of the most effective non-viral vectors for nucleic acid delivery and have demonstrated clinical success in siRNA therapies and mRNA vaccines. While considerable research has focused on optimizing ionizable lipids and helper lipids, the impact of PEGylated lipid content on LNP-mediated mRNA delivery, especially in terms of in vitro transfection efficiency and in vivo performance, remains insufficiently understood. Methods: In this study, LNPs were formulated using a self-synthesized ionizable lipid and varying molar ratios of DMG-PEG2000. Nanoparticles were prepared via nanoprecipitation, and their physicochemical properties, mRNA encapsulation efficiency, cellular uptake, and transfection efficiency were evaluated in HeLa and DC2.4 cells. In vivo delivery efficiency and organ distribution were assessed in mice following intravenous administration. Results: The PEGylated lipid content exerted a significant influence on both the in vitro and in vivo performance of LNPs. A bell-shaped relationship between PEG content and transfection efficiency was observed: 1.5% DMG-PEG2000 yielded optimal mRNA transfection in vitro, while 5% DMG-PEG2000 resulted in the highest transgene expression in vivo. This discrepancy in optimal PEG content may be attributed to the trade-off between cellular uptake and systemic circulation: lower PEG levels enhance cellular internalization, whereas higher PEG levels improve stability and in vivo bioavailability at the expense of cellular entry. Furthermore, varying the PEG-lipid content enabled the partial modulation of organ distribution, offering a formulation-based strategy to influence biodistribution without altering the ionizable lipid structure. Conclusions: This study highlights the critical role of PEGylated lipid content in balancing nanoparticle stability, cellular uptake, and in vivo delivery performance. 
Our findings provide valuable mechanistic insights and suggest a straightforward formulation-based strategy to optimize LNP/mRNA systems for therapeutic applications. Full article

12 pages, 489 KiB  
Article
Generative Artificial Intelligence and Risk Appetite in Medical Decisions in Rheumatoid Arthritis
by Florian Berghea, Dan Andras and Elena Camelia Berghea
Appl. Sci. 2025, 15(10), 5700; https://doi.org/10.3390/app15105700 - 20 May 2025
Viewed by 689
Abstract
With Generative AI (GenAI) entering medicine, understanding its decision-making under uncertainty is important. It is well known that human subjective risk appetite influences medical decisions. This study investigated whether the risk appetite of GenAI can be evaluated and if established human risk assessment tools are applicable for this purpose in a medical context. Five GenAI systems (ChatGPT 4.5, Gemini 2.0, Qwen 2.5 MAX, DeepSeek-V3, and Perplexity) were evaluated using Rheumatoid Arthritis (RA) clinical scenarios. We employed two methods adapted from human risk assessment: the General Risk Propensity Scale (GRiPS) and the Time Trade-Off (TTO) technique. Queries involving RA cases with varying prognoses and hypothetical treatment choices were posed repeatedly to assess risk profiles and response consistency. All GenAIs consistently identified the same RA cases for the best and worst prognoses. However, the two risk assessment methodologies yielded varied results. The adapted GRiPS showed significant differences in general risk propensity among GenAIs (ChatGPT being the least risk-averse and Qwen/DeepSeek the most), though these differences diminished in specific prognostic contexts. Conversely, the TTO method indicated a strong general risk aversion (unwillingness to trade lifespan for pain relief) across systems yet revealed Perplexity as significantly more risk-tolerant than Gemini. The variability in risk profiles obtained using the GRiPS versus the TTO for the same AI systems raises questions about tool applicability. This discrepancy suggests that these human-centric instruments may not adequately or consistently capture the nuances of risk processing in Artificial Intelligence. The findings imply that current tools might be insufficient, highlighting the need for methodologies specifically tailored for evaluating AI decision-making under medical uncertainty. Full article
(This article belongs to the Special Issue Machine Learning in Biomedical Sciences)

20 pages, 12773 KiB  
Article
Multi-Scale Sponge Capacity Trading and SLSQP for Stormwater Management Optimization
by An-Kang Liu, Qing Xu, Wen-Jin Zhu, Yang Zhang, De-Long Huang, Qing-Hai Xie, Chun-Bo Jiang and Hai-Ruo Wang
Sustainability 2025, 17(10), 4646; https://doi.org/10.3390/su17104646 - 19 May 2025
Viewed by 392
Abstract
Low-impact development (LID) facilities serve as a fundamental approach in urban stormwater management. However, significant variations in land use among different plots lead to discrepancies in runoff reduction demands, often resulting in either the over- or under-implementation of LID infrastructure. To address this issue, we propose a cost-effective optimization framework grounded in the concept of “Capacity Trading (CT)”. The study area was partitioned into multi-scale grids (CT-100, CT-200, CT-500, and CT-1000) to systematically investigate runoff redistribution across heterogeneous land parcels. Integrated with the Sequential Least Squares Programming (SLSQP) optimization algorithm, LID facilities are allocated according to demand under two independent constraint conditions: runoff coefficient (φ ≤ 0.49) and runoff control rate (η ≥ 70%). A quantitative analysis was conducted to evaluate the construction cost and reduction effectiveness across different trading scales. The key findings include the following: (1) At a constant return period, increasing the trading scale significantly reduces the demand for LID facility construction. Expanding trading scales from CT-100 to CT-1000 reduces LID area requirements by 28.33–142.86 ha under the φ-constraint and 25.5–197.19 ha under the η-constraint. (2) Systematic evaluations revealed that CT-500 optimized cost-effectiveness by balancing infrastructure investments and hydrological performance. This scale allows for coordinated construction, avoiding the high costs associated with small-scale trading (CT-100 and CT-200) while mitigating the diminishing returns observed in large-scale trading (CT-1000). This study provides a refined and efficient solution for urban stormwater management, overcoming the limitations of traditional approaches and demonstrating significant practical value. Full article
(This article belongs to the Special Issue Sustainable Stormwater Management and Green Infrastructure)
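As a rough illustration of the kind of constrained allocation SLSQP handles in this framework, the sketch below minimizes a LID construction cost subject to an area-weighted runoff-coefficient target. Only the φ ≤ 0.49 threshold comes from the abstract; the parcel areas, runoff coefficients, unit costs, and treated-surface coefficient are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical parcels: areas (ha), pre-LID runoff coefficients, and
# relative LID cost per treated hectare.
area = np.array([12.0, 8.0, 20.0])
phi0 = np.array([0.80, 0.65, 0.55])
unit_cost = np.array([1.0, 1.2, 0.9])
PHI_LID = 0.20      # runoff coefficient of a fully treated surface (assumed)
PHI_TARGET = 0.49   # constraint from the study: area-weighted phi <= 0.49

def cost(x):        # x = fraction of each parcel converted to LID
    return float(unit_cost @ (area * x))

def phi_margin(x):  # >= 0 when the runoff-coefficient target is met
    phi = phi0 * (1 - x) + PHI_LID * x
    return PHI_TARGET - float(area @ phi) / area.sum()

res = minimize(cost, x0=np.full(3, 0.5), method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,
               constraints=[{"type": "ineq", "fun": phi_margin}])
```

The optimizer concentrates LID on the parcel with the cheapest cost per unit of runoff reduction, which mirrors the paper's point that trading capacity across parcels avoids over-building LID everywhere.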

21 pages, 6959 KiB  
Article
Multi-Domain Digital Twin and Real-Time Performance Optimization for Marine Steam Turbines
by Yuhui Liu, Duansen Shangguan, Liping Chen, Xiaoyan Liu, Guihao Yin and Gang Li
Symmetry 2025, 17(5), 689; https://doi.org/10.3390/sym17050689 - 30 Apr 2025
Viewed by 752
Abstract
The digital twin model, which serves as a virtual counterpart symmetric to the physical entity, enables high-fidelity simulation and real-time monitoring. However, digital twin implementation for marine steam turbines (MSTs) faces the dual challenges of multi-domain simulation fidelity and computational efficiency. This study establishes an MST digital twin modeling methodology through two interconnected innovations: (1) a Modelica-based modular architecture enabling cross-domain coupling across mechanical, thermodynamic, and hydrodynamic systems via hierarchical decomposition, ensuring bidirectional symmetry between physical components and their virtual representations; and (2) a hybrid support vector regression-bidirectional long short-term memory (SVR-BiLSTM) surrogate model combining Gaussian radial basis function-supported SVR for steady-state mapping with Bi-LSTM networks for dynamic error compensation. Experimental validation demonstrates: (a) the SVR component achieves <1.57% absolute error under step-load conditions with 85% computational time reduction versus physics-based models; and (b) Bi-LSTM integration improves transient prediction accuracy by 14.85% in maximum absolute error compared to standalone SVR, effectively resolving static–dynamic discrepancies in telemetry simulation. This dual-approach innovation successfully bridges the critical trade-off between real-time computation and predictive accuracy while maintaining symmetric consistency between the physical turbine and its digital counterpart, providing a validated technical foundation for the intelligent operation and maintenance of MSTs. Full article
(This article belongs to the Section Engineering and Materials)

22 pages, 1475 KiB  
Article
Leveraging Precision Agriculture Principles for Eco-Efficiency: Performance of Common Bean Production Across Irrigation Levels and Sowing Periods
by Aleksa Lipovac, Kledja Canaj, Andi Mehmeti, Mladen Todorovic, Marija Ćosić, Nevenka Djurović and Ružica Stričević
Water 2025, 17(9), 1312; https://doi.org/10.3390/w17091312 - 27 Apr 2025
Viewed by 640
Abstract
Optimizing irrigation and sowing schedules is critical for enhancing crop performance and resource efficiency, especially in water-limited environments. However, balancing the trade-offs between crop yield, energy use, and environmental impacts remains a complex challenge. This study investigates the eco-efficiency of common bean (Phaseolus vulgaris L.) cultivation in the Vojvodina region (Serbia) under three irrigation regimes (100%, 80%, and 60% of crop evapotranspiration—ETc) and three sowing periods (mid-April, late May/early June, and late June/early July). A combined energy analysis and cradle-to-farm gate Life Cycle Assessment (LCA) was employed to assess sustainability trade-offs. Results show that early sowing with full irrigation achieved the highest crop yields, energy use efficiency, and net energy gain while minimizing specific energy input. However, this strategy also incurred the greatest environmental burden due to elevated water and fertilizer inputs. In contrast, late sowing and deficit irrigation reduced environmental impacts at the expense of productivity and energy performance. The most balanced outcome—combining acceptable yield with lower environmental pressure—was observed under early sowing (mid-April) and moderate deficit irrigation (60% of ETc). Importantly, the study reveals discrepancies between energy and environmental assessments; energy analysis favors high-yield, high-input systems, whereas LCA emphasizes environmental burdens per unit area, often favoring low-input strategies. These findings underscore the need for integrated, site-specific management approaches that optimize both agronomic performance and environmental sustainability, particularly under growing climate and resource constraints. Full article
(This article belongs to the Section Water, Agriculture and Aquaculture)

19 pages, 16342 KiB  
Article
Revolutionizing Open-Pit Mining Fleet Management: Integrating Computer Vision and Multi-Objective Optimization for Real-Time Truck Dispatching
by Kürşat Hasözdemir, Mert Meral and Muhammet Mustafa Kahraman
Appl. Sci. 2025, 15(9), 4603; https://doi.org/10.3390/app15094603 - 22 Apr 2025
Viewed by 1148
Abstract
The implementation of fleet management software in mining operations poses challenges, including high initial costs and the need for skilled personnel. Additionally, integrating new software with existing systems can be complex, requiring significant time and resources. This study aims to mitigate these challenges by leveraging advanced technologies to reduce initial costs and minimize reliance on highly trained employees. Through the integration of computer vision and multi-objective optimization, it seeks to enhance operational efficiency and optimize fleet management in open-pit mining. The objective is to optimize truck-to-excavator assignments, thereby reducing excavator idle time and deviations from production targets. A YOLO v8 model, trained on six hours of mine video footage, identifies vehicles at excavators and dump sites for real-time monitoring. Extracted data—including truck assignments and excavator ready times—is incorporated into a multi-objective binary integer programming model that aims to minimize excavator waiting times and discrepancies in target truck assignments. The epsilon-constraint method generates a Pareto frontier, illustrating trade-offs between these objectives. Integrating real-time image analysis with optimization significantly improves operational efficiency, enabling adaptive truck-excavator allocation. This study highlights the potential of advanced computer vision and optimization techniques to enhance fleet management in mining, leading to more cost-effective and data-driven decision-making. Full article
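The epsilon-constraint method mentioned above can be sketched on a toy dispatching instance: minimize one objective while bounding the other, then sweep the bound to trace the Pareto frontier. The truck cycle times and target counts below are invented, and brute-force enumeration stands in for the paper's binary integer programming model.

```python
from itertools import product

# Toy instance (hypothetical numbers): haul[i][j] is the cycle time of truck i
# when assigned to excavator j; target gives the desired trucks per excavator.
haul = [[2, 9], [3, 8], [4, 1], [5, 1]]
target = [3, 1]

def objectives(assign):
    total_time = sum(haul[i][j] for i, j in enumerate(assign))
    counts = [assign.count(0), assign.count(1)]
    deviation = sum(abs(t - c) for t, c in zip(target, counts))
    return total_time, deviation

def epsilon_constraint(eps):
    """Minimize total time subject to deviation <= eps (brute force)."""
    feasible = (objectives(a) for a in product((0, 1), repeat=4)
                if objectives(a)[1] <= eps)
    return min(feasible, default=None)

# Sweeping epsilon traces the Pareto frontier of (time, deviation) pairs:
# here, spending 3 extra time units removes all deviation from the target.
frontier = sorted({epsilon_constraint(e) for e in (0, 2, 4)})
```

Each epsilon value yields one non-dominated solution, so the sweep exposes exactly the trade-off the paper visualizes between excavator waiting time and deviation from target assignments.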

19 pages, 4719 KiB  
Article
Adapting the High-Resolution PlanetScope Biomass Model to Low-Resolution VIIRS Imagery Using Spectral Harmonization: A Case of Grassland Monitoring in Mongolia
by Margad-Erdene Jargalsaikhan, Masahiko Nagai, Begzsuren Tumendemberel, Erdenebaatar Dashdondog, Vaibhav Katiyar and Dorj Ichikawa
Remote Sens. 2025, 17(8), 1428; https://doi.org/10.3390/rs17081428 - 17 Apr 2025
Cited by 1 | Viewed by 800
Abstract
Monitoring grassland biomass accurately and frequently is critical for ecological management, climate change assessment, and sustainable resource use. However, the use of single-satellite data faces challenges due to trade-offs between spatial resolution and temporal frequency, especially for large areas. High-resolution imagery, such as PlanetScope, provides detailed spatial data but presents significant challenges in data management and processing over large regions. Conversely, low-resolution sensors such as JPSS-VIIRS offer daily global coverage with small data volumes but lack the spatial detail required for precise biomass estimation, making it difficult to retrieve or validate model parameters due to the mismatch with small ground reference data polygons. To overcome these limitations, this study introduces a robust methodology for accurate, frequent biomass estimation based on JPSS-VIIRS data through spectral harmonization, adapting a high-resolution biomass estimation model originally developed from PlanetScope imagery. The core innovation is an optimized Spectral Band Adjustment Factor (SBAF) approach tailored specifically to grassland spectral characteristics. This method significantly enhances spectral alignment, reducing red-band reflectance discrepancies from 6.2% to 4.8% in grassy areas and from 6.9% to 4.0% in bare areas. NDVI discrepancies also improved substantially. Applied across Mongolia, the harmonized VIIRS data estimated a five-year average biomass of 71.4 g/m², clearly reflecting environmental variability. Specifically, the P375 dataset showed average biomass estimates of 54.8 g/m² for desert grasslands (10.5% higher than PlanetScope), 122.6 g/m² for dry grasslands (9.6% higher), and 134 g/m² for mountain grasslands (1.9% lower). The uncertainty analysis showed strong overall agreement with PlanetScope-derived biomass, with an RMSE of 11.6 g/m², a mean percentage difference of 10.74%, and an R² of 0.92. 
While mountain grasslands exhibited the lowest RMSE, a relatively lower R² indicated limited variability. Higher uncertainty in desert and dry grasslands highlighted the impact of ecological heterogeneity on biomass estimation accuracy. These detailed comparisons demonstrate the effectiveness and accuracy of the proposed methodology in bridging spatial and temporal gaps, providing a valuable tool for large-scale weekly grassland biomass monitoring with applicability beyond the Mongolian context. Full article
(This article belongs to the Special Issue Vegetation Mapping through Multiscale Remote Sensing)
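A Spectral Band Adjustment Factor in its simplest band-ratio form can be sketched as follows; the paired reflectance values are invented, and the paper's optimized, land-cover-specific SBAF is more elaborate than this mean-ratio version.

```python
# Hypothetical paired red-band surface reflectances over common grassland
# targets: (PlanetScope value, VIIRS value) for the same ground location.
pairs = [(0.082, 0.095), (0.110, 0.126), (0.095, 0.108), (0.130, 0.149)]

# SBAF as a band-mean ratio: a multiplicative factor that scales VIIRS
# reflectance toward the PlanetScope spectral response.
sbaf = sum(p for p, _ in pairs) / sum(v for _, v in pairs)

viirs_red = 0.120
harmonized = sbaf * viirs_red   # VIIRS red adjusted to the PlanetScope scale
```

Once the red (and NIR) bands are adjusted this way, the high-resolution biomass model can be applied to the coarse imagery, which is the harmonization step the abstract describes.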

23 pages, 11116 KiB  
Article
Mathematical Modeling and Simulation of Logistic Growth
by Camilla Pelagalli, Stefano Faccio and Paolo Casari
Appl. Sci. 2025, 15(8), 4409; https://doi.org/10.3390/app15084409 - 16 Apr 2025
Viewed by 1048
Abstract
We propose a reproducible pipeline of work consisting of the time-driven simulation of discrete logistic growth based on the corresponding master equation, focusing on demographic variation under a carrying capacity limit. The mathematical modeling that leads to the stochastic implementation is presented in a step-by-step fashion to statistically ground the designed simulation. The main parameters of the system, whose settings include extreme values, are varied to analyze the simulation behavior and explore the empirical limits of its applicability, minimizing the distance between the theoretical and observed carrying capacity through parameter tuning. After such tuning, a single simulation scenario is chosen and compared with the state-of-the-art Gillespie algorithm, which adopts a contrasting event-driven approach. The output analysis of these two strategies and the assessment of their statistical significance highlight the trade-off between adherence to the model and the computational effort of the proposed approach, while shedding light on multiple facets of logistic growth, including discrepancies between continuous and discrete models. Full article
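The event-driven Gillespie approach contrasted here can be sketched for a stochastic logistic birth–death process; the rate choices (birth βn, death βn²/K, so the population fluctuates around the carrying capacity K) are one common formulation, not necessarily the paper's exact master equation.

```python
import random

def gillespie_logistic(n0=5, K=100, beta=1.0, t_max=50.0, seed=42):
    """Event-driven (Gillespie) simulation of stochastic logistic growth.

    Instead of stepping time on a fixed grid, each iteration draws an
    exponential waiting time from the total event rate, then picks a
    birth or death in proportion to their rates."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while t < t_max and n > 0:
        birth = beta * n            # per-capita birth rate times population
        death = beta * n * n / K    # crowding term enforcing the capacity K
        total = birth + death
        t += rng.expovariate(total)
        n += 1 if rng.random() < birth / total else -1
    return n

final_n = gillespie_logistic()
```

A time-driven simulation of the same process would check every small time step for an event, spending effort even when nothing happens; the event-driven version jumps straight to the next event, which is the computational trade-off the abstract analyzes.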

29 pages, 1840 KiB  
Article
Fractional-Order System Identification: Efficient Reduced-Order Modeling with Particle Swarm Optimization and AI-Based Algorithms for Edge Computing Applications
by Ignacio Fidalgo Astorquia, Nerea Gómez-Larrakoetxea, Juan J. Gude and Iker Pastor
Mathematics 2025, 13(8), 1308; https://doi.org/10.3390/math13081308 - 16 Apr 2025
Cited by 1 | Viewed by 491
Abstract
Fractional-order systems capture complex dynamic behaviors more accurately than integer-order models, yet their real-time identification remains challenging, particularly in resource-constrained environments. This work proposes a hybrid framework that combines Particle Swarm Optimization (PSO) with various artificial intelligence (AI) techniques to estimate reduced-order models of fractional systems. First, PSO optimizes model parameters by minimizing the discrepancy between the high-order system response and the reduced model output. These optimized parameters then serve as training data for several AI-based algorithms—including neural networks, support vector regression (SVR), and extreme gradient boosting (XGBoost)—to evaluate their inference speed and accuracy. Experimental validation on a custom-built heating system demonstrates that both PSO and the AI techniques yield precise reduced-order models. While PSO achieves slightly lower error metrics, its iterative nature leads to higher and more variable computation times compared to the deterministic and rapid inference of AI approaches. These findings highlight a trade-off between estimation accuracy and computational efficiency, providing a robust solution for real-time fractional-order system identification on edge devices. Full article
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)
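The PSO stage, which fits a reduced-order model by minimizing the discrepancy with a recorded response, can be sketched in one dimension; the first-order model, the inertia and acceleration coefficients, and the synthetic step response are assumptions for illustration.

```python
import math
import random

def pso_fit_tau(t, y, n_particles=20, iters=60, seed=0):
    """Tiny PSO fitting a first-order model y_hat = 1 - exp(-t/tau)
    to a recorded step response by minimizing the squared error."""
    rng = random.Random(seed)

    def cost(tau):
        return sum((yi - (1 - math.exp(-ti / tau))) ** 2
                   for ti, yi in zip(t, y))

    pos = [rng.uniform(0.1, 10.0) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # each particle's best position so far
    gbest = min(pos, key=cost)          # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(10.0, max(0.1, pos[i] + vel[i]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=cost)
    return gbest

# Synthetic "high-order" response, here generated from tau = 2.0, so the
# swarm should recover a time constant close to 2.
t = [i * 0.5 for i in range(20)]
y = [1 - math.exp(-ti / 2.0) for ti in t]
tau_hat = pso_fit_tau(t, y)
```

In the paper's workflow, parameters found this way become training targets for the AI models, trading PSO's iterative (and variable-time) search for a single fast inference pass on the edge device.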

14 pages, 1620 KiB  
Article
Transcriptional and Physiological Responses of Saccharomyces cerevisiae CZ to Octanoic Acid Stress
by Zhi-Hai Yu, Ming-Zhi Shi, Wen-Xuan Dong, Xiao-Zhu Liu, Wei-Yuan Tang and Ming-Zheng Huang
Fermentation 2025, 11(4), 180; https://doi.org/10.3390/fermentation11040180 - 1 Apr 2025
Viewed by 555
Abstract
This study elucidates the adaptive mechanisms of Saccharomyces cerevisiae CZ under octanoic acid stress, revealing concentration-dependent growth inhibition (76% lethality at 800 mg/L) and notable tolerance at 600 mg/L. Initial exposure (≤6 h) showed no growth impairment, but prolonged treatment induced dose-dependent lethality, accompanied by reduced H+/K+-ATPase activity and elevated malondialdehyde (MDA) levels, indicative of oxidative damage. Transcriptomic profiling of 5665 genes highlighted the predominant downregulation of ribosomal functions (translation, ribosome biogenesis) and amino acid metabolism pathways (e.g., ARO10, ARO9). Strain-specific regulatory dynamics were observed: (1) TPO1-mediated efflux was active at 400 mg/L but absent at 600 mg/L, suggesting compensatory mechanisms under high stress; (2) HTX1-related genes exhibited bidirectional regulation (downregulated at 400 mg/L vs. upregulated at 600 mg/L), reflecting metabolic flexibility; (3) ACC1 downregulation (600 mg/L) and unaltered SFK1 expression contrasted with lipid-remodeling strategies in engineered strains; and (4) PMA2 suppression diverged from literature-reported PMA1 activation, underscoring strain-specific energy reallocation. Suppression of ergosterol biosynthesis and ribosomal genes revealed a trade-off between stress adaptation and biosynthetic processes. These findings reconcile prior contradictions by attributing discrepancies to genetic backgrounds (CZ vs. laboratory/engineered strains) and methodological variations. Unlike strains relying on phospholipid asymmetry or oleic acid overproduction, CZ’s unique tolerance stems from integrated membrane homeostasis (via lipid balance) and metabolic conservation. This work emphasizes the critical role of strain-specific regulatory networks in octanoic acid resistance and provides insights for optimizing yeast robustness through targeted engineering of membrane stability and metabolic adaptability. 
Future studies should employ multi-omics integration to unravel the dynamic gene regulatory logic underlying these adaptive traits. Full article

19 pages, 1572 KiB  
Article
FeTT: Class-Incremental Learning with Feature Transformation Tuning
by Sunyuan Qiang and Yanyan Liang
Mathematics 2025, 13(7), 1095; https://doi.org/10.3390/math13071095 - 27 Mar 2025
Viewed by 643
Abstract
Class-incremental learning (CIL) enables models to continuously acquire knowledge and adapt in an ever-changing environment. However, one primary challenge lies in the trade-off between stability and plasticity, i.e., plastically expanding the novel knowledge base while stably retaining previous knowledge without catastrophic forgetting. We find that even recent promising CIL methods via pre-trained models (PTMs) still suffer from this dilemma. To this end, this paper begins by analyzing the aforementioned dilemma from the perspective of marginal distribution for data categories. Then, we propose the feature transformation tuning (FeTT) model, which concurrently alleviates the inadequacy of previous PTM-based CIL in terms of stability and plasticity. Specifically, we apply the parameter-efficient fine-tuning (PEFT) strategies solely in the first CIL task to bridge the domain gap between the PTMs and downstream task dataset. Subsequently, the model is kept fixed to maintain stability and avoid discrepancies in training data distributions. Moreover, feature transformation is employed to regulate the backbone representations, boosting the model’s adaptability and plasticity without additional training or parameter costs. Extensive experimental results and further discussion of feature channel activations on CIL benchmarks across six datasets validate the superior performance of our proposed method. Full article
(This article belongs to the Special Issue New Insights in Machine Learning (ML) and Deep Neural Networks)

28 pages, 725 KiB  
Article
Lost Institutional Memory and Policy Advice: The Royal Society of Arts on the Circular Economy Through the Centuries
by Pierre Desrochers
Recycling 2025, 10(2), 49; https://doi.org/10.3390/recycling10020049 - 19 Mar 2025
Viewed by 1226
Abstract
Circular economy theorists and advocates typically describe traditional market economies as linear “take, make, use and dispose” systems. Various policy interventions, from green taxes to extended producer responsibility, are therefore deemed essential to ensure the systematic (re)introduction of residuals, secondary materials and components in manufacturing activities. By contrast, many nineteenth- and early twentieth-century writers documented how the profit motive, long-distance trade and actors now largely absent from present-day circularity discussions (e.g., waste dealers and brokers) spontaneously created ever more value out of the recovery of residuals and waste. These opposite assessments and underlying perspectives are perhaps best illustrated in the nineteenth-century classical liberal and early twenty-first-century interventionist writings on circularity of Fellows, members and collaborators of the nearly tricentennial British Royal Society for the Encouragement of Arts, Manufactures and Commerce. This article summarizes their respective contributions and compares their stance on market institutions, design, intermediaries, extended producer responsibility and long-distance trade. Some hypotheses as to the sources of their analytical discrepancies and current beliefs on resource recovery are then discussed in more detail. A final suggestion is made that, if the analysis offered by early contributors is more correct, then perhaps the most important step towards greater circularity is regulatory reform (or deregulation) that would facilitate the spontaneous recovery of residuals and their processing in the most suitable, if sometimes more distant, locations. Full article

14 pages, 3326 KiB  
Article
Accuracy of Measuring Methods of Pile Volume of Forest Harvesting Residues and Economic Impacts
by Ladislav Zvěřina, Miloš Cibulka, Radomír Ulrich, Tomáš Badal and Václav Kupčák
Forests 2025, 16(3), 498; https://doi.org/10.3390/f16030498 - 12 Mar 2025
Viewed by 529
Abstract
The accurate measurement of logging residue volume is essential for efficient resource management and economic planning in the biomass supply chain. This study compares 3D laser scanning using a mobile ZEB-HORIZON™ scanner and conventional manual measurement with a measuring tape and staff rod. Measurements were conducted at three locations in the Czech Republic, covering a representative sample of logging residue piles. The results indicate that manual measurement systematically overestimates biomass volume by approximately 35%, leading to potential inaccuracies in biomass trade and logistics. The average conversion coefficient was 0.35 for laser scanning and 0.23 for manual measurement, confirming the higher precision of 3D scanning. Statistical analysis, including the Shapiro–Wilk test for normality and a paired t-test, confirmed that the differences between methods were statistically significant (p < 0.0001). Economic analysis suggests that adopting 3D laser scanning can enhance logistics planning, optimize transport capacities, and improve fairness in business transactions. Compared to manual measurement, laser scanning reduces measurement time by approximately two-thirds while preventing overestimation errors that can lead to discrepancies exceeding three times the actual biomass revenues. Unlike manual methods, laser scanning eliminates measurement inconsistencies caused by pile irregularities, terrain conditions, and human error. The study recommends prioritizing 3D laser scanning for measuring logging residue volumes, particularly for larger and irregularly shaped piles, and incorporating moisture content analysis in economic assessments to improve pricing accuracy and transparency. Full article
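The conversion-coefficient arithmetic behind these figures can be sketched in a few lines. The coefficients (0.35 for laser scanning, 0.23 for manual measurement) and the roughly 35% manual overestimation are taken from the abstract; the pile volume and unit price are purely illustrative assumptions, not values from the study.

```python
# Sketch: how loose pile volume converts to solid wood volume, and how a
# manual overestimation of pile volume propagates to traded revenue.
# Coefficients and the 35% figure are from the abstract; the pile volume
# and the price per solid cubic metre are illustrative assumptions.

COEFF_SCAN = 0.35    # solid m^3 per loose m^3, 3D laser scanning
COEFF_MANUAL = 0.23  # solid m^3 per loose m^3, tape and staff rod
OVERESTIMATE = 1.35  # manual pile volume ~ 135% of scanned pile volume

def solid_volume(pile_volume_m3: float, coeff: float) -> float:
    """Convert a loose pile volume to an estimated solid wood volume."""
    return pile_volume_m3 * coeff

pile_scan = 100.0                       # assumed scanned pile volume, m^3
pile_manual = pile_scan * OVERESTIMATE  # same pile, measured manually, m^3

price = 20.0  # assumed price per solid m^3 (illustrative)
revenue_scan = solid_volume(pile_scan, COEFF_SCAN) * price
revenue_manual = solid_volume(pile_manual, COEFF_MANUAL) * price

print(round(revenue_scan, 2), round(revenue_manual, 2))
```

Because revenue scales linearly with the volume estimate, any systematic error in the pile measurement carries straight through to the settled price, which is why the study ties measurement method directly to fairness in biomass transactions.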
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

16 pages, 1736 KiB  
Article
A Comparative Study on the Average CO2 Emission Factors of Electricity of China
by Feng Chen, Jingyu Lei, Zilong Liu and Xingchuang Xiong
Energies 2025, 18(3), 654; https://doi.org/10.3390/en18030654 - 30 Jan 2025
Cited by 3 | Viewed by 1093
Abstract
The intensification of global climate change and the resulting environmental challenges have made carbon emission control a focal point of global attention. As one of the major sources of carbon emissions, the power sector requires accurate quantification of its CO2 emissions, which is essential for formulating effective emission reduction policies and action plans. The average CO2 emission factor of electricity (AEF), as a key parameter, is widely used in calculating indirect carbon emissions from purchased electricity in various industries. The International Energy Agency (IEA) reported an AEF of 0.6093 kg CO2/kWh for China in 2021, while the Ministry of Ecology and Environment of China (MEE) officially reported a value of 0.5568 kg CO2/kWh, resulting in a discrepancy of 9.43%. This study conducts an in-depth analysis of the calculation methodologies used by the MEE and the IEA, comparing them along two critical dimensions, calculation formulas and data sources, to explore potential causes of the observed discrepancies. Differences in formula components include factors such as electricity trade, the allocation of emissions from combined heat and power (CHP) plants, and emissions from own energy use in power plants. Notably, the IEA’s inclusion of CHP allocation reduces its calculated emissions by 10.99%. Regarding data sources, this study focuses on total carbon emissions and total electricity generation, revealing that the IEA’s total carbon emissions exceed those of the MEE by 9.71%. This exploratory analysis of the discrepancies in China’s AEFs provides valuable insights and a foundational basis for further research. Full article
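The reported 9.43% discrepancy follows directly from the two published factors. A minimal check, using only the values quoted in the abstract (the purchased-electricity figure in the usage example is an illustrative assumption):

```python
# Relative discrepancy between the IEA and MEE average CO2 emission
# factors (AEFs) for China's electricity in 2021, values per the abstract.

aef_iea = 0.6093  # kg CO2 per kWh, International Energy Agency
aef_mee = 0.5568  # kg CO2 per kWh, Ministry of Ecology and Environment

# Discrepancy expressed relative to the MEE value.
discrepancy_pct = (aef_iea - aef_mee) / aef_mee * 100

# The AEF converts purchased electricity into indirect emissions;
# the 1 GWh figure here is an illustrative assumption.
purchased_kwh = 1_000_000.0
indirect_t_iea = purchased_kwh * aef_iea / 1000  # tonnes CO2
indirect_t_mee = purchased_kwh * aef_mee / 1000  # tonnes CO2

print(f"{discrepancy_pct:.2f}%")
```

The same relative gap appears in any indirect-emissions total computed from purchased electricity, which is why the choice between the two factors matters for corporate and sectoral inventories.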
(This article belongs to the Section B: Energy and Environment)
