Search Results (250)

Search Parameters:
Keywords = energy budget method

26 pages, 2090 KB  
Article
Translating the One Security Framework for Global Sustainability: From Concept to Operational Model
by Minhyung Park and Alex McBratney
Sustainability 2026, 18(2), 1031; https://doi.org/10.3390/su18021031 - 19 Jan 2026
Viewed by 204
Abstract
Fragmented, sector-by-sector governance is poorly suited to cascading risks that couple climate, food, water, health, biodiversity, soils, energy, and environmental quality. This paper addresses the translation gap between integrative security–sustainability paradigms and the routine machinery of government, including planning, budgeting, procurement, and accountability. We develop the Spheres of Security (SOS) model as a conceptual–operational method organised around four overlapping spheres (biophysical, economic, social, and governance) and a repeatable cycle—diagnose → co-design → deliver → demonstrate → adapt—illustrated through two stylised vignettes (urban heat and health; watershed food–water–energy). SOS introduces an auditable overlap rule and an Overlap Score, supported by lean assurance, to make verified multi-sphere co-benefits commissionable and to surface trade-offs transparently within normal, accountable institutions (consistent with weak securitisation). We provide implementation guidance, including minimum institutional preconditions and staged entry-point options for jurisdictions where pooled budgets and full administrative integration are not immediately feasible.

50 pages, 3712 KB  
Article
Explainable AI and Multi-Agent Systems for Energy Management in IoT-Edge Environments: A State of the Art Review
by Carlos Álvarez-López, Alfonso González-Briones and Tiancheng Li
Electronics 2026, 15(2), 385; https://doi.org/10.3390/electronics15020385 - 15 Jan 2026
Viewed by 302
Abstract
This paper reviews Artificial Intelligence techniques for distributed energy management, focusing on integrating machine learning, reinforcement learning, and multi-agent systems within IoT-Edge-Cloud architectures. As energy infrastructures become increasingly decentralized and heterogeneous, AI must operate under strict latency, privacy, and resource constraints while remaining transparent and auditable. The study examines predictive models ranging from statistical time series approaches to machine learning regressors and deep neural architectures, assessing their suitability for embedded deployment and federated learning. Optimization methods—including heuristic strategies, metaheuristics, model predictive control, and reinforcement learning—are analyzed in terms of computational feasibility and real-time responsiveness. Explainability is treated as a fundamental requirement, supported by model-agnostic techniques that enable trust, regulatory compliance, and interpretable coordination in multi-agent environments. The review synthesizes advances in MARL for decentralized control, communication protocols enabling interoperability, and hardware-aware design for low-power edge devices. Benchmarking guidelines and key performance indicators are introduced to evaluate accuracy, latency, robustness, and transparency across distributed deployments. Key challenges remain in stabilizing explanations for RL policies, balancing model complexity with latency budgets, and ensuring scalable, privacy-preserving learning under non-stationary conditions. The paper concludes by outlining a conceptual framework for explainable, distributed energy intelligence and identifying research opportunities to build resilient, transparent smart energy ecosystems.
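
The review treats model-agnostic explainability as a core requirement; a minimal sketch makes this concrete. The snippet below applies permutation importance, one technique in that model-agnostic family, to a toy load forecaster. The features, data, and model are invented for illustration and are not drawn from the paper.

```python
# Hypothetical example: model-agnostic explanation of a short-term load
# forecaster via permutation importance (features and data are illustrative).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 24, n),    # hour of day
    rng.uniform(-5, 35, n),   # outdoor temperature (C)
    rng.integers(0, 2, n),    # weekend flag
])
# Synthetic load: daily cycle + cooling demand + weekend reduction + noise
y = (np.sin(X[:, 0] / 24 * 2 * np.pi) * 5
     + np.clip(X[:, 1] - 20, 0, None) * 0.8
     - X[:, 2] * 2 + rng.normal(0, 0.5, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Permutation importance: drop in R^2 when each feature is shuffled
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["hour", "temperature", "weekend"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```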

19 pages, 10771 KB  
Article
When Analog Electronics Extends Solar Life: Gate-Resistance Retuning for PV Reuse
by Euzeli C. dos Santos, Yongchun Ni, Fabiano Salvadori and Haitham Kanakri
Processes 2026, 14(1), 146; https://doi.org/10.3390/pr14010146 - 1 Jan 2026
Viewed by 411
Abstract
This paper proposes an analog retuning strategy that strengthens the functional longevity of photovoltaic (PV) systems operating within circular-economy environments. Although PV modules can be relocated from large generation sites to low-demand rural or remote settings, their electrical behavior offers no adjustable quantities capable of extending service duration. In many cases, even after formal disposal or decommissioning, these solar panels still retain a considerable portion of their energy-generation capability and can operate for many additional years before their output becomes negligible, making second-life deployment both technically viable and economically attractive. In contrast, the associated power-electronic converters contain modifiable gate-driver parameters that can be reconfigured to moderate transient phenomena and lessen device stress. The method introduced here adjusts the external gate resistance in conjunction with coordinated switching-frequency adaptation, reducing overshoot, ringing, and steep dv/dt slopes while preserving the original switching-loss budget. A unified analytical framework connects stress mitigation, ripple evolution, and projected lifetime enhancement, demonstrating that deliberate analog tuning can substantially increase the endurance of aged semiconductor hardware without compromising suitability for second-life PV applications. Analytical results are supported by experimental validation, including hardware measurements of switching waveforms and energy dissipation under multiple gate-resistance configurations.
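
To make the retuning idea concrete, here is a back-of-the-envelope sketch under two common first-order assumptions (not the paper's exact model): switching energy grows roughly linearly with total gate resistance, and dv/dt falls roughly inversely with it. Holding the switching-loss budget P_sw = f_sw * E_sw constant then dictates how far the switching frequency must drop. All component values are illustrative.

```python
# First-order sketch of the gate-resistance / switching-frequency trade-off.
# Assumes E_sw scales linearly with total gate resistance and dv/dt scales
# inversely with it; both are rough approximations, not the paper's model.

R_G_INT = 2.0        # internal gate resistance (ohm), assumed
R_G_REF = 5.0        # reference external gate resistance (ohm), assumed
E_SW_REF = 250e-6    # switching energy (J) at the reference point, assumed
F_SW_REF = 20e3      # reference switching frequency (Hz), assumed
DVDT_REF = 10.0      # reference dv/dt (kV/us), assumed

def retune(r_g_ext):
    """Scale stress metrics and pick f_sw that preserves the loss budget."""
    ratio = (r_g_ext + R_G_INT) / (R_G_REF + R_G_INT)
    e_sw = E_SW_REF * ratio             # slower edges: more energy per switch
    dvdt = DVDT_REF / ratio             # slower edges: gentler dv/dt
    f_sw = F_SW_REF * E_SW_REF / e_sw   # keep P_sw = f_sw * e_sw constant
    return e_sw, dvdt, f_sw

for r_g in (5.0, 10.0, 20.0):
    e_sw, dvdt, f_sw = retune(r_g)
    print(f"Rg={r_g:5.1f} ohm  E_sw={e_sw*1e6:6.1f} uJ  "
          f"dv/dt={dvdt:4.1f} kV/us  f_sw={f_sw/1e3:5.1f} kHz")
```

The printed table shows the qualitative trade the abstract describes: a larger gate resistance buys gentler dv/dt and lower device stress at the cost of a reduced switching frequency, which in turn lets ripple grow.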

40 pages, 577 KB  
Article
Variational Quantum Eigensolver for Clinical Biomarker Discovery: A Multi-Qubit Model
by Juan Pablo Acuña González, Moisés Sánchez Adame and Oscar Montiel
Axioms 2026, 15(1), 23; https://doi.org/10.3390/axioms15010023 - 27 Dec 2025
Viewed by 400
Abstract
We formalize an inverse, data-conditioned variant of the Variational Quantum Eigensolver (VQE) for clinical biomarker discovery. Given patient-encoded quantum states, we construct a task-specific Hamiltonian whose coefficients are inferred from clinical associations and interpret its expectation value as a calibrated energy score for prognosis and treatment monitoring. The method integrates coefficient estimation, ansatz specification with basis rotations, commuting-group measurements, and a practical shot budget analysis. Evaluated on public infectious disease datasets under severe class imbalance, the approach yields consistent gains in balanced accuracy and precision–recall over strong classical baselines, with stability across random seeds and feature ablations. This variational energy scoring framework bridges Hamiltonian learning and clinical risk modeling, offering a compact, interpretable, and reproducible route to biomarker prioritization and decision support.
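
As a rough illustration of the scoring step (not the authors' implementation), the sketch below evaluates <psi|H|psi> for a diagonal Ising-type Hamiltonian whose coefficients stand in for clinically inferred weights. A real VQE pipeline would prepare the state with a parameterized circuit and estimate the expectation from commuting-group measurements under a shot budget; here the expectation is computed exactly by classical simulation.

```python
# Minimal sketch of "energy scoring" with a data-conditioned diagonal
# Hamiltonian H = sum_i w_i Z_i + sum_{i<j} w_ij Z_i Z_j. Weights and
# amplitudes are placeholders, not values from the paper.
import numpy as np

def z_eigenvalues(n_qubits, qubit):
    """Diagonal of Z on `qubit` over all 2^n computational basis states."""
    basis = np.arange(2 ** n_qubits)
    bits = (basis >> qubit) & 1
    return 1.0 - 2.0 * bits  # bit 0 -> +1, bit 1 -> -1

def energy_score(state, w_single, w_pairs):
    """Expectation <psi|H|psi> for a diagonal Ising-type Hamiltonian."""
    n = int(np.log2(state.size))
    probs = np.abs(state) ** 2
    diag = np.zeros(state.size)
    for i, w in enumerate(w_single):
        diag += w * z_eigenvalues(n, i)
    for (i, j), w in w_pairs.items():
        diag += w * z_eigenvalues(n, i) * z_eigenvalues(n, j)
    return float(probs @ diag)

# Example: 3 qubits encoding a patient state (amplitudes are illustrative)
state = np.array([0.6, 0.2, 0.1, 0.1, 0.3, 0.4, 0.5, 0.3])
state = state / np.linalg.norm(state)
score = energy_score(state, w_single=[0.8, -0.3, 0.5], w_pairs={(0, 1): 0.2})
print(f"energy score: {score:.4f}")
```
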
29 pages, 5880 KB  
Article
Ensemble Surrogates and NSGA-II with Active Learning for Multi-Objective Optimization of WAG Injection in CO2-EOR
by Yutong Zhu, Hao Li, Yan Zheng, Cai Li, Chaobin Guo and Xinwen Wang
Energies 2025, 18(24), 6575; https://doi.org/10.3390/en18246575 - 16 Dec 2025
Viewed by 408
Abstract
CO2-enhanced oil recovery (CO2-EOR) with water-alternating-gas (WAG) injection offers the dual benefit of boosted oil production and CO2 storage, addressing both energy needs and climate goals. However, designing CO2-WAG schemes is challenging; maximizing oil recovery, CO2 storage, and economic returns (net present value, NPV) simultaneously under a limited simulation budget leads to conflicting trade-offs. We propose a novel closed-loop multi-objective framework that integrates high-fidelity reservoir simulation with stacking surrogate modeling and active learning for multi-objective CO2-WAG optimization. A high-diversity stacking ensemble surrogate is constructed to approximate the reservoir simulator. It fuses six heterogeneous models (gradient boosting, Gaussian process regression, polynomial ridge regression, k-nearest neighbors, generalized additive model, and radial basis SVR) via a ridge-regression meta-learner, with original control variables included to improve robustness. This ensemble surrogate significantly reduces per-evaluation cost while maintaining accuracy across the parameter space. During optimization, an NSGA-II genetic algorithm searches for Pareto-optimal CO2-WAG designs by varying key control parameters (water and CO2 injection rates, slug length, and project duration). Crucially, a decision-space diversity-controlled active learning scheme (DCAF) iteratively refines the surrogate: it filters candidate designs by distance to existing samples and selects the most informative points for high-fidelity simulation. This closed-loop cycle of “surrogate prediction → high-fidelity correction → model update” improves surrogate fidelity and drives convergence toward the true Pareto front. We validate the framework on the SPE5 benchmark reservoir under CO2-WAG conditions. Results show that the integrated “stacking + NSGA-II + DCAF” approach closely recovers the true tri-objective Pareto front (oil recovery, CO2 storage, NPV) while greatly reducing the number of expensive simulator runs. The method’s novelty lies in combining diverse stacking ensembles, NSGA-II, and active learning into a unified CO2-EOR optimization workflow. It provides practical guidance for economically aware, low-carbon reservoir management, demonstrating a data-efficient paradigm for coordinated production, storage, and value optimization in CO2-WAG EOR.
(This article belongs to the Special Issue Enhanced Oil Recovery: Numerical Simulation and Deep Machine Learning)
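
A minimal sketch of the closed loop described above ("surrogate prediction -> high-fidelity correction -> model update") with a distance-based diversity filter in the spirit of DCAF. The quadratic toy objective stands in for the reservoir simulator, a single gradient-boosting model stands in for the stacking ensemble, and random candidate sampling stands in for NSGA-II offspring; all three are simplifications for illustration.

```python
# Schematic active-learning loop: surrogate ranks candidates, a diversity
# filter discards points near already-simulated designs, and the best
# survivors are corrected with the expensive model and fed back in.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

def simulator(x):                        # expensive-model stand-in
    return -np.sum((x - 0.3) ** 2, axis=1)

X = rng.uniform(0, 1, (20, 4))           # initial designs (4 control variables)
y = simulator(X)

for it in range(5):
    surrogate = GradientBoostingRegressor(random_state=0).fit(X, y)
    candidates = rng.uniform(0, 1, (500, 4))       # GA-offspring stand-in
    # Diversity filter: keep candidates far from already-simulated designs
    far = cdist(candidates, X).min(axis=1) > 0.15
    candidates = candidates[far]
    # Pick the most promising filtered candidates under the surrogate
    best = candidates[np.argsort(surrogate.predict(candidates))[-3:]]
    X = np.vstack([X, best])             # high-fidelity correction
    y = np.concatenate([y, simulator(best)])
    print(f"iter {it}: best objective so far {y.max():.4f}")
```

The filter is what keeps the new simulator runs informative: candidates too close to existing samples are discarded before the surrogate ranks the rest, so each expensive evaluation adds coverage rather than redundancy.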

12 pages, 697 KB  
Data Descriptor
Computational Dataset for Polymer–Pharmaceutical Interactions: MD/MM-PBSA and DFT Resources for Molecularly Imprinted Polymer (MIP) Design
by David Visentin, Mario Lovrić, Dejan Milenković, Robert Vianello, Željka Maglica, Kristina Tolić Čop and Dragana Mutavdžić Pavlović
Data 2025, 10(12), 205; https://doi.org/10.3390/data10120205 - 10 Dec 2025
Cited by 1 | Viewed by 610
Abstract
Molecularly imprinted polymers (MIPs) are promising sorbents for selectively capturing pharmaceutically active compounds (PhACs), but design remains slow because candidate screening is largely experimental or based on computationally expensive methods. We present MIP–PhAC, an open, curated resource of polymer–pharmaceutical interaction energies generated from molecular dynamics (MD) followed by MM/PBSA analysis, with a small DFT subset for cross-method comparison. This resource comprises two complementary datasets: MIP–PhAC-Calibrated, a benchmark set with manually verified pH-7 microstates that reports both monomeric (pre-polymerized) and polymeric (short-chain) MD/MM-PBSA energies and includes a DFT subset; and MIP–PhAC-Screen, a broader, high-throughput collection produced under a uniform automated workflow (including automated protonation) for rapid within-polymer ranking and machine learning development. For each MIP–PhAC pair we provide ΔG* components (electrostatics, van der Waals, polar and non-polar solvation; −TΔS omitted), summary statistics from post-convergence frames, simulation inputs, and chemical metadata. To our knowledge, MIP–PhAC is the largest open, curated dataset of polymer–pharmaceutical interaction energies to date. It enables benchmarking of end-point methods, reproducible protocol evaluation, data-driven ranking of polymer–pharmaceutical combinations, and training/validation of machine learning (ML) models for MIP design on modest compute budgets.
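
For readers less familiar with end-point methods, the ΔG* components listed above follow the standard MM/PBSA decomposition:

```latex
\Delta G^{*} = \Delta E_{\mathrm{elec}} + \Delta E_{\mathrm{vdW}}
+ \Delta G_{\mathrm{solv}}^{\mathrm{polar}} + \Delta G_{\mathrm{solv}}^{\mathrm{nonpolar}}
\qquad (-T\Delta S\ \text{omitted})
```

with the polar solvation term obtained from the Poisson–Boltzmann equation and the non-polar term from a surface-area model, as is conventional for MM/PBSA.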

21 pages, 9487 KB  
Article
Low-Cost Real-Time Remote Sensing and Geolocation of Moving Targets via Monocular Bearing-Only Micro UAVs
by Peng Sun, Shiji Tong, Kaiyu Qin, Zhenbing Luo, Boxian Lin and Mengji Shi
Remote Sens. 2025, 17(23), 3836; https://doi.org/10.3390/rs17233836 - 27 Nov 2025
Viewed by 626
Abstract
Low-cost and real-time remote sensing of moving targets is increasingly required in civilian applications. Micro unmanned aerial vehicles (UAVs) provide a promising platform for such missions because of their small size and flexible deployment, but they are constrained by payload capacity and energy budget. Consequently, they typically carry lightweight monocular cameras only. These cameras cannot directly measure distance and suffer from scale ambiguity, which makes accurate geolocation difficult. This paper tackles geolocation and short-term trajectory prediction of moving targets over uneven terrain using bearing-only measurements from a monocular camera. We present a two-stage estimation framework in which a pseudo-linear Kalman filter (PLKF) provides real-time state estimates, while a sliding-window nonlinear least-squares (NLS) back end refines them. Future target positions are obtained by extrapolating the estimated trajectory. To improve localization accuracy, we analyze the relationship between the UAV path and the Cramér–Rao lower bound (CRLB) using the Fisher Information Matrix (FIM) and derive an observability-enhanced trajectory planning method. Real-flight experiments validate the framework, showing that accurate geolocation can be achieved in real time using only low-cost monocular bearing measurements.
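
The pseudo-linear step a PLKF typically relies on can be stated compactly in 2D (the paper's formulation over uneven terrain may differ in detail): a noisy bearing θ_k from UAV position s_k to target position p is nonlinear in p, but rearranges into a measurement that is linear in p:

```latex
\theta_k = \operatorname{atan2}\!\left(p_y - s_{k,y},\; p_x - s_{k,x}\right) + v_k
\quad\Longrightarrow\quad
\underbrace{\begin{bmatrix} \sin\theta_k & -\cos\theta_k \end{bmatrix}}_{H_k}\, p
\;=\; H_k\, s_k + \eta_k
```

which admits standard Kalman-filter updates with measurement matrix H_k; the sliding-window NLS back end then refines the resulting estimates against the original nonlinear model.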

25 pages, 3760 KB  
Article
Estimating Reservoir Evaporation Under Mediterranean Climate Using Indirect Methods: A Case Study in Southern Portugal
by Carlos Miranda Rodrigues, Rita Cabral Guimarães and Madalena Moreira
Hydrology 2025, 12(11), 286; https://doi.org/10.3390/hydrology12110286 - 31 Oct 2025
Cited by 1 | Viewed by 1099 | Correction
Abstract
This study focuses on the Alentejo and Algarve regions of southern Portugal, which are characterized by a typical Mediterranean climate. In the Mediterranean region, evaporation plays a significant role in reservoir water budgets. Therefore, estimating water surface evaporation is essential for efficient reservoir water management. This study aims to (i) assess the reservoir evaporation pattern in southern Portugal from offshore meteorological measurements, (ii) benchmark various indirect methods for evaluating reservoir evaporation at a monthly scale, and (iii) provide recommendations on the most suitable indirect method to apply in operational practice. This study presents meteorological data collected from floating weather stations on instrumented platforms across nine reservoirs in Alentejo and Algarve. This is the first time that so many offshore local measurements have been made available in a Mediterranean climate region. Reservoir evaporation was estimated by the Energy Budget (Bowen Ratio) method: monthly evaporation rates across the nine reservoirs ranged from 0.8 mm d⁻¹ in winter to 4.6 mm d⁻¹ in summer, with an annual average of 2.7 mm d⁻¹. Annual evaporation values ranged from 750 to 1230 mm, showing a positive gradient from the northern Alentejo region to the southwest Algarve region. To evaluate the performance of five empirical and semi-empirical indirect evaporation methods, a benchmarking analysis was conducted. The indirect methods studied are Mass Transfer (MT), Penman (PEN), Priestley and Taylor (PT), Thornthwaite (THOR), and Pan Evaporation (PE). For the MT method, a mass-transfer coefficient N expressed as a function of reservoir surface area is presented for Mediterranean climate regions. In the Pan Evaporation method, the pan coefficient was set equal to one. The benchmarking analysis revealed that all studied methods produced estimates that correlated well with the Energy Budget method’s results across all reservoirs. All the methods showed small biases at the monthly scale, particularly in the dry semester. The variability of the evaporation estimates depended on the reservoir. Overall, the evaluation of evaporation methods concluded that (i) stakeholders should consider deploying an evaporation pan offshore; (ii) to manage the water balance of the studied reservoirs, the manager should apply the method with the best performance given the data available; and (iii) to manage other reservoirs located in the Mediterranean climate region, the manager should compare reservoir characteristics and the data available in order to choose the most suitable method to apply.
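
Because the Energy Budget (Bowen Ratio) method serves as the reference against which the indirect methods are benchmarked, its core relations are worth restating in a simplified form (advected heat terms omitted; the operational formulation may include them):

```latex
\beta = \frac{H}{\lambda E} = \gamma\,\frac{T_s - T_a}{e_s - e_a},
\qquad
E = \frac{R_n - \Delta Q}{\rho_w\,\lambda\,(1 + \beta)}
```

Here R_n is net radiation, ΔQ the change in heat stored in the water body, H the sensible heat flux, γ the psychrometric constant, T_s and T_a the water-surface and air temperatures, e_s and e_a the corresponding vapor pressures, ρ_w the density of water, and λ the latent heat of vaporization, so that E comes out in depth per unit time (e.g., mm d⁻¹).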

24 pages, 943 KB  
Review
A Review on AI Miniaturization: Trends and Challenges
by Bin Tang, Shengzhi Du and Antonie Johan Smith
Appl. Sci. 2025, 15(20), 10958; https://doi.org/10.3390/app152010958 - 12 Oct 2025
Viewed by 2108
Abstract
Artificial intelligence (AI) often suffers from high energy consumption and complex deployment in resource-constrained environments, leading to a structural mismatch between capability and deployability. This review takes two representative scenarios—energy-first and performance-first—as the main thread, systematically comparing cloud, edge, and fog/cloudlet/mobile edge computing (MEC)/micro data center (MDC) architectures. Based on a standardized literature search and screening process, three categories of miniaturization strategies are distilled: redundancy compression (e.g., pruning, quantization, and distillation), knowledge transfer (e.g., distillation and parameter-efficient fine-tuning), and hardware–software co-design (e.g., neural architecture search (NAS), compiler-level, and operator-level optimization). The purposes of this review are threefold: (1) to unify the “architecture–strategy–implementation pathway” from a system-level perspective; (2) to establish technology–budget mapping with verifiable quantitative indicators; and (3) to summarize representative pathways for energy- and performance-prioritized scenarios, while highlighting current deficiencies in data disclosure and device-side validation. The findings indicate that, compared with single techniques, cross-layer combined optimization better balances accuracy, latency, and power consumption. Therefore, AI miniaturization should be regarded as a proactive method of structural reconfiguration for large-scale deployment. Future efforts should advance cross-scenario empirical validation and standardized benchmarking, while reinforcing hardware–software co-design. Compared with existing reviews that mostly focus on a single dimension, this review proposes a cross-level framework and design checklist, systematizing scattered optimization methods into reusable engineering pathways.
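
Of the redundancy-compression strategies listed, quantization is the easiest to show end to end. The sketch below performs symmetric post-training int8 quantization of a single weight tensor; real deployments use per-channel scales and calibration data, so this only illustrates the core round-to-scale idea and the 4x memory saving.

```python
# Toy illustration of one "redundancy compression" strategy from the review:
# symmetric post-training int8 quantization of a weight tensor.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, size=(256, 256)).astype(np.float32)  # fp32 weights

scale = np.abs(w).max() / 127.0              # map [-max, max] -> [-127, 127]
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = w_q.astype(np.float32) * scale       # dequantize for an error check

err = np.abs(w - w_deq).max()
print(f"4x smaller ({w.nbytes} -> {w_q.nbytes} bytes), max abs error {err:.2e}")
```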

22 pages, 1741 KB  
Article
Profit Optimization in Multi-Unit Construction Projects Under Variable Weather Conditions: A Wind Farm Case Study
by Michał Podolski, Jerzy Rosłon and Bartłomiej Sroka
Appl. Sci. 2025, 15(19), 10769; https://doi.org/10.3390/app151910769 - 7 Oct 2025
Viewed by 860
Abstract
This paper introduces a novel scheduling model that integrates weather-based productivity coefficients into multi-unit construction projects, aiming to enhance profit and reduce delays. The method is especially suitable for renewable-energy and other open-area projects. The authors propose a flow-shop optimization framework that considers key aspects of construction contracts, e.g., contractual penalties, downtime losses, and cash flow constraints. A proprietary Tabu Search (TS) metaheuristic algorithm variant is used to solve the resulting NP-hard problem. Numerical experiments on multiple test sets indicate that the TS algorithm consistently outperforms other methods in finding higher-profit schedules. A real-world wind farm case study further demonstrates substantial improvements, transforming an initially loss-making operation into a profitable venture. By explicitly accounting for weather disruptions within a formalized scheduling model, this work advances the understanding of reliable project planning under uncertain environmental conditions. The solution framework offers contractors an effective tool for mitigating scheduling risks and optimizing resource usage. The integration of weather data and cash flow management increases the likelihood of on-time and on-budget project delivery.
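
The core modeling idea, a weather coefficient that scales productivity month by month, can be sketched in a few lines. The coefficients, revenue, and penalty below are illustrative placeholders, not the paper's calibrated values, and the real model couples this with flow-shop sequencing and cash-flow constraints.

```python
# Minimal sketch: a nominal activity duration is stretched by monthly
# productivity coefficients, and schedule profit is revenue minus delay
# penalties. All numbers are illustrative.

MONTHLY_PRODUCTIVITY = [0.6, 0.65, 0.8, 0.9, 1.0, 1.0,
                        1.0, 1.0, 0.95, 0.85, 0.7, 0.6]  # Jan..Dec, assumed

def finish_day(start_day, base_duration_days):
    """Advance day by day, consuming work at the month's productivity rate."""
    day, remaining = start_day, float(base_duration_days)
    while remaining > 0:
        month = (day // 30) % 12            # crude 30-day months
        remaining -= MONTHLY_PRODUCTIVITY[month]
        day += 1
    return day

def profit(start_day, base_duration_days, revenue, deadline_day, penalty_per_day):
    end = finish_day(start_day, base_duration_days)
    delay = max(0, end - deadline_day)
    return revenue - delay * penalty_per_day, end

# Same job started in January vs. April (days 0 vs. 90)
for start in (0, 90):
    p, end = profit(start, 120, revenue=1_000_000,
                    deadline_day=start + 130, penalty_per_day=5_000)
    print(f"start day {start:3d}: finish day {end}, profit {p:,.0f}")
```

With these numbers the January start overruns its deadline and pays a penalty while the April start does not, which is exactly the effect the scheduling model exploits when it chooses sequences and start dates.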

18 pages, 1425 KB  
Article
Exploring DC Power Quality Measurement and Characterization Techniques
by Yara Daaboul, Daniela Istrate, Yann Le Bihan, Ludovic Bertin and Xavier Yang
Sensors 2025, 25(19), 6043; https://doi.org/10.3390/s25196043 - 1 Oct 2025
Viewed by 802
Abstract
In today’s modernizing energy infrastructure, the integration of renewable energy sources and direct current (DC)-powered technologies calls for a re-examination of traditional alternating current (AC) networks. Low-voltage DC (LVDC) grids offer an attractive way forward in reducing conversion losses and simplifying local power management. However, ensuring reliable operation depends on a thorough understanding of DC distortions—phenomena generated by power converters, source instability, and varying loads. Two complementary traceable measurement chains are presented in this article with the purpose of measuring the steady-state DC component and the amplitude and frequency of the distortions around the DC bus with low uncertainties. One chain is optimized for laboratory environments, with high effectiveness in a controlled setup, and the other is designed as a flexible and easily transportable solution, ensuring efficient and accurate assessments of DC distortions for field applications. In addition to our hardware solutions, fully characterized by the uncertainty budget, we present the measurement method used for assessing DC distortions after evaluating the limitations of conventional AC techniques. Both arrangements are set to measure voltages of up to 1000 V, currents of up to 30 A, and frequency components of up to 150–500 kHz, with an uncertainty varying from 0.01% to less than 1%. This level of accuracy will allow us to draw reliable conclusions regarding the dynamic behavior of future LVDC grids.
(This article belongs to the Section Intelligent Sensors)
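
The phrase "fully characterized by the uncertainty budget" refers to the usual GUM-style combination: each influence quantity x_i contributes a standard uncertainty u(x_i), weighted by its sensitivity coefficient c_i, to the combined standard uncertainty of the result y, which is then expanded with a coverage factor k:

```latex
u_c(y) = \sqrt{\sum_{i=1}^{N} c_i^{2}\, u^{2}(x_i)}\,,
\qquad
U = k\, u_c(y) \quad (k = 2\ \text{for a coverage of about } 95\%)
```

In a traceable chain like the ones described, such a budget aggregates contributions from the sensors, the scaling hardware (dividers and shunts), and the digitizers.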

20 pages, 16838 KB  
Article
Multi-Criteria Visual Quality Control Algorithm for Selected Technological Processes Designed for Budget IIoT Edge Devices
by Piotr Lech
Electronics 2025, 14(16), 3204; https://doi.org/10.3390/electronics14163204 - 12 Aug 2025
Viewed by 849
Abstract
This paper presents an innovative multi-criteria visual quality control algorithm designed for deployment on cost-effective Edge devices within the Industrial Internet of Things environment. Traditional industrial vision systems are typically associated with high acquisition, implementation, and maintenance costs. The proposed solution addresses the need to reduce these costs while maintaining high defect detection efficiency. The developed algorithm largely eliminates the need for time- and energy-intensive neural network training or retraining, though these capabilities remain optional. Consequently, the reliance on human labor, particularly for tasks such as manual data labeling, has been significantly reduced. The algorithm is optimized to run on low-power computing units typical of budget industrial computers, making it a viable alternative to server- or cloud-based solutions. The system supports flexible integration with existing industrial automation infrastructure, but it can also be deployed at manual workstations. The algorithm’s primary application is to assess the spread quality of thick liquid mold filling; however, its effectiveness has also been demonstrated for 3D printing processes. The proposed hybrid algorithm combines three approaches: (1) the classical SSIM image quality metric, (2) depth image measurement using Intel MiDaS technology combined with analysis of depth map visualizations and histogram analysis, and (3) feature extraction using selected artificial intelligence models based on the OpenCLIP framework and publicly available pretrained models. This combination allows the individual methods to compensate for each other’s limitations, resulting in improved defect detection performance. The use of hybrid metrics in defective sample selection has been shown to yield superior algorithmic performance compared to the application of individual methods independently. Experimental tests confirmed the high effectiveness and practical applicability of the proposed solution while preserving low hardware requirements.
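
A schematic of how three such criteria can be fused into one decision follows. SSIM is computed against a golden reference image, the depth criterion is reduced to a histogram distance, and the feature criterion to a cosine similarity; depth maps (e.g., MiDaS output normalized to [0, 1]) and embedding vectors (e.g., from an OpenCLIP model) are assumed precomputed. Weights and the threshold are illustrative, not the paper's tuned values.

```python
# Schematic fusion of three quality criteria into one defect score.
import numpy as np
from skimage.metrics import structural_similarity

def depth_hist_distance(depth_a, depth_b, bins=32):
    """L1 distance between normalized depth histograms."""
    ha, _ = np.histogram(depth_a, bins=bins, range=(0, 1), density=True)
    hb, _ = np.histogram(depth_b, bins=bins, range=(0, 1), density=True)
    return float(np.abs(ha - hb).sum() / bins)

def hybrid_defect_score(img, ref_img, depth, ref_depth, emb, ref_emb):
    ssim = structural_similarity(img, ref_img, data_range=1.0)
    d_depth = depth_hist_distance(depth, ref_depth)
    cos = float(emb @ ref_emb / (np.linalg.norm(emb) * np.linalg.norm(ref_emb)))
    # Higher score = more defect-like; each term can compensate for the others
    return 0.4 * (1 - ssim) + 0.3 * d_depth + 0.3 * (1 - cos)

rng = np.random.default_rng(0)
ref = rng.random((128, 128)).astype(np.float32)
test = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1).astype(np.float32)
score = hybrid_defect_score(test, ref, rng.random(1000), rng.random(1000),
                            rng.random(512), rng.random(512))
print(f"defect score: {score:.3f} -> {'REJECT' if score > 0.35 else 'PASS'}")
```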

30 pages, 866 KB  
Article
Balancing Profitability and Sustainability in Electric Vehicles Insurance: Underwriting Strategies for Affordable and Premium Models
by Xiaodan Lin, Fenqiang Chen, Haigang Zhuang, Chen-Ying Lee and Chiang-Ku Fan
World Electr. Veh. J. 2025, 16(8), 430; https://doi.org/10.3390/wevj16080430 - 1 Aug 2025
Viewed by 3001
Abstract
This study aims to develop an optimal underwriting strategy for affordable (H1 and M1) and premium (L1 and M2) electric vehicles (EVs), balancing financial risk and sustainability commitments. The research is motivated by regulatory pressures, risk management needs, and sustainability goals, necessitating an adaptation of traditional underwriting models. The study employs a modified Delphi method with industry experts to identify key risk factors, including accident risk, repair costs, battery safety, driver behavior, and PCAF carbon impact. A sensitivity analysis was conducted to examine premium adjustments under different risk scenarios, categorizing EVs into four risk segments: Low-Risk, Low-Carbon (L1); Medium-Risk, Low-Carbon (M1); Medium-Risk, High-Carbon (M2); and High-Risk, High-Carbon (H1). Findings indicate that premium EVs (L1 and M2) exhibit lower volatility in underwriting costs, benefiting from advanced safety features, lower accident rates, and reduced carbon attribution penalties. Conversely, budget EVs (H1 and M1) experience higher premium fluctuations due to greater accident risks, costly repairs, and higher carbon costs under PCAF implementation. The worst-case scenario showed a 14.5% premium increase, while the best-case scenario led to a 10.5% premium reduction. The study recommends prioritizing premium EVs for insurance coverage due to their lower underwriting risks and carbon efficiency. For budget EVs, insurers should implement selective underwriting based on safety features, driver risk profiling, and energy efficiency. Additionally, incentive-based pricing such as telematics discounts, green repair incentives, and low-carbon charging rewards can mitigate financial risks and align with net-zero insurance commitments. This research provides a structured framework for insurers to optimize EV underwriting while ensuring long-term profitability and regulatory compliance.

21 pages, 2965 KB  
Article
Inspection Method Enabled by Lightweight Self-Attention for Multi-Fault Detection in Photovoltaic Modules
by Shufeng Meng and Tianxu Xu
Electronics 2025, 14(15), 3019; https://doi.org/10.3390/electronics14153019 - 29 Jul 2025
Cited by 1 | Viewed by 998
Abstract
Bird-dropping fouling and hotspot anomalies remain the most prevalent and detrimental defects in utility-scale photovoltaic (PV) plants; their co-occurrence on a single module markedly curbs energy yield and accelerates irreversible cell degradation. However, markedly disparate visual–thermal signatures of the two phenomena impede high-fidelity concurrent detection in existing robotic inspection systems, while stringent onboard compute budgets also preclude the adoption of bulky detectors. To resolve this accuracy–efficiency trade-off for dual-defect detection, we present YOLOv8-SG, a lightweight yet powerful framework engineered for mobile PV inspectors. First, a rigorously curated multi-modal dataset—RGB for stains and long-wave infrared for hotspots—is assembled to enforce robust cross-domain representation learning. Second, the HSV color space is leveraged to disentangle chromatic and luminance cues, thereby stabilizing appearance variations across sensors. Third, a single-head self-attention (SHSA) block is embedded in the backbone to harvest long-range dependencies at negligible parameter cost, while a global context (GC) module is grafted onto the detection head to amplify fine-grained semantic cues. Finally, an auxiliary bounding box refinement term is appended to the loss to hasten convergence and tighten localization. Extensive field experiments demonstrate that YOLOv8-SG attains 86.8% mAP@0.5, surpassing the vanilla YOLOv8 by 2.7 pp while trimming 12.6% of parameters (18.8 MB). Grad-CAM saliency maps corroborate that the model’s attention consistently coincides with defect regions, underscoring its interpretability. The proposed method, therefore, furnishes PV operators with a practical low-latency solution for concurrent bird-dropping and hotspot surveillance.
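
As a rough sketch of the kind of block the backbone gains, here is a generic single-head self-attention module over CNN feature maps; the paper's actual SHSA block (its channel partitioning, normalization, and placement) may differ. Attention is computed once over flattened spatial positions, so the parameter cost is a handful of 1x1 convolutions.

```python
# Generic single-head self-attention over feature maps (illustrative, not
# the paper's exact SHSA design). Q/K/V come from 1x1 convolutions and
# attention runs over the H*W spatial positions.
import torch
import torch.nn as nn

class SingleHeadSelfAttention(nn.Module):
    def __init__(self, channels: int, dim_qk: int = 16):
        super().__init__()
        self.scale = dim_qk ** -0.5
        self.to_q = nn.Conv2d(channels, dim_qk, 1)
        self.to_k = nn.Conv2d(channels, dim_qk, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2).transpose(1, 2)       # (B, HW, dqk)
        k = self.to_k(x).flatten(2)                       # (B, dqk, HW)
        v = self.to_v(x).flatten(2).transpose(1, 2)       # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)  # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.proj(out)                         # residual connection

x = torch.randn(1, 64, 20, 20)
print(SingleHeadSelfAttention(64)(x).shape)  # torch.Size([1, 64, 20, 20])
```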

17 pages, 1467 KB  
Article
Confidence-Based Knowledge Distillation to Reduce Training Costs and Carbon Footprint for Low-Resource Neural Machine Translation
by Maria Zafar, Patrick J. Wall, Souhail Bakkali and Rejwanul Haque
Appl. Sci. 2025, 15(14), 8091; https://doi.org/10.3390/app15148091 - 21 Jul 2025
Viewed by 2263
Abstract
The transformer-based deep learning approach represents the current state-of-the-art in machine translation (MT) research. Large-scale pretrained transformer models produce state-of-the-art performance across a wide range of MT tasks for many languages. However, such deep neural network (NN) models are often data-, compute-, space-, power-, and energy-hungry, typically requiring powerful GPUs or large-scale clusters to train and deploy. As a result, they are often regarded as “non-green” and “unsustainable” technologies. Distilling knowledge from large deep NN models (teachers) to smaller NN models (students) is a widely adopted sustainable development approach in MT as well as in broader areas of natural language processing (NLP), including speech and image processing. However, distilling large pretrained models presents several challenges. First, training time and cost increase with the volume of data used to train a student model, which can pose a challenge for translation service providers (TSPs) with limited training budgets. Moreover, CO2 emissions generated during model training are typically proportional to the amount of data used, contributing to environmental harm. Second, when querying teacher models, including encoder–decoder models such as NLLB, the translations they produce for low-resource languages may be noisy or of low quality. This can undermine sequence-level knowledge distillation (SKD), as student models may inherit and reinforce errors from inaccurate labels. In this study, the teacher model’s confidence estimation is employed to filter from the distilled training data those instances for which the teacher exhibits low confidence. We tested our methods on a low-resource Urdu-to-English translation task operating within a constrained training budget in an industrial translation setting. Our findings show that confidence estimation-based filtering can significantly reduce the cost and CO2 emissions associated with training a student model without a drop in translation quality, making it a practical and environmentally sustainable solution for TSPs.
(This article belongs to the Special Issue Deep Learning and Its Applications in Natural Language Processing)
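
The filtering step itself is simple enough to sketch. Below, the teacher's per-token probabilities for its own output are collapsed into a sequence-level confidence (a geometric mean, i.e., the exponential of the mean log-probability), and low-confidence pairs are dropped from the distilled training set; the probabilities and threshold are placeholders for what a real teacher such as NLLB would produce.

```python
# Minimal sketch of confidence-based filtering for sequence-level knowledge
# distillation. Token probabilities below are illustrative placeholders.
import math

def sequence_confidence(token_probs):
    """Geometric mean of token probabilities = exp(mean log-prob)."""
    return math.exp(sum(math.log(p) for p in token_probs) / len(token_probs))

def filter_distilled(pairs, threshold=0.5):
    """Keep (source, teacher_translation) pairs the teacher is confident in."""
    return [(src, hyp) for src, hyp, probs in pairs
            if sequence_confidence(probs) >= threshold]

pairs = [
    ("source 1", "teacher translation 1", [0.9, 0.8, 0.95]),  # confident: kept
    ("source 2", "teacher translation 2", [0.4, 0.2, 0.3]),   # noisy: dropped
]
print(filter_distilled(pairs))  # [('source 1', 'teacher translation 1')]
```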