Search Results (322)

Search Parameters:
Keywords = frontier efficiency methods

30 pages, 2881 KB  
Article
KTNSGA-II: An Enhanced Hybrid Heuristic Algorithm for Multi-Objective Flexible Job Shop Scheduling with Makespan, Workload Balance, and Energy Consumption
by Li Zhu, Zimei Huang, Haitao Fu, Xin Pan and Yuxuan Feng
Symmetry 2026, 18(2), 354; https://doi.org/10.3390/sym18020354 - 14 Feb 2026
Viewed by 122
Abstract
The Multi-Objective Flexible Job Shop Scheduling Problem (MOFJSSP) represents a core challenge in modern manufacturing: achieving synergistic optimization of multiple conflicting objectives while pursuing production efficiency and energy sustainability. To address this, the study proposes an enhanced hybrid heuristic algorithm—KNN–Tabu Search NSGA-II (KTNSGA-II)—for simultaneously optimizing completion time, machine load, and total energy consumption. First, a three-objective mathematical model is established. Subsequently, four key strategies are integrated: (1) workload balancing initialization rapidly generates high-quality initial solutions; (2) an adaptive job-level crossover mechanism dynamically adjusts subset sizes during iterations to balance global exploration and local exploitation; (3) K-nearest neighbor-based congestion distance calculation maintains population diversity; (4) tabu search is applied to non-dominated solutions on the Pareto front for local refinement. Extensive experiments on standard benchmark instances demonstrate that KTNSGA-II significantly outperforms representative algorithms in terms of convergence and diversity. For large-scale Behnke benchmark instances, KTNSGA-II achieves an average hypervolume (HV) improvement of 32.32% compared with the other algorithms. Furthermore, this method substantially enhances solution diversity: the Spacing Performance (SP) metric improved by 39.72%, indicating more uniform distribution of Pareto optimal solutions; the Diversity Metric (DM) increased by 57.54%, reflecting broader coverage and more even distribution along the Pareto frontier boundary. These results confirm that KTNSGA-II generates higher-quality, better-distributed Pareto fronts, achieving a better trade-off between completion time, machine load, and energy consumption. Full article
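Strategy (3) above — estimating crowding from the K nearest neighbours in objective space rather than only the two axis-adjacent neighbours of classic NSGA-II — can be illustrated with a short sketch. This is a generic reading of the idea, not the paper's implementation; the function name, normalisation scheme, and toy population are invented:

```python
import numpy as np

def knn_crowding(objectives: np.ndarray, k: int = 3) -> np.ndarray:
    """Score each solution by its mean Euclidean distance to its k nearest
    neighbours in normalised objective space. Larger = more isolated."""
    # Normalise each objective to [0, 1] so no single objective dominates
    lo, hi = objectives.min(axis=0), objectives.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    norm = (objectives - lo) / span
    # Pairwise distances, excluding self-distance
    diff = norm[:, None, :] - norm[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    # Mean distance to the k nearest neighbours
    k = min(k, len(objectives) - 1)
    return np.sort(dist, axis=1)[:, :k].mean(axis=1)

# Toy front with three objectives (makespan, load, energy)
pop = np.array([[10.0, 5.0, 100.0],
                [11.0, 4.0, 95.0],
                [10.5, 4.5, 97.0],
                [20.0, 2.0, 60.0]])
scores = knn_crowding(pop, k=2)
print(scores.argmax())  # → 3: the isolated solution gets the highest score
```

Solutions with a high score sit in sparse regions of objective space and would be preferred when truncating the population, which is how such a measure maintains diversity.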
32 pages, 4917 KB  
Article
Optimization of Cultivation Strategies Through Crop Yield Prediction for Rice and Maize Using a Hybrid CatBoost-NSGA-II Model
by Yuyang Zhang, Amir Abdullah Khan, Wei Zhao and Xufeng Xiao
Agriculture 2026, 16(4), 423; https://doi.org/10.3390/agriculture16040423 - 12 Feb 2026
Viewed by 148
Abstract
In light of the dual challenges of global climate change and the pressure on agricultural resources, increasing crop yields and resource utilization efficiency has become the key to ensuring food security and sustainable agricultural development. This study takes environmental factors and cultivation measures as input and crop yield as output; systematically compares five ensemble learning models: RF, LightGBM, GBDT, XGBoost, and CatBoost; and then selects CatBoost as the best-performing algorithm. A CatBoost–Nondominated Sorting Genetic Algorithm II (NSGA-II) hybrid model was then constructed. This model provides data-driven solutions and strategies for cultivating rice and maize through precise yield prediction and multi-objective optimization. To enhance the interpretability of the model, we used the SHAP method to interpret the model's predictions and ensure that the results conform to common agricultural knowledge. Based on this, we constructed a constrained multi-objective optimization problem and solved it using the NSGA-II algorithm to obtain a Pareto frontier that strikes a balance among yield, resource consumption and growth cycle. Case studies showed that CatBoost performs best on the selected datasets. SHAP identified precipitation, fertilization/irrigation intensity and temperature as the main influencing factors; NSGA-II generated a well-distributed Pareto solution set, allowing for the flexible selection of representative cultivation schemes based on different management objectives. This modeling paradigm showed good generalization ability and can be extended to other crop cultivation strategy optimization scenarios based on tabular data. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

22 pages, 367 KB  
Article
Multiobjective Distributionally Robust Dominating Set Design for Networked Systems Under Correlated Uncertainty
by Pablo Adasme, Ali Dehghan Firoozabadi, Renata Lopes Rosa, Matthew Okwudili Ugochukwu and Demóstenes Zegarra Rodríguez
Systems 2026, 14(2), 174; https://doi.org/10.3390/systems14020174 - 5 Feb 2026
Viewed by 165
Abstract
Networked systems operating under uncertainty require decision-making frameworks capable of balancing nominal efficiency and robustness against correlated risks. In this work, we study a distributionally robust weighted dominating set problem as a system-level model for robust network design, where node selection decisions are affected by uncertainty in costs and their correlation structure. We formulate the problem as a bi-objective optimization model that simultaneously minimizes the expected price and a risk measure derived from mean–covariance ambiguity. Rather than proposing new optimization algorithms, we conduct a systematic, methodological, and computational analysis of classical multiobjective solution approaches within this nonconvex and combinatorial setting. In particular, we compare weighted-sum, lexicographic, and ε-constraint methods, highlighting their ability to reveal different structural properties of the Pareto Frontier. Our numerical results demonstrate that scalarization-based methods yield only partial insights for networked systems where robustness is inherent. However, the ε-constraint method is highly efficient in recovering the full set of Pareto-optimal solutions. Once obtained, the Pareto Frontier exposes non-supported solutions and abrupt changes in its shape. The latter are directly related to the different configurations of dominating sets induced by the uncertainties. Consequently, these observations allow decision makers to select among substantially different robust network designs across relevant subsets of operating conditions. Full article
(This article belongs to the Section Systems Engineering)

28 pages, 1769 KB  
Article
Analysis and Evaluation of the Impact of Quantitative and Qualitative Factors on Vietnam’s Logistics Efficiency Using the DEA-MCDM Integrated Method
by Minh-Tai Le and Thuy-Duong Thi Pham
Sustainability 2026, 18(3), 1594; https://doi.org/10.3390/su18031594 - 4 Feb 2026
Viewed by 253
Abstract
This paper proposes a two-stage framework integrating Data Envelopment Analysis (DEA) and fuzzy multi-criteria decision-making methods to evaluate the performance of logistics firms in Vietnam. In the first stage, DEA models (CCR, BCC, and SBM) are employed to measure relative efficiency and identify benchmark firms among 15 leading logistics companies. In the second stage, FAHP–FTOPSIS is used to incorporate qualitative and sustainability-oriented criteria and to provide a comprehensive ranking of the efficient firms. The results indicate that a considerable proportion of firms operate below the efficiency frontier, implying substantial opportunities for resource optimization. Environmental and technological dimensions are found to be the most influential factors, while companies implementing green distribution strategies and strong data security practices consistently achieve higher rankings. Sensitivity analysis confirms the robustness and stability of the proposed framework. This study contributes by bridging operational efficiency assessment with broader strategic and sustainability considerations, overcoming the limitations of single-method evaluations used in prior research. The integrated DEA–FAHP–FTOPSIS approach offers managers a practical tool to diagnose weaknesses, prioritize improvement actions, and benchmark against top performers. In addition, it offers policymakers valuable insights to support digital transformation and green logistics initiatives in developing economy contexts. Full article
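The first-stage CCR model mentioned here reduces to one small linear program per decision-making unit. Below is a minimal sketch of the textbook input-oriented envelopment form using SciPy; the toy data and function name are invented, and this is not necessarily the exact specification the authors used:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (envelopment form):
    min theta  s.t.  X.T @ lam <= theta * X[o],  Y.T @ lam >= Y[o],  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.c_[-X[o].reshape(m, 1), X.T]      # X.T @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]       # -Y.T @ lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]], bounds=(0, None))
    return res.fun

# Toy data: 3 firms, 1 input, 1 output; firm 2 produces half as much per unit input
X = np.array([[2.0], [4.0], [4.0]])   # inputs
Y = np.array([[4.0], [8.0], [4.0]])   # outputs
print(round(ccr_efficiency(X, Y, 2), 3))  # → 0.5: firm 2 is half as efficient
```

Firms on the constant-returns frontier (0 and 1) score 1.0; firms below it, such as firm 2, receive the proportional input contraction needed to reach the frontier.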

22 pages, 3309 KB  
Article
Simultaneous Incremental Map-Prediction-Driven UAV Trajectory Planning for Unknown Environment Exploration
by Jianing Tang, Guoran Jiang, Jingkai Yang and Sida Zhou
Aerospace 2026, 13(2), 139; https://doi.org/10.3390/aerospace13020139 - 30 Jan 2026
Viewed by 226
Abstract
Efficient autonomous exploration in unknown environments is a core challenge for Unmanned Aerial Vehicle (UAV) applications in unstructured settings. The primary challenges are exploration speed, coverage efficiency, and the autonomous, efficient, and obstacle-/threat-avoiding global guidance of the UAV under local observational information. This paper proposes an autonomous exploration method driven by simultaneous incremental map prediction and the fusion of global frontier information to enhance the exploration efficiency of UAVs in unknown unstructured environments. Based on generative deep learning, we introduce an incremental map prediction method for 3D unstructured mountainous terrain, enabling the simultaneous acquisition of map predictions and their uncertainty estimates. Map prediction and trajectory planning are conducted concurrently: by utilizing the simultaneously predicted 3D map and its confidence (i.e., the uncertainty estimates), an overlap analysis is conducted between the flyable areas in the predicted map and the high-confidence regions. Dynamic guidance subspaces are generated by extracting global frontier points, within which shortest-time optimization is adopted for trajectory planning to maximize information gain and coverage per step. Experimental results demonstrate that, compared to classical methods, our proposed approach achieves significant performance improvements in key metrics, including map coverage rate, total exploration time, and average path length. Full article
(This article belongs to the Section Aeronautics)

22 pages, 3686 KB  
Article
Optimization of Earth Dam Cross-Sections Using the Max–Min Ant System and Artificial Neural Networks with Real Case Studies
by Amin Rezaeian, Mohammad Davoodi, Mohammad Kazem Jafari, Mohsen Bagheri, Ali Asgari and Hassan Jafarian Kafshgarkolaei
Buildings 2026, 16(3), 501; https://doi.org/10.3390/buildings16030501 - 26 Jan 2026
Viewed by 316
Abstract
The identification of non-circular critical slip surfaces in slopes using metaheuristic algorithms remains a frontier challenge in geotechnical engineering. Such approaches are particularly effective for assessing the stability of heterogeneous slopes, including earth dams. This study introduces ODACO, a comprehensive program developed to determine the optimum cross-section of earth dams with berms. The program employs the Max–Min Ant System (MMAS), one of the most robust variants of the ant colony optimization algorithm. For each candidate cross-section, the critical slip surface is first identified using MMAS. Among the stability-compliant alternatives, the configuration with the most efficient shell geometry is then selected. The optimization process is conducted automatically across all loading conditions, incorporating slope stability criteria and operational constraints. To ensure that the optimized cross-section satisfies seismic performance requirements, an artificial neural network (ANN) model is applied to rapidly and reliably predict seismic responses. These ANN-based predictions provide an efficient alternative to computationally intensive dynamic analyses. The proposed framework highlights the potential of optimization-driven approaches to replace conventional trial-and-error design methods, enabling more economical, reliable, and practical earth dam configurations. Full article
(This article belongs to the Section Building Structures)

21 pages, 2026 KB  
Review
Adsorption and Removal of Emerging Pollutants from Water by Activated Carbon and Its Composites: Research Hotspots, Recent Advances, and Future Prospects
by Hao Chen, Qingqing Hu, Haiqi Huang, Lei Chen, Chunfang Zhang, Yue Jin and Wenjie Zhang
Water 2026, 18(3), 300; https://doi.org/10.3390/w18030300 - 23 Jan 2026
Viewed by 486
Abstract
The continuous detection of emerging pollutants (EPs) in water poses potential threats to aquatic environmental safety and human health, and their efficient removal is a frontier in environmental engineering research. This review systematically summarizes research progress from 2005 to 2025 on the application of activated carbon (AC) and its composites for removing EPs from water and analyzes the development trends in this field using bibliometric methods. The results indicate that research has evolved from the traditional use of AC for adsorption to the design of novel materials through physical and chemical modifications, as well as composites with metal oxides, carbon-based nanomaterials, and other functional components, achieving high adsorption capacity, selective recognition, and catalytic degradation capabilities. Although AC-based materials demonstrate considerable potential, their large-scale application still faces challenges such as cost control, adaptability to complex water matrices, material regeneration, and potential environmental risks. Future research should focus on precise material design, process integration, and comprehensive life-cycle sustainability assessment to advance this technology toward highly efficient, economical, and safe solutions, thereby providing practical strategies for safeguarding water resources. Full article
(This article belongs to the Special Issue Water Treatment Technology for Emerging Contaminants, 2nd Edition)

29 pages, 1782 KB  
Article
Reinforcement Learning-Guided NSGA-II Enhanced with Gray Relational Coefficient for Multi-Objective Optimization: Application to NASDAQ Portfolio Optimization
by Zhiyuan Wang, Qinxu Ding, Ding Ding, Siying Zhu, Jing Ren, Yue Wang and Chong Hui Tan
Mathematics 2026, 14(2), 296; https://doi.org/10.3390/math14020296 - 14 Jan 2026
Cited by 1 | Viewed by 404
Abstract
In modern financial markets, decision-makers increasingly rely on quantitative methods to navigate complex trade-offs among multiple, often conflicting objectives. This paper addresses constrained multi-objective optimization (MOO) with an application to portfolio optimization for minimizing risk and maximizing return. To this end, and to address existing gaps, we propose a novel reinforcement learning (RL)-guided non-dominated sorting genetic algorithm II (NSGA-II) enhanced with gray relational coefficients (GRC), termed RL-NSGA-II-GRC, which combines an RL agent controller and GRC-based selection to improve the convergence and diversity of the Pareto-optimal fronts. The agent adapts key evolutionary parameters online using population-level metrics of hypervolume, feasibility, and diversity, while the GRC-enhanced tournament operator ranks parents via a unified score simultaneously considering dominance rank, crowding distance, and geometric proximity to ideal reference. We evaluate the framework on the Kursawe and CONSTR benchmark problems and on a NASDAQ portfolio optimization application. On the benchmarks, RL-NSGA-II-GRC achieves convergence metric improvements of about 5.8% and 4.4% over the original NSGA-II, while preserving a well-distributed set of non-dominated solutions. In the portfolio application, the method produces a smooth and densely populated efficient frontier that supports the identification of the maximum Sharpe ratio portfolio (with annualized Sharpe ratio = 1.92), as well as utility-optimal portfolios for different risk-aversion levels. 
The main contributions of this work are three-fold: (1) we propose an RL-NSGA-II-GRC method that integrates an RL agent into the evolutionary framework to adaptively control key parameters using generational feedback; (2) we design a GRC-enhanced binary tournament selection operator that provides a comprehensive performance indicator to efficiently guide the search toward the Pareto-optimal front; (3) we demonstrate, on benchmark MOO problems and a NASDAQ portfolio case study, that the proposed method delivers improved convergence and well-populated efficient frontiers that support actionable investment insights. Full article
(This article belongs to the Special Issue Multi-Objective Evolutionary Algorithms and Their Applications)
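The gray relational coefficient behind the GRC-enhanced tournament described above has a standard closed form. Below is a minimal sketch of the gray relational grade against an ideal reference point, assuming the common distinguishing coefficient ζ = 0.5; the toy front and function name are illustrative, not taken from the paper:

```python
import numpy as np

def gray_relational_grade(candidates: np.ndarray, ideal: np.ndarray,
                          zeta: float = 0.5) -> np.ndarray:
    """Gray relational grade of each candidate objective vector against an
    ideal reference point; a higher grade means geometrically closer to the
    ideal. `zeta` is the usual distinguishing coefficient."""
    delta = np.abs(candidates - ideal)                       # deviation sequences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + zeta * d_max) / (delta + zeta * d_max)  # per-objective GRC
    return coeff.mean(axis=1)                                # grade = mean GRC

# Toy bi-objective front (risk, negative return), both to be minimised
front = np.array([[0.10, 0.30],
                  [0.20, 0.10],
                  [0.50, 0.50]])
ideal = front.min(axis=0)                 # component-wise ideal point
grades = gray_relational_grade(front, ideal)
print(grades.argmax())  # → 1: the candidate closest to the ideal overall
```

In a tournament such a grade could break ties among solutions of equal dominance rank, favouring those geometrically nearest the ideal point.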

20 pages, 3283 KB  
Article
Unequal Progress in Early-Onset Bladder Cancer Control: Global Trends, Socioeconomic Disparities, and Policy Efficiency from 1990 to 2021
by Zhuofan Nan, Weiguang Zhao, Shengzhou Li, Chaoyan Yue, Xiangqian Cao, Chenkai Yang, Yilin Yan, Fenyong Sun and Bing Shen
Healthcare 2026, 14(2), 193; https://doi.org/10.3390/healthcare14020193 - 12 Jan 2026
Viewed by 282
Abstract
Background: This study investigates the global burden of early-onset bladder cancer (EOBC) from 1990 to 2021, highlighting regional disparities and the growing role of metabolic risk factors. EOBC, diagnosed before age 50, is an emerging global health concern. While less common than kidney cancer, EOBC contributes substantially to mortality and disability-adjusted life years (DALYs), with marked sex disparities. Its global epidemiology has not been systematically assessed. Methods: Using GBD 1990–2021 data, we analyzed EOBC incidence, prevalence, mortality, and DALYs across 204 countries in individuals aged 15–49. Trends were examined via segmented regression, EAPC, and Bayesian age-period-cohort modeling. Inequality was quantified using SII and CI. Decomposition and SDI-efficiency frontier analyses were also performed. Results: From 1990 to 2021, EOBC incidence rose 62.2%, prevalence 73.1%, deaths 15.3%, and DALYs 15.8%. Middle-SDI regions bore the highest burden. Aging drove trends in high-SDI areas, and population growth drove them in low-SDI regions. Over 25% of high-SDI countries underperformed in incidence/prevalence control. Smoking remained the leading risk factor, with rising hyperglycemia burdens in high-income areas. Males carried over twice the female burden, peaking at age 45–49. Conclusions: EOBC shows sustained global growth with middle-aged concentration and significant regional disparities. Structural inefficiencies highlight the need for enhanced screening, early warning, and tailored resource allocation. Full article

18 pages, 431 KB  
Article
Measuring Environmental Efficiency of Ports Under Undesirable Outputs and Uncertainty
by Anjali Sonkariya and Anjali Awasthi
Logistics 2026, 10(1), 19; https://doi.org/10.3390/logistics10010019 - 12 Jan 2026
Viewed by 344
Abstract
Background: Ports are the major gateways of cities. Sustainable growth requires ports to prioritize efficiency while balancing economic, social, and environmental goals. There is limited synthesized evidence on the sustainability evaluation of ports, including those of North America. In this paper, we propose a multi-step approach based on fuzzy DEA to evaluate the environmental performance of ports. Methods: In the first step, we identify indicators for environmental performance evaluation. The second step involves application of fuzzy DEA using the identified indicators to measure the environmental efficiency of ports. In the third step, a numerical illustration is provided using open data. The proposed model incorporates undesirable outputs and employs a single set of constraints to construct the production frontier. Results: The findings show wide differences in performance: ports achieve higher scores when they use resources efficiently and keep emissions low, not merely when they expand. Conclusions: The proposed methodology provides a robust and comparable measurement of port environmental efficiency under uncertainty. Full article
(This article belongs to the Special Issue Decarbonization of Maritime Logistics and Global Supply Chains)

24 pages, 1630 KB  
Article
Hardware-Oriented Approximations of Softmax and RMSNorm for Efficient Transformer Inference
by Yiwen Kang and Dong Wang
Micromachines 2026, 17(1), 84; https://doi.org/10.3390/mi17010084 - 7 Jan 2026
Viewed by 429
Abstract
With the rapid advancement of Transformer-based large language models (LLMs), these models have found widespread applications in industrial domains such as code generation and non-functional requirement (NFR) classification in software engineering. However, recent research has primarily focused on optimizing linear matrix operations, while nonlinear operators remain relatively underexplored. This paper proposes hardware-efficient approximation and acceleration methods for the Softmax and RMSNorm operators to reduce resource cost and accelerate Transformer inference while maintaining model accuracy. For the Softmax operator, an additional range reduction based on the SafeSoftmax technique enables the adoption of a bipartite lookup table (LUT) approximation and acceleration. The bit-width configuration is optimized through Pareto frontier analysis to balance precision and hardware cost, and an error compensation mechanism is further applied to preserve numerical accuracy. The division is reformulated as a logarithmic subtraction implemented with a small LOD-driven lookup table, eliminating expensive dividers. For RMSNorm, LOD is further leveraged to decompose the reciprocal square root into mantissa and exponent parts, enabling parallel table lookup and a single multiplication. Based on these optimizations, an FPGA-based pipelined accelerator is implemented, achieving low operator-level latency and power consumption with significantly reduced hardware resource usage while preserving model accuracy. Full article
(This article belongs to the Special Issue Advances in Field-Programmable Gate Arrays (FPGAs))
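The bipartite lookup-table idea described above — range-reduce with SafeSoftmax, then split the fixed-point argument into coarse and fine parts so each exponential comes from two small tables and one multiply — can be sketched in floating point. All table sizes, bit-widths, and the clipping range below are illustrative assumptions, not the paper's Pareto-optimized configuration:

```python
import numpy as np

# After SafeSoftmax range reduction, every argument z = max(x) - x is
# non-negative and is clipped to [0, R). Its fixed-point index splits into
# coarse and fine bits, so exp(-z) = EXP_HI[hi] * EXP_LO[lo]: two 16-entry
# tables and one multiply stand in for a single 256-entry table.
HI_BITS, LO_BITS, R = 4, 4, 8.0
hi_step = R / (1 << HI_BITS)                          # coarse step = 0.5
lo_step = hi_step / (1 << LO_BITS)                    # fine step = 0.03125
EXP_HI = np.exp(-hi_step * np.arange(1 << HI_BITS))   # coarse table
EXP_LO = np.exp(-lo_step * np.arange(1 << LO_BITS))   # fine table

def lut_softmax(x: np.ndarray) -> np.ndarray:
    z = np.clip(x.max() - x, 0.0, R - lo_step)        # SafeSoftmax range reduction
    idx = np.round(z / lo_step).astype(int)           # fixed-point argument
    hi, lo = idx >> LO_BITS, idx & ((1 << LO_BITS) - 1)
    e = EXP_HI[hi] * EXP_LO[lo]                       # bipartite lookup
    return e / e.sum()                                # (a hardware design would
                                                      #  also replace this divide)

x = np.array([2.0, 1.0, 0.1])
exact = np.exp(x - x.max()); exact /= exact.sum()
print(np.abs(lut_softmax(x) - exact).max() < 1e-2)    # → True
```

The final division is kept here for clarity; the paper replaces it with a logarithmic subtraction driven by a small LOD-based table.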

14 pages, 399 KB  
Article
LAFS: A Fast, Differentiable Approach to Feature Selection Using Learnable Attention
by Hıncal Topçuoğlu, Atıf Evren, Elif Tuna and Erhan Ustaoğlu
Entropy 2026, 28(1), 20; https://doi.org/10.3390/e28010020 - 24 Dec 2025
Viewed by 484
Abstract
Feature selection is a critical preprocessing step for mitigating the curse of dimensionality in machine learning. Existing methods present a difficult trade-off: filter methods are fast but often suboptimal as they evaluate features in isolation, while wrapper methods are powerful but computationally prohibitive due to their iterative nature. In this paper, we propose LAFS (Learnable Attention for Feature Selection), a novel, end-to-end differentiable framework that achieves the performance of wrapper methods at the speed of simpler models. LAFS employs a neural attention mechanism to learn a context-aware importance score for all features simultaneously in a single forward pass. To encourage the selection of a sparse and non-redundant feature subset, we introduce a novel hybrid loss function that combines the standard classification objective with an information-theoretic entropic regularizer on the attention weights. We validate our approach on real-world high-dimensional benchmark datasets. Our experiments demonstrate that LAFS successfully identifies complex feature interactions and handles multicollinearity. In overall comparisons, LAFS achieves results very close to those of the state-of-the-art RFE-LGBM and embedded FSA methods. Our work establishes a new point on the accuracy-efficiency frontier, demonstrating that attention-based architectures provide a competitive solution to the feature selection problem. Full article
(This article belongs to the Special Issue Information-Theoretic Methods in Data Analytics, 2nd Edition)
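The entropic regularizer on attention weights described above can be written down directly: it is the Shannon entropy of the softmax-normalised weights, which is minimal when the mass concentrates on a few features. A minimal sketch with invented names, not the authors' exact loss:

```python
import numpy as np

def entropic_penalty(attn_logits: np.ndarray) -> float:
    """Shannon entropy of softmax-normalised attention weights.
    Added to the classification loss, it pushes the weight distribution
    toward low entropy, i.e. mass concentrated on few features."""
    z = attn_logits - attn_logits.max()           # numerically stable softmax
    w = np.exp(z) / np.exp(z).sum()               # weights sum to 1
    return float(-(w * np.log(w + 1e-12)).sum())  # small offset guards log(0)

uniform = entropic_penalty(np.zeros(8))           # maximal entropy: ln 8 ≈ 2.079
sparse = entropic_penalty(np.array([8.0, 0, 0, 0, 0, 0, 0, 0]))
print(uniform > sparse)  # → True: concentrated attention is penalised less
```

Minimising the classification loss plus a weighted multiple of this term trades predictive accuracy against sparsity of the selected feature subset.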

19 pages, 485 KB  
Article
Are Andean Dairy Farms Losing Their Efficiency?
by Carlos Santiago Torres-Inga, Ángel Javier Aguirre-de Juana, Raúl Victorino Guevara-Viera, Paola Gabriela Alvarado-Dávila and Guillermo Emilio Guevara-Viera
Agriculture 2026, 16(1), 17; https://doi.org/10.3390/agriculture16010017 - 20 Dec 2025
Viewed by 511
Abstract
(1) Background: Ecuador is the fourth largest milk producer in Latin America, where approximately 80% of production originates from small family farms located in the Andean region. Despite their socioeconomic importance, these farms face challenges related to low technical efficiency. While there are specific studies on efficiency in dairy systems from other regions, a knowledge gap persists regarding the temporal evolution of technical efficiency (TE) in Ecuadorian Andean dairy farms, especially during crisis periods such as the COVID-19 pandemic. The objective of this study was to evaluate the evolution of TE of family dairy farms in the Ecuadorian Andean region during the period 2018–2024 and to analyze the impact of the pandemic on this efficiency. (2) Methods: Data Envelopment Analysis (DEA) with input orientation and bootstrap simulation was employed to estimate TE, using data from a representative sample that included between 2370 and 2987 farms per year (approximately 25% of the national database of the Ministry of Agriculture and Livestock). Farms were selected based on the availability of complete information on key variables: number of milking cows, area dedicated to forage, family and hired labor (annual hours), and total annual milk production. Statistical analysis included ANOVA to compare mean TE values between years, post-hoc tests to identify specific differences between periods, and the identification of factors related to TE. (3) Results: The mean TE of Andean dairy farms increased significantly from 0.37 in 2018 to 0.44 in 2024 (p < 0.10), evidencing sustained improvement, although the mean is still distant from the efficiency frontier. The analysis revealed a notable decrease in TE during 2020–2021, coinciding with the period of greatest impact of the COVID-19 pandemic, followed by progressive recovery in subsequent years. 
The TE distribution showed that between 70% and 75% of farms remained below 0.50 throughout the analyzed period, while only 8–12% achieved levels above 0.70. The main sources of technical inefficiency identified were relative excesses of labor and forage area in relation to milk production obtained. When compared with international studies, Ecuadorian farms present TE levels substantially lower than those reported in the European Union (>0.80) and similar to or slightly lower than those found in Turkey (0.61–0.71). (4) Conclusions: Family dairy farms in the Ecuadorian Andean region operate with technical efficiency levels considerably below their potential and international standards, suggesting substantial scope for improvement through the optimization of productive resource use, particularly labor and land. The COVID-19 pandemic impacted the sector’s efficiency negatively but temporarily, demonstrating resilience and recovery capacity. These findings are relevant to the design of public policies and technical assistance programs aimed at sustainable intensification of family dairy production in the Andes, with an emphasis on improving labor productivity and the efficient use of forage area. Full article
(This article belongs to the Section Farm Animal Production)
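The input-oriented DEA estimator used in this study can be sketched as one small linear program per farm: minimize the radial input contraction factor θ subject to a feasible reference mix of peer farms. This is a minimal constant-returns (CCR) illustration only, not the authors' exact model — the bootstrap bias correction is omitted, and the names `dea_input_te`, `X`, and `Y` are ours.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_te(X, Y, k):
    """Input-oriented CCR technical efficiency of unit k.
    X: (n, m) inputs, Y: (n, s) outputs; rows are farms (DMUs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    # Inputs: sum_j lambda_j * x_ji <= theta * x_ki
    A_in = np.hstack([-X[k][:, None], X.T])      # shape (m, n+1)
    # Outputs: sum_j lambda_j * y_jr >= y_kr  (flipped to <=)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])  # shape (s, n+1)
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.fun                               # optimal theta = TE
```

A unit on the frontier returns 1.0; a unit that could produce its output with half the inputs of an efficient peer returns 0.5.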

21 pages, 893 KB  
Article
Enhancing Diagnostic Infrastructure Through Innovation-Driven Technological Capacity in Healthcare
by Nicoleta Mihaela Doran
Healthcare 2025, 13(24), 3328; https://doi.org/10.3390/healthcare13243328 - 18 Dec 2025
Abstract
Background: This study examines how national innovation performance shapes the diffusion of advanced diagnostic technologies across European healthcare systems. Strengthening technological capacity through innovation is increasingly essential for resilient and efficient health services. The analysis quantifies the influence of innovation capacity on the availability of medical imaging technologies in 26 EU Member States between 2018 and 2024. Methods: A balanced panel dataset was assembled from Eurostat, the European Innovation Scoreboard, and World Bank indicators. Dynamic relationships between innovation performance and the adoption of CT, MRI, gamma cameras, and PET scanners were estimated using a two-step approach combining General-to-Specific (GETS) outlier detection with Robust Least Squares regression to address heterogeneity and specification uncertainty. Results: Higher innovation scores significantly increase the diffusion of R&D-intensive technologies such as MRI and PET, while CT availability shows limited responsiveness due to market maturity. Public health expenditure supports frontier technologies when strategically targeted, whereas GDP growth has no significant effect. Population size consistently enhances technological capacity through scale and system-integration effects. Conclusions: The findings show that innovation ecosystems, rather than economic growth alone, drive the modernization of diagnostic infrastructure in the EU. Integrating innovation metrics into health-technology assessments offers a more accurate basis for designing innovation-oriented investment policies in European healthcare. Full article
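The Robust Least Squares step can be illustrated with a generic Huber M-estimator fitted by iteratively reweighted least squares. This is a simplified stand-in under stated assumptions, not the paper's estimator: the GETS outlier-detection stage is not reproduced, and the function name `huber_irls` is hypothetical.

```python
import numpy as np

def huber_irls(x, y, c=1.345, iters=50):
    """Huber M-estimation of intercept and slope via iteratively
    reweighted least squares; outliers get weight c/|u| < 1."""
    X = np.column_stack([np.ones_like(y), x])     # intercept + regressor
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting values
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745     # MAD-based robust scale
        if scale < 1e-12:
            break                                 # essentially exact fit
        u = np.abs(r) / scale                     # standardized residuals
        w = np.where(u <= c, 1.0, c / np.maximum(u, c))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta
```

With one gross outlier in otherwise clean data, the estimate stays close to the true coefficients where OLS would be pulled away.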

23 pages, 3017 KB  
Article
Modeling Battery Degradation in Home Energy Management Systems Based on Physical Modeling and Swarm Intelligence Algorithms
by Milad Riyahi, Christina Papadimitriou and Álvaro Gutiérrez Martín
Energies 2025, 18(24), 6578; https://doi.org/10.3390/en18246578 - 16 Dec 2025
Abstract
Home energy management systems have emerged as a crucial solution for enhancing energy efficiency, reducing carbon emissions, and facilitating the integration of renewable energy sources into homes. To fully realize their potential, these systems’ performance must be optimized, which involves addressing multiple objectives, such as minimizing costs and environmental impact. The Pareto frontier is a tool widely adopted in multi-objective optimization of home energy management system operation, where a range of optimal solutions is produced. This study uses the Pareto curve to optimize the operational performance of home energy management systems, considering the state of health of the battery to determine the best solution among the optimal points on the curve. The main reason for considering the state of health is the effect of the battery’s operation on the performance of energy systems, especially on long-term optimization outcomes. In this study, the performance of the battery is evaluated through a physical model built with PyBaMM that is tuned using swarm intelligence techniques, including the Whale Optimization Algorithm, Grey Wolf Optimization, Particle Swarm Optimization, and the Gravitational Search Algorithm. The proposed framework automatically identifies the optimal solution among those on the Pareto curve by comparing battery performance through the tuned physical model. The effectiveness of the proposed algorithm is demonstrated for a home with four distinct energy carriers and a 12 V, 128 Ah LFP-chemistry Li-ion battery module, where overall cost and carbon emissions are the comparison metrics. Implementation results show that tuning the physical model with the Whale Optimization Algorithm achieves the highest accuracy among the compared methods. Moreover, using the battery’s state of health as the selection criterion improves home energy management system performance, particularly in long-term operation, because it guarantees a longer battery lifespan. Full article
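The selection rule described in the abstract — keep only solutions that are Pareto-optimal on cost and emissions, then choose among them the one with the smallest predicted battery degradation — can be sketched as follows. The dictionary keys `cost`, `co2`, and `soh_loss` are hypothetical placeholders, not the paper's data structures.

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and
    strictly better in at least one (lower is better)."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pick_by_soh(solutions):
    """Filter to the (cost, co2) Pareto front, then select the
    solution with the smallest predicted state-of-health loss."""
    objs = [(s["cost"], s["co2"]) for s in solutions]
    front = [s for s, p in zip(solutions, objs)
             if not any(dominates(q, p) for q in objs)]
    return min(front, key=lambda s: s["soh_loss"])
```

A dominated candidate is discarded even if its predicted degradation is lowest, so the state-of-health criterion only breaks ties among genuinely Pareto-optimal schedules.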
