Search Results (6,449)

Search Parameters:
Keywords = learning management systems

27 pages, 4795 KB  
Article
A Bayesian-Optimized LightGBM Approach for Reliable Cooling Load Prediction
by Zhiying Zhang, Li Ling, Jinjie He and Honghua Yang
Buildings 2026, 16(7), 1357; https://doi.org/10.3390/buildings16071357 (registering DOI) - 29 Mar 2026
Abstract
With the rapid advancement of information technology, the energy consumption of data centers has become a critical issue. Accurate cooling load prediction is essential for optimizing cooling system operations and improving energy efficiency. However, conventional models often struggle to capture the complex nonlinearities and multi-variable coupling effects inherent in data centers. To address the limitations of existing models in terms of training efficiency and generalization performance, this study proposes a cooling load prediction model that integrates the light gradient boosting machine (LightGBM) algorithm with Bayesian optimization. The model was validated using data generated from an EnergyPlus simulation of a representative medium-scale data center. Comparative analysis demonstrates that the proposed model surpasses naive benchmarks (T-1, T-24, and T-168) and other machine learning models (SVR, XGBoost, and LSTM), achieving superior performance with a Root Mean Squared Error (RMSE) of 4.3234 kW, R2 of 0.9999, and Mean Absolute Percentage Error (MAPE) of 0.07%. A noise robustness analysis further reveals that the model maintains excellent performance under realistic uncertainties, achieving an R2 above 0.99 and an RPD exceeding 12 even at high noise levels (SNR = 20 dB). The total runtime and Relative Prediction Deviation (RPD) were 33.45 s and 86.2685, respectively, indicating an excellent balance between computational efficiency and robust predictive reliability. The key contribution of this research is the effective integration of LightGBM and Bayesian optimization to provide a highly accurate and efficient tool for data center cooling load prediction. This approach offers a scientific foundation for the intelligent control of cooling systems and energy efficiency optimization in data centers, with direct practical implications for building energy management.
(This article belongs to the Special Issue Research on Energy Efficiency and Low-Carbon Pathways in Buildings)
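The abstract above reports RMSE, R2, MAPE, and RPD. As a minimal numpy sketch of how such metrics are computed (not the paper's code; RPD is taken here as the standard deviation of the observations divided by the RMSE, a common definition that is assumed, not confirmed, to match the paper's usage):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute RMSE, R2, MAPE (%), and RPD for a set of predictions.

    RPD is assumed here to be std(observed, ddof=1) / RMSE -- one
    common definition of Relative Prediction Deviation.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0
    rpd = np.std(y_true, ddof=1) / rmse
    return {"RMSE": rmse, "R2": r2, "MAPE": mape, "RPD": rpd}
```

A higher RPD means the model's errors are small relative to the natural spread of the target, which is why the abstract pairs it with RMSE.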
33 pages, 4106 KB  
Article
Probabilistic Orchestrator for Indeterministic Multi-Agent Systems in Real-Time Environments
by Arkady Bovshover, Andrei Kojukhov and Ilya Levin
Algorithms 2026, 19(4), 261; https://doi.org/10.3390/a19040261 (registering DOI) - 29 Mar 2026
Abstract
Multi-agent perception systems must operate under fundamental asymmetries: some agents provide fast but unreliable observations, while others deliver higher-quality evidence with delay and uncertain correspondence. Traditional deterministic orchestration and rule-based fusion struggle to manage these trade-offs, often producing brittle or unstable behavior. We introduce a probabilistic orchestration framework that treats coordination as an epistemic generation problem—constructing and updating belief states under uncertainty—rather than a selection problem. Instead of committing to a single agent’s output, the orchestrator constructs a belief state that explicitly represents uncertainty, evidential provenance, and temporal relevance. Decisions are produced through latency-aware, association-weighted fusion, and uncertainty itself becomes a first-class signal governing action, deferral, and learning. Crucially, the orchestrator enables controlled teacher–student adaptation: high-confidence, well-associated stationary observations are gated into a feedback loop that improves ego perception over time while mitigating error amplification. We demonstrate the approach on an infrastructure-assisted dual-camera obstacle-recognition task. Experimental results show improved robustness to distance, occlusion, and delayed evidence compared to ego-only and deterministic orchestration baselines. By operationalizing orchestration as epistemic generation, this work provides a unifying framework for robust decision-making and safe adaptation in multi-agent systems, with implications that extend beyond perception to agentic and generative AI architectures.
40 pages, 4626 KB  
Review
A Systematic Lifecycle-Referenced Capability Mapping of MLOps Platforms for Energy Forecasting
by Xun Zhao, Zheng Grace Ma and Bo Nørregaard Jørgensen
Information 2026, 17(4), 328; https://doi.org/10.3390/info17040328 (registering DOI) - 28 Mar 2026
Abstract
Accurate energy forecasting is essential for maintaining power system reliability, integrating renewable generation, and ensuring market stability. Although machine learning has improved forecasting accuracy, its operational deployment depends on Machine Learning Operations (MLOps) platforms that automate and scale the entire lifecycle of energy data pipelines. However, the capabilities of existing MLOps platforms for energy forecasting have not been systematically compared. This study adopts a PRISMA-informed review process to identify relevant end-to-end MLOps platforms for energy forecasting and then maps their documented capabilities using an established energy forecasting pipeline lifecycle as the reference structure. A total of 256 records were screened across vendor documentation, open-source repositories, and academic literature, of which 13 MLOps platforms were selected for comparative capability analysis. Platform capabilities are organised and presented across an end-to-end lifecycle covering project setup and governance, data ingestion and management, model development and experimentation, deployment and serving, and monitoring and feedback. Commercial platforms such as Amazon SageMaker and Google Vertex AI generally provide stronger end-to-end integration and production readiness, while open-source platforms such as Kubeflow and ClearML offer modular flexibility that typically requires additional integration effort to achieve end-to-end operation. The mapping identifies four priority areas where platform support remains limited, namely (i) governance workflow automation, (ii) automated data quality validation, (iii) feature management, and (iv) deployment and monitoring support under nonstationary conditions. These findings indicate that platform selection for energy forecasting should be treated as a lifecycle capability decision, balancing end-to-end integration, operational assurance, and long-term flexibility.
18 pages, 972 KB  
Article
CPU Deployment-Oriented Evaluation of Compact Neural Networks for Remaining Useful Life Prediction
by Ali Naderi Bakhtiyari, Vahid Hassani and Mohammad Omidi
Machines 2026, 14(4), 375; https://doi.org/10.3390/machines14040375 (registering DOI) - 28 Mar 2026
Abstract
Remaining Useful Life (RUL) prediction is a key component of prognostics and health management for modern industrial systems. While deep learning methods have significantly improved prediction accuracy, many existing approaches rely on large neural networks that are difficult to deploy on resource-constrained edge devices. This study presents a deployment-oriented evaluation of compact neural networks for RUL prediction using the NASA C-MAPSS turbofan engine benchmark. Two lightweight hybrid architectures, CNN–GRU and CNN–TCN, were developed with approximately 28k–32k parameters to represent realistic models for CPU-based edge inference. A systematic experimental analysis was conducted across all four C-MAPSS subsets (FD001–FD004), which represent increasing levels of operational and fault complexity. In addition to baseline performance, two post-training compression techniques (i.e., global unstructured magnitude pruning and dynamic INT8 quantization) were evaluated. To assess real deployment behavior, inference latency was measured on both a high-performance Intel x86 workstation and a resource-constrained ARM platform. Results show that CNN–GRU generally achieves higher predictive accuracy, whereas CNN–TCN provides more consistent and lower inference latency due to its convolution-only temporal modeling. Unstructured pruning can yield modest improvements in prediction accuracy, suggesting a regularization effect, but it does not reliably reduce model size or latency on standard CPUs due to the overhead associated with pruning masks. Dynamic quantization substantially reduces model size (particularly for CNN–GRU) while preserving predictive accuracy; however, it increases runtime latency because of additional quantization and dequantization operations. These findings demonstrate that compression techniques commonly used for large models do not necessarily translate into deployment benefits for already compact RUL architectures and highlight the importance of hardware-aware evaluation when designing edge prognostics systems.
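The two compression techniques evaluated in this entry can be sketched in a few lines of numpy. This is a toy illustration of global unstructured magnitude pruning and symmetric per-tensor INT8 quantization on raw weight arrays, not the paper's PyTorch pipeline; function names and thresholds are illustrative:

```python
import numpy as np

def global_magnitude_prune(weights, sparsity=0.5):
    """Zero the smallest-magnitude entries jointly across all layers
    (global unstructured pruning). `weights` is a list of arrays."""
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(sparsity * flat.size)
    threshold = np.sort(np.abs(flat))[k - 1] if k > 0 else -np.inf
    return [np.where(np.abs(w) <= threshold, 0.0, w) for w in weights]

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: map weights onto
    [-127, 127], round, and return the int8 tensor plus the scale
    needed to dequantize at inference time."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale
```

The sketch makes the abstract's point concrete: pruning only writes zeros (the array shape, and hence storage, is unchanged unless a sparse format is used), while quantization shrinks each weight to one byte but requires a dequantization multiply at runtime.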
33 pages, 8263 KB  
Article
Semantic Graphs of Learning Activities from LLM Embeddings: A Lightweight and Explainable Approach for Smart Learning Systems
by Javier García-Sigüenza, Alberto Real-Fernández, Faraón Llorens-Largo, Jose F. Vicent and Rafael Molina-Carmona
Electronics 2026, 15(7), 1414; https://doi.org/10.3390/electronics15071414 (registering DOI) - 28 Mar 2026
Abstract
Smart learning systems are designed to analyze the context, needs, and progress of each student. These are becoming increasingly common, but they present challenges, such as predicting student performance and automatically managing learning activities. In this context, Large Language Models (LLMs) can be useful, as they are capable of understanding word relationships and analyzing their context. They are often associated with chatbots, which are computationally expensive, thereby complicating their integration. Instead, in this work, we propose to leverage the capabilities of LLMs through a semantic graph of activities created from sentence embeddings. This representation is a lightweight and explainable alternative. On the one hand, it requires a lower computational cost. On the other hand, it allows us to observe which activities are most similar directly. On this basis, we propose two problems to validate our proposal. In the first, we use the graph to classify new activities. In the second, we extend this representation with the temporal dimension to formulate a spatio-temporal problem and predict student performance. The results show that the semantic graph not only provides an accurate representation for the organization and classification of activities, but also offers practical advantages and improves explainability.
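A minimal sketch of the kind of similarity graph this entry describes, assuming precomputed sentence embeddings for each learning activity; the cosine threshold, helper names, and nearest-labelled-neighbour classification rule are illustrative, not the authors' implementation:

```python
import numpy as np

def semantic_graph(embeddings, threshold=0.7):
    """Build an activity-similarity graph from sentence embeddings.
    Nodes are activities; an edge links two activities whose cosine
    similarity exceeds `threshold` (value is illustrative)."""
    E = np.asarray(embeddings, dtype=float)
    unit = E / np.linalg.norm(E, axis=1, keepdims=True)
    sim = unit @ unit.T
    edges = [(i, j, sim[i, j])
             for i in range(len(E)) for j in range(i + 1, len(E))
             if sim[i, j] >= threshold]
    return sim, edges

def classify_by_neighbors(sim, labels, new_idx):
    """Assign a new activity the label of its most similar
    already-labelled node in the graph."""
    candidates = [i for i in range(sim.shape[0])
                  if i != new_idx and labels[i] is not None]
    best = max(candidates, key=lambda i: sim[new_idx, i])
    return labels[best]
```

Because the graph stores only pairwise similarities, "which activities are most similar" is directly inspectable, which is the explainability argument the abstract makes against opaque chatbot-style LLM use.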
27 pages, 707 KB  
Review
Clinical Artificial Intelligence Agents in Nephrology: From Prediction to Action Through Workflow-Native Intelligence—A Roadmap for Workflow-Integrated Care
by Charat Thongprayoon, Francesco Pesce and Wisit Cheungpasitporn
J. Clin. Med. 2026, 15(7), 2576; https://doi.org/10.3390/jcm15072576 - 27 Mar 2026
Abstract
Background: Artificial intelligence in nephrology has largely focused on predictive models for outcomes such as acute kidney injury (AKI), chronic kidney disease (CKD) progression, and transplant complications. Although these models demonstrate technical performance, their real-world clinical impact has remained limited because prediction alone rarely translates into coordinated clinical action. Clinical artificial intelligence agents represent workflow-native systems that operate in real time, interact bidirectionally with clinical environments, adapt to evolving patient and workflow states, and support coordinated clinical action rather than generating isolated predictions. This review proposes clinical artificial intelligence agents as a new paradigm for integrating artificial intelligence directly into nephrology workflows. Methods: We conducted a narrative synthesis of emerging literature on artificial intelligence systems, agentic artificial intelligence architectures, clinical decision support, and digital health infrastructures relevant to kidney care. Drawing from interdisciplinary sources in medicine, health informatics, and artificial intelligence research, we developed a conceptual framework describing the architecture, governance requirements, and evaluation principles of clinical artificial intelligence agents in nephrology. Results: Clinical artificial intelligence agents represent workflow-integrated systems capable of continuously perceiving patient data, reasoning under clinical constraints, planning tasks, and supporting coordinated clinical actions over time. We describe a layered architecture consisting of perception, cognition, planning and control, action, and learning components. Potential applications span the nephrology care continuum, including CKD management, AKI monitoring, dialysis and continuous renal replacement therapy (CRRT) optimization, kidney transplantation care coordination, glomerulonephritis management, and supervised patient-facing systems. Conclusions: Clinical artificial intelligence agents shift the role of artificial intelligence from isolated prediction toward longitudinal clinical orchestration. Future evaluation should prioritize workflow integration, time-to-action, clinician oversight, safety, and patient-centered outcomes rather than relying solely on traditional model performance metrics. This roadmap provides a conceptual foundation for the responsible development and clinical integration of agentic artificial intelligence systems in nephrology.
23 pages, 1545 KB  
Article
Advanced Hybrid Deep Learning Framework for Short-Term Solar Radiation Forecasting Using Temporal and Meteorological Features
by Farrukh Hafeez, Zeeshan Ahmad Arfeen, Muhammad I. Masud, Abdoalateef Alzhrani, Mohammed Aman, Nasser Alkhaldi and Mehreen Kausar Azam
Processes 2026, 14(7), 1081; https://doi.org/10.3390/pr14071081 - 27 Mar 2026
Abstract
Short-term forecasting of solar radiation is essential for the efficient operation of solar energy systems. This study presents a neural network-based approach for short-term solar radiation forecasting using a hybrid framework that integrates temporal characteristics with weather-based features. The proposed model combines a Gated Recurrent Unit (GRU) to capture short-term temporal dynamics, a Transformer Encoder, and a Multilayer Perceptron (MLP) to integrate these representations for final prediction. Key meteorological variables, including temperature, humidity, and wind speed, are incorporated along with engineered time-related features such as lagged values, rolling statistics, and cyclical time-of-day encodings. The results demonstrate that the hybrid model effectively integrates sequential learning and feature interaction, leading to improved forecasting accuracy. The proposed approach achieves a test Mean Absolute Error (MAE) of 0.056, Root Mean Square Error (RMSE) of 0.086, and coefficient of determination (R2) of 0.92, outperforming benchmark models such as AutoRegressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), GRU, and Extreme Gradient Boosting (XGBoost). The model maintains stable performance across cross-validation folds, multiple forecasting horizons, and varying weather conditions. These findings indicate that the proposed framework provides a reliable and practical solution for accurate short-term solar radiation forecasting, supporting real-time solar energy management and renewable energy system optimization.
(This article belongs to the Special Issue Advanced Technologies of Renewable Energy Sources (RESs))
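The engineered time features this entry mentions (cyclical time-of-day encodings, lagged values, rolling statistics) can be sketched as follows; the function name, lag, and window size are illustrative, not the paper's exact configuration:

```python
import math

def time_features(hour, series, lag=1, window=3):
    """Build the kinds of engineered inputs described above:
    a sin/cos encoding of hour-of-day (so hour 23 and hour 0 are
    close in feature space), one lagged value, and a rolling mean."""
    sin_h = math.sin(2 * math.pi * hour / 24)
    cos_h = math.cos(2 * math.pi * hour / 24)
    # Value `lag` steps before the most recent observation.
    lagged = series[-lag - 1] if len(series) > lag else None
    recent = series[-window:]
    rolling_mean = sum(recent) / len(recent)
    return {"sin_hour": sin_h, "cos_hour": cos_h,
            "lag": lagged, "roll_mean": rolling_mean}
```

The cyclical encoding is the key trick: a raw hour column would place 23:00 and 00:00 at opposite ends of the scale, while the sin/cos pair keeps them adjacent.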
26 pages, 2135 KB  
Article
Mapping Research Trends in Road Safety: A Topic Modeling Perspective
by Iulius Alexandru Tudor and Florin Gîrbacia
Vehicles 2026, 8(4), 69; https://doi.org/10.3390/vehicles8040069 - 27 Mar 2026
Abstract
Over the past decade, road safety research has experienced rapid development due to the rapid expansion of large crash databases, the adoption of artificial intelligence techniques, and the demand for proactive and predictive safety solutions. This study conducts a data-driven review of recent research trends in transport safety. It focuses on main domains including crash severity analysis, human factors, vulnerable road users (VRUs), spatial modeling, and artificial intelligence applications. A systematic search of the Scopus database identified 15,599 relevant scientific papers published between 2016 and 2025. After constructing this corpus, titles, abstracts, and keywords were preprocessed using a natural language pipeline. The analysis employed BERTopic, a transformer-based topic modeling framework. The analysis identified 29 distinct research topics, further synthesized into five major thematic areas: (1) crash severity and injury analysis, (2) driver behavior and human factors, (3) vulnerable road users, (4) artificial intelligence, machine learning, and computer vision in intelligent transportation systems, and (5) spatial analysis and hotspot detection. A notable increase in publications related to artificial intelligence and machine learning has been evident since 2020. The results show a transition from descriptive, post-crash studies to integrated, multimodal, predictive analysis. Overall, the findings reveal a paradigm shift in the field. This study also identifies ethical and economic issues associated with the use of artificial intelligence in intelligent transportation systems, including data management, infrastructure requirements, system security, and model transparency. The results signify a transition from intuition-based models to explainable, spatially explicit, and data-intensive models, ultimately facilitating proactive risk assessment and informed decision-making.
(This article belongs to the Special Issue Intelligent Mobility and Sustainable Automotive Technologies)
23 pages, 7096 KB  
Article
Research and Application of Functional Model Construction Method for Production Equipment Operation Management and Control Oriented to Diversified and Personalized Scenarios
by Jun Li, Keqin Dou, Jinsong Liu, Qing Li and Yong Zhou
Machines 2026, 14(4), 368; https://doi.org/10.3390/machines14040368 - 27 Mar 2026
Abstract
As complex system engineering involving multiple stakeholders, multi-objective collaboration, and multi-spatiotemporal scales, the components, logical structure, and functional mechanisms of production equipment operation management and control (PEOMC) can be generalized through functional modelling to support dynamic analysis and intelligent decision-making of PEOMC in the industrial internet environment. To address the diversity of scenarios and objectives of PEOMC, a hierarchical construction method for the functional model of PEOMC based on IDEF0 is proposed. By analysing relevant international standards, such as ISO 55010, ISO/IEC 62264, and OSA-CBM, the generic functional modules for the first and second layers of the functional model are identified and defined. On the basis of semi-supervised machine learning, topic clustering is used to extract the components, functional mechanisms, and logical relationships of production equipment operation management and control from approximately 200 standard texts and to construct a reference resource pool for the third-layer functional module. On this basis, an interface matching and recursive traversal algorithm for functional modules is designed, and a composition and orchestration strategy of functional modules for specific scenarios is provided to support the flexible construction of diversified and personalized PEOMC scenarios. The proposed construction and application method was validated through an engineering case study in an aero-engine transmission unit manufacturing workshop: the average process capability index of the enterprise’s production equipment steadily increased from 1.28 to approximately 1.60, the mean time to repair (MTTR) of production equipment failures significantly decreased from 8 h to 3 h, and the average overall equipment effectiveness (OEE) increased from 56.43% to a stable 68.57%, demonstrating its effectiveness and practicality.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
25 pages, 1607 KB  
Article
Data-Driven Prioritization of User Requirements in Health E-Commerce: An Explainable Machine Learning Study
by Fanyong Meng and Yincan Jia
J. Theor. Appl. Electron. Commer. Res. 2026, 21(4), 104; https://doi.org/10.3390/jtaer21040104 - 27 Mar 2026
Abstract
The rapid expansion of mobile healthcare (mHealth) applications has transformed health-related e-commerce, creating new challenges for understanding and responding to user needs. This study proposes a data-driven framework to systematically identify and prioritize unmet user requirements from negative reviews of Chinese mHealth applications. Using a dataset of 31,124 user reviews collected between 2019 and 2025, the framework integrates sentiment analysis, topic modeling, and machine learning regression to uncover six key areas of user concern and examine their temporal evolution. Among several predictive models linking user concerns to app ratings, the k-nearest neighbors (KNN) model demonstrated superior performance. Subsequent SHAP-based interpretability analysis reveals that account authentication, system accessibility, and application stability have the most significant impact on user ratings, highlighting the critical roles of trust and technical reliability in health e-commerce. This research not only provides actionable insights for platform governance but also contributes a generalizable methodology for leveraging user-generated content to inform evidence-based management and policy decisions in mobile digital services.
(This article belongs to the Section Data Science, AI, and e-Commerce Analytics)
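A bare-bones sketch of the KNN regression step this entry describes, in numpy only; the feature semantics (per-review topic and sentiment scores mapped to an app rating) are assumed from the abstract, and the distance metric and `k` are illustrative:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """k-nearest-neighbours regression: predict a rating as the mean
    rating of the k training reviews closest in feature space."""
    X = np.asarray(X_train, dtype=float)
    d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(np.asarray(y_train, dtype=float)[nearest]))
```

KNN has no trained coefficients, which is why the study pairs it with SHAP to attribute each prediction back to individual concern features.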
25 pages, 429 KB  
Review
Mapping Water: A Brief History of GIS in Hydrology and a Path Toward AI-Native Modeling
by Daniel P. Ames
Water 2026, 18(7), 796; https://doi.org/10.3390/w18070796 - 27 Mar 2026
Abstract
The integration of Geographic Information Systems (GISs) with hydrologic science has evolved over seven decades from manual catchment delineation and output visualization to AI-native spatial water intelligence, reshaping how the water cycle is observed, modeled, and managed. This review explores that evolution, from the progressively tightening coupling between GIS software and hydrologic models to an AI-assisted future in which the line between these two fields blurs and eventually dissolves completely. The evolution of GISs in hydrology is traced through four eras, stratified as: (1) the formalization of governing equations and digital terrain representations (1950–1985); (2) the initial GIS–model coupling era and the rise in watershed simulation (1985–2000); (3) open source and the start of the open data deluge (2000–2015); and (4) machine learning and cloud-native computing (2015–present). A four-level vision for the role of artificial intelligence in the next generation of spatial hydrology is then articulated, from AI-assisted GIS operation to spatially aware AI water intelligence that reasons directly over geospatial data without requiring a traditional GIS or simulation software as an intermediary. Broader limitations and challenges are also discussed.
(This article belongs to the Special Issue GIS Applications in Hydrology and Water Resources)
27 pages, 4046 KB  
Article
A Deep Learning Framework for Predicting Psycho-Physiological States in Urban Underground Systems: Automating Human-Centric Environmental Perception
by Guanjie Huang and Hongzan Jiao
Buildings 2026, 16(7), 1328; https://doi.org/10.3390/buildings16071328 - 27 Mar 2026
Abstract
Traditional Post-Occupancy Evaluation (POE) is static and incompatible with dynamic systems like Digital Twins, creating a digital gap in managing health-oriented urban environments, especially in Urban Underground Spaces (UUS). This paper bridges this gap with a deep learning framework that automates the continuous prediction of human physiological arousal. We created a novel multimodal dataset from in situ experiments, synchronizing first-person video, environmental data, and Galvanic Skin Response (GSR) as a real-time physiological arousal proxy. Our dual-branch spatial–temporal model fuses these data streams to predict GSR with high accuracy (Pearson’s r = 0.72), effectively mapping objective environmental inputs to continuous human physiological dynamics. This framework provides an automated, human-centric analysis engine for urban planning, design validation, and real-time building management. It establishes a foundational ‘human dynamics layer’ for urban Digital Twins, evolving them into predictive tools for simulating human-environment interactions and embedding physiological perception into intelligent urban systems.
43 pages, 4672 KB  
Review
Optimization Algorithms: Comprehensive Classification, Principles, and Scientometric Trends
by Khadija Abouhssous, Rasha Hasan, Asmaa Zugari and Alia Zakriti
Algorithms 2026, 19(4), 258; https://doi.org/10.3390/a19040258 - 27 Mar 2026
Abstract
In recent years, optimization algorithms have emerged as powerful computational tools for addressing complex and dynamic challenges across diverse domains. These domains include engineering, technology, management, and decision-making. Their growing importance is motivated by (a) the increasing complexity of modern systems, (b) the need for efficient resource utilization, and (c) the demand for scalable algorithmic solutions. These algorithms enable the systematic and computational exploration of large solution spaces, supporting decision-making and design under uncertainty, large-scale data, and evolving requirements. This study provides a structured review and comparative scientometric analysis of optimization algorithms, covering: (a) exact methods, (b) approximation techniques, (c) metaheuristics, and (d) emerging physics-informed frameworks. The analysis highlights algorithmic trends, performance-oriented research directions, and the increasing integration of mathematical programming, machine learning, and numerical methods. The results show a renewed focus on classical algorithmic paradigms. Moreover, rapid growth in hybrid and physics-informed optimization approaches is observed. These findings confirm the central role of optimization algorithms in modern algorithm engineering and interdisciplinary computational research.

29 pages, 5646 KB  
Article
Regenerative and Participatory Co-Design in Biosphere Reserve Contexts
by Carlos Cobreros, Morena Villalón, Gabriel E. Calle-Sáenz, Adriana Rivas-Madrigal, Luis Miguel Gutierrez-Contreras, Daniela B. Arias-Laurino and Mariana Covarrubias-Castro
Land 2026, 15(4), 542; https://doi.org/10.3390/land15040542 - 26 Mar 2026
Viewed by 114
Abstract
Humanity is facing an unprecedented socio-ecological and climate crisis resulting from human impact on the planet, which requires a profound transformation in how we inhabit and develop our territories. Regenerative development is emerging as a key approach to strengthening living systems and improving environmental health. In this context, United Nations Educational, Scientific and Cultural Organization (UNESCO) Biosphere Reserves are consolidating their role as strategic instruments that link biodiversity conservation with sustainable development through integrated and participatory land management models. Mexico stands out for its regional and global leadership in implementing these areas. Participatory governance, promoted by the Man and Biosphere (MAB) programme, encourages the active involvement of local communities. This article analyses the application of a regenerative and participatory design methodology in a Biosphere Reserve, evaluating both the process and the tools used. Beyond the fulfilment of sustainability objectives, it examines the lessons learned, results and scope from a regenerative perspective, providing critical reflections on its effectiveness as a strategy for the socio-ecological management of vulnerable territories. Full article

25 pages, 2317 KB  
Article
Integrating Digital Twins into Smart Warehousing: A Practice-Based View Framework for Identifying and Prioritizing Critical Success Factors
by Sadia Samar Ali, Jose Antonio Marmolejo-Saucedo, Rosario Landa Piedra and Gerhard-Wilhelm Weber
Logistics 2026, 10(4), 73; https://doi.org/10.3390/logistics10040073 - 26 Mar 2026
Viewed by 194
Abstract
Background. Smart warehousing increasingly relies on digital twin technologies to enhance operational efficiency, real-time visibility, and decision-making in logistics systems. However, existing research primarily focuses on technological capabilities while paying limited attention to the organizational practices that shape successful implementation. Methods. This study aims to identify and prioritize the critical success factors (CSFs) for integrating digital twins into smart warehousing using the Practice-Based View (PBV) as the theoretical lens. Based on insights from prior research and expert validation, nine CSFs were identified and evaluated using the Best–Worst Method (BWM). Empirical input was obtained from six industry experts with experience in digital transformation, warehousing, and supply chain management. Results. The results indicate that collaborative learning, contextual training, and gamification elements emerge as the most influential critical success factors, highlighting the importance of organizational practices in supporting digital twin adoption in smart warehousing. Conclusions. By linking technological capabilities with organizational routines, the proposed framework provides both theoretical insights and practical guidance for implementing digital twins in smart warehouse environments. Full article
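The abstract above prioritizes its nine critical success factors with the Best–Worst Method (BWM). As a rough illustration only (the criteria names and comparison values below are hypothetical, not the study's data): when the expert's best-to-others comparisons a_Bj are perfectly consistent, the BWM weights reduce to the closed form w_j ∝ 1/a_Bj, since w_B/w_j = a_Bj for every criterion j. A minimal sketch under that consistency assumption:

```python
def bwm_weights(best_to_others):
    """Closed-form BWM weights under perfect consistency.

    If the best-to-others comparisons a_Bj are perfectly consistent,
    w_B / w_j = a_Bj for every criterion j, so w_j is proportional to
    1 / a_Bj; normalizing makes the weights sum to 1.
    """
    inv = [1.0 / a for a in best_to_others]
    total = sum(inv)
    return [v / total for v in inv]

# Hypothetical criteria and 1-9 scale comparisons, NOT the study's data.
criteria = ["collaborative learning", "contextual training", "gamification"]
a_B = [1, 2, 4]  # best criterion compared against each criterion
w = bwm_weights(a_B)
for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
```

In the general (inconsistent) case, BWM instead solves a min–max program over the weights using both the best-to-others and others-to-worst comparison vectors, typically via linear programming; the closed form above covers only the fully consistent special case.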
