Search Results (530)

Search Parameters:
Keywords = running test field

27 pages, 4225 KiB  
Article
Data Sampling System for Processing Event Camera Data Using a Stochastic Neural Network on an FPGA
by Seth Shively, Nathaniel Jackson, Eugene Chabot, John DiCecco and Scott Koziol
Electronics 2025, 14(15), 3094; https://doi.org/10.3390/electronics14153094 - 2 Aug 2025
Viewed by 232
Abstract
The use of a stochastic artificial neural network (SANN) implemented on a Field Programmable Gate Array (FPGA) is a promising method of performing image recognition on event camera recordings; however, challenges arise because event camera data is output from the camera with inherently uneven timing. This paper proposes a sampling system to overcome this challenge, by which all “events” occurring at specific timestamps in an event camera recording are selected (sampled) to be processed and sent to the SANN at regular intervals. The system is implemented on an FPGA in SystemVerilog, and to test it, simulated event camera data is sent to the system from a computer running MATLAB (version 2022+). The sampling system is shown to be functional, and analysis demonstrates its performance regarding data sparsity, time convergence, normalization, repeatability, range, and characteristics of the hold system.
(This article belongs to the Section Circuit and Signal Processing)
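The regular-interval sampling idea is easy to prototype in software before committing it to hardware. A minimal Python sketch of the sampling concept (not the authors' SystemVerilog implementation; the event tuple format and interval are assumptions):

```python
from collections import defaultdict

def sample_events(events, interval_us):
    """Group (timestamp_us, x, y, polarity) events into regular time
    bins so each bin can be handed downstream at a fixed rate."""
    bins = defaultdict(list)
    for t, x, y, p in events:
        bins[t // interval_us].append((x, y, p))
    # Emit one (possibly empty) sample per interval, in order.
    last = max(bins) if bins else -1
    return [bins.get(i, []) for i in range(last + 1)]

events = [(5, 0, 0, 1), (12, 1, 1, 0), (13, 2, 0, 1), (31, 0, 1, 1)]
samples = sample_events(events, interval_us=10)
# samples[0] holds events with t in [0, 10), samples[1] those in [10, 20), ...
```

Empty bins are emitted deliberately: the downstream network still receives a sample every interval, which is the point of regularizing the uneven event stream.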

29 pages, 3400 KiB  
Article
Synthetic Data Generation for Machine Learning-Based Hazard Prediction in Area-Based Speed Control Systems
by Mariusz Rychlicki and Zbigniew Kasprzyk
Appl. Sci. 2025, 15(15), 8531; https://doi.org/10.3390/app15158531 - 31 Jul 2025
Viewed by 243
Abstract
This work focuses on generating synthetic data for machine learning in hazard prediction for area-based speed monitoring systems. The purpose of the research was to develop a methodology for generating realistic synthetic data to support the design of a continuous vehicle speed monitoring system that minimizes the risk of traffic accidents caused by speeding. The SUMO traffic simulator was used to model driver behavior in the analyzed area and road network, integrating data from OpenStreetMap and field measurements from over a dozen speed detectors. Preliminary tests were carried out to record vehicle speeds. Based on these data, several simulation scenarios were run and compared to real-world observations using average speed, the percentage of speed limit violations, root mean square error (RMSE), and percentage compliance. A new metric, the Combined Speed Accuracy Score (CSAS), is introduced to assess the consistency of simulation results with real-world data. A basic hazard prediction model was also developed using LoRaWAN sensor network data and environmental contextual variables, including time, weather, location, and accident history. The research yields a method for evaluating and selecting the simulation scenario that best represents reality and drivers' propensity to exceed speed limits. The results demonstrate that synthetic data can be produced with a level of agreement with real data exceeding 90%, showing that traffic simulators can generate synthetic data for machine learning in hazard prediction for area-based speed control systems.
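Of the comparison metrics named above, RMSE and the violation percentage are standard and easy to sketch; the CSAS formula is not given in the abstract, so it is not reproduced here. A small Python illustration with invented speed readings:

```python
import math

def rmse(sim, real):
    """Root mean square error between simulated and measured speeds."""
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(sim, real)) / len(real))

def violation_rate(speeds, limit):
    """Fraction of observations exceeding the posted speed limit."""
    return sum(v > limit for v in speeds) / len(speeds)

real = [48.0, 52.0, 55.0, 61.0]   # invented detector readings, km/h
sim = [50.0, 51.0, 57.0, 59.0]    # invented simulator output, km/h
error = rmse(sim, real)            # sqrt(13/4), about 1.80 km/h
share = violation_rate(real, limit=50.0)   # 3 of 4 readings exceed 50
```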

23 pages, 5286 KiB  
Article
Measurements of Wake Concentration from a Finite Release of a Dense Fluid Upstream of a Cubic Obstacle
by Romana Akhter and Nigel Kaye
Fluids 2025, 10(8), 194; https://doi.org/10.3390/fluids10080194 - 29 Jul 2025
Viewed by 200
Abstract
Results are reported for a series of small-scale experiments that examine the dispersion of dense gas released upstream of an isolated building. The experiments replicate the geometry of the Thorney Island Phase II field tests and show good qualitative agreement with the flow regimes observed therein. The experiments were run in a water flume, and the flow is characterized by the Richardson number (Ri), where a high Ri represents a relatively dense release. For low Ri, the dense cloud flows over and around the building, and any fluid drawn into the building wake is rapidly flushed. For high Ri, however, the dense cloud collapses, flows around the building, and is drawn into the wake. The dense fluid layer becomes trapped in the wake and is flushed by small parcels of fluid being peeled off the top of the layer and driven up and out of the wake. Results are presented for the concentration field along the center plane of the building wake (parallel to the flow) and for time series of concentration just above the floor and downstream of the building. The time series for low-Ri and high-Ri flows are starkly different, with the differences explained in terms of the observed flow regimes.
(This article belongs to the Special Issue 10th Anniversary of Fluids—Recent Advances in Fluid Mechanics)
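The Richardson number characterization can be illustrated with a bulk form commonly used for dense releases; the paper's exact definition (its length and velocity scales) may differ, so treat this as a generic sketch:

```python
def richardson(g, rho_release, rho_ambient, length, velocity):
    """Bulk Richardson number Ri = g' * L / U^2, with reduced gravity
    g' = g * (rho_release - rho_ambient) / rho_ambient.
    A high Ri corresponds to a relatively dense release."""
    g_prime = g * (rho_release - rho_ambient) / rho_ambient
    return g_prime * length / velocity ** 2

# Invented water-flume values: 5% density excess, 0.1 m release depth,
# 0.2 m/s ambient flow.
ri = richardson(9.81, rho_release=1050.0, rho_ambient=1000.0,
                length=0.1, velocity=0.2)
```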

45 pages, 11380 KiB  
Article
Application of Multi-Strategy Controlled Rime Algorithm in Path Planning for Delivery Robots
by Haokai Lv, Qian Qian, Jiawen Pan, Miao Song, Yong Feng and Yingna Li
Biomimetics 2025, 10(7), 476; https://doi.org/10.3390/biomimetics10070476 - 19 Jul 2025
Viewed by 445
Abstract
As a core component of automated logistics systems, delivery robots hold significant application value in the field of unmanned delivery. This research addresses the robot path planning problem, aiming to enhance delivery efficiency and reduce operational costs through systematic improvements to the RIME optimization algorithm. In-depth analysis identified several major drawbacks of the standard RIME algorithm for path planning: insufficient global exploration capability in the initial stages, a lack of diversity in the hard-rime search mechanism, and oscillation in soft-rime step size adjustment. These issues often lead to local optima traps, path redundancy, or unsmooth trajectories. To address these limitations, this study proposes the Multi-Strategy Controlled Rime Algorithm (MSRIME), whose innovation manifests in three aspects: first, a multi-strategy collaborative optimization framework that uses an infinitely folding Fuch chaotic map for population initialization, significantly enhancing the diversity of solutions; second, a cooperative mechanism between a controlled elite strategy and an adaptive search strategy that, through a dynamic control factor, autonomously adjusts the strategy activation probability and adaptation rate, expanding the search space while ensuring convergence efficiency; and finally, a cosine annealing strategy that improves the step size adjustment mechanism, reducing parameter sensitivity and preventing path distortions caused by abrupt step size changes. During validation, comparative tests against two groups of algorithms demonstrated MSRIME's significant advantages in optimization capability, convergence speed, and stability. Further experiments confirmed that the multi-strategy framework effectively suppresses the impact of coordinate and dimensional differences on path quality during iteration, making the algorithm well suited to delivery robot path planning. Path planning experiments across various Building Coverage Rate (BCR) maps and application scenarios show that MSRIME exhibits superior performance in key indicators such as path length, running time, and smoothness, providing novel technical insights and practical solutions for interdisciplinary research between intelligent logistics and computer science.
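The cosine annealing schedule mentioned above has a standard closed form; a sketch with parameter names assumed rather than taken from the paper:

```python
import math

def cosine_annealed_step(step_max, step_min, t, t_max):
    """Cosine annealing: decays smoothly from step_max at t = 0 to
    step_min at t = t_max, avoiding abrupt step-size changes."""
    return step_min + 0.5 * (step_max - step_min) * (
        1 + math.cos(math.pi * t / t_max))

start = cosine_annealed_step(1.0, 0.1, t=0, t_max=100)    # full step, 1.0
mid = cosine_annealed_step(1.0, 0.1, t=50, t_max=100)     # midpoint, 0.55
end = cosine_annealed_step(1.0, 0.1, t=100, t_max=100)    # minimum, 0.1
```

The smooth cosine profile is what suppresses the oscillation attributed to the original soft-rime step adjustment: the derivative of the schedule is zero at both endpoints.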

31 pages, 5844 KiB  
Article
Cyclic Triaxial Testing: A Primer
by Carmine Polito
J 2025, 8(3), 25; https://doi.org/10.3390/j8030025 - 7 Jul 2025
Viewed by 362
Abstract
Cyclic triaxial tests are frequently used in the laboratory to assess the liquefaction susceptibility of soils. This paper serves a two-fold purpose. First, it explains how the mechanics of the test represent the stresses that occur in the field. Topics covered include the differences between the stress paths followed by soil in the field and in the lab, the differences between the actual stresses applied in the lab and in the field, the differences between stress-controlled and strain-controlled tests, and the effects of other aspects of the testing methodology. The development of adjustment factors for converting laboratory test results to the field is also briefly discussed. The second purpose is to serve as a guide to interpreting cyclic triaxial test results, covering the two main liquefaction modes and the impact of the selected failure criteria on the analysis, the differences between stress-controlled and strain-controlled test results, energy dissipation, and pore pressure generation. The author has run more than 1500 cyclic triaxial tests over the course of his career and has found that, while the test is fairly straightforward to perform, maximizing the information gained from it requires a much deeper understanding of the test mechanics and data interpretation. This paper is intended as a guide to help engineers gain further insight into the test and its results, with a target audience encompassing both those running their first tests and those looking to deepen their understanding of tests they have already performed.
(This article belongs to the Section Engineering)

26 pages, 23518 KiB  
Article
Avalanche Hazard Dynamics and Causal Analysis Along China’s G219 Corridor: A Case Study of the Wenquan–Khorgas Section
by Xuekai Wang, Jie Liu, Qiang Guo, Bin Wang, Zhiwei Yang, Qiulian Cheng and Haiwei Xie
Atmosphere 2025, 16(7), 817; https://doi.org/10.3390/atmos16070817 - 4 Jul 2025
Viewed by 348
Abstract
Investigating avalanche hazards is a fundamental preliminary task in avalanche research and is critically important for establishing avalanche warning systems and designing mitigation measures. Primary research data originated from field investigations and UAV aerial surveys, with avalanche counts and timing identified through image interpretation; snowpack properties were acquired via in situ field testing within the study area. Methodologically, statistical modeling and RAMMS::AVALANCHE simulations revealed the spatiotemporal and dynamic characteristics of avalanches, and the Certainty Factor (CF) model and a sensitivity analysis determined the dominant controlling factors and quantified the zonal influence intensity of each parameter. Field reconnaissance and drone aerial photography identified 86 avalanche points in the study area; field tests and weather data were used to run the RAMMS::AVALANCHE model, and regional avalanche characteristics were categorized and summarized from both field surveys and simulation results. Furthermore, the Certainty Factor Model (CFM) and the parameter Sensitivity Index (Sa) were applied to assess the influence of elevation, slope gradient, aspect, and maximum snow depth on the severity of avalanche disasters. The results indicate the following: (1) Avalanches exhibit pronounced spatiotemporal concentration: temporally, they cluster between February and March and during 13:00–18:00 daily; spatially, they concentrate within the 2100–3000 m elevation zone. Chute-confined avalanches dominate the region, comprising 73.26% of total events; most feature multiple release areas, so the number of release areas exceeds the number of avalanche points. In terms of scale, medium-to-large avalanches dominate, accounting for 86.5% of the total. (2) RAMMS::AVALANCHE simulations yielded the following maximum values for the region: flow height = 15.43 m, flow velocity = 47.6 m/s, flow pressure = 679.79 kPa, and deposition height = 10.3 m. Compared to chute-confined avalanches, unconfined slope avalanches exhibit higher flow velocities and pressures, posing greater hazard potential. (3) The Certainty Factor Model and Sensitivity Index identify maximum snow depth, elevation, and slope gradient as the key drivers of avalanches in the study area, with relative impact ranked as maximum snow depth > elevation > slope gradient > aspect and sensitivity index values of 1.536, 1.476, 1.362, and 0.996, respectively. These findings provide a scientific basis for further research on avalanche hazards, the development of avalanche warning systems, and the design of avalanche mitigation projects in the study area.
(This article belongs to the Special Issue Climate Change in the Cryosphere and Its Impacts)
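The Certainty Factor used in such causal analyses is typically computed per factor class from a conditional hazard probability and an area-wide prior; a sketch of the common formulation (the paper's exact variant may differ):

```python
def certainty_factor(ppa, pps):
    """Certainty Factor for one factor class.
    ppa: conditional probability of avalanche occurrence within the class
    pps: prior probability over the whole study area
    CF > 0 favours occurrence, CF < 0 disfavours it, bounded in [-1, 1]."""
    if ppa >= pps:
        return (ppa - pps) / (ppa * (1 - pps))
    return (ppa - pps) / (pps * (1 - ppa))

# Invented class probabilities against a 10% area-wide prior.
favourable = certainty_factor(0.20, 0.10)     # class enriched in events
unfavourable = certainty_factor(0.05, 0.10)   # class depleted in events
```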

44 pages, 6854 KiB  
Article
A Novel Improved Dung Beetle Optimization Algorithm for Collaborative 3D Path Planning of UAVs
by Xiaojun Zheng, Rundong Liu and Siyang Li
Biomimetics 2025, 10(7), 420; https://doi.org/10.3390/biomimetics10070420 - 29 Jun 2025
Viewed by 367
Abstract
In this study, we propose a novel improved Dung Beetle Optimizer called the Environment-aware Chaotic Force-field Dung Beetle Optimizer (ECFDBO). To address the standard DBO's tendency toward premature convergence and its insufficient precision in high-dimensional, complex search spaces, ECFDBO integrates three key improvements: a chaotic perturbation-based nonlinear contraction strategy, an intelligent boundary-handling mechanism, and a dynamic attraction–repulsion force-field mutation. These improvements reinforce both the algorithm's global exploration capability and its local exploitation accuracy. We conducted 30 independent runs of ECFDBO on the CEC2017 benchmark suite; compared with seven classical and recent metaheuristic algorithms, ECFDBO achieved statistically significant improvements across multiple performance metrics. By varying problem dimensionality, we also demonstrated its robust global optimization capability on increasingly challenging tasks, and we conducted Wilcoxon and Friedman tests to assess the significance of performance differences and establish an overall ranking. Finally, ECFDBO was applied to a 3D UAV path planning simulation for safe path planning in complex environments. Against both the standard Dung Beetle Optimizer and a multi-strategy DBO (GODBO), ECFDBO met the global optimality requirements for cooperative UAV planning and showed strong potential for high-dimensional global optimization applications.
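Friedman-style comparisons of the kind run above start from per-problem ranks; a stdlib-only sketch of the mean-rank computation (the significance test itself and tie handling are omitted, and ties here break by column index):

```python
def average_ranks(scores):
    """Friedman-style mean ranks: scores[i][j] is algorithm j's result
    on problem i (lower is better). Returns one mean rank per algorithm;
    the best-ranked algorithm has the smallest value."""
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            totals[j] += rank
    return [t / len(scores) for t in totals]

# Three benchmark problems, two algorithms: the first wins every time.
ranks = average_ranks([[1.0, 2.0], [0.5, 0.9], [3.0, 4.0]])
```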

21 pages, 2973 KiB  
Article
Machine Learning Approach for Ground-Level Estimation of Electromagnetic Radiation in the Near Field of 5G Base Stations
by Oluwole John Famoriji and Thokozani Shongwe
Appl. Sci. 2025, 15(13), 7302; https://doi.org/10.3390/app15137302 - 28 Jun 2025
Viewed by 268
Abstract
Electromagnetic radiation measurement and management are crucial for the economical deployment of fifth-generation (5G) infrastructure, as the new 5G network emerges as a network of services. By installing many base stations at strategic locations operating in the millimeter-wave range, 5G services can meet heavy demands for bandwidth. To evaluate ground-level electromagnetic radiation near 5G base stations, we propose a machine-learning-based approach. Trained on data obtained from numerous 5G base stations, the model can effectively estimate the electric field strength at any point while a base station serves varying numbers of 5G terminals running in different service modes. The model takes several inputs, including the antenna's transmit power, antenna gain, terminal service modes, number of 5G terminals, distance between the 5G terminals and the base station, and environmental complexity. Experimental data show the estimation method to be both feasible and effective, with a mean absolute percentage error of about 5.89%. This degree of accuracy shows how dependable the developed technique is, and the approach is less expensive than on-site measurements. The estimates can reduce test costs and offer useful guidelines for choosing the best location, easing 5G base station electromagnetic radiation management and radio wave coverage optimization.
(This article belongs to the Special Issue Recent Advances in Antennas and Propagation)
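The quoted 5.89% figure is a mean absolute percentage error; for reference, the metric itself is simple to state, here with hypothetical field-strength readings:

```python
def mape(predicted, measured):
    """Mean absolute percentage error (in %), the accuracy metric
    quoted for the estimation model above."""
    return 100.0 / len(measured) * sum(
        abs((p - m) / m) for p, m in zip(predicted, measured))

estimates = [1.9, 3.1, 2.5]   # invented model outputs, V/m
readings = [2.0, 3.0, 2.5]    # invented on-site measurements, V/m
error_pct = mape(estimates, readings)   # about 2.78%
```

Note that MAPE is undefined when a measured value is zero, which is why it suits field-strength data (strictly positive readings).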

24 pages, 28445 KiB  
Article
Enhanced Multi-Threshold Otsu Algorithm for Corn Seedling Band Centerline Extraction in Straw Row Grouping
by Yuanyuan Liu, Yuxin Du, Kaipeng Zhang, Hong Yan, Zhiguo Wu, Jiaxin Zhang, Xin Tong, Junhui Chen, Fuxuan Li, Mengqi Liu, Yueyong Wang and Jun Wang
Agronomy 2025, 15(7), 1575; https://doi.org/10.3390/agronomy15071575 - 27 Jun 2025
Viewed by 233
Abstract
Straw row grouping is vital in conservation tillage for precision seeding, and accurate centerline extraction of the seedling bands enhances agricultural spraying efficiency. However, traditional single-threshold Otsu segmentation struggles with adaptability and accuracy under complex field conditions. To overcome these issues, this study proposes an adaptive multi-threshold Otsu algorithm optimized by a Simulated Annealing-Enhanced Differential Evolution–Whale Optimization Algorithm (SADE-WOA). The method avoids premature convergence and improves population diversity by embedding the crossover mechanism of Differential Evolution (DE) into the Whale Optimization Algorithm (WOA) and introducing a vector disturbance strategy. It adaptively selects thresholds based on straw-covered image features and, combined with least-squares fitting, suppresses noise and improves centerline continuity. The experimental results show that SADE-WOA accurately separates soil regions while preserving straw texture, achieving higher between-class variance and significantly faster convergence than the other tested algorithms: it runs in one-tenth of the time of the Grey Wolf Optimizer, one-ninth of that of DE, and one-sixth to one-seventh of the time needed by DE-GWO. During centerline fitting, the mean yaw angle error (MEA) ranged from 0.34° to 0.67°, well within the 5° tolerance required for agricultural navigation, the root-mean-square error (RMSE) fell between 0.37° and 0.73°, and the mean relative error (MRE) stayed below 0.2%, effectively reducing the influence of noise and improving both accuracy and robustness.
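The between-class variance that SADE-WOA maximizes is Otsu's classic criterion; a single-threshold Python sketch (the multi-threshold case maximizes the same quantity over several cut points, which is what the metaheuristic searches for):

```python
def between_class_variance(hist, t):
    """Otsu's between-class variance for threshold t on a grayscale
    histogram: pixels with intensity < t form class 0, the rest class 1."""
    total = sum(hist)
    n0 = sum(hist[:t])
    n1 = total - n0
    if n0 == 0 or n1 == 0:
        return 0.0
    w0, w1 = n0 / total, n1 / total
    mu0 = sum(i * h for i, h in enumerate(hist[:t])) / n0
    mu1 = sum(i * h for i, h in enumerate(hist[t:], start=t)) / n1
    return w0 * w1 * (mu0 - mu1) ** 2

hist = [10, 0, 0, 0, 10]   # two well-separated intensity clusters
scores = [between_class_variance(hist, t) for t in range(1, len(hist))]
```

With this toy histogram every cut between the two clusters scores equally, which is exactly the degenerate case a real image does not present.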

21 pages, 4080 KiB  
Article
M-Learning: Heuristic Approach for Delayed Rewards in Reinforcement Learning
by Cesar Andrey Perdomo Charry, Marlon Sneider Mora Cortes and Oscar J. Perdomo
Mathematics 2025, 13(13), 2108; https://doi.org/10.3390/math13132108 - 27 Jun 2025
Viewed by 354
Abstract
Current reinforcement learning methods require extensive computational resources. Algorithms such as Deep Q-Network (DQN) have obtained outstanding results in advancing the field, but the need to tune thousands of parameters and run millions of training episodes remains a significant challenge. This paper presents a comparative analysis between the Q-Learning algorithm, which laid the foundations for Deep Q-Learning, and our proposed method, termed M-Learning. The comparison is conducted using Markov Decision Processes with delayed rewards as a general test bench. We first describe the main challenges of implementing Q-Learning, particularly concerning its multiple parameters, then present the foundations of the proposed heuristic, including its formulation, and describe the algorithm in detail. Both algorithms were compared by training them in the Frozen Lake environment. The experimental results, along with an analysis of the best solutions, demonstrate that our proposal requires fewer episodes and exhibits reduced variability in the outcomes. Specifically, M-Learning trains agents 30.7% faster in the deterministic environment and 61.66% faster in the stochastic environment, and it achieves greater consistency, reducing the standard deviation of scores by 58.37% and 49.75% in the deterministic and stochastic settings, respectively. The code will be made available in a GitHub repository upon this paper's publication.
(This article belongs to the Special Issue Metaheuristic Algorithms, 2nd Edition)
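For context, the tabular Q-Learning update that M-Learning is compared against has a standard form; a sketch assuming a Frozen Lake-style four-action space (the hyperparameter values here are illustrative, not the paper's):

```python
def q_update(Q, s, a, r, s_next, n_actions=4, alpha=0.1, gamma=0.99):
    """One tabular Q-Learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a dict keyed by (state, action); missing entries default to 0."""
    best_next = max(Q.get((s_next, b), 0.0) for b in range(n_actions))
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
        r + gamma * best_next - Q.get((s, a), 0.0))

Q = {}
q_update(Q, s=0, a=1, r=1.0, s_next=2)   # first update from a zero table
```

With delayed rewards, most steps have r = 0, so value information propagates backward only through the bootstrapped `best_next` term, which is what makes the parameter tuning the abstract mentions so sensitive.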

38 pages, 1298 KiB  
Article
Importance of Using Modern Regression Analysis for Response Surface Models in Science and Technology
by Hsuan-Yu Chen and Chiachung Chen
Appl. Sci. 2025, 15(13), 7206; https://doi.org/10.3390/app15137206 - 26 Jun 2025
Viewed by 471
Abstract
Experimental design is important for researchers in many fields seeking the factors that affect an experimental response. Response surface methodology (RSM) is a special experimental design used to evaluate the significant factors influencing a process and to confirm the optimum conditions across factors. RSM models represent the relationship between the response and the influencing factors, established by regression analysis; these equations are then used to produce the contour and response surface plots from which observers determine the optimum. The influence of the regression technique on model building, however, has not been thoroughly studied. This study collected twenty-five datasets from the literature and applied modern regression techniques, using the backward elimination procedure and the t-test value of each variable to evaluate significant effects on the response. The results expose several problems in previous RSM studies: using the complete equation without checking the statistical tests; deleting all variables whose p-values exceed the preset value at once; inconsistency between the proposed RSM equations and the contour and response surface plots; misusing the ANOVA table of the sequential model to keep all variables in the linear or square terms without testing each variable; non-normal and non-constant variance in the datasets; and the presence of influential data points. For researchers applying RSM, the suggestions are training in modern regression techniques, using backward elimination for sequential variable selection, and increasing the sample size with three replicates for each run.
(This article belongs to the Section Food Science and Technology)
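The difference the study highlights between sequential backward elimination and at-once deletion can be sketched with a toy refit function (all p-values here are invented for illustration; a real `fit` would refit the regression and return t-test p-values):

```python
def backward_eliminate(variables, fit, alpha=0.05):
    """Sequential backward elimination: refit after removing the single
    worst variable, rather than deleting every p > alpha term at once."""
    vars_ = list(variables)
    while vars_:
        p = fit(vars_)                       # p-value per remaining variable
        worst = max(vars_, key=lambda v: p[v])
        if p[worst] <= alpha:
            break
        vars_.remove(worst)
    return vars_

# Toy 'fit': once the collinear x3 is dropped, x1's p-value improves.
def toy_fit(vars_):
    full = {"x1": 0.30, "x2": 0.01, "x3": 0.60}
    reduced = {"x1": 0.02, "x2": 0.01}
    table = full if "x3" in vars_ else reduced
    return {v: table[v] for v in vars_}

kept = backward_eliminate(["x1", "x2", "x3"], toy_fit)
```

Sequential elimination keeps x1, whereas at-once deletion applied to the initial fit (p > 0.05) would have discarded both x1 and x3, which is precisely the pitfall the abstract warns against.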

12 pages, 613 KiB  
Article
Effectiveness of Prolonged Application of Super High-Intensity Continuous Training—Team Case Study
by Miloš M. Milošević, Jovana Popović, Milivoj Dopsaj and Milenko B. Milosević
J. Funct. Morphol. Kinesiol. 2025, 10(3), 241; https://doi.org/10.3390/jfmk10030241 - 25 Jun 2025
Viewed by 248
Abstract
Background: Super High-Intensity Continuous Training (SHCT) is an aerobic training program that combines high intensity with continuous loads, such as running for 20 min at 75%, 80%, or even 95% of the velocity at maximal oxygen uptake. Recent studies show significant positive effects, but the consequences of prolonged use remain unknown. Purpose: This study investigates and evaluates the effects of prolonged application of the SHCT model in elite team handball players. Method: A field-based quasi-experiment applied the SHCT training model to 14 professional female team handball players competing in the first national league, who completed 16 weeks of SHCT during the competition season. Results: After the application of SHCT, the parameters of the aerobic profile (distance covered in Cooper's 12 min run test, maximum rate of oxygen consumption, maximum relative oxygen consumption, and the running speed at which the maximum rate of oxygen consumption occurs) increased by 25.4% to 35.2%. The effect size of these changes was η2p > 0.90, significant at the p < 0.001 level. Conclusions: The investigated aerobic model is effective; its use is recommended when designing aerobic training for elite teams and the general sports population.

20 pages, 1021 KiB  
Article
Habit Predicting Higher Education EFL Students’ Intention and Use of AI: A Nexus of UTAUT-2 Model and Metacognition Theory
by Shaista Rashid
Educ. Sci. 2025, 15(6), 756; https://doi.org/10.3390/educsci15060756 - 16 Jun 2025
Viewed by 753
Abstract
With the emergence of AI technology, its adoption in higher education has become an interesting field for researchers. The present study explores the acceptance of AI for learning the English language by Pakistani EFL students using the UTAUT-2 model and metacognition theory. The UTAUT-2 questionnaire was adapted with minor changes to suit the EFL context. Data were collected from the English departments of the top ten general universities in Pakistan to make the findings generalizable; a further step toward generalizability was the random sampling of 611 students from both undergraduate (BS and ADP) and postgraduate (MPhil and PhD) programs across different semesters. PLS-SEM was employed for data analysis. First, the PLS algorithm was run for the measurement model, confirming the reliability, validity, and fitness of the model; second, the bootstrapping method was used for hypothesis testing. The findings reveal that six of the ten hypotheses for direct relationships are supported. Habit (0.489) was the strongest contributor to behavioural intention (BI), followed by performance expectancy (PE, 0.141), social influence (SI, 0.100), and facilitating conditions (FC, 0.093). Moreover, actual use behaviour was predicted by habit (0.325) rather than by BI or FC. These findings are supported by metacognition theory: the habit of using AI appears to shape the metacognitive knowledge of EFL learners in place of traditional learning methods, while other factors reinforce the metacognitive experience of using AI for language learning. The study suggests implications for EFL experts, academia, and policymakers on strategically integrating AI into language learning by informing them of its potential benefits and risks.

19 pages, 2671 KiB  
Article
A Decentralized Hierarchical Multi-Agent Framework for Smart Grid Sustainable Energy Management
by Otilia Elena Dragomir and Florin Dragomir
Sustainability 2025, 17(12), 5423; https://doi.org/10.3390/su17125423 - 12 Jun 2025
Cited by 1 | Viewed by 620
Abstract
This paper aims to design and implement a decentralized, hierarchical multi-agent system for energy management that can perform real-time monitoring and management of a real-world power grid with renewable energy penetration. The approach integrates agent-based intelligent solutions to provide scalable and reliable management of smart power grids. The proposed architecture allows multiple objectives, such as cost and environmental impact, to be balanced in the design and operation of the energy system. The system is tested and tuned by simulating real-time data flow and feedback between monitoring and control agents within a multi-agent environment modelling a smart grid. The added value of this study lies in its integrated approach to smart grid energy management, which combines real-time monitoring, decentralized control, a hierarchical architecture, and consideration of both economic and environmental factors. Moreover, the use of multi-agent systems for simulation further enhances the adaptability and scalability of the system, and the focus on prosumers and the integration of renewable energy sources makes it a relevant contribution to the field of sustainable energy management. While the results are promising, the current simulation framework is based on single-run experiments, limiting the statistical strength of outcome interpretations. Future research will address these aspects through expanded statistical validation, the inclusion of performance indicators, and deployment scenarios in more complex, real-world energy systems to enhance the robustness and applicability of the approach. Full article
(This article belongs to the Section Energy Sustainability)
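The hierarchical monitoring/control pattern described here can be sketched as a two-tier loop: lower-tier prosumer agents report local measurements, and an upper-tier coordinator aggregates them into a dispatch decision. All class names and the numbers below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical two-tier multi-agent sketch: prosumer agents report local
# surplus, a coordinator agent balances the aggregate against the grid.
from dataclasses import dataclass

@dataclass
class ProsumerAgent:
    name: str
    generation_kw: float  # local renewable generation
    demand_kw: float      # local load

    def surplus(self) -> float:
        return self.generation_kw - self.demand_kw

class CoordinatorAgent:
    """Upper-tier agent: aggregates reports and sizes the grid import."""
    def __init__(self, agents):
        self.agents = agents

    def balance(self):
        net = sum(a.surplus() for a in self.agents)
        # A negative net surplus must be covered by grid import (cost and
        # emissions); a positive surplus can be exported or curtailed.
        return {"net_surplus_kw": net, "grid_import_kw": max(0.0, -net)}

agents = [
    ProsumerAgent("pv_house_1", generation_kw=3.2, demand_kw=2.0),
    ProsumerAgent("pv_house_2", generation_kw=1.1, demand_kw=2.5),
    ProsumerAgent("wind_farm", generation_kw=5.0, demand_kw=0.5),
]
dispatch = CoordinatorAgent(agents).balance()
print(dispatch)
```

In a full system the coordinator's `balance` step would weigh cost against environmental impact rather than simply netting power, and agents would exchange these reports continuously over the simulated real-time data flow.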
18 pages, 1379 KiB  
Article
The Evaluation and Development of a Prediction Artificial Neural Network Model for Specific Volumetric Fuel Efficiency (SVFE) of a Tractor–Chisel Plow System Based on Field Operation
by Saleh M. Al-Sager, Saad S. Almady, Waleed A. Almasoud, Abdulrahman A. Al-Janobi, Samy A. Marey, Saad A. Al-Hamed and Abdulwahed M. Aboukarima
Processes 2025, 13(6), 1811; https://doi.org/10.3390/pr13061811 - 7 Jun 2025
Viewed by 496
Abstract
For every tractor test carried out on a concrete road under defined conditions, the Nebraska Tractor Test Laboratory (NTTL) reports values of the specific volumetric fuel efficiency (SVFE) in units of kWh/L. Because soil tillage is a highly energy-intensive process and the energy consumption of tillage operations is a significant component of a farm budget, growing attention is being given to the SVFE of tillage operations. At the same time, the study of the tillage process increasingly depends on scientific modeling. Therefore, in this study, an artificial neural network (ANN) model was built from real-world tillage field data to predict SVFE. The model incorporates 10 inputs: initial soil moisture content, draft force, initial soil bulk density, sand, silt, and clay proportions in the soil, tractor power, plow width, tillage depth, and tillage speed. The Qnet v2000 ANN simulation software was employed to simulate SVFE; 20,000 runs were completed across the training and testing stages. The coefficient of determination (R2) exceeded 0.96 in both stages (0.982 on the training dataset and 0.9741 on the testing dataset), indicating that the trained ANN model can answer previously unanswerable questions about the impact of working and soil conditions on the SVFE of a tractor–tillage implement system. Additionally, sensitivity analyses were performed on the obtained ANN model to identify the inputs to which the predicted SVFE is most sensitive. According to this analysis, SVFE was most affected by changes in tillage speed (21.07%), silt content in the soil (15.56%), draft force (11.01%), and clay content in the soil (10.86%). Predicting SVFE can support better decisions on tractor–chisel plow combination management. It is therefore highly advisable to use the newly created ANN model to manage SVFE and reduce tractor–tillage implement energy dissipation. Suitable management of variables such as tillage depth, tillage speed, and soil moisture content can also help reduce fuel consumption in the tractor–tillage implement system. Full article
(This article belongs to the Section Sustainable Processes)
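The combination of a 10-input feed-forward network and a perturbation-based sensitivity ranking, as described in the abstract, can be sketched in a few lines. The synthetic data, the one-hidden-layer architecture, the training loop, and the perturbation method are all illustrative assumptions; the study itself used the Qnet v2000 package.

```python
# Hypothetical sketch: a tiny one-hidden-layer network over the 10 inputs
# listed above, followed by perturbation-based sensitivity analysis.
import numpy as np

rng = np.random.default_rng(0)
FEATURES = ["moisture", "draft", "bulk_density", "sand", "silt", "clay",
            "power", "plow_width", "depth", "speed"]

X = rng.normal(size=(200, 10))
y = np.tanh(X @ rng.normal(size=10))[:, None]  # synthetic SVFE-like target

# One hidden tanh layer, trained with plain gradient descent.
W1 = rng.normal(scale=0.3, size=(10, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.3, size=(8, 1));  b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

for _ in range(500):
    h, pred = forward(X)
    err = pred - y                         # MSE gradient signal
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)         # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2

# Sensitivity: mean absolute response of the prediction to a small bump
# in each input, normalized to percentages across the 10 inputs.
eps = 1e-3
base = forward(X)[1]
impact = []
for j in range(10):
    Xp = X.copy(); Xp[:, j] += eps
    impact.append(np.mean(np.abs(forward(Xp)[1] - base)) / eps)
sens = 100 * np.array(impact) / sum(impact)
for name, s in sorted(zip(FEATURES, sens), key=lambda t: -t[1]):
    print(f"{name}: {s:.1f}%")
```

A ranking of this kind is how figures such as "tillage speed 21.07%, silt 15.56%" would be produced from the trained model, although the percentages here come from synthetic data.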