Search Results (207)

Search Parameters:
Keywords = machine load balance

29 pages, 5526 KiB  
Article
Dynamic Machine Learning-Based Simulation for Preemptive Supply-Demand Balancing Amid EV Charging Growth in the Jamali Grid 2025–2060
by Joshua Veli Tampubolon, Rinaldy Dalimi and Budi Sudiarto
World Electr. Veh. J. 2025, 16(7), 408; https://doi.org/10.3390/wevj16070408 - 21 Jul 2025
Viewed by 291
Abstract
The rapid uptake of electric vehicles (EVs) in the Jawa–Madura–Bali (Jamali) grid produces highly variable charging demands that threaten the supply–demand balance. To forestall instability, we developed a predictive simulation based on long short-term memory (LSTM) networks that combines historical generation and consumption patterns with models of EV population growth and initial charging time (ICT). We introduce a novel supply–demand balance score to quantify weekly and annual deviations between projected supply and demand curves, then use this metric to guide the machine-learning model in optimizing the annual growth rate (AGR) and preventing supply–demand imbalance. Relative to a business-as-usual baseline, our approach improves balance scores by 64% and projects up to a 59% reduction in charging load by 2060. These results demonstrate the promise of data-driven demand-management strategies for maintaining grid reliability during large-scale EV integration.
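The abstract describes the balance score only qualitatively; the sketch below shows one plausible reading (deviation between projected weekly supply and demand curves, normalized to a unit score). The formula, the hourly resolution, and all numbers are assumptions for illustration, not the authors' metric.

```python
import numpy as np

def balance_score(supply: np.ndarray, demand: np.ndarray) -> float:
    """Hypothetical weekly balance score: 1 minus the mean absolute
    deviation between supply and demand, normalized by mean demand.
    A score of 1.0 means the curves coincide; lower means imbalance."""
    deviation = np.abs(supply - demand).mean()
    return 1.0 - deviation / demand.mean()

# 168 hourly samples = one week of projected curves (synthetic data)
hours = np.arange(168)
demand = 1000 + 200 * np.sin(2 * np.pi * hours / 24)  # daily cycle, MW
supply = 1000 + 180 * np.sin(2 * np.pi * hours / 24)  # slightly under-tracking
print(f"weekly balance score: {balance_score(supply, demand):.3f}")
```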

20 pages, 1550 KiB  
Article
Strategy for Precopy Live Migration and VM Placement in Data Centers Based on Hybrid Machine Learning
by Taufik Hidayat, Kalamullah Ramli and Ruki Harwahyu
Informatics 2025, 12(3), 71; https://doi.org/10.3390/informatics12030071 - 15 Jul 2025
Viewed by 397
Abstract
Data center virtualization has grown rapidly alongside the expansion of application-based services but continues to face significant challenges, such as downtime caused by suboptimal hardware selection, load balancing, power management, incident response, and resource allocation. To address these challenges, this study proposes a combined machine learning method that uses a Markov decision process (MDP) to choose which VMs to move, a random forest (RF) to rank the VMs according to load, and NSGA-III to achieve multiple optimization objectives, such as reducing downtime, improving SLA compliance, and increasing energy efficiency. The model was evaluated on the GWA-Bitbrains dataset, achieving a classification accuracy of 98.77%, a MAPE of 7.69% in predicting migration duration, and an energy efficiency improvement of 90.80%. The results of real-world experiments show that the hybrid machine learning strategy could significantly reduce the data center workload and decrease downtime, at the cost of increased total migration time. These results affirm the effectiveness of integrating the MDP, RF, and NSGA-III for providing holistic solutions in VM placement strategies for large-scale data centers.
(This article belongs to the Section Machine Learning)
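The entry names the three components but not their interfaces; a minimal sketch of the RF ranking stage, using scikit-learn on synthetic per-VM features, might look like the following. The feature set and the "overloaded" label are hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical per-VM features: [cpu_util, mem_util, disk_io, net_io]
X = rng.random((500, 4))
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)  # synthetic "overloaded" label

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank candidate VMs by predicted probability of being overloaded, so a
# downstream migration policy considers the heaviest VMs first.
candidates = rng.random((10, 4))
overload_prob = rf.predict_proba(candidates)[:, 1]
ranked = np.argsort(overload_prob)[::-1]
print("migration priority (VM indices):", ranked)
```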

22 pages, 2366 KiB  
Review
Machine Learning for Fire Safety in the Built Environment: A Bibliometric Insight into Research Trends and Key Methods
by Mehmet Akif Yıldız
Buildings 2025, 15(14), 2465; https://doi.org/10.3390/buildings15142465 - 14 Jul 2025
Viewed by 326
Abstract
Assessing building fire safety risks during the early design phase is vital for developing practical solutions to minimize loss of life and property. This study aims to identify research trends and provide a guiding framework for researchers by systematically reviewing the literature on integrating machine learning-based predictive methods into building fire safety design, using bibliometric methods. The study evaluates machine learning applications in fire safety through a comprehensive approach that combines bibliometric and content analysis. A search of the Web of Science Core Collection citation database, with no year limitation, yielded 250 publications; the earliest was published in 2001, and output has grown steadily since 2019. To evaluate the scientific contribution of qualified publications more accurately, citation counts were analyzed using normalized citation counts that balance differences in publication fields and publication years. Multiple regression analysis was applied to support this metric's theoretical basis and to determine the impact of the variables affecting its value (such as total citation count, publication year, and number of articles). Thus, the statistical influence of the factors shaping the normalized citation count was measured, and the validity of the approach was tested. The research categories included evacuation and emergency management; fire detection and early warning systems; fire dynamics and spread prediction; fire load and material risk analysis; intelligent systems and cybersecurity; and fire prediction and risk assessment. Convolutional neural networks, artificial neural networks, support vector machines, deep neural networks, you only look once (YOLO), deep learning, and decision trees were the prominent machine learning categories. As a result, a detailed literature map is presented that defines the academic publication profile of the research area, determines research fronts, detects emerging trends, and reveals sub-themes.
(This article belongs to the Section Building Structures)
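The normalized citation count is described only conceptually; one common normalization (each paper's citations divided by the mean citations of its field-year group) can be sketched as follows. The scheme and the records are assumptions, not the author's exact formula or data.

```python
import pandas as pd

# Synthetic publication records; the normalization below is one common
# scheme (citations / mean citations of same field and year), assumed here.
pubs = pd.DataFrame({
    "field": ["fire detection", "fire detection", "evacuation", "evacuation"],
    "year":  [2019, 2019, 2021, 2021],
    "cites": [40, 10, 6, 2],
})
pubs["norm_cites"] = (
    pubs["cites"] / pubs.groupby(["field", "year"])["cites"].transform("mean")
)
print(pubs)  # a norm_cites of 1.0 means "average for its field and year"
```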

19 pages, 2175 KiB  
Article
A Rule-Based Method for Enhancing Burst Tolerance in Stateful Microservices
by Kęstutis Pakrijauskas and Dalius Mažeika
Electronics 2025, 14(14), 2752; https://doi.org/10.3390/electronics14142752 - 8 Jul 2025
Viewed by 273
Abstract
Microservice architecture enables the development of flexible, loosely coupled applications that support elasticity and adaptability. As data availability is critical in any application, maintaining consistent access becomes a top priority. However, this introduces complexity, particularly for stateful microservices, which are slower to adapt to sudden changes and can suffer degraded data availability. Resource overprovisioning may prepare systems for peak loads, but it is an inefficient solution. Similarly, dynamic scaling based on machine learning can underperform due to insufficient training data or inaccurate prediction methods. This paper proposes a rule-based method that combines write-scaling and load balancing to distribute burst workloads across multiple stateful microservice nodes while also vertically scaling a single node to meet rising demand. This approach reduces failure rates and extends operational time under burst conditions, effectively preserving the application's availability by providing more time for scaling. An experiment was performed to validate the proposed method of burst tolerance by workload distribution. The results showed that the proposed method enables a stateful microservice to sustain a burst load for nearly twice as long while further reducing the failure rate.
(This article belongs to the Special Issue New Advances in Cloud Computing and Its Latest Applications)
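The paper's actual rules are not given in the abstract; a minimal sketch of how a rule-based scaler combining write-scaling (scale out) with vertical scaling (scale up) might be wired is shown below. The thresholds, node model, and actions are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu: float   # current CPU utilization, 0..1
    vcpus: int   # currently allocated vCPUs

# Hypothetical thresholds; the paper's actual rules are not reproduced here.
SCALE_OUT_CPU = 0.80   # add a write replica above this utilization
SCALE_UP_CPU = 0.90    # also grow the primary vertically above this

def apply_rules(primary: Node, replicas: list[Node]) -> list[str]:
    actions = []
    if primary.cpu > SCALE_OUT_CPU:
        replica = Node(f"replica-{len(replicas) + 1}", cpu=0.0, vcpus=primary.vcpus)
        replicas.append(replica)   # write-scaling: spread burst writes
        actions.append(f"scale out: added {replica.name}")
    if primary.cpu > SCALE_UP_CPU:
        primary.vcpus *= 2         # vertical scaling buys time to rebalance
        actions.append(f"scale up: primary now {primary.vcpus} vCPUs")
    return actions

primary = Node("primary", cpu=0.93, vcpus=4)
for action in apply_rules(primary, replicas=[]):
    print(action)
```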

37 pages, 1029 KiB  
Article
Autonomous Reinforcement Learning for Intelligent and Sustainable Autonomous Microgrid Energy Management
by Iacovos Ioannou, Saher Javaid, Yasuo Tan and Vasos Vassiliou
Electronics 2025, 14(13), 2691; https://doi.org/10.3390/electronics14132691 - 3 Jul 2025
Viewed by 394
Abstract
Effective energy management in microgrids is essential for integrating renewable energy sources and maintaining operational stability. Machine learning (ML) techniques offer significant potential for optimizing microgrid performance. This study provides a comprehensive comparative performance evaluation of four ML-based control strategies: deep Q-networks (DQNs), proximal policy optimization (PPO), Q-learning, and advantage actor–critic (A2C). These strategies were rigorously tested using simulation data from a representative islanded microgrid model, with metrics evaluated across diverse seasonal conditions (autumn, spring, summer, winter). Key performance indicators included overall episodic reward, unmet load, excess generation, energy storage system (ESS) state-of-charge (SoC) imbalance, ESS utilization, and computational runtime. Results from the simulation indicate that the DQN-based agent consistently achieved superior performance across all evaluated seasons, effectively balancing economic rewards, reliability, and battery health while maintaining competitive computational runtimes. Specifically, DQN delivered near-optimal rewards by significantly reducing unmet load, minimizing excess renewable energy curtailment, and virtually eliminating ESS SoC imbalance, thereby prolonging battery life. Although the tabular Q-learning method showed the lowest computational latency, it was constrained by limited adaptability in more complex scenarios. PPO and A2C, while offering robust performance, incurred higher computational costs without additional performance advantages over DQN. This evaluation clearly demonstrates the capability and adaptability of the DQN approach for intelligent and autonomous microgrid management, providing valuable insights into the relative advantages and limitations of various ML strategies in complex energy management scenarios.
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)
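For readers unfamiliar with the winning method, a minimal DQN temporal-difference update in PyTorch is sketched below. The state layout, action set, and network sizes are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Assumed state: [load_kw, pv_kw, ess_soc, hour]; actions: charge/idle/discharge.
STATE_DIM, N_ACTIONS, GAMMA = 4, 3, 0.99

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(s, a, r, s_next):
    """One temporal-difference step on a batch of transitions."""
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)       # Q(s, a)
    with torch.no_grad():
        q_target = r + GAMMA * target_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q, q_target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Synthetic batch of 32 transitions
s, s_next = torch.randn(32, STATE_DIM), torch.randn(32, STATE_DIM)
a, r = torch.randint(0, N_ACTIONS, (32,)), torch.randn(32)
print("TD loss:", td_update(s, a, r, s_next))
```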

19 pages, 1507 KiB  
Article
Fog Computing Architecture for Load Balancing in Parallel Production with a Distributed MES
by William Oñate and Ricardo Sanz
Appl. Sci. 2025, 15(13), 7438; https://doi.org/10.3390/app15137438 - 2 Jul 2025
Viewed by 205
Abstract
The technological growth in the automation of manufacturing processes, as seen in Industry 4.0, is characterized by constant revolution and evolution in small- and medium-sized factories. As basic and advanced technologies from the pillars of Industry 4.0 are gradually incorporated into their value chains, these factories can achieve adaptive technological transformation. This article presents a practical solution for companies seeking to evolve their production processes while expanding their manufacturing capacity. Starting from a base architecture with Industry 4.0 features, specific tools are integrated to facilitate the duplication of installed capacity: a manufacturing execution system (MES) is developed for each production line, together with a fog computing node that optimizes the load balance of order requests coming from the cloud and acts as an intermediary between the MESs and the cloud. In addition, legacy machine learning (ML) inference acceleration modules were integrated into the single-board computers of the MESs to improve workflow across the new architecture. These improvements and integrations gave the value chain of the expanded architecture lower latency, greater scalability, optimized resource utilization, and improved resilience to network service failures compared with the initial one.
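The fog node's load-balancing role can be pictured as least-loaded dispatch of cloud orders across the parallel MES lines. Line names and the unit-load model below are assumptions, not the authors' implementation.

```python
import heapq

def dispatch(orders: list[str], lines: list[str]) -> dict[str, list[str]]:
    """Assign each incoming order to the production line with the least
    pending work (each order counted as one unit of load, for simplicity)."""
    assignment = {line: [] for line in lines}
    heap = [(0, line) for line in lines]   # (pending work, line)
    heapq.heapify(heap)
    for order in orders:
        load, line = heapq.heappop(heap)   # least-loaded line first
        assignment[line].append(order)
        heapq.heappush(heap, (load + 1, line))
    return assignment

print(dispatch([f"order-{i}" for i in range(7)], ["MES-A", "MES-B"]))
```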

16 pages, 3186 KiB  
Article
AI-Driven Framework for Secure and Efficient Load Management in Multi-Station EV Charging Networks
by Md Sabbir Hossen, Md Tanjil Sarker, Marran Al Qwaid, Gobbi Ramasamy and Ngu Eng Eng
World Electr. Veh. J. 2025, 16(7), 370; https://doi.org/10.3390/wevj16070370 - 2 Jul 2025
Viewed by 454
Abstract
This research introduces a comprehensive AI-driven framework for secure and efficient load management in multi-station electric vehicle (EV) charging networks, responding to the increasing demand and operational difficulties associated with widespread EV adoption. The suggested architecture has three main parts: a Smart Load Balancer (SLB), an AI-driven intrusion detection system (AIDS), and a Real-Time Analytics Engine (RAE). These parts use advanced machine learning methods such as Support Vector Machines (SVMs), autoencoders, and reinforcement learning (RL) to make the system more flexible, secure, and efficient. The framework uses federated learning (FL) to protect data privacy and to make decisions in a decentralized way, which lowers the risks associated with centralizing data. According to simulation results, the framework makes load distribution 23.5% more efficient, cuts average wait time by 17.8%, and predicts station-level demand with 94.2% accuracy. The AI-based intrusion detection component achieves precision, recall, and F1-scores all above 97%, outperforming standard methods. The study also identifies important gaps in the current literature and suggests new areas for research, such as using graph neural networks (GNNs) and quantum machine learning to make EV charging infrastructures even more scalable, resilient, and intelligent.
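Of the three components, the AIDS is the most self-contained to illustrate: an autoencoder trained only on benign telemetry flags inputs with high reconstruction error. The feature count, architecture, and 3-sigma threshold below are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

N_FEATURES = 8   # hypothetical charging-station telemetry features
ae = nn.Sequential(
    nn.Linear(N_FEATURES, 4), nn.ReLU(),   # encoder
    nn.Linear(4, N_FEATURES),              # decoder
)
optimizer = torch.optim.Adam(ae.parameters(), lr=1e-3)

normal_traffic = torch.randn(256, N_FEATURES)   # stand-in for benign data
for _ in range(200):                            # train on benign data only
    loss = nn.functional.mse_loss(ae(normal_traffic), normal_traffic)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

with torch.no_grad():
    errors = ((ae(normal_traffic) - normal_traffic) ** 2).mean(dim=1)
threshold = errors.mean() + 3 * errors.std()    # flag outliers beyond 3 sigma

sample = torch.randn(1, N_FEATURES) * 5         # anomalously scaled input
with torch.no_grad():
    err = ((ae(sample) - sample) ** 2).mean()
print("intrusion suspected:", bool(err > threshold))
```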

18 pages, 2021 KiB  
Article
Analysis of Anchoring Muscles for Pipe Crawling Robots
by Frank Cianciarulo, Jacek Garbulinski, Jonathan Chambers, Thomas Pillsbury, Norman Wereley, Andrew Cross and Deepak Trivedi
Actuators 2025, 14(7), 331; https://doi.org/10.3390/act14070331 - 2 Jul 2025
Viewed by 263
Abstract
Pneumatic artificial muscles (PAMs) consist of an elastomeric bladder wrapped in a Kevlar braid. When inflated, PAMs expand radially and contract axially, producing large axial forces. PAMs are often utilized for their high specific work and specific power, as well as their ability to produce large axial displacements. Although the axial behavior of PAMs is well studied, the radial behavior has remained underutilized and poorly understood. Modeling was performed using a force balance approach to capture the effects that bladder strain and applied axial load have on the anchoring force. Radial expansion testing was performed to validate the model. Anchoring force was recorded by force transducers attached to sections of aluminum pipe in an MTS servo-hydraulic testing machine, and the test data were compared to the predicted anchoring force. Radial expansion in large-diameter (over 50.8 mm) PAMs was then used in worm-like robots to create anchoring forces that allow for a peristaltic wave, which produces locomotion through acrylic pipes. By radially expanding, the PAM presses itself into the pipe, creating an anchor point. The previously anchored PAM then deflates, which propels the robot forward. Modeling of the radial expansion forces and anchoring was necessary to determine the pressurization required for proper anchoring before slipping occurs under the combined robot and payload weight.
(This article belongs to the Section Actuators for Robotics)
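The slip criterion the model serves can be illustrated with a back-of-envelope friction balance: the anchor holds while friction capacity exceeds the supported weight. Every constant below is an assumed, illustrative value, not data from the paper.

```python
import math

MU = 0.6                 # assumed friction coefficient, bladder on pipe wall
PIPE_DIAMETER = 0.1524   # m (6-inch pipe)
CONTACT_LENGTH = 0.15    # m of PAM in contact with the pipe wall
CONTACT_PRESSURE = 50e3  # Pa exerted radially by the inflated PAM

# Normal force = contact pressure integrated over the contact area
contact_area = math.pi * PIPE_DIAMETER * CONTACT_LENGTH
friction_capacity = MU * CONTACT_PRESSURE * contact_area   # N

robot_weight = 9.81 * 4.0   # N, robot plus payload mass of 4 kg (assumed)
print(f"friction capacity: {friction_capacity:.0f} N, load: {robot_weight:.0f} N")
print("anchor holds" if friction_capacity > robot_weight else "anchor slips")
```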

28 pages, 2037 KiB  
Article
Proposed Model to Minimize Machining Time by Chip Removal Under Structural Constraint Taking into Consideration Machine Power, Surface Finish, and Cutting Speed by Using Sorting Algorithms
by Abraham Manilla-García, Néstor F. Guerrero-Rodriguez and Ivan Rivas-Cambero
Appl. Sci. 2025, 15(13), 7401; https://doi.org/10.3390/app15137401 - 1 Jul 2025
Viewed by 297
Abstract
This article proposes a model to estimate the optimal cutting speed and depth of cut for chip-removal machining during the turning operation, considering the structural integrity of the workpiece to be machined. The structural integrity model takes into account the nose radius of the cutting tool as a measure of the required workpiece roughness, the electrical power capacity delivered by the machine tool motor as a load-limiting factor for the process, the geometry of the desired workpiece, and the physical machining parameters given by cutting tool manufacturers. Based on these criteria, an estimation algorithm is proposed that integrates these parameters and searches for the optimal cutting depth and cutting speed satisfying the structural integrity criterion together with the minimum machining time criterion, establishing a balance between process reliability and minimization of machining time. The proposed model is innovative in that it presents a new methodology for determining the depth of cut and machining speed under the criterion of preserving, as far as possible, the structural integrity of the workpiece: the depth of cut and spindle speed estimated under the proposed methodology guarantee that the workpiece will not suffer structural damage from the cutting forces involved, minimizing loading effects in areas of stress concentration and thereby reinforcing the concept of process reliability. The model provides a new theoretical method for technologists involved in machining process calculations, offering a theoretical basis for their choices of depth of cut and cutting speed.
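The search the article describes can be sketched as a constrained scan over candidate (depth, speed) pairs: discard pairs that violate the power or roughness limits, then keep the pair minimizing machining time. The power and roughness formulas below are textbook approximations standing in for the authors' structural-integrity model, and all constants are assumed.

```python
import itertools

K_C = 2.5e9      # specific cutting energy, J/m^3 (steel, rough value)
P_MAX = 7.5e3    # motor power limit, W
R_NOSE = 0.8e-3  # tool nose radius, m
RA_MAX = 1.6e-6  # required surface roughness Ra, m
FEED = 0.2e-3    # feed, m/rev (held fixed for simplicity)
VOLUME = 1.0e-4  # material to remove, m^3

ra = FEED**2 / (32 * R_NOSE)   # textbook Ra ~ f^2 / (32 r), feed-dependent only
best = None
for depth_mm, v_m_min in itertools.product(range(1, 6), range(50, 401, 25)):
    depth, v = depth_mm * 1e-3, v_m_min / 60.0   # m, m/s
    mrr = v * FEED * depth                       # material removal rate, m^3/s
    power = K_C * mrr
    if power > P_MAX or ra > RA_MAX:
        continue                                 # violates a constraint
    time_s = VOLUME / mrr
    if best is None or time_s < best[0]:
        best = (time_s, depth_mm, v_m_min)

print(f"min time {best[0]:.1f} s at depth {best[1]} mm, speed {best[2]} m/min")
```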

21 pages, 1476 KiB  
Article
AI-Driven Handover Management and Load Balancing Optimization in Ultra-Dense 5G/6G Cellular Networks
by Chaima Chabira, Ibraheem Shayea, Gulsaya Nurzhaubayeva, Laura Aldasheva, Didar Yedilkhan and Saule Amanzholova
Technologies 2025, 13(7), 276; https://doi.org/10.3390/technologies13070276 - 1 Jul 2025
Cited by 1 | Viewed by 1029
Abstract
This paper presents a comprehensive review of handover management and load balancing optimization (LBO) in ultra-dense 5G and emerging 6G cellular networks. With the increasing deployment of small cells and the rapid growth of data traffic, these networks face significant challenges in ensuring seamless mobility and efficient resource allocation. Traditional handover and load balancing techniques, primarily designed for 4G systems, are no longer sufficient to address the complexity of heterogeneous network environments that incorporate millimeter-wave communication, Internet of Things (IoT) devices, and unmanned aerial vehicles (UAVs). The review focuses on how recent advances in artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), are being applied to improve predictive handover decisions and enable real-time, adaptive load distribution. AI-driven solutions can significantly reduce handover failures, latency, and network congestion, while improving overall user experience and quality of service (QoS). This paper surveys state-of-the-art research on these techniques, categorizing them according to their application domains and evaluating their performance benefits and limitations. Furthermore, the paper discusses the integration of intelligent handover and load balancing methods in smart city scenarios, where ultra-dense networks must support diverse services with high reliability and low latency. Key research gaps are also identified, including the need for standardized datasets, energy-efficient AI models, and context-aware mobility strategies. Overall, this review aims to guide future research and development in designing robust, AI-assisted mobility and resource management frameworks for next-generation wireless systems.
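As a toy version of the predictive-handover idea the review surveys, a classifier can score whether handing a user over to a candidate cell is likely to succeed, from signal strength and cell load. The features, the label rule, and all numbers below are synthetic illustrations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Features per candidate handover: [serving_rsrp_dbm, target_rsrp_dbm, target_load]
X = np.column_stack([
    rng.uniform(-120, -70, 2000),
    rng.uniform(-120, -70, 2000),
    rng.uniform(0, 1, 2000),
])
# Synthetic rule: handover succeeds when the target is clearly stronger
# (3 dB margin) and not overloaded.
y = ((X[:, 1] > X[:, 0] + 3) & (X[:, 2] < 0.8)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
ue = np.array([[-100.0, -92.0, 0.45]])   # one user's current measurement
print("predicted handover success prob:", model.predict_proba(ue)[0, 1].round(3))
```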

22 pages, 1000 KiB  
Article
A Transfer-Learning-Based Approach to Symmetry-Preserving Dynamic Equivalent Modeling of Large Power Systems with Small Variations in Operating Conditions
by Lahiru Aththanayake, Devinder Kaur, Shama Naz Islam, Ameen Gargoom and Nasser Hosseinzadeh
Symmetry 2025, 17(7), 1023; https://doi.org/10.3390/sym17071023 - 29 Jun 2025
Viewed by 329
Abstract
Robust dynamic equivalents of large power networks are essential for fast and reliable stability analysis of bulk power systems, because the dimensionality of modern power systems raises convergence issues in stability-analysis programs. However, even with modern computational power, it is challenging to find reduced-order models for power systems due to the following factors: the tedious mathematical analysis involved in classical reduction techniques requires large amounts of computational power; inadequate information sharing between geographical areas prohibits the execution of model-dependent reduction techniques; and frequent fluctuations in the operating conditions (OPs) of power systems necessitate updates to reduced models. This paper focuses on a measurement-based approach that uses a deep artificial neural network (DNN) to estimate the dynamics of an external system (ES) of a power system, enabling stability analysis of a study system (SS). This DNN technique requires only boundary measurements between the SS and the ES. However, machine learning-based techniques such as this DNN are known for their extensive training requirements, so for power systems that undergo continuous fluctuations in operating conditions due to renewable energy sources, its applications are limited. To address this issue, a Deep Transfer Learning (DTL)-based technique is proposed in this paper. This approach accounts for variations in the OPs, such as time-to-time variations in loads and intermittent power generation from wind and solar energy sources. The proposed technique adjusts the parameters of a pretrained DNN model to a new OP, leveraging symmetry in the balanced adaptation of model layers to maintain consistent dynamics across operating conditions. The experimental results were obtained by representing the Queensland (QLD) system in the simplified Australian 14 generator (AU14G) model as the SS and the rest of AU14G as the ES in five scenarios that represent changes to the OP caused by variations in loads and power generation.
(This article belongs to the Special Issue Symmetry Studies and Application in Power System Stability)
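The adaptation step can be pictured as standard fine-tuning: freeze the early layers of the pretrained network and retrain the head on measurements from the new operating point. Layer sizes, the freezing split, and the checkpoint name are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

dnn = nn.Sequential(
    nn.Linear(20, 128), nn.ReLU(),   # boundary-measurement features in
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),              # ES dynamic response out
)
# dnn.load_state_dict(torch.load("pretrained_op.pt"))  # hypothetical checkpoint

for layer in list(dnn.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad = False       # freeze early layers; keep learned dynamics

optimizer = torch.optim.Adam(
    [p for p in dnn.parameters() if p.requires_grad], lr=1e-4)

x_new_op = torch.randn(64, 20)        # measurements at the new operating point
y_new_op = torch.randn(64, 10)
for _ in range(100):                  # brief fine-tuning pass
    loss = nn.functional.mse_loss(dnn(x_new_op), y_new_op)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
print("fine-tune loss:", loss.item())
```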

24 pages, 649 KiB  
Systematic Review
Algorithms for Load Balancing in Next-Generation Mobile Networks: A Systematic Literature Review
by Juan Ochoa-Aldeán, Carlos Silva-Cárdenas, Renato Torres, Jorge Ivan Gonzalez and Sergio Fortes
Future Internet 2025, 17(7), 290; https://doi.org/10.3390/fi17070290 - 28 Jun 2025
Viewed by 392
Abstract
Background: Machine learning methods are increasingly being used in mobile network optimization systems, especially in next-generation mobile networks. The need for enhanced radio resource allocation schemes, improved user mobility, and increased throughput, driven by rising demand for data, has necessitated the development of diverse algorithms that optimize output values based on varied input parameters. In this context, we identify the main topics relating cellular networks to machine learning algorithms in order to pinpoint the areas where parameter optimization is crucial. Furthermore, the wide range of available algorithms often leads to confusion and disorder during classification. It is also important to note that next-generation networks are expected to require reduced latency, especially for sensitive applications such as Industry 4.0. Research Question: We analyzed the existing literature on mobile network load balancing methods to identify systems that operate using semi-automatic, automatic, and hybrid algorithms. Our research question is as follows: what automatic, semi-automatic, and hybrid load balancing algorithms can be applied to next-generation mobile networks? Contribution: This paper presents a comprehensive analysis and classification of the algorithms used in this area of study. To identify those most suitable for load balancing optimization in next-generation mobile networks, we organize the classification into three categories (automatic, semi-automatic, and hybrid), giving a clear and concise view of both the theoretical and field studies that relate these three types of algorithms to next-generation networks. Figures and tables illustrate the number of algorithms classified by type, and the most important articles on this topic from five scientific databases are summarized. Methodology: We employed the PRISMA method to conduct a systematic literature review of the aforementioned study areas. Findings: The results show that, despite the scarce literature on the subject, the use of load balancing algorithms significantly influences the deployment and performance of next-generation mobile networks. This study highlights the critical role that algorithm selection should play in 5G network optimization, in particular for latency reduction, dynamic resource allocation, and scalability in dense user environments, which are key challenges for applications such as industrial automation and real-time communications. Our classification framework provides a basis for operators to evaluate algorithmic trade-offs in scenarios such as network fragmentation or edge computing. To fill the existing gaps, we propose further research on AI-driven hybrid models that integrate real-time data analytics with predictive algorithms, enabling proactive load management in ultra-reliable 5G/6G architectures.
(This article belongs to the Section Smart System Infrastructure and Applications)

19 pages, 4217 KiB  
Review
Optimization of Rock-Cutting Tools: Improvements in Structural Design and Process Efficiency
by Yuecao Cao, Qiang Zhang, Shucheng Zhang, Ying Tian, Xiangwei Dong, Xiaojun Song and Dongxiang Wang
Computation 2025, 13(7), 152; https://doi.org/10.3390/computation13070152 - 23 Jun 2025
Viewed by 525
Abstract
Rock-breaking cutters are critical components in tunneling, mining, and drilling operations, where efficiency, durability, and energy consumption are paramount. Traditional cutter designs and empirical process optimization methods often fail to address the dynamic interaction between heterogeneous rock masses and tool structures, leading to premature wear, high specific energy, and suboptimal performance. Topology optimization, as an advanced computational design method, offers transformative potential for lightweight, high-strength cutter structures and adaptive cutting process control. This review systematically examines recent advancements in topology-optimized cutter design and its integration with rock-cutting mechanics. The structural innovations in cutter geometry and materials are analyzed, emphasizing solutions for stress distribution, wear/fatigue resistance, and dynamic load adaptation. The numerical methods for modeling rock–tool interactions are introduced, including discrete element method (DEM) simulations, smoothed particle hydrodynamics (SPH) methods, and machine learning (ML)-enhanced predictive models. The cutting process optimization strategies that leverage topology optimization to balance objectives such as energy efficiency, chip formation control, and tool lifespan are evaluated.
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)

28 pages, 2184 KiB  
Article
Advancing Sustainable Road Construction with Multiple Regression Analysis, Regression Tree Models, and Case-Based Reasoning for Environmental Load and Cost Estimation
by Joon-Soo Kim
Buildings 2025, 15(12), 2083; https://doi.org/10.3390/buildings15122083 - 17 Jun 2025
Viewed by 325
Abstract
The construction industry, particularly in road projects, faces pressing challenges related to environmental sustainability and cost management. As road construction contributes significantly to environmental degradation and demands large-scale investments, there is an urgent need for innovative solutions that balance environmental impact with economic feasibility. Despite advancements in building technologies and energy-efficient materials, accurate and reliable predictions for environmental load and construction costs during the planning and design stages remain limited due to insufficient data systems and complex project variables. This study explores the application of machine-learning techniques to predict environmental loads and construction costs in road projects, using a dataset of 100 national road construction cases in the Republic of Korea. The research employs multiple regression analysis, regression tree models, and case-based reasoning (CBR) to estimate these critical parameters at both the planning and design stages. A novel aspect of this research lies in its comparative analysis of different machine-learning models to address the challenge of limited and non-ideal data environments, offering valuable insights for enhancing predictive accuracy despite data scarcity. The results reveal that while regression models perform better in the design stage, achieving error rates of 12% for environmental load estimation and 23% for construction costs, the case-based reasoning model outperforms others in the planning stage, with a 15.9% average error rate for environmental load and 19.9% for construction costs. These findings highlight the potential of machine-learning techniques to drive environmentally conscious and economically sound decision-making in construction, despite data limitations. However, the study also identifies the need for larger, more diverse datasets and better integration of qualitative data to improve model accuracy, offering a roadmap for future research in sustainable construction management.
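The planning-stage CBR estimator can be pictured as weighted nearest-neighbor retrieval over past projects: find the k most similar cases and average their known outcomes. The attributes, figures, and choice of k below are synthetic illustrations, not the study's case base.

```python
import numpy as np

cases = np.array([
    # [length_km, lanes, bridge_ratio]
    [12.0, 4, 0.05],
    [ 8.5, 2, 0.10],
    [20.0, 4, 0.02],
    [15.0, 6, 0.08],
])
loads = np.array([30_000, 14_000, 41_000, 52_000])  # env load, t CO2 (illustrative)

def cbr_estimate(query: np.ndarray, k: int = 2) -> float:
    """Distance-weighted average of the k nearest past cases."""
    scaled = (cases - cases.mean(0)) / cases.std(0)   # normalize attributes
    q = (query - cases.mean(0)) / cases.std(0)
    dist = np.linalg.norm(scaled - q, axis=1)
    nearest = np.argsort(dist)[:k]
    weights = 1.0 / (dist[nearest] + 1e-9)            # closer cases weigh more
    return float(np.average(loads[nearest], weights=weights))

print(f"estimated load: {cbr_estimate(np.array([14.0, 4, 0.06])):,.0f} t CO2")
```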

16 pages, 2441 KiB  
Article
Midspan Deflection Prediction of Long-Span Cable-Stayed Bridge Based on DIWPSO-SVM Algorithm
by Lilin Li, Qing He, Hua Wang and Wensheng Wang
Appl. Sci. 2025, 15(10), 5581; https://doi.org/10.3390/app15105581 - 16 May 2025
Viewed by 310
Abstract
With the increasing emphasis on the safety and longevity of large-span cable-stayed bridges, the accurate prediction of midspan deflection has become a critical aspect of structural health monitoring (SHM). This study proposes a novel hybrid model, DIWPSO-SVM, which integrates dynamic inertia weight particle swarm optimization (DIWPSO) with support vector machines (SVMs) to enhance the prediction accuracy of midspan deflection. The model incorporates wavelet transform to decompose deflection signals into temperature and vehicle load effects, allowing for a more detailed analysis of their individual impacts. The DIWPSO algorithm dynamically adjusts the inertia weight to balance global exploration and local exploitation, optimizing SVM parameters for improved performance. The proposed model was validated using real-world data from a long-span cable-stayed bridge, demonstrating superior prediction accuracy compared to traditional SVM and PSO-SVM models. The DIWPSO-SVM model achieved an average prediction error of 1.43 mm and a root-mean-square error (RMSE) of 2.05, significantly outperforming the original SVM model, which had an average error of 5.29 mm and an RMSE of 5.62. These results highlight the effectiveness of the DIWPSO-SVM model in providing accurate and reliable midspan deflection predictions, offering a robust tool for bridge health monitoring and maintenance decision-making.
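The dynamic-inertia-weight idea can be sketched as a PSO loop whose inertia decays over iterations, shifting from global exploration toward local exploitation while tuning a two-parameter surrogate for the SVM hyperparameters. The linear schedule and all constants are assumptions, not the paper's exact weight rule.

```python
import numpy as np

rng = np.random.default_rng(42)
N, DIM, ITERS = 20, 2, 50
W_MAX, W_MIN, C1, C2 = 0.9, 0.4, 2.0, 2.0

def fitness(x):   # stand-in for SVM cross-validation error over (C, gamma)
    return np.sum((x - np.array([1.0, -2.0])) ** 2, axis=1)

pos = rng.uniform(-5, 5, (N, DIM))
vel = np.zeros((N, DIM))
pbest, pbest_val = pos.copy(), fitness(pos)
gbest = pbest[pbest_val.argmin()]

for t in range(ITERS):
    w = W_MAX - (W_MAX - W_MIN) * t / ITERS   # inertia decays: explore -> exploit
    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    vel = w * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    val = fitness(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]

print("best (C, gamma) surrogate:", gbest.round(3))
```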
