Search Results (182)

Search Parameters:
Keywords = analytical redundancy

20 pages, 1753 KiB  
Article
Hybrid Cloud-Based Information and Control System Using LSTM-DNN Neural Networks for Optimization of Metallurgical Production
by Kuldashbay Avazov, Jasur Sevinov, Barnokhon Temerbekova, Gulnora Bekimbetova, Ulugbek Mamanazarov, Akmalbek Abdusalomov and Young Im Cho
Processes 2025, 13(7), 2237; https://doi.org/10.3390/pr13072237 - 13 Jul 2025
Viewed by 676
Abstract
A methodology for detecting systematic errors in sets of equally accurate, uncorrelated, aggregate measurements is proposed and applied within the automatic real-time dispatch control system of a copper concentrator plant (CCP) to refine the technical and economic performance indicators (EPIs) computed by the system. This work addresses and solves the problem of selecting and obtaining reliable measurement data by exploiting the redundant measurements of process streams together with the balance equations linking those streams. This study formulates an approach for integrating cloud technologies, machine learning methods, and forecasting into information control systems (ICSs) via predictive analytics to optimize CCP production processes. A method for combining the hybrid cloud infrastructure with an LSTM-DNN neural network model has been developed, yielding a marked improvement in the EPIs of copper concentration operations. The forecasting accuracy for the key process parameters rose from 75% to 95%. Predictive control reduced energy consumption by 10% through more efficient resource use, while the copper losses to tailings fell by 15–20% thanks to optimized reagent dosing and the stabilization of the flotation process. Equipment failure prediction cut the amount of unplanned downtime by 30%. As a result, the control system became adaptive, automatically correcting the parameters in real time and lessening the reliance on operator decisions. The architectural model of an ICS for metallurgical production based on the hybrid cloud and the LSTM-DNN model was devised to enhance forecasting accuracy and optimize the EPIs of the CCP. The proposed model was experimentally evaluated against alternative neural network architectures (DNN, GRU, Transformer, and Hybrid_NN_TD_AIST). The results demonstrated the superiority of the LSTM-DNN in forecasting accuracy (92.4%), noise robustness (0.89), and a minimal root-mean-square error (RMSE = 0.079). The model shows a strong capability to handle multidimensional, non-stationary time series and to perform adaptive measurement correction in real time. Full article
(This article belongs to the Section AI-Enabled Process Engineering)
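As a rough illustration of the kind of hybrid LSTM-DNN forecaster the abstract describes, the sketch below chains an LSTM encoder into dense layers with tf.keras. The window length, feature count, layer widths and synthetic data are assumptions for illustration only, not the architecture or data reported in the paper.

```python
# Minimal sketch of an LSTM->DNN forecaster for multivariate process data.
# Shapes, layer widths and the random data are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES, N_TARGETS = 24, 8, 3  # hypothetical shapes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),                      # temporal encoder
    tf.keras.layers.Dense(32, activation="relu"),  # dense (DNN) head
    tf.keras.layers.Dense(N_TARGETS),              # next-step process parameters
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Toy data standing in for historical sensor windows and next-step targets.
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(256, N_TARGETS).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```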

19 pages, 1583 KiB  
Article
Modeling, Validation, and Controllability Degradation Analysis of a 2(P-(2PRU–PRPR)-2R) Hybrid Parallel Mechanism Using Co-Simulation
by Qing Gu, Zeqi Wu, Yongquan Li, Huo Tao, Boyu Li and Wen Li
Dynamics 2025, 5(3), 30; https://doi.org/10.3390/dynamics5030030 - 11 Jul 2025
Viewed by 199
Abstract
This work systematically addresses the dual challenges of non-inertial dynamic coupling and kinematic constraint redundancy encountered in dynamic modeling of serial–parallel–serial hybrid robotic mechanisms, and proposes an improved Newton–Euler modeling method with constraint compensation. Taking the Skiing Simulation Platform with 6-DOF as the research mechanism, the inverse kinematic model of the closed-chain mechanism is established through GF set theory, with explicit analytical expressions derived for the motion parameters of limb mass centers. Introducing a principal inertial coordinate system into the dynamics equations, a recursive algorithm incorporating force/moment coupling terms is developed. Numerical simulations reveal a 9.25% periodic deviation in joint moments using conventional methods. Through analysis of the mechanism’s intrinsic properties, it is identified that the lack of angular momentum conservation constraints on the end-effector in non-inertial frames leads to system controllability degradation. Accordingly, a constraint compensation strategy is proposed: establishing linearly independent differential algebraic equations supplemented with momentum/angular momentum balance equations for the end platform. Co-Simulation results demonstrate that the optimized model reduces the maximum relative error of actuator joint moments to 0.98%, and maintains numerical stability across the entire configuration space. The constraint compensation framework provides a universal solution for dynamics modeling of complex closed-chain mechanisms, validated through applications in flight simulators and automotive driving simulators. Full article

26 pages, 33866 KiB  
Article
Three-Dimensional Multitemporal Game Engine Visualizations for Watershed Analysis, Lighting Simulation, and Change Detection in Built Environments
by Heikki Kauhanen, Toni Rantanen, Petri Rönnholm, Osama Bin Shafaat, Kaisa Jaalama, Arttu Julin and Matti Vaaja
ISPRS Int. J. Geo-Inf. 2025, 14(7), 265; https://doi.org/10.3390/ijgi14070265 - 5 Jul 2025
Viewed by 459
Abstract
This study explores the reuse of high-resolution 3D spatial datasets for multiple urban analyses within a game engine environment, aligning with circular economy principles in sustainable urban planning. The work is situated in two residential test areas in Finland, where watershed analysis, lighting simulation, and change detection were conducted using data acquired through drone photogrammetry and terrestrial laser scanning. These datasets were processed and visualized using Unreal Engine 5.5, enabling the interactive, multitemporal exploration of urban phenomena. The results demonstrate how a single photogrammetric dataset—originally captured for visual or structural purposes—can serve a broad range of analytical functions, such as simulating seasonal lighting conditions, modeling stormwater runoff, and visualizing spatial changes over time. The study highlights the importance of capturing data at a resolution that satisfies the most demanding intended use, while allowing simpler analyses to benefit simultaneously. Reflections on game engine capabilities, data quality thresholds, and user interactivity underline the feasibility of integrating such tools into citizen participation, housing company decision making, and urban governance. The findings advocate for a circular data approach in urban planning, reducing redundant fieldwork and supporting sustainable data practices through multi-purpose digital twins and spatial simulations. Full article

22 pages, 1330 KiB  
Article
Analysis of Age of Information in CSMA Network with Correlated Sources
by Long Liang and Siyuan Zhou
Electronics 2025, 14(13), 2688; https://doi.org/10.3390/electronics14132688 - 2 Jul 2025
Viewed by 274
Abstract
With the growing deployment of latency-sensitive applications, the Age of Information (AoI) has emerged as a key performance metric for the evaluation of data freshness in networked systems. While prior studies have extensively explored the AoI under centralized scheduling or random-access protocols such as carrier sense multiple access (CSMA) and ALOHA, most assume that sources generate independent information. However, in practical scenarios such as environmental monitoring and visual sensing, information correlation frequently exists among correlated sources, providing new opportunities to enhance network timeliness. In this paper, we propose a novel analytical framework that captures the interplay between CSMA channel contention and spatial information correlation among sources. By leveraging the stochastic hybrid systems (SHS) methodology, we jointly model random backoff behavior, medium access collisions, and correlated updates in a scalable and mathematically tractable manner. We derive closed-form expressions for the average AoI under general correlation structures and further propose a lightweight estimation approach for scenarios where the correlation matrix is partially known or unknown. To our knowledge, this is the first work that integrates correlation-aware modeling into AoI analysis under distributed CSMA protocols. Extensive simulations confirm the accuracy of the theoretical results and demonstrate that exploiting information redundancy can significantly reduce the AoI, particularly under high node densities and constrained sampling budgets. These findings offer practical guidance for the design of efficient and timely data acquisition strategies in dense or energy-constrained Internet of Things (IoT) networks. Full article
(This article belongs to the Section Networks)
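For reference, the average AoI analyzed in the abstract is the standard time-average of the instantaneous age — a textbook definition, not something specific to this paper's SHS derivation:

\Delta(t) = t - u(t), \qquad \bar{\Delta} = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \Delta(t)\, dt ,

where u(t) is the generation time of the most recently received update at the monitor.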

27 pages, 6102 KiB  
Article
Inverse Kinematics for Robotic Manipulators via Deep Neural Networks: Experiments and Results
by Ana Calzada-Garcia, Juan G. Victores, Francisco J. Naranjo-Campos and Carlos Balaguer
Appl. Sci. 2025, 15(13), 7226; https://doi.org/10.3390/app15137226 - 26 Jun 2025
Viewed by 391
Abstract
This paper explores the application of Deep Neural Networks (DNNs) to solve the Inverse Kinematics (IK) problem in robotic manipulators. The IK problem, crucial for ensuring precision in robotic movements, involves determining joint configurations for a manipulator to reach a desired position or orientation. Traditional methods, such as analytical and numerical approaches, have limitations, especially for redundant manipulators, or involve high computational costs. Recent advances in machine learning, particularly with DNNs, have shown promising results and seem fit for addressing these challenges. This study investigates several DNN architectures, namely Feed-Forward Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), for solving the IK problem, using the TIAGo robotic arm with seven Degrees of Freedom (DOFs). Different training datasets, normalization techniques, and orientation representations are tested, and custom metrics are introduced to evaluate position and orientation errors. The performance of these models is compared, with a focus on curriculum learning to optimize training. The results demonstrate the potential of DNNs to efficiently solve the IK problem while avoiding issues such as singularities, competing with traditional methods in precision and speed. Full article
(This article belongs to the Special Issue Technological Breakthroughs in Automation and Robotics)
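A minimal sketch of the feed-forward (MLP) variant of the idea: a network mapping a 7-D end-effector pose (position plus unit quaternion) to the 7 joint angles of a 7-DOF arm. Layer sizes and the placeholder data are assumptions; in practice the training set would be generated from the arm's forward kinematics.

```python
# Sketch of a feed-forward MLP for inverse kinematics:
# input = end-effector pose (xyz + unit quaternion), output = 7 joint angles.
# Layer sizes and the synthetic data are illustrative assumptions.
import numpy as np
import tensorflow as tf

POSE_DIM, N_JOINTS = 7, 7  # xyz + quaternion -> 7-DOF arm

ik_net = tf.keras.Sequential([
    tf.keras.Input(shape=(POSE_DIM,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(N_JOINTS),  # predicted joint configuration (radians)
])
ik_net.compile(optimizer="adam", loss="mse")

# In practice the dataset comes from forward kinematics: sample joint vectors,
# compute the resulting poses, and learn the inverse mapping pose -> joints.
q = np.random.uniform(-np.pi, np.pi, size=(1024, N_JOINTS)).astype("float32")
poses = np.random.rand(1024, POSE_DIM).astype("float32")  # placeholder for FK(q)
ik_net.fit(poses, q, epochs=2, batch_size=64, verbose=0)
```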

14 pages, 3649 KiB  
Article
Minimum Data Spherical Spiral NF/FF Transformations for Offset-Mounted Elongated AUTs: An Experimental Validation
by Francesco D’Agostino, Flaminio Ferrara, Claudio Gennarelli, Rocco Guerriero, Massimo Migliozzi, Luigi Pascarella and Giovanni Riccio
Appl. Sci. 2025, 15(13), 7202; https://doi.org/10.3390/app15137202 - 26 Jun 2025
Viewed by 214
Abstract
This paper concerns the experimental validation of optimized near-field (NF) spherical spiral scannings employing a minimum number of samples, when an offset-mounted elongated antenna under test (AUT), i.e., with its center shifted with respect to that of the measurement sphere, is considered. In order to perform the standard NF/far-field transformation (NF/FFT) technique, a non-centered AUT would generally require the collection of a significantly increased amount of voltage data if compared to the onset scenario. This issue is addressed here by using the non-redundant (NR) sampling representations of electromagnetic (EM) fields. These representations, by leveraging the analytical properties of the EM field, allow one to perform the NR NF/FFTs for offset-mounted AUTs by using only a minimum number of (offset acquired) samples, equal to that required by the NR approaches for the onset case (over 85% fewer samples compared to the standard NF spherical scanning). In particular, these NR NF/FFTs are obtained by modeling the AUT with a prolate spheroid or a rounded cylinder and their effectiveness is fully assessed by the reported experimental results. Full article

26 pages, 11251 KiB  
Article
Design and Testing of a Four-Arm Multi-Joint Apple Harvesting Robot Based on Singularity Analysis
by Xiaojie Lei, Jizhan Liu, Houkang Jiang, Baocheng Xu, Yucheng Jin and Jianan Gao
Agronomy 2025, 15(6), 1446; https://doi.org/10.3390/agronomy15061446 - 13 Jun 2025
Viewed by 524
Abstract
The use of multi-joint arms in a high-spindle environment can solve complex problems, but the singularity problem associated with the serial structure of the manipulator is prominent. Therefore, based on the general mathematical model of fruit spatial distribution in high-spindle apple orchards, this study proposes two harvesting system architecture schemes that can meet the constraints of fruit spatial distribution and reduce the singularity of harvesting robot operation: a four-arm dual-module independent moving scheme (Scheme A) and a four-arm single-module parallel moving scheme (Scheme B). Based on the link-joint method, the analytical expression of the singular configuration of the redundant-degree-of-freedom arm group system under the two schemes is obtained. Then, an inverse kinematics solution method for the redundant arm group and a singularity-avoidance picking trajectory planning strategy are proposed to identify and resolve singular configurations in the complex working environment of the high-spindle orchard. The singularity rate of Scheme A in the simulation environment is 17.098%, while that of Scheme B is only 6.74%. In the field experiment, the singularity rate of Scheme A is 26.18%, while that of Scheme B is 13.22%. The success rates of Schemes A and B are 80.49% and 72.33%, respectively. Experimental comparison and analysis show that Scheme B handles singularity problems better, although its success rate still needs to be improved in future research. This paper can provide a reference for resolving singularity problems in the complex working environment of high-spindle orchards. Full article
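The abstract's central quantity is the singular configuration of a redundant serial arm. A common numerical proxy, shown in the hedged sketch below, is the Yoshikawa manipulability w = sqrt(det(J J^T)), which drops toward zero near a singularity; the Jacobian here is a random placeholder rather than the paper's arm model.

```python
# Generic singularity check for a serial arm via the manipulability measure
# w = sqrt(det(J @ J.T)). The Jacobian below is a random placeholder; a real
# implementation would compute J from the arm's kinematic model.
import numpy as np

def manipulability(J: np.ndarray) -> float:
    """Yoshikawa manipulability of a 6xN task Jacobian."""
    return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))

def near_singularity(J: np.ndarray, threshold: float = 1e-3) -> bool:
    return manipulability(J) < threshold

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))      # placeholder 6x7 Jacobian of a 7-DOF arm
print(manipulability(J), near_singularity(J))
```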

19 pages, 1997 KiB  
Article
Highway-Transportation-Asset Criticality Estimation Leveraging Stakeholder Input Through an Analytical Hierarchy Process (AHP)
by Kwadwo Amankwah-Nkyi, Sarah Hernandez and Suman Kumar Mitra
Sustainability 2025, 17(11), 5212; https://doi.org/10.3390/su17115212 - 5 Jun 2025
Viewed by 474
Abstract
Transportation agencies face increasing challenges in identifying and prioritizing which infrastructure assets are most critical to maintain and protect, particularly amid aging networks, limited budgets, and growing threats from climate change and extreme events. However, existing prioritization approaches often lack consistency and fail to adequately incorporate diverse stakeholder perspectives. This study develops a systematic, stakeholder-informed method for ranking transportation assets based on their criticality to the overall transportation system. As a novel approach, we use the analytical hierarchy process (AHP) and present a case study of the applied approach. Six criteria were identified for ranking assets: annual average daily traffic (AADT), redundancy, freight output, roadway classification, Social Vulnerability Index (SoVI), and tourism. Stakeholder input was collected via an AHP-based survey using pairwise comparisons and translated into weighted rankings. Thirty complete responses (13.2% response rate) from experts (i.e., engineers, analysts, planners, etc.) were analyzed, with the resulting ranks from highest to lowest priority being AADT, redundancy, freight output, roadway classification, SoVI, and tourism. Stability analysis confirmed that rankings were consistent with a minimum of 15 responses. The resulting method provides a practical, replicable tool for agencies to perform statewide vulnerability/resiliency assessments ensuring that decision-making reflects a broad range of expert perspectives. Full article
(This article belongs to the Section Sustainable Transportation)
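For readers unfamiliar with AHP, the sketch below shows the core computation: criterion weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio. The 6x6 matrix is invented for illustration and does not reproduce the survey responses analyzed in the paper.

```python
# Core AHP step: derive criterion weights from a pairwise comparison matrix
# via its principal eigenvector and check consistency. The 6x6 matrix below
# is an invented example, not the paper's survey data.
import numpy as np

A = np.array([
    [1,   3,   3,   5,   7,   9],
    [1/3, 1,   2,   3,   5,   7],
    [1/3, 1/2, 1,   2,   3,   5],
    [1/5, 1/3, 1/2, 1,   2,   3],
    [1/7, 1/5, 1/3, 1/2, 1,   2],
    [1/9, 1/7, 1/5, 1/3, 1/2, 1],
])  # order: AADT, redundancy, freight, classification, SoVI, tourism

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                          # normalized criterion weights

n = A.shape[0]
lambda_max = eigvals[k].real
CI = (lambda_max - n) / (n - 1)                # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
CR = CI / RI                                   # generally acceptable if CR < 0.1
print(weights.round(3), round(CR, 3))
```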

24 pages, 5214 KiB  
Article
Assessing Large-Scale Flood Risks: A Multi-Source Data Approach
by Mengyao Wang, Hong Zhu, Jiaqi Yao, Liuru Hu, Haojie Kang and An Qian
Sustainability 2025, 17(11), 5133; https://doi.org/10.3390/su17115133 - 3 Jun 2025
Viewed by 458
Abstract
Flood hazards caused by intense short-term precipitation have led to significant social and economic losses and pose serious threats to human life and property. Accurate disaster risk assessment plays a critical role in verifying disaster statistics and supporting disaster recovery and reconstruction processes. In this study, a novel Large-Scale Flood Risk Assessment Model (LS-FRAM) is proposed, incorporating the dimensions of hazard, exposure, vulnerability, and coping capacity. Multi-source heterogeneous data are utilized for evaluating the flood risks. Soil erosion modeling is incorporated into the assessment framework to better understand the interactions between flood intensity and land surface degradation. An index system comprising 12 secondary indicators is constructed and screened using Pearson correlation analysis to minimize redundancy. Subsequently, the Analytic Hierarchy Process (AHP) is utilized to determine the weights of the primary-level indicators, while the entropy weight method, Fuzzy Analytic Hierarchy Process (FAHP), and an integrated weighting approach are combined to calculate the weights of the secondary-level indicators. This model addresses the complexity of large-scale flood risk assessment and management by incorporating multiple perspectives and leveraging diverse data sources. The experimental results demonstrate that the flood risk assessment model, utilizing multi-source data, achieves an overall accuracy of 88.49%. Specifically, the proportions of areas classified as high and very high flood risk are 54.11% in Henan, 31.74% in Shaanxi, and 18.2% in Shanxi. These results provide valuable scientific support for enhancing flood control, disaster relief capabilities, and risk management in the middle and lower reaches of the Yellow River. Furthermore, they can furnish the necessary data support for post-disaster reconstruction efforts in impacted areas. Full article
(This article belongs to the Special Issue Sustainable Water Management in Rapid Urbanization)
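Of the weighting schemes the abstract combines, the entropy weight method is the most mechanical. A minimal sketch is given below under the assumption of benefit-type indicators, with a random matrix standing in for the 12-indicator system.

```python
# Minimal sketch of the entropy weight method for secondary indicators.
# X is a hypothetical indicator matrix (rows = evaluation units, columns =
# benefit-type indicators); real inputs would come from the 12-indicator system.
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    m, _ = X.shape
    # Min-max normalisation (benefit-type indicators assumed).
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    P = Xn / (Xn.sum(axis=0, keepdims=True) + 1e-12)   # column proportions
    plogp = np.where(P > 0, P * np.log(P + 1e-12), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)                 # entropy per indicator
    d = 1.0 - e                                        # divergence degree
    return d / d.sum()                                 # entropy weights

rng = np.random.default_rng(1)
X = rng.random((50, 5))          # 50 units, 5 illustrative indicators
print(entropy_weights(X).round(3))
```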

13 pages, 1410 KiB  
Article
Fault Detection, Isolation and Reconfiguration of Four-Bar Mechanism-Based Knee Exoskeleton
by Prakhar Jain, Tarun Kumar Bera, Ashish Singla, Sajid Rafique and Magnus Isaksson
Sensors 2025, 25(11), 3516; https://doi.org/10.3390/s25113516 - 3 Jun 2025
Viewed by 511
Abstract
Knee exoskeletons are sophisticated wearable devices engineered to aid or augment human movement, especially in rehabilitation and mobility assistance contexts. To address reliability concerns, the proposed knee exoskeleton incorporates a fault-tolerant control system using a fault detection, isolation and reconfiguration (FDI) technique. This system enables the exoskeleton to continue functioning even if one of the actuators experiences a fault, ensuring user safety and continuous operation. For actuator fault detection, analytical redundancy relations (ARRs) are derived from the bond graph model of the knee exoskeleton. ARRs are monitored for actuator fault detection and isolation. In this work, there is no fault initially; after some time, a fault is created in the rotary actuator; finally, the faulty actuator is reconfigured by another rotary actuator. Simulation findings illustrate the suggested FDI system’s effectiveness in improving the robustness of knee exoskeletons during the sit-to-stand motion. The proposed system successfully reconfigures itself in response to faults. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
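The core of the FDI scheme is evaluating analytical redundancy relations as residuals. The schematic below compares a measured actuator velocity with the output of a simple first-order motor model and flags a fault when the residual crosses a threshold; the model, threshold and injected fault are illustrative assumptions, whereas the paper derives its ARRs from a bond graph of the exoskeleton.

```python
# Schematic analytical-redundancy check: compare the measured actuator
# velocity with a first-order model prediction and flag a fault when the
# residual exceeds a threshold. Parameters and the injected fault are assumed.
import numpy as np

K, TAU, DT = 2.0, 0.2, 0.01          # gain, time constant, sample time (assumed)
THRESHOLD = 0.5                      # residual threshold (assumed)

t = np.arange(0.0, 4.0, DT)

# Analytical (model-based) velocity response to a unit step voltage.
omega_model = K * (1.0 - np.exp(-t / TAU))

# "Measured" velocity: model + noise, with an actuator fault injected at t >= 2 s.
rng = np.random.default_rng(5)
omega_meas = omega_model + rng.normal(0.0, 0.02, t.size)
omega_meas[t >= 2.0] *= 0.3          # simulated loss of actuator effectiveness

residual = np.abs(omega_meas - omega_model)
fault = residual > THRESHOLD
print("fault detected at t =", t[fault][0] if fault.any() else None)
```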

24 pages, 4719 KiB  
Article
Urban Resilience and Energy Demand in Tropical Climates: A Functional Zoning Approach for Emerging Cities
by Javier Urquizo and Hugo Rivera-Torres
Urban Sci. 2025, 9(6), 203; https://doi.org/10.3390/urbansci9060203 - 2 Jun 2025
Viewed by 713
Abstract
The management of power supply and distribution is becoming increasingly challenging because of the significant increase in energy demand brought on by global population growth. Buildings are estimated to be accountable for 40% of the worldwide use of energy, which underlines how important accurate demand estimation is for the design and construction of electrical infrastructure. In this respect, transmission and distribution network planning must be adjusted to ensure a smooth transition to the National Interconnected System (NIS). A technical and analytical scientific approach to a modern neighbourhood in Ecuador called “the Nuevo Samborondón” case study (NSCS) is laid out in this article. Collecting geo-referenced data, evaluating the current electrical infrastructure, and forecasting energy demand constitute the first stages in this research procedure. The sector’s energy behaviour is accurately modelled using advanced programs such as 3D design software for modelling and drawing urban architecture along with a whole building energy simulation program and geographical information systems (GIS). For the purpose of recreating several operational situations and building the distribution infrastructure while giving priority to the current urban planning, an electrical system model is subsequently developed using power system analysis software at both levels of transmission and distribution. Furthermore, seamless digital substations are suggested as a component of the nation’s electrical infrastructure upgrade to provide redundancy and zero downtime. According to our findings, installing a 69 kV ring is a crucial step in electrifying NSCS and aligning electrical network innovations with urban planning. The system’s capacity to adjust and optimize power distribution would be strengthened provided the algorithms were given the freedom to react dynamically to changes or disruptions brought about by distributed generation sources. Full article

23 pages, 4204 KiB  
Article
Online Tuning of Koopman Operator for Fault-Tolerant Control: A Case Study of Mobile Robot Localising on Minimal Sensor Information
by Ravi Kiran Akumalla and Tushar Jain
Machines 2025, 13(6), 454; https://doi.org/10.3390/machines13060454 - 26 May 2025
Viewed by 590
Abstract
Self-localisation is a critical concept in the context of autonomous navigation and control of mobile robots. The most prevalent method for localisation is sensor fusion. Nevertheless, there are certain situations where the robots are compelled to localise on minimal sensor information. Furthermore, the key challenge is determining how to localise if this minimal sensor information fails. This paper proposes a data-driven analytical redundancy technique to address this challenge in wheeled mobile robots. Initially, the localisation of the robot is performed using the encoder information alone to create a minimalistic approach. In such a situation, a fault or failure in the encoders makes the robot behave in an undesirable way. To mitigate this, we are proposing a method to use the information from the analytical models when a fault is detected. Specifically, we obtain the analytical models through data-driven techniques. By a step response experiment, the input voltage and output angular velocity data of the motor are collected. We then use the System Identification toolbox in MATLAB® (ver R2025a) and the Koopman framework to obtain different analytical models using the same data. We observe that these models experience errors at different input voltages of the motor, affecting the proposed method for handling the encoder fault. So, in this work, we use online tuning of the Koopman operator and experimentally demonstrate its effectiveness in handling the sensor fault on a mobile robot localising with minimal information. Full article
(This article belongs to the Special Issue Guidance, Navigation and Control of Mobile Robots)
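As background for the Koopman part of the method, the sketch below performs a batch EDMD fit: snapshot pairs are lifted with a small polynomial dictionary and the finite-dimensional Koopman matrix is obtained by least squares. The dictionary and toy dynamics are assumptions; the paper additionally tunes the operator online from motor data.

```python
# Batch EDMD fit of a finite-dimensional Koopman operator: lift snapshot pairs
# (x_k, x_{k+1}) with a dictionary psi and solve min_K ||Psi_Y - Psi_X K||_F
# by least squares. Dictionary and toy dynamics are illustrative assumptions.
import numpy as np

def psi(x: np.ndarray) -> np.ndarray:
    """Simple polynomial dictionary: [1, x, x^2] applied elementwise."""
    return np.concatenate([np.ones((x.shape[0], 1)), x, x**2], axis=1)

# Toy one-dimensional nonlinear system x_{k+1} = 0.9 x_k - 0.1 x_k^2.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=(500, 1))
x_next = 0.9 * x - 0.1 * x**2

Psi_X, Psi_Y = psi(x), psi(x_next)
K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)   # Koopman matrix (lifted space)

# One-step prediction through the lifted space, then read back the state.
x0 = np.array([[0.5]])
pred = psi(x0) @ K
print(pred[0, 1], 0.9 * 0.5 - 0.1 * 0.25)            # predicted vs true next state
```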

19 pages, 2626 KiB  
Article
GTSDC: A Graph Theory Subspace-Based Analytical Algorithm for User Behavior
by Jianping Li, Yubo Tan, Jing Wang, Junwei Yu and Qiuyuan Hu
Electronics 2025, 14(10), 2049; https://doi.org/10.3390/electronics14102049 - 18 May 2025
Viewed by 447
Abstract
The exponential growth of multi-modal behavioral data in campus networks poses significant challenges for clustering analysis, including high dimensionality, redundancy, and attribute heterogeneity, which lead to degraded accuracy in existing methods. To address these issues, this study proposes a graph-theoretic subspace deep clustering framework that synergizes a deep sparse auto-encoder (DSAE) with a method of graph partitioning based on normalized cut. First, a four-layer DSAE is designed to extract discriminative features while enforcing sparsity constraints, effectively reducing data dimensionality and mitigating noise. Second, the refined subspace representations are transformed into a similarity graph, where normalized cut optimization partitions users into coherent behavioral clusters by balancing intra-cluster cohesion and inter-cluster separation. Experimental validation on three datasets—USER_DATA, MNIST, and COIL20—demonstrates the superiority of GTSDC. It achieves 91% accuracy on USER_DATA, outperforming traditional algorithms (e.g., CLIQUE, K-means) by 120% and advanced methods (e.g., deep subspace clustering) by 15%. The proposed framework not only enhances network resource allocation through behavior-aware analytics but also lays the groundwork for personalized educational services. This work bridges the gap between graph theory and deep learning, offering a scalable solution for high-dimensional behavioral pattern recognition. In simple terms, this new algorithm can more accurately analyze user behavior in campus networks. It helps universities better allocate network resources, such as ensuring smooth online classes, and can also provide personalized educational services to students according to their behavior patterns. Full article
(This article belongs to the Special Issue Application of Big Data Mining and Analysis)
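A hedged sketch of the graph-partitioning half of the pipeline: an RBF similarity graph is built over already-encoded feature vectors and split with normalized-cut-style spectral clustering from scikit-learn. Random vectors stand in for the DSAE latent features, and the cluster count is an assumption.

```python
# Graph-partitioning stage: build an RBF similarity graph over encoded feature
# vectors and split it with normalized-cut style spectral clustering. Random
# vectors stand in for DSAE latent features; the cluster count is assumed.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(3)
Z = rng.standard_normal((300, 16))            # placeholder latent features

W = rbf_kernel(Z, gamma=0.1)                  # similarity graph (affinity matrix)
labels = SpectralClustering(
    n_clusters=4, affinity="precomputed", assign_labels="kmeans", random_state=0
).fit_predict(W)
print(np.bincount(labels))                    # cluster sizes
```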

29 pages, 1367 KiB  
Article
Integrated Approach to Optimizing Selection and Placement of Water Pipeline Condition Monitoring Technologies
by Diego Calderon and Mohammad Najafi
Eng 2025, 6(5), 97; https://doi.org/10.3390/eng6050097 - 13 May 2025
Viewed by 804
Abstract
The gradual deterioration of underground water infrastructure requires constant condition monitoring to prevent catastrophic failures, reduce leaks, and avoid costly unexpected repairs. However, given the large scale and tight budgets of water utilities, it is essential to implement strategies for optimal selection and deployment of monitoring technologies. This article introduces a unified framework and methods for optimally selecting condition monitoring technologies while locating their deployment at the most vulnerable pipe segments. The approach is underpinned by an R-E-R-A-V (Redundant, Established, Reliable, Accurate, and Viable) principle and asset management concepts. The proposed method is supported by a thorough review of assessment and monitoring technologies, as well as common sensor placement approaches. The approach selects optimal technology using a combination of technology readiness levels and SFAHP (Spherical Fuzzy Analytic Hierarchy Process). Optimal placement is achieved with a k-Nearest Neighbors (kNN) model tuned with minimal topological and physical pipeline system features. Feature engineering is performed with OPTICS (Ordering Points to Identify the Clustering Structure) by evaluating the pipe segment vulnerability to failure-prone areas. Both the optimal technology selection and placement methods are integrated through a proposed algorithm. The optimal placement of monitoring technology is demonstrated through a modified benchmark network (Net3). The results reveal an accurate model with robust performance and a harmonic mean of precision and recall of approximately 65%. The model effectively identifies pipe segments requiring monitoring to prevent failures over a period of 11 years. The benefits and areas of future exploratory research are explained to encourage improvements and additional applications. Full article
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)
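A minimal sketch of the placement step as described: a k-Nearest Neighbors classifier over a handful of topological and physical pipe features predicting whether a segment falls into a failure-prone, monitor-worthy class. The features and labels are invented placeholders rather than the Net3 benchmark data.

```python
# Placement step sketch: kNN classifier over a few topological/physical pipe
# features predicting failure-prone (monitor-worthy) segments. Features and
# labels are invented placeholders, not the Net3 benchmark data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(4)
# Columns: pipe age, diameter, length, node degree, pressure (all normalized).
X = rng.random((400, 5))
y = (X[:, 0] + 0.5 * X[:, 4] + 0.2 * rng.standard_normal(400) > 0.9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("F1 (harmonic mean of precision and recall):",
      round(f1_score(y_te, knn.predict(X_te)), 3))
```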

26 pages, 18959 KiB  
Review
A Review on the Progressive Collapse of Reinforced Concrete Flat Slab–Column Structures
by Xiao Li, Tengfang Dong, Chengquan Wang, Weiwei Zhang, Rongyang Liu and Jingjing Wang
Materials 2025, 18(9), 2056; https://doi.org/10.3390/ma18092056 - 30 Apr 2025
Viewed by 591
Abstract
Reinforced concrete flat slab–column structures, lacking the redundancy provided by a beam–column system, are susceptible to punching shear failure under extreme loading conditions, which may lead to progressive collapse with catastrophic consequences. A systematic review of recent advancements in the progressive collapse resistance of flat slab–column systems has been provided, categorizing the methodologies into experimental investigation, theoretical analysis, and numerical simulation. Experimental studies primarily utilize the Alternative Load Path methodology, incorporating both quasi-static and dynamic loading protocols to assess structural performance. Different column removal scenarios (e.g., corner, edge, and interior column failures) clarify the load redistribution patterns and the evolution of resistance mechanisms. Theoretical frameworks focus on tensile and compressive membrane actions, punching shear mechanism, and post-punching shear mechanism. Analytical models, incorporating strain-hardening effects and deformation compatibility constraints, show improved correlation with experimental results. Numerical simulations use multi-scale modeling strategies, integrating micro-level joint models with macro-level structural assemblies. Advanced finite element analysis techniques effectively replicate collapse behaviors under various column failure scenarios, validated by full-scale test data. This synthesis identifies key research priorities and technical challenges in collapse-resistant design, establishing theoretical foundations for future investigations of flat slab systems under multi-hazard coupling effects. Full article
(This article belongs to the Section Construction and Building Materials)
