Search Results (2,363)

Search Parameters:
Keywords = value iterations

23 pages, 5971 KB  
Article
Improved MNet-Atten Electric Vehicle Charging Load Forecasting Based on Composite Decomposition and Evolutionary Predator–Prey and Strategy
by Xiaobin Wei, Qi Jiang, Huaitang Xia and Xianbo Kong
World Electr. Veh. J. 2025, 16(10), 564; https://doi.org/10.3390/wevj16100564 - 2 Oct 2025
Abstract
In the context of low-carbon development, accurate forecasting of electrical energy is critical for power management as power systems continue to develop. To improve the performance of load forecasting, an improved MNet-Atten electric vehicle charging load forecasting model based on composite decomposition and the evolutionary predator and prey strategy (EPPS) is proposed. Drawing on data decomposition theory, the load series is decomposed into subsequences using complementary ensemble empirical mode decomposition (CEEMD), and high-frequency white noise is filtered out using matrix-based singular value decomposition (SVD), which improves the anti-interference ability and computational efficiency of the model. In the model construction stage, the MNet-Atten prediction model is developed. Its convolution module mines the local dependencies of the sequences, while the loop and loop-skip modules extract long-term and short-term features of the data to improve its predictability. Furthermore, the evolutionary predator and prey strategy iteratively optimizes the learning rate of MNet-Atten to improve the forecasting performance and convergence speed of the model. An autoregressive module enhances the network's ability to identify linear features, and temporal attention assigns more weight to important features to capture global and local linkages. The model is validated on electric vehicle charging load data from a sample region: the average runtime over 30 runs of the proposed combined model is 117.3231 s, the correlation coefficient (PCC) of the CEEMD-SVD-EPPS-MNet-Atten model is closest to 1, and the model achieves the lowest MAPE and RMSE.
The results show that the proposed model better extracts the characteristics of the data, improves modeling efficiency, and achieves high prediction accuracy. Full article
(This article belongs to the Section Charging Infrastructure and Grid Integration)
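The SVD-based noise filtering step can be illustrated with a minimal sketch (my own, not the authors' implementation): embed a subsequence in a Hankel trajectory matrix, truncate the small singular values that carry high-frequency white noise, and reconstruct by anti-diagonal averaging. The window length and rank here are illustrative assumptions.

```python
import numpy as np

def svd_denoise(x, window=20, rank=3):
    """Denoise a 1-D series via truncated SVD of its Hankel embedding."""
    n = len(x)
    k = n - window + 1
    # Hankel (trajectory) matrix: rows are sliding windows of the series.
    H = np.array([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0.0                       # drop small singular values (noise)
    Hd = (U * s) @ Vt
    # Reconstruct by averaging along anti-diagonals.
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(k):
        out[i:i + window] += Hd[i]
        cnt[i:i + window] += 1
    return out / cnt

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
den = svd_denoise(noisy, window=20, rank=2)  # a sinusoid's Hankel matrix has rank 2
print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
```

The denoised series has a markedly lower mean squared error than the raw noisy one, which is the point of the SVD stage before forecasting.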

31 pages, 1105 KB  
Article
MoCap-Impute: A Comprehensive Benchmark and Comparative Analysis of Imputation Methods for IMU-Based Motion Capture Data
by Mahmoud Bekhit, Ahmad Salah, Ahmed Salim Alrawahi, Tarek Attia, Ahmed Ali, Esraa Eldesouky and Ahmed Fathalla
Information 2025, 16(10), 851; https://doi.org/10.3390/info16100851 - 1 Oct 2025
Abstract
Motion capture (MoCap) data derived from wearable inertial measurement units (IMUs) is essential to applications in sports science and healthcare robotics. However, much of the potential of this data goes unrealized because of missing data caused by sensor limitations, network issues, and environmental interference. Such gaps can introduce bias, prevent the fusion of critical data streams, and ultimately compromise the integrity of human activity analysis. Despite the plethora of available data imputation techniques, there have been few systematic performance evaluations of these techniques specifically for IMU-derived MoCap time series. We address this by evaluating imputation performance across three distinct contexts: univariate time series, multivariate across players, and multivariate across kinematic angles. To this end, we propose a systematic comparative analysis of imputation techniques, including statistical, machine learning, and deep learning methods. We also introduce the first publicly available MoCap dataset specifically designed for benchmarking missing-value imputation, with three missingness mechanisms: missing completely at random, block missingness, and a value-dependent missingness pattern simulated at signal transition points. Using data from 53 karate practitioners performing standardized movements, we artificially generated missing values to create controlled experimental conditions. Experiments across the 53 subjects and 39 kinematic variables showed that multivariate imputation frameworks surpass univariate approaches when working with more complex missingness mechanisms.
Specifically, multivariate approaches achieved up to a 50% error reduction (with the MAE improving from 10.8 ± 6.9 to 5.8 ± 5.5) compared to univariate methods for transition-point missingness. Specialized time series deep learning models (i.e., SAITS, BRITS, GRU-D) demonstrated superior performance, with MAE values consistently below 8.0 for univariate contexts and below 3.2 for multivariate contexts across all missing-data percentages, significantly surpassing traditional machine learning and statistical methods. Notable traditional methods such as Generative Adversarial Imputation Networks and iterative imputers exhibited competitive performance but remained less stable than the specialized temporal models. This work offers an important baseline for future studies, along with recommendations for researchers looking to increase the accuracy, robustness, integrity, and trustworthiness of MoCap data analysis. Full article
(This article belongs to the Section Information Processes)
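The univariate-versus-multivariate contrast at the heart of the benchmark can be sketched on synthetic data (assumed toy signals, not the karate dataset): under MCAR missingness, an imputer that exploits a correlated second channel beats a univariate mean imputer on MAE.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
# Two correlated "kinematic angle" channels, standing in for IMU streams.
a = np.sin(t) + 0.05 * rng.standard_normal(t.size)
b = 0.8 * np.sin(t) + 0.1 + 0.05 * rng.standard_normal(t.size)

# MCAR mask: ~20% of channel a missing completely at random.
mask = rng.random(t.size) < 0.2
a_obs = a.copy()
a_obs[mask] = np.nan

# Univariate baseline: impute with the channel mean.
uni = a_obs.copy()
uni[mask] = np.nanmean(a_obs)

# Multivariate: regress a on b using observed pairs, predict the gaps.
obs = ~mask
slope, intercept = np.polyfit(b[obs], a[obs], 1)
multi = a_obs.copy()
multi[mask] = slope * b[mask] + intercept

mae_uni = np.mean(np.abs(uni[mask] - a[mask]))
mae_multi = np.mean(np.abs(multi[mask] - a[mask]))
print(mae_uni, mae_multi)
```

The cross-channel regression recovers the missing samples far more accurately than the mean, mirroring the paper's finding that multivariate frameworks win under richer missingness structure.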
9 pages, 852 KB  
Article
A Fast Designed Thresholding Algorithm for Low-Rank Matrix Recovery with Application to Missing English Text Completion
by Haizhen He, Angang Cui and Hong Yang
Mathematics 2025, 13(19), 3135; https://doi.org/10.3390/math13193135 - 1 Oct 2025
Abstract
This article proposes a fast version of the adaptive iterative matrix designed thresholding (AIMDT) algorithm studied in our previous work. In the AIMDT algorithm, a designed thresholding operator is applied to the problem of recovering low-rank matrices. By adjusting the size of its parameter, this designed operator applies less bias to the singular values of a matrix. Using this operator, the AIMDT algorithm was developed to solve the matrix rank minimization problem, and numerical experiments have shown its superiority. However, the AIMDT algorithm generally converges slowly. To recover low-rank matrices more quickly, we present a fast AIMDT algorithm in this paper. Numerical results on some random low-rank matrix completion problems and a missing English text completion problem show that our proposed fast algorithm converges much faster than the previous AIMDT algorithm. Full article
(This article belongs to the Special Issue Numerical Optimization: Algorithms and Applications)
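As a rough illustration of iterative thresholding for low-rank recovery (a standard hard-impute baseline, not the paper's AIMDT operator), one can alternate a rank-truncating SVD step with re-imposing the observed entries:

```python
import numpy as np

def complete(M, observed, rank=5, iters=200):
    """Matrix completion by iterative rank truncation ("hard impute").
    A textbook baseline, not the paper's designed thresholding operator."""
    X = np.where(observed, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                 # threshold: keep the top singular values
        X = (U * s) @ Vt
        X[observed] = M[observed]      # project back onto the known entries
    return X

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))  # rank-5 target
observed = rng.random(A.shape) < 0.6                             # 60% entries known
Ahat = complete(A, observed)
err = np.linalg.norm((Ahat - A)[~observed]) / np.linalg.norm(A[~observed])
print(err)
```

With 60% of entries observed, the missing entries of the rank-5 matrix are recovered to small relative error; designed thresholding operators of the AIMDT family aim to reduce the bias this singular-value truncation introduces.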

21 pages, 2419 KB  
Article
Application Features of a VOF Method for Simulating Boiling and Condensation Processes
by Andrey Kozelkov, Andrey Kurkin, Andrey Puzan, Vadim Kurulin, Natalya Tarasova and Vitaliy Gerasimov
Algorithms 2025, 18(10), 604; https://doi.org/10.3390/a18100604 - 26 Sep 2025
Abstract
This article presents the results of a study on the possibility of using a single-speed multiphase model with free surface allowance for simulating boiling and condensation processes. The simulation is based on the VOF method, which allows the position of the interphase boundary to be tracked. To increase the stability of the iterative procedure for numerically solving the volume fraction transfer equations with a finite volume discretization method on arbitrary unstructured grids, the basic VOF method has been modified by writing these equations in semi-divergent form. The Tanasawa, Lee, and Rohsenow models are considered as models of interphase mass transfer, in which the evaporated or condensed mass depends linearly on the difference between the local temperature and the saturation temperature, up to empirical parameters. This paper calibrates these empirical parameters for each mass transfer model. We present the results of our study of how the values of the models' empirical parameters influence the intensity of boiling and evaporation, as well as the dynamics of the interphase boundary. This research is based on Stefan's problem of the movement of the interphase boundary due to the evaporation of a liquid, and on the problem of condensation of vapor bubbles in water columns. A series of numerical experiments shows that the average error in the position of the interfacial boundary for the Tanasawa and Lee models does not exceed 3–6%. For the Rohsenow model, the result is somewhat worse, since the interfacial boundary moves faster than analytical formulas predict. To investigate the possibility of condensation modeling, the results of a numerical solution of the problem of a rising, condensing vapor bubble are considered.
A numerical assessment of the bubble's position in space and of the shape and dynamics of changes in its diameter over time is carried out using the VOF method, taking the free surface into account. The Tanasawa model proves the most accurate for modeling the condensation process with a free-surface VOF method, while the Rohsenow model is the most unstable and prone to deforming the bubble shape. The dynamics of bubble ascent, however, are captured by all three models. The results confirm the fundamental possibility of using a VOF method to simulate boiling and condensation processes while accounting for the dynamics of the free surface. At the same time, a limitation of the studied phase-transition models is revealed: optimal values of the empirical parameters must be selected individually for each specific task. Full article
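Stefan's problem, used here as the validation case, has the closed-form interface trajectory x(t) = 2λ√(αt), where λ solves λ·exp(λ²)·erf(λ) = Ste/√π. A short sketch (with assumed illustrative property values, not the paper's) recovers λ by bisection:

```python
import math

def stefan_lambda(stefan, lo=1e-6, hi=3.0, tol=1e-12):
    """Solve lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi) by bisection
    (one-phase Stefan problem with constant wall superheat)."""
    f = lambda lam: lam * math.exp(lam ** 2) * math.erf(lam) - stefan / math.sqrt(math.pi)
    while hi - lo > tol:          # f is monotone increasing on [0, hi]
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative values (assumed, not from the paper).
alpha = 1e-5                      # thermal diffusivity of the vapor, m^2/s
stefan = 0.1                      # Stefan number Ste = cp * dT / h_fg
lam = stefan_lambda(stefan)
interface = lambda t: 2.0 * lam * math.sqrt(alpha * t)
print(lam, interface(1.0))
```

The √t growth of the interface position is exactly the analytical benchmark against which the calibrated Tanasawa, Lee, and Rohsenow models are scored.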

21 pages, 7001 KB  
Article
CGNet: Remote Sensing Instance Segmentation Method Using Contrastive Language–Image Pretraining and Gated Recurrent Units
by Hui Zhang, Zhao Tian, Zhong Chen, Tianhang Liu, Xueru Xu, Junsong Leng and Xinyuan Qi
Remote Sens. 2025, 17(19), 3305; https://doi.org/10.3390/rs17193305 - 26 Sep 2025
Abstract
Instance segmentation in remote sensing imagery is a significant application area within computer vision, holding considerable value in fields such as land planning and aerospace. The target scales of remote sensing images are often small, the contours of different categories of targets can be remarkably similar, and the background information is complex, containing more noise interference. It is therefore essential for the network model to utilize the background and internal instance information more effectively. With all this in mind, and to fully adapt to the characteristics of remote sensing images, a network named CGNet, which combines an enhanced backbone with a contour–mask branch, is proposed. This network employs gated recurrent units to iterate the contour and mask branches and adopts an attention head for branch fusion. Additionally, to address the common issues of missed detections and misdetections, a supervised backbone network using contrastive pretraining for feature supplementation is introduced. The proposed method has been experimentally validated on the NWPU VHR-10 and SSDD datasets, achieving average precision of 68.1% and 67.4%, respectively, which is 0.9% and 3.2% higher than the suboptimal methods. Full article

18 pages, 892 KB  
Article
Developing a Psychological Research Methodology for Evaluating AI-Powered Plush Robots in Education and Rehabilitation
by Anete Hofmane, Inese Tīģere, Airisa Šteinberga, Dina Bethere, Santa Meļķe, Undīne Gavriļenko, Aleksandrs Okss, Aleksejs Kataševs and Aleksandrs Vališevskis
Behav. Sci. 2025, 15(10), 1310; https://doi.org/10.3390/bs15101310 - 25 Sep 2025
Abstract
The integration of AI-powered plush robots in educational and therapeutic settings for children with Autism Spectrum Disorders (ASD) necessitates a robust interdisciplinary methodology to evaluate usability, psychological impact, and therapeutic efficacy. This study proposes and applies a four-phase research framework designed to guide the development and assessment of AI-powered plush robots for social rehabilitation and education. Phase 1 involved semi-structured interviews with 13 ASD specialists to explore robot applications. Phase 2 tested initial usability with typically developing children (N = 10–15) through structured sessions. Phase 3 involved structured interaction sessions with children diagnosed with ASD (N = 6–8) to assess the robot's potential for rehabilitation; the sessions were observed by specialists and recorded on video. Finally, Phase 4 synthesized the data via multidisciplinary triangulation. Results highlighted the importance of iterative, stakeholder-informed design, with experts emphasizing visual properties (color, texture), psychosocial aspects, and adjustable functions. The study identified key technical and psychological evaluation criteria, including engagement, emotional safety, and developmental alignment with ASD intervention models. Findings underscore the value of qualitative methodologies and phased testing in developing child-centered robotic tools. The research establishes a robust methodological framework and provides preliminary evidence for the potential of AI-powered plush robots to support personalized, ethically grounded interventions for children with ASD, though their therapeutic efficacy requires further longitudinal validation. This methodology bridges engineering innovation with psychological rigor, offering a template for future assistive technology research by prioritizing a rigorous, stakeholder-centered design process. Full article
(This article belongs to the Section Psychiatric, Emotional and Behavioral Disorders)

23 pages, 1623 KB  
Article
Integral and Numerical Formulations for Seeking the Period of Non-Conservative Nonlinear Oscillator With/Without the First Integral
by Chein-Shan Liu, Chia-Cheng Tsai and Chih-Wen Chang
Symmetry 2025, 17(9), 1584; https://doi.org/10.3390/sym17091584 - 22 Sep 2025
Abstract
For a non-conservative nonlinear oscillator (NCNO) with a periodic solution, the existence of a first integral is a certain symmetry of the nonlinear dynamical system, signifying the balance of kinetic and potential energy. A first-order nonlinear ordinary differential equation (ODE) is used to derive the first integral, which, equipped with a right-end boundary condition, determines an implicit potential function for computing the period by an exact integral formula. However, the integrand is singular, which renders the computed period less accurate. A generalized integral conservation law endowed with a weight function is therefore constructed and proved equivalent to the exact integral formula. By minimizing the error in satisfying the periodicity conditions, the optimal initial value of the weight function is determined. Two non-iterative methods, which integrate three or two first-order ODEs, are developed to compute the period, and very accurate values of the period are observed upon testing five examples. For an NCNO without a first integral, an integral-type period formula is derived. Four examples belonging to the Liénard class of equations, including the van der Pol equation, are evaluated with the proposed iterative method to determine the oscillatory amplitude and period. For cases with one or more limit cycles, the amplitude and period can be estimated very accurately. For NCNOs of a broad type, with or without a first integral, the present paper provides a solid theoretical foundation and contributes integral-type formulations for determining the oscillatory period, together with new numerical algorithms and extensive validation across a diverse set of examples. Full article
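For the van der Pol case, the limit-cycle amplitude and period can be estimated by brute-force integration and zero-crossing detection, a useful cross-check of integral-type formulas (this sketch is mine, with μ = 1 assumed):

```python
import numpy as np

def vdp_rhs(s, mu=1.0):
    """Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0."""
    x, v = s
    return np.array([v, mu * (1.0 - x * x) * v - x])

# Fixed-step RK4 integration from an off-cycle initial state.
dt, T = 0.005, 100.0
n = int(T / dt)
t = np.linspace(0.0, T, n + 1)
traj = np.empty((n + 1, 2))
s = np.array([0.5, 0.0])
traj[0] = s
for i in range(n):
    k1 = vdp_rhs(s)
    k2 = vdp_rhs(s + 0.5 * dt * k1)
    k3 = vdp_rhs(s + 0.5 * dt * k2)
    k4 = vdp_rhs(s + dt * k3)
    s = s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    traj[i + 1] = s

x = traj[:, 0]
late = t > 50.0                    # discard the transient, keep the limit cycle
xs, ts = x[late], t[late]
up = np.where((xs[:-1] < 0.0) & (xs[1:] >= 0.0))[0]   # upward zero crossings
period = np.mean(np.diff(ts[up]))
amplitude = np.max(np.abs(xs))
print(period, amplitude)
```

For μ = 1 this lands near the known amplitude of about 2 and period of about 6.66, the kind of reference values the paper's iterative method is validated against.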

18 pages, 1694 KB  
Article
FAIR-Net: A Fuzzy Autoencoder and Interpretable Rule-Based Network for Ancient Chinese Character Recognition
by Yanling Ge, Yunmeng Zhang and Seok-Beom Roh
Sensors 2025, 25(18), 5928; https://doi.org/10.3390/s25185928 - 22 Sep 2025
Abstract
Ancient Chinese scripts—including oracle bone carvings, bronze inscriptions, stone steles, Dunhuang scrolls, and bamboo slips—are rich in historical value but often degraded due to centuries of erosion, damage, and stylistic variability. These issues severely hinder manual transcription and render conventional OCR techniques inadequate, as they are typically trained on modern printed or handwritten text and lack interpretability. To tackle these challenges, we propose FAIR-Net, a hybrid architecture that combines the unsupervised feature learning capacity of a deep autoencoder with the semantic transparency of a fuzzy rule-based classifier. In FAIR-Net, the deep autoencoder first compresses high-resolution character images into low-dimensional, noise-robust embeddings. These embeddings are then passed into a Fuzzy Neural Network (FNN), whose hidden layer leverages Fuzzy C-Means (FCM) clustering to model soft membership degrees and generate human-readable fuzzy rules. The output layer uses Iteratively Reweighted Least Squares Estimation (IRLSE) combined with a Softmax function to produce probabilistic predictions, with all weights constrained as linear mappings to maintain model transparency. We evaluate FAIR-Net on CASIA-HWDB1.0, HWDB1.1, and ICDAR 2013 CompetitionDB, where it achieves a recognition accuracy of 97.91%, significantly outperforming baseline CNNs (p < 0.01, Cohen’s d > 0.8) while maintaining the tightest confidence interval (96.88–98.94%) and lowest standard deviation (±1.03%). Additionally, FAIR-Net reduces inference time to 25 s, improving processing efficiency by 41.9% over AlexNet and up to 98.9% over CNN-Fujitsu, while preserving >97.5% accuracy across evaluations. 
To further assess generalization to historical scripts, FAIR-Net was tested on the Ancient Chinese Character Dataset (9233 classes; 979,907 images), achieving 83.25% accuracy—slightly higher than ResNet101 but 2.49% lower than SwinT-v2-small—while reducing training time by over 5.5× compared to transformer-based baselines. Fuzzy rule visualization confirms enhanced robustness to glyph ambiguities and erosion. Overall, FAIR-Net provides a practical, interpretable, and highly efficient solution for the digitization and preservation of ancient Chinese character corpora. Full article
(This article belongs to the Section Sensing and Imaging)
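The soft membership degrees in FAIR-Net's hidden layer come from Fuzzy C-Means; a minimal sketch of the standard FCM iteration on toy 2-D data (illustrative, not the paper's embedding space) looks like this:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy C-Means: returns soft memberships U (rows sum to 1) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                        # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return U, centers

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # two well-separated blobs
               rng.normal(3.0, 0.3, (50, 2))])
U, centers = fcm(X)
print(np.round(centers, 2))
```

Each row of `U` is a soft membership vector summing to 1; in FAIR-Net, these graded memberships are what make the fuzzy rules human-readable rather than hard cluster assignments.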

29 pages, 6827 KB  
Article
Promoting the Sustainable Development of Industrial Heritage Tourism Through Digital Intelligence: User Acceptance and Interface Design Research
by Huilan Wei, Rui Zhu and Jinyi Wu
Sustainability 2025, 17(18), 8432; https://doi.org/10.3390/su17188432 - 19 Sep 2025
Abstract
With the deepening integration of cultural heritage preservation and tourism, the application of intelligent and digital technologies in industrial heritage tourism has increasingly become a research focus for promoting cultural sustainability. This study takes the Wuhan Iron and Steel Company industrial heritage in Wuhan, Hubei Province, China, as the research object. Based on the UTAUT2 model, a structural equation model was constructed to examine the factors influencing users’ behavioral intention, with technology anxiety introduced as an additional variable to explore the acceptance mechanism of intelligent and digital tourism platforms for industrial heritage. Through path analysis and mediation effect testing, the study proposes a platform design strategy of “technological adaptation–emotional resonance–behavioral guidance,” develops the platform interface design, and conducts a multidimensional evaluation using the fuzzy comprehensive evaluation method to verify its feasibility. The findings not only extend the applicability of the UTAUT2 model in the field of cultural heritage tourism at the theoretical level, but also provide practical pathways for optimizing user experience and guiding functional iteration of industrial heritage tourism platforms. Moreover, the study offers practical support for the revitalization, digital inheritance, and sustainable preservation of industrial cultural heritage, thereby fostering the integrated development of cultural value, social value, and the tourism industry. Full article
(This article belongs to the Section Tourism, Culture, and Heritage)

17 pages, 1271 KB  
Article
Flexible Interconnection Planning Towards Mutual Energy Support in Low-Voltage Distribution Networks
by Hao Bai, Yingjie Tan, Qian Rao, Wei Li and Yipeng Liu
Electronics 2025, 14(18), 3696; https://doi.org/10.3390/electronics14183696 - 18 Sep 2025
Abstract
The increasing uncertainty of distributed energy resources (DERs) challenges the secure and resilient operation of low-voltage distribution networks (LVDNs). Flexible interconnection via power-electronic devices enables controllable links among low-voltage distribution areas (LVDAs), supporting capacity expansion, reliability, load balancing, and renewable integration. This paper proposes a two-stage robust optimization framework for flexible interconnection planning in LVDNs. The first stage determines investment decisions on the siting and sizing of interconnection lines, while the second stage schedules short-term operations under worst-case wind, solar, and load uncertainties. The bi-level problem is reformulated into a master–subproblem structure and solved using a column-and-constraint generation (CCG) algorithm combined with a distributed iterative method. Case studies on typical scenarios and a modified IEEE 33-bus system demonstrate that, under the highest uncertainty, the proposed method reduces the voltage fluctuation index from 0.0093 to 0.0079, lowers the autonomy index from 0.0075 to 0.0019, and eliminates all overload events compared with stochastic planning, mitigating cross-area imbalances and improving voltage stability. Even under the most adverse conditions, DER utilization remains above 84%. Although the robust plan increases daily operating costs by about $70, this moderate premium yields significant gains in reliability and renewable accommodation.
In addition, the decomposition-based algorithm converges within only 39 s, confirming the practical efficiency of the proposed framework for uncertainty-aware interconnection planning in future LVDNs. Full article
(This article belongs to the Special Issue Reliability and Artificial Intelligence in Power Electronics)
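The two-stage logic (commit to a line capacity first, then face worst-case uncertainty) can be shown with a deliberately tiny toy that enumerates capacities instead of running CCG; all cost figures and the uncertainty interval are assumptions for illustration:

```python
import numpy as np

# Toy two-stage robust sizing of one interconnection line (illustrative only).
capex_per_mw = 10.0                  # $/day per MW of line capacity (assumed)
shortfall_penalty = 50.0             # $/day per MW of unserved transfer (assumed)
demand_lo, demand_hi = 2.0, 8.0      # uncertainty interval for transfer need, MW

def worst_case_cost(cap):
    # Stage 2: the adversary picks the worst transfer demand in the interval;
    # with a pure shortfall penalty, that is the top of the interval.
    u = demand_hi
    return capex_per_mw * cap + shortfall_penalty * max(u - cap, 0.0)

# Stage 1: enumerate candidate capacities and pick the min-max one.
caps = np.linspace(0.0, 10.0, 101)
costs = [worst_case_cost(c) for c in caps]
best = caps[int(np.argmin(costs))]
print(best, min(costs))
```

Because the shortfall penalty exceeds the capacity cost, the robust plan builds out to the full worst-case demand (8 MW), paying a deterministic premium in exchange for eliminating overload exposure, which is the trade-off the paper quantifies at network scale.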

47 pages, 3785 KB  
Article
Interpretable ML Model for Predicting Magnification Factors in Open Ground-Storey Columns to Prevent Soft-Storey Collapse
by Rahul Ghosh and Rama Debbarma
Buildings 2025, 15(18), 3383; https://doi.org/10.3390/buildings15183383 - 18 Sep 2025
Abstract
Open Ground-Storey (OGS) buildings, widely adopted for functional openness, are highly vulnerable to seismic collapse due to stiffness irregularity at the ground storey (GS). The magnification factor (MF), defined as the amplification applied to GS column design forces, acts as a practical strengthening measure to enhance GS stiffness and thereby mitigate the soft storey failure mechanism. While earlier studies recommended fixed MF values, their lack of adaptability often left stiffness deficiencies unresolved. This study develops a rational framework to quantify and predict the required MF for OGS columns, enabling safe yet functionally efficient design. A comprehensive set of three-dimensional reinforced concrete OGS models was analyzed under seismic loads, covering variations in plan geometry, ground-to-upper-storey height ratio (Hr), and GS infill percentage. Iterative stiffness-based evaluations established the MF demand needed to overcome stiffness deficiencies. To streamline prediction, advanced machine learning (ML) models were applied. Among these, black-box models achieved high predictive accuracy, but Symbolic Regression (SR) offered an interpretable closed-form equation that balances accuracy with transparency, making it suitable for design practice. A sensitivity analysis confirmed the Hr as the most influential parameter, with additional contributions from other variables. Validation on additional OGS configurations confirmed the reliability of the SR model, while seismic response comparisons showed that Modified OGS (MOGS) frames with the proposed MF achieved improved stiffness, reduced lateral displacements, uniform drift distribution, and shorter fundamental periods. The study highlights the novelty of integrating interpretable ML into structural design, providing a codifiable and practical tool for resilient OGS construction. Full article
(This article belongs to the Section Building Structures)
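The iterative stiffness-based evaluation of the MF can be caricatured in a few lines; the stiffness-versus-MF relation and every number below are assumptions for illustration, not values from the study:

```python
# Toy iterative search for the magnification factor (MF). Assumed setting:
# magnifying GS column design forces yields larger sections and hence a
# monotonically stiffer ground storey.
k_upper = 100.0        # upper-storey lateral stiffness (arbitrary units, assumed)
k_gs0 = 45.0           # ground-storey stiffness before magnification (assumed)
limit = 0.7            # soft-storey criterion: require k_gs >= 0.7 * k_upper

def gs_stiffness(mf, k0=k_gs0):
    # Hypothetical monotone relation between MF and achieved stiffness.
    return k0 * mf ** 0.75

mf, step = 1.0, 0.05
while gs_stiffness(mf) < limit * k_upper:
    mf += step         # iterate until the stiffness deficiency is resolved
print(round(mf, 2))
```

The study's contribution is replacing such case-by-case iteration with a predictive closed-form SR equation, so the required MF can be read off directly from parameters like the height ratio Hr.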

22 pages, 1346 KB  
Article
Towards Digital Transformation in the Construction Industry: A Selection Framework of Building Information Modeling Lifecycle Service Providers (BLSPs)
by Guangchong Chen, Qianqin Feng, Chengcheng Jiang, Shengxi Zhang and Qiming Li
Systems 2025, 13(9), 816; https://doi.org/10.3390/systems13090816 - 18 Sep 2025
Abstract
Purpose: The construction industry is undergoing a thorough transformation driven by digital technologies, especially building information modeling (BIM). Despite BIM's significant advantages, most construction projects suffer from low BIM performance because of fragmented BIM use. To facilitate lifecycle-integrated BIM implementation, this study demonstrates that introducing BIM lifecycle service providers (BLSPs) is feasible and yields significant improvements in BIM benefits, and it proposes a customized framework for selecting BLSPs. Approach: This study combined qualitative and quantitative methods. It first used semi-structured interviews as the qualitative method to derive initial criteria for BLSP selection; 30 interviews were conducted iteratively with managers proficient and experienced in selecting BLSPs, through which 25 initial criteria were identified. Then, as the quantitative method, a questionnaire survey was used to evaluate these criteria by determining the critical ones, identifying the latent factor groupings, and assigning criteria weights, on which basis an assessment framework was established. Finally, the framework was applied to eight construction projects, highlighting its practicality and validity. Findings: The results show that project BIM service capability is a primary factor in BLSP selection. Within this factor, several specialized criteria need to be considered, such as "boundary spanning competence of the BIM manager" and "BIM service plans with lifecycle cognition." Meanwhile, "past innovative BIM service practices" and "BIM research and development (R&D)," which originate in corporate innovation capacity, were emphasized when selecting BLSPs.
Furthermore, for a holistic assessment that recognizes the peculiarities of digital BIM services, the study found that criteria such as "privacy and security" and "backup system," which reflect BIM service reliability, are also required. Originality/value: This study extends conventional partner selection frameworks in the construction sector by defining and validating one tailored to BLSP selection. The framework also serves as a reference solution that enables clients to select appropriate BLSPs. Full article
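The assessment step of such a framework, combining survey-derived criteria weights with a candidate's criterion scores, can be sketched as a simple weighted sum. The criterion names, scores, and weights below are purely illustrative assumptions, not the study's values:

```python
def assess_blsp(scores: dict, weights: dict) -> float:
    """Weighted-sum assessment score for one candidate BLSP."""
    assert set(scores) == set(weights), "every criterion needs a weight"
    total_weight = sum(weights.values())
    # Normalize by the total weight so the result stays on the scoring scale.
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Illustrative 1-5 scores for three of the identified criteria.
candidate = {
    "Boundary spanning competence of the BIM manager": 4.0,
    "BIM service plans with lifecycle cognition": 5.0,
    "Past innovative BIM service practices": 3.0,
}
weights = {
    "Boundary spanning competence of the BIM manager": 0.5,
    "BIM service plans with lifecycle cognition": 0.3,
    "Past innovative BIM service practices": 0.2,
}
score = assess_blsp(candidate, weights)
```

In practice the weights would come from the questionnaire survey (e.g., normalized factor loadings or mean importance ratings), not from the arbitrary values used here.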

20 pages, 542 KB  
Article
Diffusion-Q Synergy (DQS): A Generative Approach to Policy Optimization via Denoised Action Spaces
by Ao Li, Xinghui Zhu and Haoyi Que
Appl. Sci. 2025, 15(18), 10141; https://doi.org/10.3390/app151810141 - 17 Sep 2025
Abstract
In this paper, we propose a novel algorithm that integrates diffusion models with reinforcement learning, called Diffusion-Q Synergy (DQS). The methodology formalizes an equivalence between the iterative denoising process in diffusion models and the policy improvement mechanism in Markov Decision Processes. Central to this framework is a dual-learning mechanism: (1) a parametric Q-function is trained to evaluate noise prediction trajectories through temporal difference learning, effectively serving as a differentiable critic for action quality assessment; and (2) this learned Q-scoring function is then structurally integrated into the training objective of a conditional diffusion model, formulating a constrained optimization problem that simultaneously maximizes expected returns while minimizing policy deviation from behavioral priors. The algorithmic strength of DQS stems from its hybrid architecture, which combines (i) diffusion policy cloning for stable behavior regularization with (ii) adaptive noise rectification through Q-value-guided correction of key denoising steps; the latter is particularly effective for refining suboptimal action sequences, thereby guiding the entire diffusion trajectory toward policy optimality. Rigorous ablation studies across benchmark environments demonstrate statistically significant performance improvements (p<0.01) over baseline methods in both computational efficiency and asymptotic policy quality. The implementation has been open-sourced at AOLIGOOD/Diffusion_Q_Synergy to facilitate reproducibility. Full article
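The interplay of behavior regularization and Q-guided denoising can be illustrated with a toy one-dimensional sketch. This is not the authors' implementation: the scalar action, the known quadratic Q-function, the prior mean, and the step sizes are all assumptions made for illustration (in DQS proper, the Q-function is itself learned via temporal difference updates):

```python
# Toy sketch of Q-value-guided denoising: each denoising step mixes a
# behavior-cloning pull toward the dataset prior with a gradient step
# that ascends a known quadratic Q-function.
Q_OPT = 0.8        # action maximizing the toy Q-function (assumption)
PRIOR_MEAN = 0.2   # mean action of a hypothetical behavior dataset

def q_value(a: float) -> float:
    return -(a - Q_OPT) ** 2

def q_grad(a: float) -> float:
    return -2.0 * (a - Q_OPT)

def denoise(a0: float, steps: int = 50,
            prior_w: float = 0.1, guide_w: float = 0.2) -> float:
    """Iteratively refine an initial noisy action a0."""
    a = a0
    for _ in range(steps):
        a += prior_w * (PRIOR_MEAN - a)  # behavior regularization (policy cloning)
        a += guide_w * q_grad(a)         # Q-value-guided rectification
    return a

refined = denoise(-1.0)
```

The refined action settles between the behavioral prior and the Q-optimal action, with the balance controlled by `prior_w` and `guide_w`, mirroring the constrained objective of maximizing returns while limiting deviation from behavioral priors.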

23 pages, 8222 KB  
Article
Development of a Global Maximum Power Point Tracker for Photovoltaic Module Arrays Based on the Idols Algorithm
by Kuei-Hsiang Chao and Yi-Chan Kuo
Mathematics 2025, 13(18), 2999; https://doi.org/10.3390/math13182999 - 17 Sep 2025
Abstract
The main objective of this paper is to develop a maximum power point tracker (MPPT) for a photovoltaic module array (PVMA) under conditions of partial shading and sudden changes in solar irradiance. PVMAs exhibit nonlinear characteristics with respect to temperature and solar irradiance. Therefore, when some modules in the array are shaded or when the solar irradiance changes suddenly, the maximum power point (MPP) of the array will also change, and the power–voltage (P-V) characteristic curve may exhibit multiple peaks. Under such conditions, if the tracking algorithm employs a fixed step size, the time required to reach the MPP may be significantly prolonged, and the tracker may converge on a local maximum power point (LMPP). To address these issues, this paper proposes a novel MPPT technique based on the nature-inspired idols algorithm (IA). The technique adjusts the promotion value (PM) through the anti-fans weight (afw) in the iteration formula, thereby achieving global maximum power point (GMPP) tracking for PVMAs. To verify the effectiveness of the proposed algorithm, a model of a 4-series–3-parallel PVMA was first established in MATLAB (version R2024b) under both non-shading and partial shading conditions. The voltage and current of the PVMA were fed back, and the IA was then applied for GMPP tracking. The simulation results demonstrate that the proposed IA outperforms existing MPPT techniques, such as particle swarm optimization (PSO), cat swarm optimization (CSO), and the bat algorithm (BA), in terms of tracking speed, dynamic response, and steady-state performance, especially when the array is subjected to varying shading ratios and sudden changes in solar irradiance. Full article
(This article belongs to the Special Issue Evolutionary Algorithms and Applications)
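The population-based search that such swarm-style MPPT methods perform can be illustrated on a toy two-peak P-V curve. The sketch below is a generic paraphrase, not the paper's idols algorithm: the curve shape, the "idol"-style attraction toward the current best operating point, the anti-fans-style random perturbation, and all constants are assumptions:

```python
import math
import random

def pv_power(v: float) -> float:
    """Toy P-V curve under partial shading: a local peak near 10 V (~40 W)
    and the global peak near 30 V (~100 W)."""
    return (40.0 * math.exp(-((v - 10.0) ** 2) / 20.0)
            + 100.0 * math.exp(-((v - 30.0) ** 2) / 30.0))

def track_gmpp(pop_size: int = 12, iters: int = 200, seed: int = 1):
    rng = random.Random(seed)
    # Deterministic initial voltage sweep across the 0-40 V operating range.
    voltages = [i * 40.0 / (pop_size - 1) for i in range(pop_size)]
    best_v = max(voltages, key=pv_power)
    for _ in range(iters):
        for i, v in enumerate(voltages):
            afw = rng.uniform(0.0, 0.3)  # anti-fans-style perturbation weight
            step = 0.5 * (best_v - v) + afw * rng.uniform(-1.0, 1.0)
            voltages[i] = min(40.0, max(0.0, v + step))
        cand = max(voltages, key=pv_power)
        if pv_power(cand) > pv_power(best_v):
            best_v = cand  # promote a new "idol"
    return best_v, pv_power(best_v)
```

Because the best-so-far operating point is only replaced when the measured power improves, the search cannot regress, and the random perturbation keeps it from stalling on a local peak once a member samples the global basin.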

19 pages, 1509 KB  
Article
A New Two-Component Hybrid Model for Highly Right-Skewed Data: Estimation Algorithm and Application to Rainfall Data from South Tyrol, Italy
by Patrick Osatohanmwen
Mathematics 2025, 13(18), 2987; https://doi.org/10.3390/math13182987 - 16 Sep 2025
Abstract
In many real-life processes, data with high positive skewness are common, and such data often exhibit heterogeneous characteristics that a single parametric univariate probability distribution cannot adequately model. When this heterogeneity can be separated into two components, a main innovation component where the bulk of the data is centered and a tail component containing a few extreme observations, so that, without loss of generality, the data are highly right-skewed, hybrid models become a viable choice. In this paper, a new two-component hybrid model is proposed that combines the half-normal distribution, for the main innovation of positive and highly right-skewed data, with the generalized Pareto distribution (GPD), for the observations above a certain threshold. To improve the efficiency of parameter estimation for the hybrid model, an unsupervised iterative algorithm (UIA) is adopted. The hybrid model is applied to the intensity of rainfall events that triggered debris flows in the South Tyrol region of Italy. Results from Monte Carlo simulations, as well as from the application to real data, show that the UIA sharpens the estimation of the model's free parameters and yields good fits to positive and highly right-skewed data. The results are also compared with those of other two-component hybrid models and with graphical threshold selection methodologies from extreme value theory. Full article
(This article belongs to the Special Issue Advanced Statistical Applications for Practical Problems in Business)
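A minimal sketch of such a two-component spliced density, with a half-normal body below a threshold and a GPD tail above it, might look as follows. The threshold, splicing weight, and all parameter values are illustrative assumptions, and the paper's UIA estimation procedure is not reproduced here:

```python
import math

def half_normal_pdf(x: float, sigma: float) -> float:
    if x < 0:
        return 0.0
    return math.sqrt(2.0 / math.pi) / sigma * math.exp(-x * x / (2.0 * sigma * sigma))

def half_normal_cdf(x: float, sigma: float) -> float:
    return math.erf(x / (sigma * math.sqrt(2.0))) if x > 0 else 0.0

def gpd_pdf(y: float, xi: float, beta: float) -> float:
    """GPD density for an exceedance y = x - u >= 0 (shape xi != 0)."""
    if y < 0:
        return 0.0
    return (1.0 / beta) * (1.0 + xi * y / beta) ** (-1.0 / xi - 1.0)

def hybrid_pdf(x: float, u: float = 2.0, p: float = 0.9,
               sigma: float = 1.0, xi: float = 0.2, beta: float = 1.0) -> float:
    """Spliced density: mass p follows the half-normal truncated to [0, u);
    mass 1 - p follows the GPD shifted to start at the threshold u."""
    if x < u:
        return p * half_normal_pdf(x, sigma) / half_normal_cdf(u, sigma)
    return (1.0 - p) * gpd_pdf(x - u, xi, beta)
```

Renormalizing the body by its CDF at the threshold keeps the total mass at one for any choice of `u` and `p`; estimation methods like the UIA would then tune these free parameters jointly rather than fixing them a priori.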
