Search Results (1,033)

Search Parameters:
Keywords = iteration procedure

27 pages, 21927 KB  
Article
Rapid Identification Method for Surface Damage of Red Brick Heritage in Traditional Villages in Putian, Fujian
by Linsheng Huang, Yian Xu, Yile Chen and Liang Zheng
Coatings 2025, 15(10), 1140; https://doi.org/10.3390/coatings15101140 - 2 Oct 2025
Abstract
Red bricks serve as an important material for load-bearing or enclosing structures in traditional architecture and are widely used in construction projects both domestically and internationally. Fujian red bricks, due to geographical, trade, and immigration-related factors, have spread to Taiwan and various regions in Southeast Asia, giving rise to distinctive red brick architectural complexes. To further investigate the types of damage, such as cracking and missing bricks, that occur in traditional red brick buildings due to multiple factors, including climate and human activities, this study takes Fujian red brick buildings as its research subject. It employs the YOLOv12 rapid detection method to support structural assessment, damage-type detection, and damage localization for surface damage in red brick building materials. The experiment proceeded through the following steps: on-site photo collection, slice annotation, creation of an image training set, iterative model training, accuracy analysis, and verification of the experimental results. On this basis, the causes of the damage types and corresponding countermeasures were analyzed. The objective of this study is to use computer vision image recognition technology to provide practical, automated, and efficient methods for identifying damage types in red brick structures, particularly physical and mechanical structural damage that severely threatens the overall structural safety of the building. This model reduces the complex manual processes typically involved, thereby improving work efficiency. It enables customized intervention strategies with minimal impact and improved timeliness for the maintenance, repair, and preservation of red brick buildings, further advancing the practical application of intelligent protection for architectural heritage.
(This article belongs to the Section Surface Characterization, Deposition and Modification)
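Detection pipelines of this kind score candidate boxes against labeled ground truth during the accuracy-analysis step, typically via intersection-over-union (IoU). A minimal sketch of that metric (illustrative only, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted damage box usually counts as correct when IoU > 0.5
# against a labeled box:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.333…
```

Precision and recall for the trained detector then follow by counting matched and unmatched boxes at the chosen threshold.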

18 pages, 1612 KB  
Article
Theoretical Method for Calculating the Second-Order Effect and Reinforcement of Reinforced Concrete Box Section Columns
by Lu Li, Gang Chen, Donghua Zhou and Xuefeng Guo
Buildings 2025, 15(19), 3528; https://doi.org/10.3390/buildings15193528 - 1 Oct 2025
Abstract
Calculating the second-order effect and reinforcement of reinforced concrete box section columns involves both geometric and material nonlinearity. It requires integration and iterative solutions and is inconvenient in practical applications; moreover, China's "Code for Design of Concrete Structures" (GB 50010-2010) uses the same formula as for rectangular sections when treating geometric nonlinearity. To obtain a hand-calculation method that is specific to box sections and does not require iterative procedures, a theoretical derivation is carried out at two levels: (1) at the cross-section level, strain is taken as the known variable to solve for the internal forces, which settles the calculation of the cross-section's bearing capacity; (2) at the member level, the model column method is used to calculate the second-order effects. Finally, nomograms are drawn that allow the second-order effect and reinforcement of columns to be calculated without iteration; they contain five parameters, namely the first-order bending moment, axial force, curvature, slenderness ratio, and mechanical ratio of reinforcement. One of the nomograms corresponds to the cross-section resistance, and the other to the balance of internal resistance and external effect. Compared with GB 50010-2010, the differences in the total bending moment and reinforcement ratio are within 10% and 20%, respectively. Compared with numerical calculations, the remaining examples differ by less than 10% under normal load conditions.
(This article belongs to the Special Issue Trends and Prospects in Civil Engineering Structures)
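The model column idea can be sketched numerically: the mid-height second-order eccentricity is taken as e2 ≈ (1/r)·l0²/c, with c ≈ 10 under the common nominal-curvature assumption (c is an assumption of this sketch; the paper derives its own relations). A toy calculation:

```python
def total_moment(m1, n, curvature, l0, c=10.0):
    """Second-order-augmented moment in the spirit of the model column method:
    M = M1 + N * e2, with e2 = (1/r) * l0**2 / c (c ~ 10 is assumed here)."""
    e2 = curvature * l0 ** 2 / c
    return m1 + n * e2

# Illustrative numbers: M1 = 120 kN*m, N = 800 kN, 1/r = 4e-3 1/m, l0 = 6 m
print(total_moment(120.0, 800.0, 4e-3, 6.0))  # → 131.52 kN*m
```

The nomograms in the paper encode exactly this kind of relation graphically, so the designer reads off the total moment instead of iterating.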

21 pages, 2419 KB  
Article
Application Features of a VOF Method for Simulating Boiling and Condensation Processes
by Andrey Kozelkov, Andrey Kurkin, Andrey Puzan, Vadim Kurulin, Natalya Tarasova and Vitaliy Gerasimov
Algorithms 2025, 18(10), 604; https://doi.org/10.3390/a18100604 - 26 Sep 2025
Abstract
This article presents the results of a study on the possibility of using a single-speed multiphase model with free surface allowance for simulating boiling and condensation processes. The simulation is based on the VOF method, which allows the position of the interphase boundary to be tracked. To increase the stability of the iterative procedure for numerically solving the volume fraction transfer equations using a finite volume discretization method on arbitrary unstructured grids, the basic VOF method has been modified by writing these equations in a semi-divergent form. The models of Tanasawa, Lee, and Rohsenow are considered as models of interphase mass transfer, in which the evaporated or condensed mass depends linearly on the difference between the local temperature and the saturation temperature, up to empirical parameters. This paper calibrates these empirical parameters for each mass transfer model. The results of our study of the influence of the values of these empirical parameters on the intensity of boiling and evaporation, as well as on the dynamics of the interphase boundary, are presented. This research is based on Stefan's problem of the movement of the interphase boundary due to the evaporation of a liquid and the problem of condensation of vapor bubbles in water columns. A series of numerical experiments shows that the average error in the position of the interfacial boundary for the Tanasawa and Lee models does not exceed 3–6%. For the Rohsenow model, the result is somewhat worse, since the interfacial boundary moves faster than predicted by the analytical formulas. To investigate the possibility of condensation modeling, the results of a numerical solution of the problem of a rising, condensing vapor bubble are considered. A numerical assessment of the bubble's position in space and of the shape and dynamics of its diameter over time is carried out using the VOF method, taking the free surface into account. It is shown that the Tanasawa model has the highest accuracy for modeling the condensation process with a free-surface VOF method, while the Rohsenow model is the most unstable and prone to deforming the bubble shape. At the same time, the dynamics of bubble ascent are captured by all three models. The results obtained confirm the fundamental possibility of using a VOF method to simulate boiling and condensation processes while accounting for the dynamics of the free surface. They also reveal a limitation of the studied phase-transition models: optimal values of the empirical parameters must be selected individually for each specific task.
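The Lee model referenced above prescribes a volumetric mass source proportional to the local superheat or subcooling. A sketch with illustrative property values (the relaxation coefficient r_coef is precisely the empirical parameter such studies calibrate):

```python
def lee_mass_transfer(temp, t_sat, alpha_l, rho_l, alpha_v, rho_v, r_coef):
    """Lee-model volumetric mass source, kg/(m^3 s): positive for evaporation
    (liquid -> vapor), negative for condensation. alpha_* are phase volume
    fractions, rho_* densities; r_coef is the empirical relaxation parameter."""
    if temp >= t_sat:
        return r_coef * alpha_l * rho_l * (temp - t_sat) / t_sat
    return -r_coef * alpha_v * rho_v * (t_sat - temp) / t_sat

# 5 K of superheat in a cell half-filled with liquid water (illustrative values):
source = lee_mass_transfer(378.15, 373.15, 0.5, 958.0, 0.5, 0.6, 100.0)
```

Choosing r_coef too small smears the interface; too large destabilizes the iterative solution, which is why the calibration per task matters.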

7 pages, 1072 KB  
Proceeding Paper
Selective Intersection Flow: A Lightweight Optical Flow Algorithm for Micro Drones
by Che Liu, Chen-Fu Yeh and Chung-Chuan Lo
Eng. Proc. 2025, 108(1), 47; https://doi.org/10.3390/engproc2025108047 - 22 Sep 2025
Abstract
In this study, selective intersection flow (SIF), a lightweight optical flow algorithm, was used to enhance efficiency and accuracy by filtering out non-contributive pixels. SIF, derived from the differential category of algorithms, computes optical flow by analyzing intersections of the equations from selected pixels rather than solving for all pixels. It replaces warping with a minimal computational procedure for the initial flow estimate and employs a sliding window for optimized single-core performance. SIF runs 1.7–1.8 times faster and achieves 1.2–1.4 times higher accuracy than a single iteration of the Lucas–Kanade method, showing promise for real-time micro drone navigation.
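Differential methods of this family start from the brightness-constancy constraint Ix·u + Iy·v + It = 0 over a set of pixels. This sketch shows the baseline least-squares (Lucas–Kanade-style) solve that SIF is compared against, not SIF's intersection-selection logic:

```python
def lucas_kanade_flow(ix, iy, it):
    """One differential optical-flow step: least-squares solution of
    ix*u + iy*v + it = 0 over selected pixels, via the 2x2 normal equations."""
    a11 = sum(x * x for x in ix)
    a12 = sum(x * y for x, y in zip(ix, iy))
    a22 = sum(y * y for y in iy)
    b1 = -sum(x * t for x, t in zip(ix, it))
    b2 = -sum(y * t for y, t in zip(iy, it))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Synthetic gradients consistent with a true flow of (u, v) = (1, 2):
print(lucas_kanade_flow([1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [-1.0, -2.0, -3.0]))
# → (1.0, 2.0)
```

Dropping pixels whose constraint lines are nearly parallel (non-contributive intersections) is the kind of filtering SIF applies on top of this.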

25 pages, 2438 KB  
Article
Interior Point-Driven Throughput Maximization for TS-SWIPT Multi-Hop DF Relays: A Log Barrier Approach
by Yang Yu, Xiaoqing Tang and Guihui Xie
Sensors 2025, 25(18), 5901; https://doi.org/10.3390/s25185901 - 21 Sep 2025
Abstract
This paper investigates a simultaneous wireless information and power transfer (SWIPT) decode-and-forward (DF) relay network, where a source node transmits data to a destination node through the assistance of multi-hop passive relays. We employ the time-switching (TS) protocol, enabling the relays to harvest energy from the signal received from the previous hop to support data forwarding. We first prove that the system throughput monotonically increases with the transmit power of the source node. Next, by employing logarithmic transformations, we convert the non-convex problem of finding the TS ratios at each relay that maximize the system throughput into a convex optimization problem. Taking into account the convergence rate, the computational complexity per iteration, and robustness, we select the log barrier method, a type of interior point method, to solve this convex problem, and we provide a detailed implementation procedure. The simulation results validate the optimality of the proposed method and demonstrate its applicability to practical communication systems. For instance, the proposed scheme achieves 1437.3 bps throughput at 40 dBm maximum source power in a 2-relay network, 278.6% higher than the scheme with the TS ratio fixed at 0.75 (379.68 bps). Moreover, it converges within 1.36 ms of computation time for 5 relays, 6 orders of magnitude faster than exhaustive search (1730 s).
(This article belongs to the Section Communications)
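The log barrier method replaces inequality constraints g_i(x) ≤ 0 with a penalty −μ·Σ log(−g_i(x)) and drives μ → 0. A one-dimensional toy (not the paper's throughput problem): minimize (x−2)² subject to x ≤ 1, whose constrained optimum is x = 1:

```python
def barrier_step(mu, lo=-10.0, hi=1.0 - 1e-12):
    """Minimize (x-2)**2 - mu*log(1 - x) over x < 1 by bisecting its
    derivative 2*(x-2) + mu/(1-x), which is increasing on (-inf, 1)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 2 * (mid - 2) + mu / (1 - mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def log_barrier(mu=1.0, shrink=0.1, rounds=8):
    """Outer loop of the barrier method: solve, then shrink mu."""
    x = None
    for _ in range(rounds):
        x = barrier_step(mu)
        mu *= shrink
    return x

print(log_barrier())  # approaches the constrained optimum x = 1
```

Each outer round stays strictly feasible (x < 1), which is the defining property of interior point methods; the paper applies the same template to the convexified TS-ratio problem.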

17 pages, 8532 KB  
Article
An Effective Two-Step Procedure Allowing the Retrieval of the Non-Redundant Spherical Near-Field Samples from the 3-D Mispositioned Ones
by Francesco D'Agostino, Flaminio Ferrara, Claudio Gennarelli, Rocco Guerriero, Massimo Migliozzi and Luigi Pascarella
Sensors 2025, 25(18), 5626; https://doi.org/10.3390/s25185626 - 9 Sep 2025
Abstract
In this article, a novel procedure is developed to properly handle the 3-D mispositioning of the scanning probe in near-field to far-field (NFtFF) transformations with spherical scanning for quasi-planar antennas under test, which make use of a non-redundant (NR) number of samples. It proceeds in two stages. In the first, a phase correction technique, named spherical wave correction, is applied to compensate for the phase shifts of the collected NF samples that do not lie on the measurement sphere, owing to mechanical defects of the arc or to inaccuracy of the robotic arm driving the probe in the considered NF facility. Once the phase shifts have been compensated, the recovered NF samples lie on the prescribed spherical surface, but their positions differ from those dictated by the adopted NR representation because of imprecise control and/or inaccuracy of the positioning system; the resulting sampling arrangement is thus affected by 2-D mispositioning errors. Accordingly, in the second stage, an iterative procedure restores the NF samples at their exact locations from those determined in the first stage. Once the correct sampling arrangement has been retrieved from the 3-D mispositioned one, an optimal sampling interpolation formula is employed to obtain the massive input NF data required by the classical spherical NFtFF transformation technique. Numerical results, showing the precision of the NF and FF reconstructions, assess the efficacy of the developed procedure.
(This article belongs to the Special Issue Recent Advances in Antenna Measurement Techniques)
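The first-stage idea can be illustrated on an ideal spherical wave: a sample taken at radius r′ instead of the nominal r carries an extra path phase, which can be undone by multiplication with the conjugate phase factor. This single-frequency sketch also rescales the 1/r spreading so the round trip is exact; it is an idealization, not the paper's full correction:

```python
import cmath

K = 2 * cmath.pi / 0.03  # wavenumber for a 10 GHz tone (3 cm wavelength)

def spherical_sample(r):
    """Ideal spherical-wave sample exp(-j*k*r)/r measured at radius r."""
    return cmath.exp(-1j * K * r) / r

def correct_to_sphere(sample, r_actual, r_nominal):
    """Compensate a sample taken off the nominal sphere: undo the extra
    path phase and the 1/r spreading (simplified spherical wave correction)."""
    return sample * (r_actual / r_nominal) * cmath.exp(1j * K * (r_actual - r_nominal))

# A probe 2 mm off the 1 m sphere; after correction the sample matches
# the ideal on-sphere value:
v = correct_to_sphere(spherical_sample(1.002), 1.002, 1.0)
```

After this step only the tangential (2-D) position errors remain, which is what the iterative second stage addresses.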

19 pages, 1115 KB  
Article
Shaping the Future of DHT Assessment: Insights on Industry Challenges, Developer Needs, and a Harmonized, European HTA Framework
by Fruzsina Mezei, Emmanouil Tsiasiotis, Michele Basile, Ilaria Sciomenta, Elena Maria Calosci, Debora Antonini, Adam Lukacs, Rossella Di Bidino, Americo Cicchetti and Dario Sacchini
J. Mark. Access Health Policy 2025, 13(3), 46; https://doi.org/10.3390/jmahp13030046 - 4 Sep 2025
Abstract
Introduction: Market access, pricing, and reimbursement of digital health technologies (DHTs) in Europe are significantly challenged by regulatory fragmentation and various assessment methodologies. Understanding the challenges and priorities of technology developers is essential for developing effective and relevant policy responses. This study explores perceived barriers and developer-driven priorities to inform the development of a harmonized health technology assessment (HTA) framework under the EDiHTA project. Methods: A mixed-methods approach was adopted, including a scoping review to identify key challenges, a survey of 20 DHT developers, and interviews and focus groups with 29 industry representatives from startups to multinational companies across 10 European countries during 2024. Results: Key challenges included a lack of transparency in reimbursement processes, fragmented HTA requirements, and misalignment between traditional evidence models and the agile development of DHTs. Developers highlighted the need to integrate real-world evidence, consider usability and implementation factors, and provide structured, lifecycle-based guidance. Financial barriers and procedural burdens were particularly significant for small and medium-sized enterprises. Conclusions: These findings highlight the need for an HTA framework that reflects the iterative nature of digital development, integrates real-world evidence, and reduces uncertainty for developers. The EDiHTA project aims to respond to these challenges by building a harmonized and flexible approach that aligns with the goals of the European HTA Regulation.
(This article belongs to the Collection European Health Technology Assessment (EU HTA))

18 pages, 1414 KB  
Article
Increasing Measurement Agreement Between Different Instruments in Sports Environments: A Jump Height Estimation Case Study
by Chiara Carissimo, Annalisa D’Ermo, Angelo Rodio, Cecilia Provenzale, Gianni Cerro, Luigi Fattorini and Tommaso Di Libero
Sensors 2025, 25(17), 5354; https://doi.org/10.3390/s25175354 - 29 Aug 2025
Abstract
The assessment of physical quantity values, especially in sports-related activities, is critical for evaluating the performance and fitness level of athletes. In real-world applications, motion analysis tools are often employed to assess motor performance. When the methods used to calculate a given quantity of interest differ, the instruments may output different values. There is therefore a need for a coherent final measurement that combines the different methodologies used by the instruments and allows results to be compared homogeneously. These tools vary in their measurement capabilities and in the physical principles underlying their measurement procedures, and the resulting differences can lead to non-uniform evaluation metrics, making a fair comparison impracticable. This paper provides a possible solution by implementing an iterative approach that works on two measurement time series acquired by two different instruments, focused specifically on jump height estimation. In the analyzed case study, two instruments estimate jump height using two different technologies: inertial and vision-based. In the first case, the measurement depends on the movement of the center of gravity during the jump, while in the second case, the jump height is derived from the maximum ground–foot distance during the jump. These approaches can clearly yield different values, even for the same jump test, because of their different observation points. The developed methodology offers three options: (i) mapping the inertial values onto the vision-based reference system; (ii) mapping the vision-based values onto the inertial reference system; (iii) determining a comprehensive measurement that incorporates both contributions, thus making measurements comparable in time (performance progression) and space (comparison among subjects), eventually adopting only one of the analyzed instruments and applying the transformation algorithm to obtain the final measurement value.
(This article belongs to the Special Issue Sensors Technologies for Measurements and Signal Processing)
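Options (i) and (ii) amount to learning a transformation from one instrument's readings to the other's reference system. The paper's approach is iterative; a plain ordinary-least-squares linear map (a simplified stand-in, with made-up jump heights) illustrates the idea:

```python
def fit_linear_map(x, y):
    """Ordinary least squares y ~ a*x + b, mapping one instrument's readings
    onto the other's reference system (a simple stand-in for the paper's
    iterative transformation)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Hypothetical paired jump heights (cm) for the same jumps:
inertial = [30.1, 34.8, 28.5, 41.0]
vision = [31.9, 36.7, 30.2, 43.3]
a, b = fit_linear_map(inertial, vision)
```

Once fitted, a single instrument can be used in the field and its values mapped through (a, b) to remain comparable with past vision-based measurements.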

20 pages, 948 KB  
Article
High-Accuracy Classification of Parkinson’s Disease Using Ensemble Machine Learning and Stabilometric Biomarkers
by Ana Carolina Brisola Brizzi, Osmar Pinto Neto, Rodrigo Cunha de Mello Pedreiro and Lívia Helena Moreira
Neurol. Int. 2025, 17(9), 133; https://doi.org/10.3390/neurolint17090133 - 26 Aug 2025
Abstract
Background: Accurate differentiation of Parkinson’s disease (PD) from healthy aging is crucial for timely intervention and effective management. Postural sway abnormalities are prominent motor features of PD. Quantitative stabilometry and machine learning (ML) offer a promising avenue for developing objective markers to support the diagnostic process. This study aimed to develop and validate high-performance ML models to classify individuals with PD and age-matched healthy older adults (HOAs) using a comprehensive set of stabilometric parameters. Methods: Thirty-seven HOAs (mean age 70 ± 6.8 years) and 26 individuals with idiopathic PD (Hoehn and Yahr stages 2–3, on medication; mean age 66 ± 2.9 years), all aged 60–80 years, participated. Stabilometric data were collected using a force platform during quiet stance under eyes-open (EO) and eyes-closed (EC) conditions, from which 34 parameters reflecting the time- and frequency-domain characteristics of center-of-pressure (COP) sway were extracted. After data preprocessing, including mean imputation for missing values and feature scaling, three ML classifiers (Random Forest, Gradient Boosting, and Support Vector Machine) were hyperparameter-tuned using GridSearchCV with three-fold cross-validation. An ensemble voting classifier (soft voting) was constructed from these tuned models. Model performance was rigorously evaluated using 15 iterations of stratified train–test splits (70% train and 30% test) and an additional bootstrap procedure of 1000 iterations to derive reliable 95% confidence intervals (CIs). Results: Our optimized ensemble voting classifier achieved excellent discriminative power, distinguishing PD from HOAs with a mean accuracy of 0.91 (95% CI: 0.81–1.00) and a mean Area Under the ROC Curve (AUC ROC) of 0.97 (95% CI: 0.92–1.00). Importantly, feature analysis revealed that anteroposterior sway velocity with eyes open (V-AP) and total sway path with eyes closed (TOD_EC, calculated using COP displacement vectors from its mean position) are the most robust and non-invasive biomarkers for differentiating the groups. Conclusions: An ensemble ML approach leveraging stabilometric features provides a highly accurate, non-invasive method to distinguish PD from healthy aging and may augment clinical assessment and monitoring.
(This article belongs to the Section Movement Disorders and Neurodegenerative Diseases)
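The soft-voting rule used by the ensemble simply averages the class-probability vectors of the member models and picks the argmax (scikit-learn's VotingClassifier with voting='soft' implements the same averaging). A pure-Python sketch with made-up probabilities:

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several classifiers and return
    (argmax class index, averaged probabilities): the 'soft voting' rule."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Hypothetical (HOA, PD) probabilities from RF, GB, and SVM for one subject:
label, avg = soft_vote([[0.40, 0.60], [0.30, 0.70], [0.55, 0.45]])
print(label, avg)  # class 1 (PD) wins with average probability ≈ 0.583
```

Averaging probabilities (rather than hard labels) lets a confident model outvote two lukewarm ones, which is typically why soft voting edges out hard voting on small clinical datasets.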

29 pages, 5184 KB  
Article
Enhanced Optimization Strategies for No-Wait Flow Shop Scheduling with Sequence-Dependent Setup Times: A Hybrid NEH-GRASP Approach for Minimizing the Total Weighted Flow Time and Energy Cost
by Hafsa Mimouni, Abdelilah Jalid and Said Aqil
Sustainability 2025, 17(17), 7599; https://doi.org/10.3390/su17177599 - 22 Aug 2025
Abstract
Efficient production scheduling is a key challenge in industrial operations and continues to attract significant interest within the field of operations research. This paper investigates a range of methodological approaches designed to solve the permutation flow shop scheduling problem (PFSP) with sequence-dependent setup times (SDST). The main objective is to minimize the total weighted flow time (TWFT) while ensuring a no-wait production environment. The proposed solution strategy combines a mixed integer linear programming (MILP) formulation, heuristics, and their hybridization. The heuristics include an advanced greedy randomized adaptive search procedure (GRASP) based on a priority rule and a hybrid GRASP-NEH (HGRASP), in which the Nawaz-Enscore-Ham (NEH) heuristic generates initial solutions that iterative global and local search methods then refine, extending exploration capability and improving solution quality. These approaches were validated through a comprehensive set of experiments across diverse instance sizes, which demonstrated the efficiency of HGRASP: its results closely matched those of the exact MILP approach. Statistical analysis via the Friedman test (χ² = 46.75, p = 7.04 × 10⁻¹¹) confirmed significant performance differences among MILP, GRASP, and HGRASP. While MILP guarantees theoretical optimality, its practical effectiveness was limited by the imposed computational time constraints, whereas HGRASP consistently achieved near-optimal solutions with superior computational efficiency across diverse instance sizes.
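The NEH building block works by ordering jobs by decreasing total work and inserting each one at the position that minimizes the partial objective. This sketch is the classic makespan version on a plain flow shop, not the no-wait, setup-time, TWFT variant the paper tackles:

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine for sequence seq;
    p[j][m] is the processing time of job j on machine m."""
    m_count = len(p[0])
    c = [0] * m_count
    for j in seq:
        c[0] += p[j][0]
        for m in range(1, m_count):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]

def neh(p):
    """Classic NEH: order jobs by decreasing total work, then insert each job
    at the position that minimizes the partial makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

# Three jobs on two machines:
print(neh([[3, 4], [2, 2], [5, 1]]))
```

HGRASP would then take such a sequence as the starting point for its randomized global and local search phases.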

32 pages, 4168 KB  
Article
An AI-Driven News Impact Monitoring Framework Through Attention Tracking
by Anastasia Katsaounidou, Paris Xylogiannis, Thomai Baltzi, Theodora Saridou, Symeon Papadopoulos and Charalampos Dimoulas
Societies 2025, 15(8), 233; https://doi.org/10.3390/soc15080233 - 21 Aug 2025
Abstract
The paper presents the motivation, development, and evaluation of an AI-driven framework for media stream impact analysis at the consumption end, employing user reaction monitoring through attention tracking (i.e., eye and mouse tracking). The adopted methodology elaborates on software and system engineering processes, combining elements of rapid prototyping models with interdisciplinary participatory design and evaluation, and rests on the foundation of information systems design science research to enable continuous refinement through repeated cycles of stakeholder engagement, feedback, technical iteration, and validation. A dynamic Form Builder has been implemented to supplement these tools, allowing the construction and management of pre- and post-intervention questionnaires and thus helping associate the collected data with the respective tracking maps. The paper begins with a detailed presentation of the tools' implementation, the underlying technology, and the offered functionalities, emphasizing the perception of tampered visual content, which is used as a pilot evaluation and validation case. The significance of the research lies in the practical applications of AI-assisted monitoring for effectively analyzing and understanding media dynamics and user reactions. The resulting iMedius framework introduces an integration of innovative multidisciplinary procedures that bring together research instruments from the social sciences and multimodal analysis tools from the digital world.

17 pages, 2007 KB  
Article
A General Numerical Method to Calculate Cutter Profiles for Formed Milling of Helical Surfaces with Machinability Analysis
by Po Hu, Jingbo Zhou and Yuehua Li
Appl. Sci. 2025, 15(16), 9077; https://doi.org/10.3390/app15169077 - 18 Aug 2025
Abstract
Formed milling is one of the most commonly used methods for machining the helical surfaces of various screw rotors. The profile of a formed cutter is designed according to the profile of the helical surface, which is usually represented by discrete points. The most widely used analytical method is rather complex, and singular points arise easily. To obtain a reliable cutter profile and simplify the solution procedure, a general numerical method suited to rotors with an arbitrary tooth profile is proposed. The proposed method does not need to establish and solve the complex nonlinear contact equation and can determine the contact point accurately. First, a series of intersection planes perpendicular to the revolving axis of the cutter is constructed. The contact points of the selected tooth curves with each intersection plane are then found using the subdivision method. In this way, the plane–curve intersection is reduced to a straight line–curve intersection that can easily be solved via Newton iteration. Meanwhile, the machinability related to the profile of the formed cutter can also be analyzed. Two cutter profiles are used to validate the proposed method. The cutter profiles generated by the proposed method are compared with those generated by the analytical method, and the results indicate that the accuracy and computational efficiency increase significantly. Furthermore, the proposed method can also be applied to the design of formed grinding wheels.
(This article belongs to the Section Mechanical Engineering)
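Once subdivision has bracketed an intersection, the remaining straight line-curve intersection is a scalar root-finding problem, which Newton iteration polishes quickly. A self-contained sketch on a hypothetical curve (not one of the paper's tooth profiles):

```python
def newton(f, df, t0, tol=1e-12, max_iter=50):
    """Newton iteration for f(t) = 0, used after a bracketing step has
    supplied a good starting guess t0."""
    t = t0
    for _ in range(max_iter):
        step = f(t) / df(t)
        t -= step
        if abs(step) < tol:
            break
    return t

# Hypothetical example: where does the curve z(t) = t**3 + t cross z = 10?
root = newton(lambda t: t**3 + t - 10, lambda t: 3 * t**2 + 1, 2.0)
print(root)  # → 2.0
```

The subdivision step matters because Newton converges quadratically only from a good initial guess; starting it inside a tight bracket avoids the singular points that trouble the analytical method.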

14 pages, 2664 KB  
Article
On Sn Iteration for Fixed Points of (E)-Operators with Numerical Analysis and Polynomiography
by Cristian Ciobanescu
Mathematics 2025, 13(16), 2625; https://doi.org/10.3390/math13162625 - 15 Aug 2025
Abstract
The first part of this study concerns the search for fixed points of (E)-operators (Garcia-Falset operators) in the Banach space setting by means of a three-step iteration procedure. The main results establish conclusions on the weak and strong convergence of the considered iterative scheme toward a fixed point. In addition, the usefulness of the Sn iterative scheme is once again revealed by demonstrating, through numerical simulations, the advantages of using it to solve the problem of the maximum modulus of complex polynomials, compared with standard algorithms such as Newton, Halley, or Kalantari's so-called B4 iteration.
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications: 3rd Edition)
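Three-step schemes of this family apply the operator T through a cascade of convex combinations. The following is a generic averaged three-step iteration for illustration only; the exact Sn scheme of the paper may weight or order the steps differently:

```python
import math

def three_step_iterate(t_map, x0, alpha=0.5, beta=0.5, gamma=0.5, iters=100):
    """Generic three-step averaged iteration (illustrative, not necessarily
    the paper's Sn scheme):
        z = (1 - gamma) * x + gamma * T(x)
        y = (1 - beta)  * z + beta  * T(z)
        x = (1 - alpha) * y + alpha * T(y)
    """
    x = x0
    for _ in range(iters):
        z = (1 - gamma) * x + gamma * t_map(x)
        y = (1 - beta) * z + beta * t_map(z)
        x = (1 - alpha) * y + alpha * t_map(y)
    return x

# T(x) = cos(x) is nonexpansive on [0, 1]; the scheme homes in on its fixed point:
x_star = three_step_iterate(math.cos, 1.0)
print(x_star)  # ≈ 0.739085, where cos(x) = x
```

For nonexpansive operators, the averaging coefficients are what buy convergence where plain Picard iteration x = T(x) may fail.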
26 pages, 819 KB  
Article
Critical Success Factors in Agile-Based Digital Transformation Projects
by Meiying Chen, Xinyu Sun and Meixi Liu
Systems 2025, 13(8), 694; https://doi.org/10.3390/systems13080694 - 13 Aug 2025
Abstract
Digital transformation (DT) requires organizations to navigate complex technological and organizational changes, often under conditions of uncertainty. While agile methodologies are widely adopted to address the iterative and cross-functional nature of DT, limited attention has been paid to identifying critical success factors (CSFs) from a socio-technical systems (STS) perspective. This study addresses that gap by integrating and prioritizing CSFs as interdependent elements within a layered socio-technical framework. Drawing on a systematic review of 17 empirical and conceptual studies, we adapt Chow and Cao’s agile success model and validate a set of 14 CSFs across five domains—organizational, people, process, technical, and project—through a Delphi-informed Analytic Hierarchy Process (AHP). The findings reveal that organizational and people-related enablers, particularly management commitment, team capability, and organizational environment, carry the greatest weight in agile-based DT contexts. These results inform a three-layered framework—comprising organizational readiness, agile delivery, and project artefacts—which reflects how social, technical, and procedural factors interact systemically. The study contributes both theoretically, by operationalizing STS theory in the agile DT domain, and practically, by providing a prioritized CSF model to guide strategic planning and resource allocation in transformation initiatives. Full article
(This article belongs to the Special Issue Advancing Project Management Through Digital Transformation)
18 pages, 2279 KB  
Article
MvAl-MFP: A Multi-Label Classification Method on the Functions of Peptides with Multi-View Active Learning
by Yuxuan Peng, Jicong Duan, Yuanyuan Dan and Hualong Yu
Curr. Issues Mol. Biol. 2025, 47(8), 628; https://doi.org/10.3390/cimb47080628 - 6 Aug 2025
Abstract
The rapid expansion of peptide libraries and the increasing functional diversity of peptides have highlighted the significance of predicting the multifunctional properties of peptides in bioinformatics research. Although supervised learning methods have made advancements, they typically require substantial amounts of labeled data to yield accurate predictions. This study presents MvAl-MFP, a multi-label active learning approach that incorporates multiple feature views of peptides. The method exploits the natural multi-view representation of amino acid sequences, satisfies the requirements of the query-by-committee (QBC) active learning paradigm, and significantly reduces the number of labeled samples needed to train high-performing models. First, MvAl-MFP generates nine distinct feature views for a small set of labeled peptide amino acid sequences, drawing on various peptide characteristics, including amino acid composition, physicochemical properties, and evolutionary information. Then, a multi-label classifier is trained on the labeled samples in each independent view. Next, a QBC strategy based on the average entropy of the predictions of all trained classifiers selects the most valuable unlabeled samples, which are submitted to human experts for labeling by wet-lab experiments. Finally, this procedure is iterated, with the labeled set constantly expanding and the classifiers updating, until the stopping criterion is met. The experiments are conducted on a dataset of multifunctional therapeutic peptides annotated with eight functional labels, including anti-bacterial, anti-inflammatory, and anti-cancer properties. The results clearly demonstrate the superiority of the proposed MvAl-MFP method, which rapidly improves prediction performance while labeling only a small number of samples. It provides an effective tool for more precise multifunctional peptide prediction while lowering the cost of wet-lab experiments. Full article
(This article belongs to the Special Issue Challenges and Advances in Bioinformatics and Computational Biology)
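The QBC selection step described in the abstract, choosing the unlabeled samples whose predictions have the highest entropy averaged over committee members and labels, can be sketched as follows. The committee probabilities, committee size, and label count below are invented for the example; the real method builds its committee from classifiers trained on nine feature views:

```python
import math

def binary_entropy(p, eps=1e-12):
    """Shannon entropy (in bits) of a Bernoulli probability p."""
    p = min(max(p, eps), 1 - eps)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_queries(committee_probs, k):
    """committee_probs[m][i] is the per-label probability list that
    committee member m assigns to unlabeled sample i. Returns the indices
    of the k samples with the highest entropy averaged over all members
    and labels (i.e. the samples the committee is least certain about)."""
    n_members = len(committee_probs)
    n_samples = len(committee_probs[0])
    scores = []
    for i in range(n_samples):
        total, count = 0.0, 0
        for m in range(n_members):
            for p in committee_probs[m][i]:
                total += binary_entropy(p)
                count += 1
        scores.append(total / count)
    return sorted(range(n_samples), key=lambda i: scores[i], reverse=True)[:k]

# Two hypothetical committee members, three samples, two labels each.
# Sample 1 is the most uncertain (probabilities close to 0.5).
probs = [
    [[0.95, 0.05], [0.5, 0.5], [0.9, 0.4]],
    [[0.9, 0.1], [0.6, 0.4], [0.8, 0.5]],
]
picked = select_queries(probs, k=1)
```

The selected indices would then be sent for wet-lab labeling, added to the labeled set, and the per-view classifiers retrained before the next query round.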