Search Results (8,155)

Search Parameters:
Keywords = twinning

17 pages, 3013 KB  
Article
Step-Gradient Twin-Column Recycling Chromatography for Efficient Integrated Purification of Fidaxomicin Based on Complementary Binary Solvent Selectivity
by Haolei Wu, Feng Wei and Huagang Ni
Separations 2026, 13(5), 131; https://doi.org/10.3390/separations13050131 - 25 Apr 2026
Abstract
Crude fidaxomicin contains difficult-to-separate impurities, and conventional dual-step purification usually requires intermediate concentration and transfer, which increases process complexity and may aggravate product loss or degradation. To address this challenge, this study exploits the complementary selectivity of methanol/water (80/20, v/v) and acetonitrile/water (70/30, v/v) binary mobile phases and proposes two purification processes based on step-gradient twin-column recycling chromatography, namely spatial integration and system integration. In the spatial integration strategy, dual-stage separations that are conventionally performed in separate chromatographic systems are sequentially integrated into a single twin-column recycling system in combination with on-line heart-cutting, thereby eliminating intermediate off-line processing steps. In contrast, the system integration strategy merges the two binary mobile phases in defined proportions to construct a single ternary mobile phase composed of methanol/acetonitrile/water (37.5/37.5/25, v/v/v), enabling one-step complete separation. The results demonstrate that the spatial integration strategy, employing binary mobile-phase switching, produces fidaxomicin with a purity of 99.9%, recoveries ranging from 75.27% to 78.77%, and productivities ranging from 307.22 to 328.82 g·L−1·day−1, regardless of the switching sequence. The system integration strategy, based on one-step elution with the ternary mobile phase, achieves the same product purity of 99.9% without mobile-phase switching, with a recovery of 70.41% and a productivity of 246.33 g·L−1·day−1. These results confirm the applicability and flexibility of both integrated strategies for fidaxomicin purification, while indicating that the spatial integration strategy provides better overall preparative performance and the system integration strategy offers a simpler one-step operation. Full article
(This article belongs to the Section Chromatographic Separations)

20 pages, 10687 KB  
Systematic Review
Future Research Directions for Megaprojects on Sustainable and Smart Cities in the Construction 5.0 Era
by Didem Ugurlu Akdemir and Begum Sertyesilisik
Buildings 2026, 16(9), 1691; https://doi.org/10.3390/buildings16091691 - 25 Apr 2026
Abstract
Construction projects contributing to smart city (SC) development largely consist of megaprojects due to their complex and multidisciplinary nature and their high costs. Effective project management (PM) is essential for the implementation of these projects in the Construction 5.0 era. This study aims to systematically analyze the research trends and identify future research directions (FRDs) in construction PM for megaprojects, which are essential for the development of SCs in the Construction 5.0 era. With this aim, a systematic literature review based on the PRISMA 2020 checklist was performed through a bibliometric analysis using VOSviewer version 1.6. Studies are gathered under five main clusters (i.e., the PM cluster, the smart construction and data security cluster, the SC and technology cluster, the spatial data integration cluster, and the lifecycle cluster). It has been determined that two main nodes (i.e., SC and digital twin) are located at the center of all these clusters. As a result of the analysis, two future research directions are determined (i.e., the relationship between megaprojects and SCs and the relationship between construction project management and SCs). As the identified clusters, nodes, and future research directions are interrelated and comply with the PMBOK 7th edition performance domains, focusing on them to support construction PM performance complies with efforts to facilitate the successful implementation of megaprojects integrated with SCs. The findings demonstrate the lack of a PM model within the SC ecosystem that synchronizes all phases of megaproject construction with SCs. Thus, this study can contribute to the development of smart, sustainable, and resilient cities. Full article

25 pages, 2895 KB  
Article
Evaluation of a Hybrid Physical–LSTM Model for Air-to-Air Heat Pump Control: Insights from Multi-Day Closed-Loop Simulations in Mediterranean Climate
by Ivica Glavan, Ivan Gospić and Igor Poljak
Modelling 2026, 7(3), 81; https://doi.org/10.3390/modelling7030081 - 24 Apr 2026
Abstract
Air-to-air heat pumps are a key technology for improving energy efficiency and reducing carbon emissions in residential buildings, yet their optimal control remains challenging under real-world conditions. This study evaluates the performance of a hybrid physical–LSTM model for controlling an air-to-air heat pump in a residential building in Zadar, Croatia. The hybrid framework integrates a first-order energy balance model of the building envelope with LSTM-based temperature correction using adaptive weighting. The physical model was calibrated and validated against 52,128 real IoT measurements collected during the 2024/2025 heating season, achieving high accuracy (RMSE ≈ 0.076 °C). Rolling one-day and continuous multi-day closed-loop simulations (up to 15 days) show that the hybrid model yields slightly lower RMSE in long-term runs compared to the pure physical model. However, this apparent statistical improvement is accompanied by systematic underestimation of indoor temperature and significantly higher simulated energy consumption. The results indicate that the observed effect originates from an implicit virtual heat flux introduced by the LSTM correction, which affects thermodynamic consistency in closed-loop operation. The findings highlight that short-term error metrics such as RMSE alone are insufficient for evaluating hybrid models intended for model predictive control (MPC). The main contribution of this study is the explicit demonstration and quantification of an implicit virtual heat flux generated by the LSTM correction in closed-loop multi-day operation, which leads to misleading statistical improvements while causing significant thermodynamic inconsistency and energy overconsumption. In 15-day continuous simulations the hybrid model (ω = 0.05–0.10) caused an indoor temperature underestimation of 1.25–1.31 °C and increased simulated electricity consumption by more than 300% (316 kWh vs. 72 kWh) compared to the physical model. 
These results have direct implications for the development of reliable digital twins and model predictive control strategies in residential HVAC systems. Full article
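A minimal sketch of the hybrid prediction step described above, assuming a first-order RC energy balance for the envelope and an additive LSTM correction weighted by ω (the abstract reports ω = 0.05–0.10). The function names, R/C values, and the exact blending rule are illustrative assumptions, not the authors' implementation:

```python
def step_indoor_temp(t_in, t_out, q_heat, dt, r=0.01, c=5e6):
    # First-order energy balance of the building envelope:
    #   C * dT/dt = (T_out - T_in) / R + Q_heat
    # r [K/W] and c [J/K] are placeholder envelope parameters.
    return t_in + dt * ((t_out - t_in) / r + q_heat) / c

def hybrid_temp(t_physical, lstm_correction, omega=0.05):
    # Additive LSTM correction with weight omega (0.05-0.10 per the abstract).
    # In closed loop, a nonzero-mean correction behaves like an implicit
    # "virtual heat flux", the thermodynamic inconsistency the study quantifies.
    return t_physical + omega * lstm_correction
```

The point of the sketch is that the correction term never appears in the energy balance itself, which is why RMSE can improve while simulated energy consumption drifts.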
15 pages, 7748 KB  
Article
Effect of Mn Content on the Microstructure, Mechanical Properties, and Damping Capacity of Mn-Cu Alloys
by Bin Wu, Bibo Li, Zhaobo Wu, Fengshuang Lu, Ran Li, Xiaojun Zhang, Xinqing Zhao, Feiyu Zhao and Dongliang Zhao
Materials 2026, 19(9), 1742; https://doi.org/10.3390/ma19091742 - 24 Apr 2026
Abstract
This study investigated the influence of Mn content (70 wt.%, 75 wt.%, and 80 wt.%) on the microstructure, mechanical properties and damping capacity of Mn-Cu alloys using X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), mechanical testing and dynamic mechanical analysis (DMA). The results indicate that during cooling after aging, the Mn-Cu alloy undergoes martensitic transformation, resulting in a dual-phase structure of fcc and fct. The 70 wt.% Mn alloy exhibits a mixed-grain structure with mostly long, straight twin bands, while the 75 wt.% and 80 wt.% Mn alloys consist of fine equiaxed grains with mostly intersecting twin bands. The microstructure determines the properties of the alloy. As the Mn content increases, the mechanical properties initially increase and then decrease, and the 75 wt.% Mn alloy has the best mechanical performance (UTS = 534 MPa, YS = 263 MPa). In contrast, the damping capacity shows a decreasing trend, and the 70 wt.% Mn alloy exhibits the best damping capacity (tanδ = 0.064). The main damping peak of tanδ in Mn-Cu alloys is derived from the relaxation of the twin boundaries, and the less obvious secondary peak is the internal friction peak of martensitic transformation. Full article
14 pages, 1278 KB  
Article
Tool Geometry for the Modular Manufacturing of Hypotrochoidal Profiles Standardized According to DIN 3689 by Means of Rolling Processes
by Masoud Ziaei
Appl. Mech. 2026, 7(2), 38; https://doi.org/10.3390/applmech7020038 - 24 Apr 2026
Abstract
Despite their excellent torsional and bending strength, the economical production of hypotrochoidal profiles (H-profiles) remains an obstacle to their use. Due to the tool clearance angle, the commercially available twin-spindle turning process has limited ability to manufacture many of the profiles standardized according to DIN 3689 (Deutsches Institut für Normung). On the other hand, the manufacturing of cycloidal profiles as a non-involute special geometry using generating processes (hobbing or continuous generating grinding) depends critically on the accuracy of the tool geometry—whether a hobbing cutter or a grinding worm. Conventional tool design methods—based on approximations, involute-derived profiles, or iterative trial-and-error corrections—face fundamental limitations: unpredictable cutting force variations, elevated surface roughness, and limited process capability. However, if the exact tool geometry has been determined analytically, the same machine achieves significantly better performance. In this work, the exact tool geometry conjugated to the H-profile for profile manufacturing is determined based on the gearing law. This provides modular H-profile manufacturing without deviations. Consequently, a design concept that enables the implementation of all existing rolling processes—including gear hobbing, gear shaping, gear planing, and other variants such as gear grinding—is presented. For profile shaping of hollow contours, the transfer ratio is considered and a curve conjugated to the profile contour is determined for the tool. A CAD-based simulation shows very good consistency with the analytically determined tool geometry. Full article
21 pages, 6210 KB  
Article
Robust Path Planning via Deep Reinforcement Learning
by Daeyeol Kang, Jongyoon Park and Pileun Kim
Sensors 2026, 26(9), 2658; https://doi.org/10.3390/s26092658 - 24 Apr 2026
Abstract
Deep reinforcement learning (DRL) for autonomous mobile robot navigation faces several inherent limitations. The stochastic nature of actions generated by DRL policies can undermine performance consistency, while inefficient exploration frequently delays the learning process or prevents the discovery of optimal solutions. This research aims to enhance the robustness of path planning by addressing these challenges. To achieve this goal, we propose a hybrid approach that integrates the flexible decision-making capabilities of deep reinforcement learning with the stability of traditional path planning. The proposed model adopts the Twin Delayed Deep Deterministic Policy Gradient (TD3) network as its base. Notably, we pre-process LiDAR point cloud data to extract only essential features for the state representation, thereby preventing performance degradation from high-dimensional inputs and improving computational efficiency. Our model optimizes the learning process through two core strategies. First, it prioritizes experience data generated during training based on negative rewards, guiding the model to learn more frequently from critical failures rather than redundant successes. Second, it dynamically compares the action proposed by the TD3 network with a goal-oriented action from a classical path-planning algorithm in real time. By selecting the action with the higher estimated value, the model guides the policy toward a stable and effective trajectory from the earliest stages of training. To validate the efficacy of our approach, we conducted simulation-based experiments comparing the performance of the proposed model with existing reinforcement learning networks. To ensure statistical significance and mitigate the impact of random initialization, all reported results are averaged over 10 independent runs with different random seeds. 
The results quantitatively demonstrate that our model achieves significantly higher and more stable reward values, confirming a robust improvement in the path-planning process. Full article
(This article belongs to the Special Issue Advancements in Autonomous Navigation Systems for UAVs)
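The two core strategies in the abstract above can be sketched as follows, assuming a scalar action space and a generic critic; the priority formula and function names are illustrative assumptions, not the paper's exact implementation:

```python
def arbitrate(critic, state, rl_action, planner_action):
    # Execute whichever candidate the critic values more highly: the TD3
    # policy's proposal or the classical planner's goal-oriented action.
    if critic(state, rl_action) >= critic(state, planner_action):
        return rl_action
    return planner_action

def replay_priority(reward, eps=1e-3):
    # Weight transitions by how negative their reward was, so critical
    # failures are replayed more often than redundant successes.
    # eps keeps successful transitions sampled with small probability.
    return max(-reward, 0.0) + eps
```

With a toy critic such as `lambda s, a: -(a - 1.0) ** 2`, `arbitrate` falls back to the planner's action whenever the learned policy's proposal is valued lower, which is what stabilizes the earliest stages of training.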
51 pages, 1208 KB  
Review
Biopolymer—Nanoparticle Interactions in 3D-Printing for Biomedical Applications: Advantages, Limitations and Future Perspectives
by Miguel Muñoz-Silva, Rafaela García-Álvarez, Elena Pérez, Carla Jiménez-Jiménez and Adrián Esteban-Arranz
Polymers 2026, 18(9), 1038; https://doi.org/10.3390/polym18091038 - 24 Apr 2026
Abstract
This review comprehensively examines the incorporation of nanoparticles (NPs) into biopolymers for 3D printing in biomedical applications, integrating material design, processing strategies, and translational considerations within a unified framework. Different types of NPs are analyzed regarding their effects on mechanical reinforcement, rheological modulation, and structural organization of biopolymeric matrices. The discussion covers principal additive manufacturing technologies, including extrusion-based systems such as fused deposition modeling (FDM) and direct ink writing (DIW), vat photopolymerization, powder-bed fusion (SLS), and emerging in situ nanoparticle formation approaches, emphasizing how nanoparticle loading and surface functionalization govern yield stress, shear-thinning behavior, viscoelastic recovery, and dimensional fidelity while mitigating agglomeration and optimizing interfacial interactions. Comparative evaluation of compressive modulus, strength, toughness, crystallinity, and porosity establishes structure–property–processing relationships directly linked to printability and functional performance. Biomedical applications are addressed in tissue engineering, biosensing, controlled and targeted drug delivery, and bioimaging, highlighting the balance between bioactivity and manufacturability. Finally, critical challenges—including compatibility, reproducibility, biological safety, long-term stability, regulatory adaptation, and environmental impact—are discussed, alongside future perspectives focused on green nanomaterials, AI-driven predictive formulation design, and digital twins for real-time monitoring and quality control in nano-enabled additive manufacturing. Full article
(This article belongs to the Special Issue Functional Biopolymer Composites for Advanced Biomedical Applications)
42 pages, 3267 KB  
Systematic Review
Fiber-Optic Sensor-Based Structural Health Monitoring with Machine Learning: A Task-Oriented and Cross-Domain Review
by Yasir Mahmood, Nof Yasir, Kathryn Quenette, Gul Badin, Ying Huang and Luyang Xu
Sensors 2026, 26(9), 2641; https://doi.org/10.3390/s26092641 - 24 Apr 2026
Abstract
Structural health monitoring (SHM) plays an increasingly important role in managing aging, safety-critical infrastructure under growing environmental and operational demands. In recent years, fiber-optic sensors (FOSs) have attracted significant attention for SHM applications due to their immunity to electromagnetic interference, durability in harsh environments, multiplexing capability, and suitability for both localized and fully distributed measurements. In parallel, advances in machine learning (ML) have enabled new approaches for extracting actionable insights from large, high-dimensional sensing datasets. This paper presents a systematic review of FOS-based SHM systems integrated with ML across civil, transportation, energy, marine, and aerospace infrastructures. Following PRISMA 2020 guidelines, peer-reviewed studies were identified and synthesized to examine sensing principles, deployment configurations, data characteristics, and learning-based analytical strategies. Fiber optic technologies are categorized into point-based, quasi-distributed, and fully distributed systems, and their capabilities for capturing strain, temperature, and spatiotemporal structural responses are critically evaluated. ML approaches are examined from a task-oriented perspective, including damage detection, localization, severity assessment, environmental compensation, and prognosis, with emphasis on the alignment between sensing configurations and appropriate learning paradigms. Key challenges remain, particularly regarding large data volumes, environmental variability, limited labeled damage datasets, model generalization, and system-level integration. Emerging directions such as physics-informed and hybrid learning, transfer learning, uncertainty-aware modeling, and integration with digital twins are discussed as pathways toward more robust and scalable SHM systems. 
By jointly addressing sensing physics and data-driven intelligence, this review provides a structured reference and practical roadmap for advancing intelligent FOS-based SHM in next-generation infrastructure. Full article
(This article belongs to the Special Issue Smart Sensor Technology for Structural Health Monitoring)
14 pages, 9229 KB  
Article
Effect of Thermomechanical Processing on Grain Boundary Character Distribution and Creep Properties of SP2215 Heat-Resistant Steel
by Wen Feng, Ting Sun, Tianyu Zhao, Junjie Zhou and Zhengyu Han
Crystals 2026, 16(5), 282; https://doi.org/10.3390/cryst16050282 - 24 Apr 2026
Abstract
This study presents an application of thermomechanical processing consisting of cold rolling and subsequent annealing in SP2215 heat-resistant steel to investigate the effects of thermomechanical processing parameters on the evolution of grain boundary character distribution (GBCD) and to elucidate the relationship between GBCD and creep properties. The experimental results show that the optimal process, 10% cold rolling reduction followed by annealing at 1100 °C for 10 min, significantly increased the fraction of low-Σ coincidence site lattice (CSL) boundaries to 74.27% and effectively disrupted the connectivity of the random boundary network, as corroborated by the highest average twin-related domain (TRD) size of 42.58 μm and average number of grains per TRD of 7.28. Such a modified GBCD leads to a notable enhancement in creep performance, resulting from the induction of a high fraction of low-Σ CSL boundaries and the disruption of the random boundary network, which effectively inhibits intergranular crack initiation and propagation during creep deformation. Full article
25 pages, 6049 KB  
Article
FMEA-Guided Selective Multi-Fidelity Modeling for Computationally Efficient Digital Twin-Based Fault Detection
by Euicheol Shin, Seohee Jang, Seongwan Kim, Chan Roh, Heemoon Kim, Jongsu Kim, Daehong Lee and Hyeonmin Jeon
Machines 2026, 14(5), 480; https://doi.org/10.3390/machines14050480 - 24 Apr 2026
Abstract
Autonomous navigation technologies have been widely adopted in the automotive and aviation sectors, significantly reducing human-error-induced accidents and operational costs. However, their application to maritime systems remains limited due to the complexity of conventional propulsion systems. Electric propulsion ships, with well-defined system boundaries and accessible operational data, offer a promising platform for autonomous navigation. In this study, we propose an FMEA-guided selective multi-fidelity digital twin framework for fault detection, where model fidelity is adaptively selected between low- and high-fidelity models based on risk priority numbers derived from failure mode and effects analysis. This approach enables selective execution of computationally expensive models only under high-risk conditions, thereby improving computational efficiency. In addition, a sliding window-based algebraic aggregation method is employed to achieve lightweight and real-time fault diagnosis. The proposed framework is validated using operational sensor data from a 100 kW electric propulsion ship under multiple fault scenarios, including power supply faults and signal anomalies. Experimental results show that the proposed method reduces computational cost while maintaining stable real-time performance, compared to conventional data-driven AI-based approaches. These results demonstrate that the proposed framework provides an effective and efficient solution for enhancing the reliability and safety of autonomous ship systems. Full article
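The fidelity-selection rule described above can be sketched as follows, assuming the standard FMEA risk priority number (severity × occurrence × detection, each typically rated 1–10); the threshold value and function names are illustrative assumptions, not taken from the paper:

```python
def rpn(severity, occurrence, detection):
    # FMEA risk priority number: S x O x D on the usual 1-10 rating scales.
    return severity * occurrence * detection

def select_fidelity(severity, occurrence, detection, threshold=125):
    # Invoke the computationally expensive high-fidelity digital twin only
    # when the failure mode's risk is high; otherwise use the cheap
    # low-fidelity model. The threshold here is an illustrative assumption.
    if rpn(severity, occurrence, detection) >= threshold:
        return "high-fidelity"
    return "low-fidelity"
```

Gating the expensive model on risk rather than running it continuously is what yields the reported computational savings without sacrificing detection of high-risk faults.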

41 pages, 3214 KB  
Review
The Intelligent Home: A Systematic Review of Technological Pillars, Emerging Paradigms, and Future Directions
by Khalil M. Abdelnaby, Mohammed A. F. Al-Husainy, Mohammad O. Alhawarat, Mohamed A. Rohaim, Khairy M. Assar and Khaled A. Elshafey
Symmetry 2026, 18(5), 718; https://doi.org/10.3390/sym18050718 - 24 Apr 2026
Abstract
Home automation is undergoing a paradigm shift from connected IoT environments with rule-based control to intelligent homes exhibiting ambient intelligence and proactive adaptation. Artificial intelligence, privacy-preserving sensing, and converging connectivity standards are the primary forces driving this transition. This systematic literature review synthesizes the technological foundations, architectural developments, emerging paradigms, and socio-technical challenges characterizing the next generation of smart homes, evaluated against the original Ambient Intelligence (AmI) vision. Following PRISMA 2020 guidelines, searches were conducted across four databases—IEEE Xplore, ACM Digital Library, Scopus, and Web of Science—covering studies published between January 2020 and June 2025. From 3450 records, 113 studies were selected through a two-reviewer screening procedure with inter-rater reliability assessments. Quality was assessed using a modified JBI Critical Appraisal Checklist, and findings were synthesized through thematic analysis. Three converging technological pillars were identified: multi-modal privacy-preserving sensing including mmWave radar; a hierarchical cloud-edge TinyML intelligence engine; and unified connectivity through the Matter/Thread standard. Emerging paradigms include LLM-based cognitive orchestration, hyper-personalization, Digital Twin simulation, and grid-interactive prosumer energy management. Because realizing the intelligent home vision requires addressing the privacy–security–trust trilemma, algorithmic bias, system reliability, and human–agent collaboration, a research roadmap encompassing explainable AI, privacy-by-design, lifelong learning, and standardized ethical auditing is proposed. Full article
40 pages, 1948 KB  
Systematic Review
Edge–Cloud Collaboration for Machine Condition Monitoring: A Comprehensive Review of Mechanisms, Models, and Applications
by Liyuan Yu, Jitao Fang, Qiuyan Wang, Fajia Li and Haining Liu
Machines 2026, 14(5), 476; https://doi.org/10.3390/machines14050476 - 24 Apr 2026
Abstract
Machine condition monitoring increasingly depends on distributed sensing, edge intelligence, and cloud analytics, yet timely and trustworthy health assessment remains constrained by latency, bandwidth, privacy, and reliability requirements. Cloud-only architectures provide scalable computation and historical data integration but often fail to satisfy real-time industrial needs, whereas edge-only deployments are limited by restricted computing resources and fragmented local knowledge. Edge–cloud collaboration has, therefore, emerged as a practical architecture for distributing perception, inference, learning, and coordination across hierarchical industrial systems. This review examines 147 publications on edge–cloud collaboration for machine condition monitoring published between 2019 and February 2026. A four-dimensional taxonomy is developed to organize the literature into model-centric, data-centric, resource and task-centric, and architecture and trust-centric mechanisms, while 13 survey and review papers are considered separately for contextual comparison. On this basis, the review analyzes representative collaboration mechanisms and enabling technologies, with particular attention to federated learning, transfer learning, knowledge distillation, digital twins, and deep reinforcement learning, and surveys their deployment in manufacturing, energy, transportation, and infrastructure monitoring scenarios. The literature remains dominated by model-centric collaboration, while architecture and trust-centric studies increasingly provide the system foundations required for practical deployment. 
The review further identifies major open challenges, including robust generalization under changing operating conditions, efficient data transmission, real-time resource coordination, interoperability, and trustworthy large-scale deployment, and outlines future directions in foundation-model-based edge–cloud collaboration, continual learning, dual digital twins, trustworthy collaboration, and privacy-preserving industrial ecosystems. Full article
17 pages, 29084 KB  
Case Report
Comparative Evaluation of a Clear Functional Jaw Corrector and a Conventional Twin Block Appliance in Monozygotic Twins with Skeletal Class II Malocclusion: A Case Report
by Shubhangi Mani, Rutvi Karia, Sameehan Bodas, Nandalal Toshniwal and Sumeet Mishra
Int. J. Orofac. Myol. Myofunct. Ther. 2026, 52(1), 5; https://doi.org/10.3390/ijom52010005 - 24 Apr 2026
Abstract
Background: Functional appliance therapy is widely employed for the management of skeletal Class II malocclusion in growing patients. However, treatment outcomes are influenced by multiple biological and behavioural variables, including genetic background, craniofacial growth pattern, neuromuscular adaptability, orofacial resting postures, and patient adherence. These factors often limit direct comparison of different appliance systems. Monozygotic twin studies provide a unique biological model by minimizing genetic and environmental variability, allowing more accurate evaluation of appliance-specific effects. Methods: This case report presents a comparative evaluation of a clear functional jaw corrector and a conventional twin block appliance in two 11-year-old female monozygotic twins at cervical vertebral maturation index stage 3. Both patients exhibited similar skeletal Class II patterns, vertical growth tendencies, proclined maxillary incisors, and convex soft tissue profiles. Twin A was treated with a removable clear functional jaw corrector fabricated using mandibular advancement blocks incorporated into a 1.5-mm Essix retainer sheet, while Twin B received a conventional twin block appliance. Treatment objectives, wear protocol, and duration were identical. Neither patient received orofacial myofunctional therapy. Results: Post-treatment clinical and cephalometric evaluation demonstrated improvement in sagittal jaw relationships, facial profile, and occlusal relationships in both patients. However, differences were observed in the magnitude of skeletal correction, dentoalveolar effects, vertical control, and the extent of molar and canine relationship correction. Conclusions: Both appliance designs were effective in improving sagittal relationships under similar biological conditions, with minor differences favoring the clear functional jaw corrector. 
However, the findings also highlight that orthodontic appliance therapy alone does not address underlying orofacial myofunctional factors, emphasizing the importance of incorporating functional assessment and adjunctive myofunctional therapy for optimal and stable outcomes.
23 pages, 416 KB  
Article
Formal Integration of ISO/IEC Digital Twin Standards: A Layered Compliance Model with Uncertainty Quantification
by George Balan, Elena Serea, Alexandru Sălceanu and Dorin-Dumitru Lucache
Mathematics 2026, 14(9), 1425; https://doi.org/10.3390/math14091425 - 23 Apr 2026
Abstract
Digital Twin (DT) implementations in electrical and industrial systems are governed by fragmented ISO/IEC and IEC standards spanning terminology, architecture, interoperability, lifecycle management, and cybersecurity. This paper proposes a mathematical framework that integrates these standards into a unified compliance model. A layered DT architecture is defined as a finite set of functional abstractions, and standards are linked to layers through a multivalued mapping and an incidence matrix. Traceability, interoperability, fidelity, and security/governance indicators are normalized and aggregated through a bounded weighted functional to obtain a deterministic compliance score. The model is then extended by treating selected indicators as random variables, which enables probabilistic maturity classification and Monte Carlo-based robustness analysis. The resulting functional is bounded, monotone, and stable under bounded perturbations. Numerical experiments on a synthetic portfolio illustrate deterministic scoring and uncertainty effects. The framework provides a proof-of-concept basis for structured DT compliance assessment across heterogeneous electrical systems; however, broader empirical validation is still required before operational deployment.
(This article belongs to the Special Issue Mathematical Applications in Electrical Engineering, 2nd Edition)
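A minimal numerical sketch of the kind of bounded, monotone weighted aggregation with Monte Carlo robustness analysis the abstract describes (the indicator values, weights, and noise level below are invented for illustration and are not taken from the paper):

```python
import numpy as np

# Hypothetical indicator values in [0, 1] for one DT implementation:
# traceability, interoperability, fidelity, security/governance.
indicators = np.array([0.85, 0.70, 0.60, 0.90])
weights = np.array([0.3, 0.3, 0.2, 0.2])  # sum to 1, so the score stays in [0, 1]

def compliance_score(x, w):
    """Bounded, monotone weighted aggregation: raising any indicator raises the score."""
    return float(np.clip(x, 0.0, 1.0) @ w)

score = compliance_score(indicators, weights)  # deterministic compliance score

# Monte Carlo robustness: treat indicators as random variables
# (Gaussian perturbation, clipped back into [0, 1]).
rng = np.random.default_rng(42)
samples = np.clip(indicators + rng.normal(0.0, 0.05, size=(10_000, 4)), 0.0, 1.0)
scores = samples @ weights

print(score, scores.mean(), scores.std())
```

Because the weights are non-negative and sum to one and the indicators are clipped to [0, 1], every sampled score also lies in [0, 1], which is the boundedness/stability property the paper attributes to its functional.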
22 pages, 1390 KB  
Article
BIM Collaboration Format (BCF) as an Example of Reification and Serialization in Building Information Modeling (BIM) Practice
by Andrzej Szymon Borkowski, Magdalena Kładź and Mikołaj Michalak
Buildings 2026, 16(9), 1669; https://doi.org/10.3390/buildings16091669 - 23 Apr 2026
Abstract
Building Information Modeling (BIM) has fundamentally changed the way interdisciplinary coordination works in construction projects; however, the theoretical mechanisms underlying open collaboration standards in this field remain insufficiently explored. This article fills this gap by presenting a systematic analysis of the BIM Collaboration Format (BCF) through the lens of reification and serialization, two fundamental concepts in information systems theory. Although the BCF format is widely used in the industry and implemented in major BIM tools for clash detection and issue tracking, the existing literature treats it primarily as an operational tool, overlooking the deeper information systems principles that govern its architecture. The analysis demonstrates that BCF achieves reification by transforming informal coordination knowledge—such as verbally communicated clashes, scattered email threads, and undocumented design decisions—into first-class objects (Topic, Comment, Viewpoint) equipped with unique identifiers, typed attributes, ownership, temporal metadata, and formalized inter-object relationships. Further analysis was conducted on BCF’s serialization mechanisms, including XML encoding for file exchange, JSON for RESTful API communication, and ZIP archiving as a distribution container, each of which was selected to balance human readability, schema validation, compression, and cross-platform portability. The complementarity of these two mechanisms was examined: reification determines what to preserve and in what structure, while serialization determines how to encode and in what format, which together enable interoperable, auditable, and automatable coordination workflows in heterogeneous software environments. The analysis was illustrated with a real-world BCF example from a major infrastructure project in Poland, demonstrating practical alignment between theoretical constructs and their implementation. 
The research results provide both a conceptual foundation for researchers working on openBIM standards and practical guidance for practitioners seeking to optimize issue management, the implementation of a Common Data Environment (CDE), and the specification of Exchange Information Requirements (EIR). The study contributes new knowledge in three areas: (1) To the best of the authors’ knowledge, it provides the first systematic theoretical analysis of BCF through the lens of reification and serialization, filling a gap between the format’s widespread practical use and its limited theoretical understanding. (2) It demonstrates how the formal criteria of reification (unique identity, typed attributes, ownership, temporal metadata, and inter-object relationships) map onto specific BCF entities, offering a transferable analytical framework for evaluating other openBIM standards. (3) It identifies the complementarity of reification and serialization as a design principle that can guide the development of future standards for digital twins and IoT-based facility management.
(This article belongs to the Section Construction Management, and Computers & Digitization)
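To show reification and serialization in miniature (a simplified sketch, not the actual BCF-XML schema; the field names and JSON/ZIP layout below are illustrative): an informal coordination issue becomes a first-class object with a unique identifier, typed attributes, ownership, and temporal metadata, and is then encoded for exchange and packaged in a container.

```python
import io
import json
import uuid
import zipfile
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class Viewpoint:
    """Reified camera state tied to an issue (simplified)."""
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))
    camera_position: tuple = (0.0, 0.0, 10.0)

@dataclass
class Topic:
    """Reified coordination issue: identity, typed attributes, owner, timestamp."""
    title: str
    author: str
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))
    creation_date: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    comments: list = field(default_factory=list)
    viewpoints: list = field(default_factory=list)

# Reification: an informal clash report becomes a structured object.
topic = Topic(title="Duct clashes with beam on level 2",
              author="coordinator@example.com")
topic.comments.append({"guid": str(uuid.uuid4()), "text": "Raise duct by 150 mm."})
topic.viewpoints.append(Viewpoint())

# Serialization: JSON encoding for API-style exchange...
payload = json.dumps(asdict(topic), indent=2)

# ...and a ZIP archive as the distribution container for file exchange.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(f"{topic.guid}/markup.json", payload)

restored = json.loads(payload)
print(restored["guid"] == topic.guid)  # identity survives the round trip
```

The sketch separates the two concerns the article analyzes: the dataclasses decide what is preserved and in what structure (reification), while the JSON and ZIP steps decide how it is encoded and packaged (serialization).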