Search Results (955)

Search Parameters:
Keywords = boundary concept

23 pages, 6081 KiB  
Article
A New Methodological Approach to the Reachability Analysis of Aerodynamic Interceptors
by Tuğba Bayoğlu Akalın, Gökcan Akalın and Ali Türker Kutay
Aerospace 2025, 12(8), 657; https://doi.org/10.3390/aerospace12080657 - 24 Jul 2025
Abstract
Advanced air defense methods are essential to address the growing complexity of aerial threats. The increasing number of targets necessitates better defensive coordination, and a promising strategy involves the use of interceptors together to protect a specific area. This task fundamentally depends on accurately predicting their kinematic envelopes, or reachable sets. This paper presents a novel approach to determine the boundaries of reachable sets for aerodynamic interceptors, accounting for energy loss from drag, energy gain from thrust, variable acceleration limits, and autopilot dynamics. The devised numerical method approximates reachable sets for nonlinear problems using a constrained model predictive programming concept. Results demonstrate that explicitly accounting for input constraints, such as acceleration limits, significantly impacts the shape and area of the reachable boundaries. Furthermore, a sensitivity analysis was conducted to demonstrate the impact of parameter variations on the reachable set. Revealing the reachable set’s sensitivity to variations in thrust and drag coefficients, this analysis serves as a framework for considering parameter uncertainty and enables the evaluation of these effects prior to embedding the reachability boundaries into an offline database for guidance applications. The resulting boundaries, representing minimum and maximum ranges for various initial parameters, can be stored offline, allowing interceptors to estimate their own or allied platforms’ kinematic capabilities for cooperative strategies. Full article
(This article belongs to the Section Aeronautics)
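
The paper's constrained model-predictive programming method is not reproduced here; as a rough illustration of how an input (acceleration) limit shapes a kinematic envelope, the sketch below forward-simulates extremal constant commands for a toy 2D point mass with quadratic drag. All dynamics values are invented placeholders, and extremal constant inputs only crudely outline a true nonlinear reachable set.

```python
# Toy illustration (not the paper's method): sample extremal constant
# acceleration commands for a 2D point mass with quadratic drag and keep
# the end points; their hull crudely outlines the reachable envelope at
# t_final. All parameter values are invented placeholders.
import numpy as np

def simulate(theta, a_max=30.0, drag=0.001, v0=(200.0, 0.0),
             t_final=10.0, dt=0.01):
    """Integrate under a constant max-magnitude command along `theta`."""
    p, v = np.zeros(2), np.array(v0)
    a_cmd = a_max * np.array([np.cos(theta), np.sin(theta)])
    for _ in range(int(t_final / dt)):
        a = a_cmd - drag * np.linalg.norm(v) * v   # bounded input minus drag
        v = v + a * dt
        p = p + v * dt
    return p

# Sweep the admissible control directions allowed by the acceleration limit.
boundary = np.array([simulate(th) for th in np.linspace(0, 2 * np.pi, 72)])
print("downrange span:", boundary[:, 0].min(), "to", boundary[:, 0].max())
```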

20 pages, 3978 KiB  
Article
Cotton-YOLO: A Lightweight Detection Model for Fallen Cotton Impurities Based on YOLOv8
by Jie Li, Zhoufan Zhong, Youran Han and Xinhou Wang
Symmetry 2025, 17(8), 1185; https://doi.org/10.3390/sym17081185 - 24 Jul 2025
Abstract
As an important pillar of the global economic system, the cotton industry faces critical challenges from non-fibrous impurities (e.g., leaves and debris) during processing, which severely degrade product quality, inflate costs, and reduce efficiency. Traditional detection methods suffer from insufficient accuracy and low efficiency, failing to meet practical production needs. While deep learning models excel in general object detection, their massive parameter counts render them ill-suited for real-time industrial applications. To address these issues, this study proposes Cotton-YOLO, an optimized YOLOv8 model. By leveraging principles of symmetry in model design and system setup, the study integrates the CBAM attention module—with its inherent dual-path (channel-spatial) symmetry—to enhance feature capture for tiny impurities and mitigate insufficient focus on key areas. The C2f_DSConv module, exploiting functional equivalence via quantization and shift operations, reduces model complexity by 12% (to 2.71 million parameters) without sacrificing accuracy. Considering angle and shape variations in complex scenarios, the loss function is upgraded to Wise-IoU for more accurate bounding box regression. Experimental results show that Cotton-YOLO achieves 86.5% precision, 80.7% recall, 89.6% mAP50, 50.1% mAP50–95, and 50.51 fps detection speed, representing a 3.5% speed increase over the original YOLOv8. This work demonstrates the effective application of symmetry concepts (in algorithmic structure and performance balance) to create a model that balances lightweight design and high efficiency, providing a practical solution for industrial impurity detection and key technical support for automated cotton sorting systems. Full article
(This article belongs to the Section Computer)
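
A hedged sketch of the Wise-IoU idea mentioned above: the plain IoU loss is scaled by a distance-based focusing factor computed from the smallest enclosing box. This loosely follows the published WIoU-v1 formulation, not the Cotton-YOLO source, and the corner-format boxes are an assumption.

```python
# Simplified Wise-IoU (v1)-style bounding-box loss: IoU loss scaled by a
# distance-attention factor from the smallest enclosing box. Loosely
# follows the published WIoU-v1 idea; not the Cotton-YOLO implementation.
import numpy as np

def wiou_v1(pred, gt):
    """pred, gt: boxes as (x1, y1, x2, y2)."""
    # Intersection over union
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter + 1e-9)

    # Smallest enclosing box and squared center distance
    ex1, ey1 = min(pred[0], gt[0]), min(pred[1], gt[1])
    ex2, ey2 = max(pred[2], gt[2]), max(pred[3], gt[3])
    wg, hg = ex2 - ex1, ey2 - ey1
    cpx, cpy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cgx, cgy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    dist2 = (cpx - cgx) ** 2 + (cpy - cgy) ** 2

    # Distance-attention factor; in WIoU-v1 the enclosing-box term is
    # detached from the gradient (irrelevant in this numpy sketch).
    r_wiou = np.exp(dist2 / (wg ** 2 + hg ** 2 + 1e-9))
    return r_wiou * (1.0 - iou)

print(wiou_v1((10, 10, 50, 50), (12, 14, 48, 52)))
```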

30 pages, 3932 KiB  
Article
Banking on the Metaverse: Systemic Disruption or Techno-Financial Mirage?
by Alina Georgiana Manta and Claudia Gherțescu
Systems 2025, 13(8), 624; https://doi.org/10.3390/systems13080624 - 24 Jul 2025
Abstract
This study delivers a rigorous and in-depth bibliometric examination of 693 scholarly publications addressing the intersection of metaverse technologies and banking, retrieved from the Web of Science Core Collection. Through advanced scientometric tools, including VOSviewer and Bibliometrix, the research systematically unpacks the evolving intellectual and thematic contours of this interdisciplinary frontier. The co-occurrence analysis of keywords reveals a landscape shaped by seven core thematic clusters, encompassing immersive user environments, digital infrastructure, experiential design, and ethical considerations. Factorial analysis uncovers a marked bifurcation between experience-driven narratives and technology-centric frameworks, with integrative concepts such as technology, information, and consumption serving as conceptual bridges. Network visualizations of authorship patterns point to the emergence of high-density collaboration clusters, particularly centered around influential contributors such as Dwivedi and Ooi, while regional distribution patterns indicate a tri-continental dominance led by Asia, North America, and Western Europe. Temporal analysis identifies a significant surge in academic interest beginning in 2022, aligning with increased institutional and commercial experimentation in virtual financial platforms. Our findings argue that the incorporation of metaverse paradigms into banking is not merely a technological shift but a systemic transformation in progress—one that blurs the boundaries between speculative innovation and tangible implementation. This work contributes foundational insights for future inquiry into digital finance systems, algorithmic governance, trust architecture, and the wider socio-economic consequences of banking in virtualized environments. Whether a genuine leap toward financial evolution or a sophisticated illusion, the metaverse in banking must now be treated as a systemic phenomenon worthy of serious scrutiny. Full article

18 pages, 416 KiB  
Article
Beyond the Cowboy Economy: Proposing Teaching and Research Agendas for Ecological Economics
by Daniel Caixeta Andrade, Debora Nayar Hoff and Junior Ruiz Garcia
Reg. Sci. Environ. Econ. 2025, 2(3), 20; https://doi.org/10.3390/rsee2030020 - 24 Jul 2025
Abstract
This article presents an initial effort to systematize two interrelated research fronts within ecological economics (EE): ecological microeconomics and ecological macroeconomics. In response to the field’s transdisciplinary and plural nature—attributes that, while enriching, may limit its political influence—the article proposes a conceptual delineation of these two domains as a means to strengthen EE’s analytical identity and facilitate dialogue with other economic approaches. Ecological microeconomics focuses on the material and energy intensity of economic activity, the complementarity of natural capital in production processes, and the redesign of consumption and firm behavior under ecological constraints. Ecological macroeconomics, in turn, centers on the biophysical limits to growth, the concept of sustainable and optimal scale, and the integration of environmental variables into macroeconomic indicators and policy frameworks. The article argues that both fronts, despite their distinct emphases, are united by the need for long-term structural change and a normative commitment to sustainability. Together, they offer a coherent basis for rethinking prosperity within the ecological boundaries of the Earth system. Full article

23 pages, 2250 KiB  
Article
Machine Learning Techniques for Uncertainty Estimation in Dynamic Aperture Prediction
by Carlo Emilio Montanari, Robert B. Appleby, Davide Di Croce, Massimo Giovannozzi, Tatiana Pieloni, Stefano Redaelli and Frederik F. Van der Veken
Computers 2025, 14(7), 287; https://doi.org/10.3390/computers14070287 - 18 Jul 2025
Viewed by 192
Abstract
The dynamic aperture is an essential concept in circular particle accelerators, providing the extent of the phase space region where particle motion remains stable over multiple turns. The accurate prediction of the dynamic aperture is key to optimising performance in accelerators such as the CERN Large Hadron Collider and is crucial for designing future accelerators like the CERN Future Circular Hadron Collider. Traditional methods for computing the dynamic aperture are computationally demanding and involve extensive numerical simulations with numerous initial phase space conditions. In our recent work, we have devised surrogate models to predict the dynamic aperture boundary both efficiently and accurately. These models have been further refined by incorporating them into a novel active learning framework. This framework enhances performance through continual retraining and intelligent data generation based on informed sampling driven by error estimation. A critical attribute of this framework is the precise estimation of uncertainty in dynamic aperture predictions. In this study, we investigate various machine learning techniques for uncertainty estimation, including Monte Carlo dropout, bootstrap methods, and aleatory uncertainty quantification. We evaluated these approaches to determine the most effective method for reliable uncertainty estimation in dynamic aperture predictions using machine learning techniques. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
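
Of the techniques surveyed, Monte Carlo dropout is the simplest to sketch: keep dropout active at inference and read the spread of repeated stochastic passes as predictive uncertainty. The toy untrained network below is illustrative only, not the authors' surrogate model.

```python
# Minimal Monte Carlo dropout sketch (not the paper's surrogate model):
# keep dropout active at inference and treat the spread of repeated
# stochastic forward passes as a predictive-uncertainty estimate.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 32)), np.zeros(32)   # toy, untrained weights
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def forward(x, p_drop=0.2):
    h = np.tanh(x @ W1 + b1)
    mask = rng.random(h.shape) > p_drop           # dropout stays ON
    h = h * mask / (1.0 - p_drop)                 # inverted-dropout scaling
    return (h @ W2 + b2).ravel()

x = rng.normal(size=(1, 8))                       # one initial condition
samples = np.array([forward(x) for _ in range(200)])
print(f"prediction = {samples.mean():.3f} +/- {samples.std():.3f}")
```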

32 pages, 23012 KiB  
Article
A DEM Study on the Macro- and Micro-Mechanical Characteristics of an Irregularly Shaped Soil–Rock Mixture Based on the Analysis of the Contact Force Skeleton
by Chenglong Jiang, Lingling Zeng, Yajing Liu, Yu Mu and Wangyi Dong
Appl. Sci. 2025, 15(14), 7978; https://doi.org/10.3390/app15147978 - 17 Jul 2025
Viewed by 133
Abstract
The mechanical characteristics of soil–rock mixtures (S-RMs) are essential for ensuring geotechnical engineering stability and are significantly influenced by the microstructure’s contact network configuration. Due to the irregularity of particle shapes and the variability in particle grading with S-RMs, their macro-mechanical characteristics and mesoscopic contact skeleton distribution exhibit increased complexity. To further elucidate the macro-mesoscopic mechanical behavior of S-RMs, this study employed the DEM to develop a model incorporating irregular specimens representing various states, based on CT scan outlines, and applied flexible boundary conditions. A main skeleton system of contact force chains is an effective methodology for characterizing the dominant structural features that govern the mechanical behavior of soil–rock mixture specimens. The results demonstrate that the strength of S-RMs was significantly influenced by gravel content and consolidation state; however, the relationship is not merely linear but rather intricately associated with the strength and distinctiveness of the contact force chain skeleton. In the critical state, the mechanical behavior of S-RMs was predominantly governed by the characteristics of the principal contact force skeleton: the contact force skeleton formed by gravel–gravel, despite having fewer contact forces, exhibits strong contact characteristics and an exceptionally high-density distribution of weak contacts, conferring the highest shear strength to the specimens. Conversely, the principal skeleton formed through gravel–sand exhibits contact characteristics that are less distinct compared to those associated with strong contacts. Simultaneously, the probability density distribution of weak contacts diminishes, resulting in reduced shear strength. The contact skeleton dominated by sand–sand contact forces displays similar micro-mechanical characteristics yet possesses the weakest macroscopic behavior strength. Consequently, the concept of the main skeleton of contact force chains utilized in this study presents a novel research approach for elucidating the macro- and micro-mechanical characteristics of multiphase media. Full article
(This article belongs to the Section Civil Engineering)
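
A minimal sketch of one common convention for extracting a force-chain skeleton from DEM output (whether it matches the paper's exact skeleton definition is an assumption): contacts whose normal force exceeds the sample mean form the strong network.

```python
# Hedged post-processing sketch: a common DEM convention labels a contact
# "strong" when its normal force exceeds the sample mean, and the strong
# sub-network is read as the force-chain skeleton. The contact data here
# are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(1)
forces = rng.exponential(scale=1.0, size=500)     # synthetic normal forces
mean_f = forces.mean()

strong = forces > mean_f
print(f"strong contacts: {strong.sum()} / {forces.size} "
      f"carrying {forces[strong].sum() / forces.sum():.1%} of total force")
```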

22 pages, 1946 KiB  
Article
Exploring the Development Trajectory of Digital Transformation
by Pin-Shin Wang, Tzu-Chuan Chou and Jau-Rong Chen
Systems 2025, 13(7), 568; https://doi.org/10.3390/systems13070568 - 10 Jul 2025
Viewed by 207
Abstract
Digital transformation (DT) has become a critical focus in both academia and industry. However, its rapid evolution complicates our understanding of its core concepts and developmental patterns. Understanding the development path of DT is crucial for both scholars and practitioners because it provides a structured view of how the field has progressed over time. This study employs main path analysis (MPA), a citation-based scientometric method, to systematically review and trace the intellectual trajectory of DT research over the past 30 years. Drawing on 1790 academic articles from the Web of Science database, the study identifies key influential works and maps the primary citation paths that shape the field. The analysis reveals three major developmental phases of DT research—engagement, enablement, and enhancement—each characterized by distinct thematic and conceptual shifts. Furthermore, five emerging research trends are uncovered: reinventing digital innovation affordance, value-creation paths of DT, synergistic DT with business and management practices, disciplinary boundaries of DT, and digital leadership. Understanding the intellectual trajectory and emerging trends of DT helps practitioners anticipate technological shifts and align transformation efforts, guiding decision-makers in effectively managing their DT processes. Also, these findings provide a structured framework for understanding the evolution of DT and offer valuable directions for future research in information systems and digital innovation. Full article
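
Main path analysis itself is mechanical enough to sketch: on a citation DAG, weight each edge by its search path count (SPC) and follow the heaviest edges from a source. The toy graph and paper labels below are invented, not the study's Web of Science data.

```python
# Sketch of main path analysis on a toy citation DAG (edges point from
# the cited to the citing paper). Edge weights use search path count
# (SPC): the number of source-to-sink paths traversing each edge; the
# main path then follows the heaviest edges greedily.
import networkx as nx

G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"),
                ("C", "D"), ("D", "E"), ("C", "E")])

order = list(nx.topological_sort(G))
n_from_src = {v: 1 if G.in_degree(v) == 0 else 0 for v in G}
for v in order:                                   # paths from any source to v
    for u in G.predecessors(v):
        n_from_src[v] += n_from_src[u]
n_to_sink = {v: 1 if G.out_degree(v) == 0 else 0 for v in G}
for v in reversed(order):                         # paths from v to any sink
    for w in G.successors(v):
        n_to_sink[v] += n_to_sink[w]

spc = {(u, v): n_from_src[u] * n_to_sink[v] for u, v in G.edges}

node = "A"                                        # greedy main path from a source
path = [node]
while G.out_degree(node):
    node = max(G.successors(node), key=lambda w: spc[(node, w)])
    path.append(node)
print("main path:", " -> ".join(path), "| SPC:", spc)
```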

20 pages, 1669 KiB  
Article
Multi-Level Asynchronous Robust State Estimation for Distribution Networks Considering Communication Delays
by Xianglong Zhang, Ying Liu, Songlin Gu, Yuzhou Tian and Yifan Gao
Energies 2025, 18(14), 3640; https://doi.org/10.3390/en18143640 - 9 Jul 2025
Viewed by 251
Abstract
With the hierarchical evolution of distribution network control architectures, distributed state estimation has become a focal point of research. To address communication delays arising from inter-level data exchanges, this paper proposes a multi-level, asynchronous, robust state estimation algorithm that accounts for such delays. First, a multi-level state estimation model is formulated based on the concept of a maximum normal measurement rate, and a hierarchical decoupling modeling approach is developed. Then, an event-driven broadcast transmission strategy is designed to unify boundary information exchanged between levels during iteration. A multi-threaded parallel framework is constructed to decouple receiving, computation, and transmission tasks, thereby enhancing asynchronous scheduling capabilities across threads. Additionally, a round-based synchronization mechanism is proposed to enforce fully synchronized iterations in the initial stages, thereby improving the overall process of asynchronous state estimation. Case study results demonstrate that the proposed algorithm achieves high estimation accuracy and strong robustness, while reducing the average number of iterations by nearly 40% and shortening the runtime by approximately 35% compared to conventional asynchronous methods, exhibiting superior estimation performance and computational efficiency under communication delay conditions. Full article
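
A schematic of the round-based start-up, with threads standing in for network levels; the update rule and delays are toy placeholders, not the proposed estimator.

```python
# Schematic sketch: each worker iterates on its own state, "broadcasts"
# boundary values by writing to shared memory, and the first few rounds
# run in lockstep behind a barrier, mirroring a round-based synchronized
# start-up before fully asynchronous iteration. Toy update rule only.
import threading, random, time

LEVELS, SYNC_ROUNDS, TOTAL_ROUNDS = 3, 2, 6
boundary = {k: 0.0 for k in range(LEVELS)}        # latest broadcast values
barrier = threading.Barrier(LEVELS)

def worker(k):
    x = random.random()
    for r in range(TOTAL_ROUNDS):
        neighbors = [boundary[j] for j in range(LEVELS) if j != k]
        x = 0.5 * x + 0.5 * sum(neighbors) / len(neighbors)  # toy update
        boundary[k] = x                            # event-driven broadcast
        time.sleep(random.uniform(0, 0.01))        # simulated comm delay
        if r < SYNC_ROUNDS:
            barrier.wait()                         # fully synchronized rounds
    print(f"level {k}: x = {x:.4f}")

threads = [threading.Thread(target=worker, args=(k,)) for k in range(LEVELS)]
for t in threads: t.start()
for t in threads: t.join()
```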

26 pages, 3107 KiB  
Review
Downscaling Planetary Boundaries: How Does the Framework’s Localization Hinder the Concept’s Operationalization?
by Damien Rieutor, Gwendoline De Oliveira-Neves, Guillaume Mandil and Cecilia Bertozzi
World 2025, 6(3), 96; https://doi.org/10.3390/world6030096 - 8 Jul 2025
Viewed by 561
Abstract
This article investigates issues in the local operationalization of the Planetary Boundaries concept (PBc), crucial for assessing human impacts on the Earth system and guiding sustainable development policies. Originally designed for the global scale, this concept requires local adaptation to align territorial actions with global environmental goals. Following a qualitative analysis of 34 review articles, a systematic categorization method is employed to identify recurrent localization and operationalization issues. Their analysis provides three main contributions that improve the understanding of PBc downscaling mechanisms. First, it identifies a prevalent quantification-based localization approach. Second, it categorizes local operationalization constraints into three distinct groups. Third, it reveals underlying patterns demonstrating that the prevalent approach, despite ensuring scientific rigor, generates methodological and practical constraints to effective local operationalization. This “operational paradox” reveals fundamental tensions between the PBc’s biophysical interpretation, localization by quantification, and local operationalization, contrasting measurement or meaning, precision or participation, and standardized solutions or locally adapted responses. For future research, the analysis of the interactions between these contributions suggests operating a paradigm shift based on a socio-biophysical interpretation of the PBc and the contextualization of the resulting components. This alternative approach could prioritize territorial anchoring, stakeholder inclusion, and the co-construction of sustainability trajectories. Full article

17 pages, 4414 KiB  
Article
Mechanical Characteristics of 26H2MF and St12T Steels Under Torsion at Elevated Temperatures
by Waldemar Dudda
Materials 2025, 18(13), 3204; https://doi.org/10.3390/ma18133204 - 7 Jul 2025
Viewed by 236
Abstract
The concept of “material effort” appears in continuum mechanics wherever the response of a material to the currently existing state of loads and boundary conditions loses its previous, predictable character. However, within the material, which still descriptively remains a continuous medium, new physical structures appear and new previously unused physical features of the continuum are activated. The literature is dominated by a simplified way of thinking, which assumes that all these states can be characterized and described by one and the same measure of effort—for metals it is the Huber–Mises–Hencky equivalent stress. Quantitatively, perhaps 90% of the literature is dedicated to this equivalent stress. The remaining authors, as well as the author of this paper, assume that there is no single universal measure of effort that would “fit” all operating conditions of materials. Each state of the structure’s operation may have its own autonomous measure of effort, which expresses the degree of threat from a specific destruction mechanism. In the current energy sector, we are increasingly dealing with “low-cycle thermal fatigue states”. This is related to the fact that large, difficult-to-predict renewable energy sources have been added. Professional energy based on coal and gas units must perform many (even about 100 per year) starts and stops, and this applies not only to the hot state, but often also to the cold state. The question arises as to the allowable shortening of start and stop times that would not lead to dangerous material effort, and whether there are necessary data and strength characteristics for heat-resistant steels that allow their effort to be determined not only in simple states, but also in complex stress states. Do these data allow for the description of the material’s yield surface? In a previous publication, the author presented the results of tension and compression tests at elevated temperatures for two heat-resistant steels: St12T and 26H2MF. The aim of the current work is to determine the properties and strength characteristics of these steels in a pure torsion test at elevated temperatures. This allows for the analysis of the strength of power turbine components operating primarily on torsion and for determining which of the two tested steels is more resistant to high temperatures. In addition, the properties determined in all three tests (tension, compression, torsion) will allow the determination of the yield surface of these steels at elevated temperatures. They are necessary for the strength analysis of turbine elements in start-up and shutdown cycles, in states changing from cold to hot and vice versa. A modified testing machine was used for pure torsion tests. It allowed for the determination of the sample’s torsion moment as a function of its torsion angle. The experiments were carried out at temperatures of 20 °C, 200 °C, 400 °C, 600 °C, and 800 °C for St12T steel and at temperatures of 20 °C, 200 °C, 400 °C, 550 °C, and 800 °C for 26H2MF steel. Characteristics were drawn up for each sample and compared on a common graph corresponding to the given steel. Based on the methods and relationships from the theory of strength, the yield stress and torsional strength were determined. The yield stress of St12T steel at 600 °C was 319.3 MPa and the torsional strength was 394.4 MPa. For 26H2MF steel at 550 °C, the yield stress was 311.4 MPa and the torsional strength was 382.8 MPa. St12T steel was therefore more resistant to high temperatures than 26H2MF.
The combined data from the tension, compression, and torsion tests allowed us to determine the asymmetry and plasticity coefficients, enabling us to model the yield surface according to the Burzyński criterion as a function of temperature. The obtained results also allowed us to determine the parameters of the Drucker-Prager model and two of the three parameters of the Willam-Warnke and Menetrey-Willam models. The research results are a valuable contribution to the design and diagnostics of power turbine components. Full article
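
For readers wanting the standard reduction from measured torque to stress (not stated in the abstract): for a solid circular specimen in elastic torsion, the maximum shear stress is tau_max = 16T/(pi d^3). The torque and diameter below are hypothetical, since the listing does not give specimen geometry.

```python
# Standard strength-of-materials relation (not taken from the paper):
# maximum shear stress of a solid circular specimen in elastic torsion.
# Torque and diameter are placeholder values.
from math import pi

def max_shear_stress(torque_nm, diameter_m):
    return 16.0 * torque_nm / (pi * diameter_m ** 3)

tau = max_shear_stress(torque_nm=25.0, diameter_m=0.010)   # hypothetical
print(f"tau_max = {tau / 1e6:.1f} MPa")
```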

17 pages, 2885 KiB  
Article
Research on Construction and Application of Water Processes Based on Knowledge Graph: Analysis of Dynamic Paths and Impact Factors
by Yanhong Song, Ping Ai, Chuansheng Xiong, Jintao Li and Shicheng Gong
Water 2025, 17(13), 2020; https://doi.org/10.3390/w17132020 - 4 Jul 2025
Viewed by 218
Abstract
The water process refers to the movement and changes in water on Earth, encompassing changes among its three states and its spatial movement. This process is vital for human society as it directly influences water resources, environmental sustainability, and climate regulation. Previous studies have used various related factors to analyze the water process but have not explained the rationale behind selecting these factors from the perspective of pathways. Based on this, the paper explores the construction and application of a top-down water process knowledge graph to clarify the changing process of water movement and the sources of influencing factors. Firstly, we define the concept of the water process and classify its entities based on the concept of water boundaries. Secondly, we identify key knowledge components of the water process, including water bodies, processes, and influencing factors. Finally, we construct and analyze a knowledge graph of the water process and its influencing factors. Results show that (1) the paths of water process help us understand the movement and change process of the water bodies; (2) the number of paths increases with the length of the connection between entities, reflecting the complexity of water process relationships; and (3) tracing these pathways can help identify their influencing factors, providing a data foundation for applying deep learning algorithms in water process research. Full article
(This article belongs to the Section Hydrology)
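
A toy version of the path idea: model entities as a small directed graph and count simple paths between two of them; allowing longer paths yields more routes, echoing finding (2). The entity names are invented, not the paper's ontology.

```python
# Toy sketch: water bodies as nodes, processes as directed edges;
# enumerate simple paths between two entities up to a length cutoff.
import networkx as nx

G = nx.DiGraph([("precipitation", "surface_water"),
                ("surface_water", "soil_water"),
                ("surface_water", "river"),
                ("soil_water", "river"),
                ("soil_water", "groundwater"),
                ("river", "groundwater"),
                ("groundwater", "river")])

for cutoff in (2, 3, 4):
    paths = list(nx.all_simple_paths(G, "precipitation", "groundwater",
                                     cutoff=cutoff))
    print(f"paths of length <= {cutoff}: {len(paths)}")
```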

23 pages, 372 KiB  
Article
Computability of the Zero-Error Capacity of Noisy Channels
by Holger Boche and Christian Deppe
Information 2025, 16(7), 571; https://doi.org/10.3390/info16070571 - 3 Jul 2025
Viewed by 276
Abstract
The zero-error capacity of discrete memoryless channels (DMCs), introduced by Shannon, is a fundamental concept in information theory with significant operational relevance, particularly in settings where even a single transmission error is unacceptable. Despite its importance, no general closed-form expression or algorithm is known for computing this capacity. In this work, we investigate the computability-theoretic boundaries of the zero-error capacity and establish several fundamental limitations. Our main result shows that the zero-error capacity of noisy channels is not Banach–Mazur-computable and therefore is also not Borel–Turing-computable. This provides a strong form of non-computability that goes beyond classical undecidability, capturing the inherent discontinuity of the capacity function. As a further contribution, we analyze the deep connections between (i) the zero-error capacity of DMCs, (ii) the Shannon capacity of graphs, and (iii) Ahlswede’s operational characterization via the maximum-error capacity of 0–1 arbitrarily varying channels (AVCs). We prove that key semi-decidability questions are equivalent for all three capacities, thus unifying these problems into a common algorithmic framework. While the computability status of the Shannon capacity of graphs remains unresolved, our equivalence result clarifies what makes this problem so challenging and identifies the logical barriers that must be overcome to resolve it. Together, these results chart the computational landscape of zero-error information theory and provide a foundation for further investigations into the algorithmic intractability of exact capacity computations. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
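
The graph connection admits a small worked example: for a single channel use, the number of zero-error-distinguishable inputs is the independence number of the confusability graph, so one use achieves log2(alpha(G)) bits. The pentagon below is the classic case; by Lovász, its Shannon capacity is sqrt(5), i.e. 0.5*log2(5) bits per use.

```python
# Worked illustration of the zero-error/graph connection on the classic
# pentagon channel (a toy example, not from the paper): one-shot inputs
# that are never confused form an independent set in the confusability
# graph, computed here as a maximum clique of the complement.
import math
import networkx as nx

# Pentagon confusability graph: input i can be confused with i +/- 1 mod 5.
G = nx.cycle_graph(5)

clique, alpha = nx.max_weight_clique(nx.complement(G), weight=None)
print("independent set:", clique, "alpha =", alpha)
print("one-shot zero-error rate >=", math.log2(alpha), "bits/use")
# Coding over blocks does better: Lovász showed Theta(C5) = sqrt(5),
# i.e. the pentagon's zero-error capacity is 0.5*log2(5) bits/use.
```
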
24 pages, 4350 KiB  
Article
HECS4MQTT: A Multi-Layer Security Framework for Lightweight and Robust Encryption in Healthcare IoT Communications
by Saud Alharbi, Wasan Awad and David Bell
Future Internet 2025, 17(7), 298; https://doi.org/10.3390/fi17070298 - 30 Jun 2025
Viewed by 319
Abstract
Internet of Things (IoT) technology in healthcare has enabled innovative services that enhance patient monitoring, diagnostics and medical data management. However, securing sensitive health data while maintaining the system efficiency of resource-constrained IoT devices remains a critical challenge. This work presents a comprehensive end-to-end IoT security framework for healthcare environments, addressing encryption at two key levels: lightweight encryption at the edge for resource-constrained devices and robust end-to-end encryption when transmitting data to the cloud via MQTT cloud brokers. The proposed system leverages a multi-broker MQTT architecture to optimize resource utilization and enhance message reliability. At the edge, lightweight cryptographic techniques ensure low-latency encryption before transmitting data via a secure MQTT broker hosted within the hospital infrastructure. To safeguard data as it moves beyond the hospital to the cloud, stronger end-to-end encryption, such as AES-256 and TLS 1.3, is applied to ensure confidentiality and resilience over untrusted networks. A proof-of-concept Python 3.10-based MQTT implementation is developed using open-source technologies. Security and performance evaluations demonstrate the feasibility of the multi-layer encryption approach, effectively balancing computational overhead with data protection, and show that our novel HECS4MQTT (Health Edge Cloud Security for MQTT) framework achieves a unique balance between efficiency and security. Unlike existing solutions that either impose high computational overhead at the edge or rely solely on transport-layer protection, HECS4MQTT introduces a layered encryption strategy that decouples edge and cloud security requirements. This design minimizes processing delays on constrained devices while maintaining strong cryptographic protection when data crosses trust boundaries. The framework also introduces a lightweight bridge component for re-encryption and integrity enforcement, thereby reducing broker compromise risk and supporting compliance with healthcare security regulations. Our HECS4MQTT framework offers a scalable, adaptable, and trust-separated security model, ensuring enhanced confidentiality, integrity, and availability of healthcare data while remaining suitable for deployment in real-world, latency-sensitive, and resource-limited medical environments. Full article
(This article belongs to the Special Issue Secure Integration of IoT and Cloud Computing)
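
A minimal sketch of the robust (cloud-bound) leg only, assuming AES-256-GCM via the Python `cryptography` package; key management, broker topology, and the HECS4MQTT bridge component are out of scope here.

```python
# Minimal sketch (assumed stack, not the HECS4MQTT code): AES-256-GCM
# encryption of a telemetry payload before it leaves the trust boundary.
# Requires `pip install cryptography`.
import os, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES-256 key; use a KMS in practice
aead = AESGCM(key)

payload = json.dumps({"patient_id": "demo-001", "hr": 72}).encode()
nonce = os.urandom(12)                      # 96-bit nonce, never reuse per key
ciphertext = aead.encrypt(nonce, payload, b"topic:hospital/vitals")

# A subscriber holding the key authenticates and decrypts:
plaintext = aead.decrypt(nonce, ciphertext, b"topic:hospital/vitals")
assert plaintext == payload
print(len(payload), "->", len(ciphertext), "bytes (includes 16-byte GCM tag)")
```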

20 pages, 3893 KiB  
Article
Research on Boundary Displacement of Probe Trajectory Considering Deviations in Five-Axis Sweep Scanning Measurement
by Peng Chen, Tao Fang, Zhiyong Chang, Bowen Xue and Neng Wan
Micromachines 2025, 16(7), 759; https://doi.org/10.3390/mi16070759 - 27 Jun 2025
Viewed by 231
Abstract
Five-axis sweep scanning measurement technology, as a novel contact measurement technology, offers excellent reachability and high measurement efficiency for complex parts. However, deviations between the measurement instructions based on the model and the workpiece exist, leading to mismatches between the intended and actual sweep scanning areas, which manifest as displacements of the scanning boundaries and subsequently impact the acquisition of sampling points. When these sampling points are utilized to evaluate the machining quality of workpieces, the accuracy and reliability of the assessment results are compromised. Therefore, by focusing on the phenomenon of boundary displacement in a five-axis sweep scanning measurement, the sampling principle has been analyzed, the constrained sector for the probe tip trajectory in a five-axis scanning measurement has been defined, and the concept of the trajectory constrained sector effect has been proposed for the first time. The constrained sector effect reveals how deviations affect the scanning boundary positions and acquisition of sampling points. Based on the constrained sector effect, the influence of deviations on boundary displacement and sampling point acquisition in single-patch and multiple-patch measurement scenarios is discussed. Furthermore, practical engineering recommendations are provided, aiming to reduce the impact of deviations on the completeness of sampling point acquisition. Full article
(This article belongs to the Section E:Engineering and Technology)

24 pages, 78199 KiB  
Article
IPACN: Information-Preserving Adaptive Convolutional Network for Remote Sensing Change Detection
by Hongchao Qi, Xin Gao, Jiaqiang Lei and Fenglei Wang
Remote Sens. 2025, 17(13), 2121; https://doi.org/10.3390/rs17132121 - 20 Jun 2025
Cited by 1 | Viewed by 322
Abstract
Very high resolution (VHR) remote sensing change detection (CD) is crucial for monitoring Earth’s dynamics but faces challenges in capturing fine-grained changes and distinguishing them from pseudo-changes due to varying acquisition conditions. Existing deep learning methods often suffer from information loss via downsampling, obscuring details, and lack filter adaptability to spatial heterogeneity. To address these issues, we introduce Information-Preserving Adaptive Convolutional Network (IPACN). IPACN features a novel Information-Preserving Backbone (IPB), leveraging principles adapted from reversible networks to minimize feature degradation during hierarchical bi-temporal feature extraction, enhancing the preservation of fine spatial details, essential for accurate change delineation. Crucially, IPACN incorporates a Frequency-Adaptive Difference Enhancement Module (FADEM) that applies adaptive filtering, informed by frequency analysis concepts, directly to the bi-temporal difference features. The FADEM dynamically refines change signals based on local spectral characteristics, improving discrimination. This synergistic approach, combining high-fidelity feature preservation (IPB) with adaptive difference refinement (FADEM), yields robust change representations. Comprehensive experiments on benchmark datasets demonstrate that IPACN achieves state-of-the-art performance, showing significant improvements in F1 score and IoU, enhanced boundary delineation, and improved robustness against pseudo-changes, offering an effective solution for very high resolution remote sensing CD. Full article
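
The information-preserving claim rests on reversible-network principles, which a generic additive coupling block illustrates: the input is exactly reconstructible from the output, so no feature information is discarded. This is the standard RevNet pattern, not IPACN's actual backbone.

```python
# Generic additive coupling (reversible) block: y1 = x1 + F(x2),
# y2 = x2 + G(y1); the inverse recovers the input exactly, demonstrating
# information preservation. Arbitrary fixed sub-functions, numpy only.
import numpy as np

rng = np.random.default_rng(0)
W_f, W_g = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
F = lambda x: np.tanh(x @ W_f)              # arbitrary fixed sub-function
G = lambda x: np.tanh(x @ W_g)

def forward(x1, x2):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):                        # exact reconstruction
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))
r1, r2 = inverse(*forward(x1, x2))
print("max reconstruction error:",
      max(np.abs(r1 - x1).max(), np.abs(r2 - x2).max()))
```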