Search Results (4,106)

Search Parameters:
Keywords = open platform

23 pages, 17688 KB  
Article
A GIS-Based Platform for Efficient Governance of Illegal Land Use and Construction: A Case Study of Xiamen City
by Chuxin Li, Yuanrong He, Yuanmao Zheng, Yuantong Jiang, Xinhui Wu, Panlin Hao, Min Luo and Yuting Kang
Land 2026, 15(2), 209; https://doi.org/10.3390/land15020209 - 25 Jan 2026
Abstract
To address the challenges of management difficulties, insufficient integration of driver analysis, and single-dimensional analysis in the governance of illegal land use and illegal construction (collectively referred to as the “Two Illegalities”) under rapid urbanization, this study designs and implements a GIS-based governance system using Xiamen City as the study area. First, we propose a standardized data-processing workflow and construct a comprehensive management platform integrating multi-source data fusion, spatiotemporal visualization, intelligent analysis, and customized report generation, effectively lowering the barrier for non-professional users. Second, using methods integrated into the platform, such as Moran’s I and centroid trajectory analysis, we analyze in depth the spatiotemporal evolution and driving mechanisms of “Two Illegalities” activities in Xiamen from 2018 to 2023. The results indicate that the distribution of the “Two Illegalities” exhibits significant spatial clustering, with hotspots concentrated in urban–rural transition zones. The spatial morphology evolved from multi-core diffusion to the contraction of agglomeration belts. This evolution is essentially the result of dynamic adaptation among regional economic development gradients, urbanization processes, and policy-enforcement synergy mechanisms. Through a modular, open technical architecture and a “Data-Technology-Enforcement” collaborative mechanism, the system significantly improves information management efficiency and the scientific basis of decision-making. It provides a replicable and scalable technical framework and practical paradigm for similar cities to transform “Two Illegalities” governance from passive disposal to active prevention and control.
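The spatial-clustering result reported here rests on Moran’s I. As a quick illustration of the statistic (not the platform’s implementation; function and variable names are ours), a global Moran’s I in NumPy:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation.

    values: 1-D array of observations (e.g. violation counts per zone)
    weights: (n, n) spatial weight matrix (e.g. binary adjacency)
    """
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                       # deviations from the mean
    num = n * np.sum(w * np.outer(z, z))   # weighted cross-products
    den = w.sum() * np.sum(z ** 2)
    return num / den
```

Values near +1 indicate clustering (as found for the “Two Illegalities”), values near 0 spatial randomness, and negative values dispersion.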

15 pages, 556 KB  
Review
Robotic Rectus Muscle Flap Reconstruction After Pelvic Exenteration in Gynecological Oncology: Current and Future Perspectives—A Narrative Review
by Gurhan Guney, Ritchie M. Delara, Johnny Yi, Evrim Erdemoglu and Kristina A. Butler
Cancers 2026, 18(3), 375; https://doi.org/10.3390/cancers18030375 - 25 Jan 2026
Abstract
Background/Objectives: Pelvic exenteration is a radical procedure performed for recurrent gynecologic cancers. The goal of exenteration is to prolong survival, but this procedure also results in extensive tissue loss and consequently high morbidity. Reconstruction using vascularized flaps, particularly the VRAM flap, is crucial to restoring pelvic integrity and decreasing complications resulting from extensive tissue loss. With the rise of minimally invasive surgery, the traditionally open abdominal approach to exenteration and reconstruction can now be performed with the assistance of robotic platforms. This review aims to summarize available evidence, describe techniques, and propose future directions for robotic rectus flap reconstruction after pelvic exenteration. Methods: This narrative review was conducted following the SANRA guidelines for narrative synthesis. A comprehensive search of PubMed, Embase, Scopus, and Web of Science was conducted for studies published between January 2000 and November 2025 on pelvic exenteration followed by robotic rectus abdominis flap reconstruction in gynecologic oncology. Eligible studies were retrospective or prospective reports, technical descriptions, case series, or comparative analyses. Non-robotic techniques and animal studies were excluded. Although the primary focus was gynecologic oncology, technically relevant studies from other oncologic disciplines were included when the reconstructive approach was directly applicable to pelvic exenteration. Extracted data included patient demographics, surgical details, and perioperative and oncologic outcomes. Results: The literature search identified primarily case reports and small single-center series describing robot-assisted rectus muscle-based flap reconstruction after pelvic exenteration. Reported cases demonstrated technical feasibility and successful flap harvest using robotic platforms, with adequate pelvic defect coverage. Potential benefits, such as reduced wound morbidity and preservation of a minimally invasive workflow, have been described. However, patient numbers were small, techniques varied, and standardized outcome measures or comparative data with open approaches were lacking. Conclusions: Robotic rectus flap reconstruction represents a promising advancement in pelvic exenteration surgery, potentially reducing morbidity and improving recovery. Further research, including multicenter prospective studies, is needed to validate these findings and establish standardized protocols.

51 pages, 1843 KB  
Systematic Review
Remote Sensing of Woody Plant Encroachment: A Global Systematic Review of Drivers, Ecological Impacts, Methods, and Emerging Innovations
by Abdullah Toqeer, Andrew Hall, Ana Horta and Skye Wassens
Remote Sens. 2026, 18(3), 390; https://doi.org/10.3390/rs18030390 - 23 Jan 2026
Abstract
Globally, grasslands, savannas, and wetlands are degrading rapidly and increasingly being replaced by woody vegetation. Woody Plant Encroachment (WPE) disrupts natural landscapes and has significant consequences for biodiversity, ecosystem functioning, and key ecosystem services. This review synthesizes findings from 159 peer-reviewed studies identified through a PRISMA-guided systematic literature review to evaluate the drivers of WPE, its ecological impacts, and the remote sensing (RS) approaches used to monitor it. The drivers of WPE are multifaceted, involving interactions among climate variability, topographic and edaphic conditions, hydrological change, land use transitions, and altered fire and grazing regimes, while its impacts are similarly diverse, influencing land cover structure, water and nutrient cycles, carbon and nitrogen dynamics, and broader implications for ecosystem resilience. Over the past two decades, RS has become central to WPE monitoring, with studies employing classification techniques, spectral mixture analysis, object-based image analysis, change detection, thresholding, landscape pattern and fragmentation metrics, and increasingly, machine learning and deep learning methods. Looking forward, emerging advances such as multi-sensor fusion (optical–synthetic aperture radar (SAR), Light Detection and Ranging (LiDAR)–hyperspectral), cloud-based platforms including Google Earth Engine, Microsoft Planetary Computer, and Digital Earth, and geospatial foundation models offer new opportunities for scalable, automated, and long-term monitoring. Despite these innovations, challenges remain in detecting early-stage encroachment, subcanopy woody growth, and species-specific patterns across heterogeneous landscapes. Key knowledge gaps highlighted in this review include the need for long-term monitoring frameworks, improved socio-ecological integration, species- and ecosystem-specific RS approaches, better utilization of SAR, and broader adoption of analysis-ready data and open-source platforms. Addressing these gaps will enable more effective, context-specific strategies to monitor, manage, and mitigate WPE in rapidly changing environments.
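Among the RS methods this review lists, spectral mixture analysis estimates sub-pixel woody cover from band reflectances. A minimal linear-unmixing sketch (illustrative only; the clip-and-renormalize step is a crude stand-in for a fully constrained least-squares solver):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral mixture analysis: estimate per-pixel fractions
    (e.g. woody vs. herbaceous endmembers) by least squares.

    pixel: (bands,) reflectance vector
    endmembers: (bands, k) matrix, one column per endmember spectrum
    """
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    f = np.clip(f, 0.0, None)   # crude non-negativity constraint
    return f / f.sum()          # sum-to-one constraint
```

For an exact mixture of the endmember spectra, the recovered fractions match the mixing weights.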
23 pages, 305 KB  
Article
Towards Digital Transformation in University Teaching: Diagnosis of the Level and Profile of Digital Competence Based on the DigCompEdu and OpenEdu Frameworks Among University Lecturers in Chile
by Irma Riquelme-Plaza and Jesús Marolla-Gajardo
Educ. Sci. 2026, 16(2), 174; https://doi.org/10.3390/educsci16020174 - 23 Jan 2026
Abstract
This study diagnoses the level and profile of university lecturers’ digital competence at a Chilean higher education institution, drawing on the DigCompEdu and OpenEdu frameworks. A non-experimental correlational design was used, based on a self-perception questionnaire adapted from the DigCompEdu Check-In tool and administered to 569 lecturers through the Qualtrics platform. The instrument underwent external expert validation and demonstrated excellent internal consistency (Cronbach’s α = 0.96). Results indicate that 44% of lecturers position themselves at the “Integrator” level, 22% at the “Explorer” level, and 19% at the “Expert” level, with three clearly differentiated competence profiles. These findings informed the development of a structured training programme centred on three components: the pedagogical use of digital technologies, the incorporation of open educational practices aligned with OpenEdu, and the strengthening of students’ digital competence. The programme includes modular workshops, mentoring led by high-competence lecturers, and the creation of open educational resources. Overall, the study provides empirical evidence to guide institutional policies and to foster a reflective, ethical, and pedagogically grounded integration of digital technologies in university teaching.
(This article belongs to the Section Teacher Education)
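The reported reliability figure (Cronbach’s α = 0.96) follows from the standard item-variance formula; a small sketch, assuming a respondents × items matrix of questionnaire scores:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: internal consistency of a multi-item scale.

    scores: (respondents, items) matrix of, e.g., Likert responses.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items yield α = 1; values above 0.9 (as here) indicate excellent consistency.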
55 pages, 3089 KB  
Review
A Survey on Green Wireless Sensing: Energy-Efficient Sensing via WiFi CSI and Lightweight Learning
by Rod Koo, Xihao Liang, Deepak Mishra and Aruna Seneviratne
Energies 2026, 19(2), 573; https://doi.org/10.3390/en19020573 - 22 Jan 2026
Abstract
Conventional sensing expends energy at three stages: powering dedicated sensors, transmitting measurements, and executing computationally intensive inference. Wireless sensing re-purposes WiFi channel state information (CSI) inherent in every packet, eliminating extra sensors and uplink traffic, though reliance on deep neural networks (DNNs) often trained and run on graphics processing units (GPUs) can negate these gains. This review highlights two core energy efficiency levers in CSI-based wireless sensing. First, ambient CSI harvesting cuts power use by an order of magnitude compared to radar and active Internet of Things (IoT) sensors. Second, integrated sensing and communication (ISAC) embeds sensing functionality into existing WiFi links, thereby reducing device count, battery waste, and carbon impact. We review conventional handcrafted and accuracy-first methods to set the stage for surveying green learning strategies and lightweight learning techniques, including compact hybrid neural architectures, pruning, knowledge distillation, quantisation, and semi-supervised training that preserve accuracy while reducing model size and memory footprint. We also discuss hardware co-design from low-power microcontrollers to edge application-specific integrated circuits (ASICs) and WiFi firmware extensions that align computation with platform constraints. Finally, we identify open challenges in domain-robust compression, multi-antenna calibration, energy-proportionate model scaling, and standardised joules per inference metrics. Our aim is a practical battery-friendly wireless sensing stack ready for smart home and 6G era deployments.
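Of the compression levers surveyed, quantisation is the simplest to illustrate. A symmetric int8 post-training quantisation sketch (an assumed per-tensor scheme; production toolchains typically use per-channel scales and calibration data):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric quantisation of a weight tensor to int8.

    One scale for the whole tensor, chosen so the largest magnitude
    maps to +/-127. Returns (int8 weights, float scale).
    """
    scale = float(np.max(np.abs(w))) / 127.0
    if scale == 0.0:           # all-zero tensor: any scale works
        scale = 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference-time use."""
    return q.astype(np.float32) * scale
```

The round-trip error per weight is bounded by half the scale, which is the resolution the 8-bit grid buys at a 4x memory saving over float32.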

26 pages, 1611 KB  
Article
Evaluating a Virtual Learning Environment for Secondary English in a Public School: Usability, Motivation, and Engagement
by Myriam Tatiana Velarde Orozco and Bárbara Luisa de Benito Crosetti
Educ. Sci. 2026, 16(1), 169; https://doi.org/10.3390/educsci16010169 - 22 Jan 2026
Abstract
Public schools often operate with shared devices, unstable connectivity, and limited support for digital tools, which can make feature-heavy platforms difficult to adopt and sustain. This study reports the first formal design iteration and formative evaluation of VLEPIC, a school-centred virtual learning environment (VLE) developed to support secondary English as a Foreign Language in a low-resource Ecuadorian public school. Using a design-based research approach with a convergent mixed-methods design, one Grade 10 cohort (n = 42; two intact classes) used VLEPIC for one month as a complement to regular lessons. Data were collected through questionnaires on perceived usability and motivation, platform usage logs, and open-ended feedback from students and the teacher; results were analysed descriptively and thematically and then integrated to inform design decisions. Students reported high perceived usability and strong motivational responses in attention, relevance, and satisfaction, while confidence was more heterogeneous. Usage logs indicated recurrent but uneven engagement, with distinct low-, medium-, and high-activity profiles. Qualitative feedback highlighted enjoyment and clarity alongside issues with progress tracking between missions, navigation on mobile devices, and task submission reliability. The main contribution is a set of empirically grounded, context-sensitive design principles linking concrete interface and task-design decisions to perceived usability, motivation, and real-world usage patterns in constrained school settings.

24 pages, 1137 KB  
Article
Detecting TLS Protocol Anomalies Through Network Monitoring and Compliance Tools
by Diana Gratiela Berbecaru and Marco De Santo
Future Internet 2026, 18(1), 62; https://doi.org/10.3390/fi18010062 - 21 Jan 2026
Abstract
The Transport Layer Security (TLS) protocol is widely used nowadays to create secure communications over TCP/IP networks. Its purpose is to ensure confidentiality, authentication, and data integrity for messages exchanged between two endpoints. In order to facilitate its integration into widely used applications, the protocol is typically implemented through libraries, such as OpenSSL, BoringSSL, LibreSSL, WolfSSL, NSS, or mbedTLS. These libraries encompass functions that execute the specialized TLS handshake required for channel establishment, as well as the construction and processing of TLS records, and the procedures for closing the secure channel. However, these software libraries may contain vulnerabilities or errors that could potentially jeopardize the security of the TLS channel. To identify flaws or deviations from established standards within the implemented TLS code, a specialized tool known as TLS-Anvil can be utilized. This tool also verifies the compliance of TLS libraries with the specifications outlined in the Request for Comments documents published by the IETF. TLS-Anvil conducts numerous tests with a client/server configuration utilizing a specified TLS library and subsequently generates a report that details the number of successful tests. In this work, we exploit the results obtained from a selected subset of TLS-Anvil tests to generate rules used for anomaly detection in Suricata, a well-known signature-based Intrusion Detection System. During the tests, TLS-Anvil generates .pcap capture files that report all the messages exchanged. Such files can be subsequently analyzed with Wireshark, allowing for a detailed examination of the messages exchanged during the tests and a thorough understanding of their structure on a byte-by-byte basis. Through the analysis of the TLS handshake messages produced during testing, we develop customized Suricata rules aimed at detecting TLS anomalies that result from flawed implementations within the intercepted traffic. Furthermore, we describe the specific test environment established for the purpose of deriving and validating certain Suricata rules intended to identify anomalies in nodes utilizing a version of the OpenSSL library that does not conform to the TLS specification. The rules that delineate TLS deviations or potential attacks may subsequently be integrated into a threat detection platform supporting Suricata. This integration will enhance the capability to identify TLS anomalies arising from code that fails to adhere to the established specifications.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
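Rules of the kind described flag records whose fields deviate from the TLS specification. A simplified Python check of a 5-byte TLS record header (the thresholds reflect our reading of the TLS RFCs and are not the authors’ Suricata rules, which operate on live traffic):

```python
def check_tls_record(header: bytes) -> str:
    """Classify a TLS record header: type (1 B), version (2 B), length (2 B).

    Content types 20-23 are change_cipher_spec, alert, handshake,
    application_data; legal (legacy) versions run 0x0301-0x0304.
    """
    if len(header) < 5:
        return "truncated"
    ctype, major, minor = header[0], header[1], header[2]
    length = int.from_bytes(header[3:5], "big")
    if ctype not in (20, 21, 22, 23):
        return "unknown content type"
    if (major, minor) not in ((3, 1), (3, 2), (3, 3), (3, 4)):
        return "illegal version"
    if length > 2 ** 14 + 2048:   # TLS 1.2 ciphertext expansion bound
        return "oversized record"
    return "ok"
```

A Suricata rule encoding the same version check would match the corresponding bytes of the record in the intercepted traffic.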
12 pages, 5353 KB  
Review
State-of-the-Art Overview of Smooth-Edged Material Distribution for Optimizing Topology (SEMDOT) Algorithm
by Minyan Liu, Wanghua Hu, Xuhui Gong, Hao Zhou and Baolin Zhao
Computation 2026, 14(1), 27; https://doi.org/10.3390/computation14010027 - 21 Jan 2026
Abstract
Topology optimization is a powerful and efficient design tool, but the structures obtained by element-based topology optimization methods are often limited by fuzzy or jagged boundaries. The smooth-edged material distribution for optimizing topology algorithm (SEMDOT) can effectively deal with this problem and promote the practical application of topology optimization structures. This review outlines the theoretical evolution of SEMDOT, including both penalty-based and non-penalty-based formulations, while also providing access to open access codes. SEMDOT’s applications cover diverse areas, including self-supporting structures, energy-efficient manufacturing, bone tissue scaffolds, heat transfer systems, and building parts, demonstrating the versatility of SEMDOT. While SEMDOT addresses boundary issues in topology optimization structures, further theoretical refinement is needed to develop it into a comprehensive platform. This work consolidates the advances in SEMDOT, highlights its interdisciplinary impact, and identifies future research and implementation directions.
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)

44 pages, 2586 KB  
Review
Cellular Automata and Phase-Field Modeling of Microstructure Evolution in Metal Additive Manufacturing: Recent Advances, Hybrid Frameworks, and Pathways to Predictive Control
by Łukasz Łach
Metals 2026, 16(1), 124; https://doi.org/10.3390/met16010124 - 21 Jan 2026
Abstract
Metal additive manufacturing (AM) generates complex microstructures through extreme thermal gradients and rapid solidification, critically influencing mechanical performance and industrial qualification. This review synthesizes recent advances in cellular automata (CA) and phase-field (PF) modeling to predict grain-scale microstructure evolution during AM. CA methods provide computational efficiency, enabling large-domain simulations and excelling in texture prediction and multi-layer builds. PF approaches deliver superior thermodynamic fidelity for interface dynamics, solute partitioning, and nonequilibrium rapid solidification through CALPHAD coupling. Hybrid CA–PF frameworks strategically balance efficiency and accuracy by allocating PF to solidification fronts and CA to bulk grain competition. Recent algorithmic innovations—discrete event-inspired CA, GPU acceleration, and machine learning—extend scalability while maintaining predictive capability. Validated applications across Ni-based superalloys, Ti-6Al-4V, tool steels, and Al alloys demonstrate robust process–microstructure–property predictions through EBSD and mechanical testing. Persistent challenges include computational scalability for full-scale components, standardized calibration protocols, limited in situ validation, and incomplete multi-physics coupling. Emerging solutions leverage physics-informed machine learning, digital twin architectures, and open-source platforms to enable predictive microstructure control for first-time-right manufacturing in aerospace, biomedical, and energy applications.
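The CA grain-competition step this review describes can be caricatured in a few lines: liquid cells are captured by solidified von Neumann neighbours. A deliberately minimal sketch that ignores undercooling, crystallographic orientation, and capture geometry, all of which real CA solidification models track:

```python
import numpy as np

def ca_step(grain):
    """One capture step of a toy cellular automaton for solidification.

    grain: 2-D int array of grain IDs; 0 marks liquid. A liquid cell
    adopts the ID of the first solid von Neumann neighbour it sees.
    """
    new = grain.copy()
    rows, cols = grain.shape
    for i in range(rows):
        for j in range(cols):
            if grain[i, j] == 0:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and grain[ni, nj]:
                        new[i, j] = grain[ni, nj]
                        break
    return new
```

Iterating the step grows each nucleus outward; competing grain IDs meeting at a front is the CA analogue of grain-boundary formation.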

8 pages, 178 KB  
Proceeding Paper
FIWARE-Powered Smart Farming: Integrating Sensor Networks for Sustainable Soil Management
by Christos Hitiris, Cleopatra Gkola, Dimitrios J. Vergados, Vasiliki Karamerou and Angelos Michalas
Proceedings 2026, 134(1), 58; https://doi.org/10.3390/proceedings2026134058 - 21 Jan 2026
Abstract
Digital transformation in agriculture addresses key challenges such as climate change, water shortages, and sustainable production. Precision agriculture technologies rely on Internet of Things (IoT) sensor networks, analytics, and automated systems to manage resources efficiently and increase productivity. Fragmented infrastructures and vendor-specific platforms, however, lead to unintegrated data silos that obstruct regional solutions. This paper highlights FIWARE, an open-source, standards-based platform that can be integrated with existing agricultural sensors in municipalities or regions. FIWARE integrates disparate sensors (soil probes, weather stations, and irrigation meters) into a single real-time information system and provides decision-support tools to facilitate adaptive irrigation. Case studies show the benefits of FIWARE, including water savings, reduced runoff, better decision-making, and improved climate resilience.
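FIWARE’s Orion Context Broker ingests NGSI-v2 entities as JSON. A sketch of the payload a soil probe might publish (the entity type and attribute names here are illustrative, not an official FIWARE smart-agrifood data model):

```python
def soil_entity(device_id, moisture, temperature):
    """Build an NGSI-v2 style entity dict for a soil probe reading.

    Each attribute carries a value and a type, per the NGSI-v2
    convention; the URN scheme for the id is an assumption.
    """
    return {
        "id": f"urn:ngsi-ld:SoilProbe:{device_id}",
        "type": "SoilProbe",
        "moisture": {"value": moisture, "type": "Number"},
        "temperature": {"value": temperature, "type": "Number"},
    }
```

In a deployment, this dict would be serialized and POSTed to the broker’s `/v2/entities` endpoint, making the reading available to any subscribed decision-support service.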
29 pages, 2699 KB  
Article
Atmospheric Aerial Optical Links: Assessing Channel Constraints for Stable Long-Range Communications—A Historical Perspective
by Fabrizio Gerardi and Silvello Betti
Appl. Sci. 2026, 16(2), 1054; https://doi.org/10.3390/app16021054 - 20 Jan 2026
Abstract
New-generation networks aim for ubiquitous, pervasive communications at high data rates. With the electromagnetic spectrum saturating and data volumes increasing, free-space optical communication can ease capacity loads in modern networks. In this work, we review the impact of the atmospheric channel on the optical signal dynamics for long-range data links between high-speed, highly maneuverable suborbital platforms in full atmosphere. We present the main propagation constraints, such as path loss, turbulence, and aero-optics, which are environment-dependent and geometry-dependent for this worst-case scenario. To carry out our study, we recall experimental results collected in the literature since the earliest trials, showing system constraints and performance limits. This provides a historical timeline perspective. Theoretical models and channel management techniques that appeared over time are briefly summarized, and their impact on link budget and stability for reference link geometries is addressed through analytical simulation. In conclusion, this paper shows that an integrated approach to this kind of link succeeds mainly through a convergence of mitigation techniques and tailored engineering, which cannot neglect knowledge of the operating environment and strongly relies on accurate physics modeling, an area of active open research.
(This article belongs to the Special Issue Communication Networks: From Technology, Methods to Applications)
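A first-order link budget captures the path-loss constraint discussed above: geometric spreading of the diverging beam plus Beer–Lambert atmospheric attenuation. A simplified far-field sketch with assumed parameter names; turbulence and aero-optic losses, central to the review, are deliberately left out:

```python
import math

def fso_link_budget(p_tx_dbm, l_km, alpha_db_per_km, tx_div_mrad, rx_aperture_m):
    """Received power (dBm) for a free-space optical link.

    Geometric loss: fraction of the diverged beam the receiver
    aperture collects (capped at 0 dB when the aperture covers
    the whole spot). Atmospheric loss: alpha dB per km of path.
    """
    beam_diam_m = tx_div_mrad * 1e-3 * l_km * 1e3   # spot size at range
    geo_loss_db = -20 * math.log10(min(1.0, rx_aperture_m / beam_diam_m))
    atm_loss_db = alpha_db_per_km * l_km
    return p_tx_dbm - geo_loss_db - atm_loss_db
```

Because geometric loss grows 20 dB per decade of range while atmospheric loss grows linearly, long aerial links quickly become attenuation-dominated in haze, which motivates the mitigation techniques the paper surveys.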

26 pages, 9979 KB  
Article
An Intelligent Multi-Port Temperature Control Scheme with Open-Circuit Fault Diagnosis for Aluminum Heating Systems
by Song Xu, Yiqi Rui, Lijuan Wang, Pengqiang Nie, Wei Jiang, Linfeng Sun and Seiji Hashimoto
Processes 2026, 14(2), 362; https://doi.org/10.3390/pr14020362 - 20 Jan 2026
Abstract
Industrial aluminum-block heating processes exhibit nonlinear dynamics, substantial time delays, and stringent requirements for fault detection and diagnosis, especially in semiconductor manufacturing and other high-precision electronic processes, where slight temperature deviations can accelerate device degradation or even cause catastrophic failures. To address these challenges, this study presents a digital twin-based intelligent heating platform for aluminum blocks with a dual-artificial-intelligence (dual-AI) framework for control and diagnosis, applicable to multi-port aluminum-block heating systems. The system enables real-time observation and simulation of high-temperature operational conditions via virtual-real interaction. The platform precisely regulates a nonlinear temperature control system with a prolonged time delay by integrating a conventional proportional–integral–derivative (PID) controller with a Levenberg–Marquardt-optimized backpropagation (LM-optimized BP) neural network. To emulate failures, a relay severs the connection to the heater, simulating an open-circuit fault. Throughout this procedure, sensor data are gathered continuously, enabling the creation of a spatiotemporal time-series dataset under both normal and fault conditions. A one-dimensional convolutional neural network (1D-CNN) is trained to achieve high-accuracy fault detection and localization. PID+LM-BP achieves a response time of about 200 s in simulation. In the 100 °C to 105 °C step experiment, it reaches a settling time of 6 min with a 3 °C overshoot. Fault detection uses a 0.38 °C threshold applied to the absolute minute-to-minute change of the 1-min mean temperature.
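The stated fault-detection criterion reduces to a threshold test on successive 1-min mean temperatures. A direct sketch (the 0.38 °C value is taken from the abstract; the function name is ours, and the 1D-CNN handles the harder localization task):

```python
def detect_open_circuit(minute_means, threshold=0.38):
    """Flag minute indices where the 1-min mean temperature jumps.

    minute_means: sequence of per-minute mean temperatures (degrees C).
    A fault is flagged when the absolute minute-to-minute change
    exceeds the threshold (0.38 C, per the reported rule).
    """
    return [i for i in range(1, len(minute_means))
            if abs(minute_means[i] - minute_means[i - 1]) > threshold]
```

A sudden drop after a relay opens the heater circuit crosses the threshold within a minute, while normal regulation jitter stays below it.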

25 pages, 4582 KB  
Article
Assessing Radiance Contributions Above Near-Space over the Ocean Using Radiative Transfer Simulation
by Chunxia Li, Jia Liu, Qingying He, Ming Xu and Mengqi Li
Remote Sens. 2026, 18(2), 337; https://doi.org/10.3390/rs18020337 - 20 Jan 2026
Abstract
Using a near-space platform to conduct radiometric calibrations of ocean color sensors is a promising method for refining calibration precision, but there is a knowledge gap about the radiance contributions above near-space over the open ocean. We used the radiative transfer (RT) model PCOART to assess the contributions (LR) of the upwelling radiance received at near-space balloons to the total radiance (Lt) measured at the top of the atmosphere (TOA). The results indicated that LR displayed distinct geometric dependencies, with values exceeding 2% across most observation geometries. Moreover, LR increased with wavelength under various solar zenith angles, and LR values fell below 1% only for the two near-infrared bands. Additionally, the influences of variations in oceanic constituents on LR were negligible across various azimuth angles and spectral bands, except in nonalgal particle (NAP)-dominated waters. Furthermore, the influences of aerosol optical thicknesses (AOTs) and atmospheric vertical distributions on LR were examined. Outside glint-contaminated areas, the atmosphere-associated LR variations could exceed 2% but declined substantially as AOTs increased under most observation geometries. The mean height of the vertically inhomogeneous layer (hm) significantly influenced LR, and the differences in Lt could exceed 5% when comparing homogeneous versus Gaussian-like atmospheric vertical distributions. Finally, the transformability from near-space radiance to Lt was examined with a multilayer perceptron (MLP) model, which exhibited high agreement with the RT simulations. The mean absolute percentage difference (MAPD) averaged 0.420% across the eight bands, ranging from 0.218% to 0.497%. Overall, radiometric calibration utilizing near-space platforms represents an innovative method for satellite-borne ocean color sensors.
(This article belongs to the Special Issue Remote Sensing for Monitoring Water and Carbon Cycles)
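The abstract above scores the MLP's near-space-to-TOA radiance mapping with the mean absolute percentage difference (MAPD). A minimal sketch of that metric, using illustrative placeholder radiance values rather than PCOART output:

```python
# Hedged sketch: the mean absolute percentage difference (MAPD) used to
# compare MLP-predicted TOA radiance against RT-simulated reference values.
# The radiance values below are illustrative placeholders, not study data.

def mapd_percent(predicted, reference):
    """Mean absolute percentage difference, in percent."""
    return 100.0 * sum(
        abs(p - r) / abs(r) for p, r in zip(predicted, reference)
    ) / len(reference)

# Per-band differences averaged over bands, analogous to the reported
# cross-band average of 0.420%.
predicted = [5.02, 3.98, 2.51]
reference = [5.00, 4.00, 2.50]
print(round(mapd_percent(predicted, reference), 3))  # → 0.433
```

A per-band MAPD (one value per spectral band, as in the paper's 0.218–0.497% range) follows by applying the same formula to each band's samples separately.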
25 pages, 3538 KB  
Article
Pushing the Limits of Large Language Models in Quantum Operations
by Dayton C. Closser and Zbigniew J. Kabala
Quantum Rep. 2026, 8(1), 7; https://doi.org/10.3390/quantum8010007 - 19 Jan 2026
Abstract
What is the fastest Artificial Intelligence Large Language Model (AI LLM) for generating quantum operations? To answer this, we present the first benchmarking study comparing popular, publicly available AI models tasked with creating quantum gate designs. The Wolfram Mathematica framework was used to interface with six AI LLMs: Google Gemini 2.0 Flash, Anthropic Claude 3 Haiku, WolframLLM Notebook Assistant For Mathematica V14.3.0.0, OpenAI ChatGPT Omni 4 Mini, Google Gemma 3 4b 1t, and DeepSeek Chat V3. Our study found the following: (1) Gemini 2.0 Flash was overall the fastest of the models tested, producing average quantum gate designs in 2.66101 s, factoring in the "thinking" execution time and ServiceConnect network latencies. (2) On average, four of the ten quantum operations each LLM produced compiled in Python version 3.13.5 (40.8% success rate). (3) Quantum operations averaged approximately 21–45 lines of code (omitting nonsensical outliers). (4) DeepSeek Chat V3 produced the shortest code, averaging 21.6 lines. This comparison evaluates the time taken by each AI LLM platform to generate quantum operations, including ServiceConnect networking times. These findings highlight a promising horizon where publicly available Large Language Models can become fast collaborators with quantum computers, enabling rapid quantum gate synthesis and paving the way for greater interoperability between these two cutting-edge technologies.
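The benchmark above times each model's generation and checks whether the returned Python compiles. A minimal sketch of such a harness, using stub generators in place of real ServiceConnect calls (the model names and stub latencies are hypothetical):

```python
# Hedged sketch of a generation benchmark: time each (stubbed) model and
# check whether its returned Python compiles, mirroring the wall-clock
# timing, compile-success, and line-count metrics described in the study.
import time

def make_stub(source):
    """Return a fake 'model' that sleeps briefly and emits fixed code."""
    def generate(prompt):
        time.sleep(0.01)  # stands in for network + "thinking" latency
        return source
    return generate

models = {
    "stub-fast": make_stub("def hadamard():\n    return [[1, 1], [1, -1]]\n"),
    "stub-broken": make_stub("def cnot(:\n    pass\n"),  # syntax error on purpose
}

results = {}
for name, generate in models.items():
    start = time.perf_counter()
    code = generate("Write a Python function for a quantum gate.")
    elapsed = time.perf_counter() - start
    try:
        compile(code, "<generated>", "exec")
        ok = True
    except SyntaxError:
        ok = False
    results[name] = {"seconds": elapsed, "compiles": ok,
                     "lines": code.count("\n")}

for name, r in results.items():
    print(name, r["compiles"], r["lines"])
```

Averaging `seconds` over repeated prompts per model, and `compiles` over the ten operations, would yield the study's speed ranking and the 40.8% compile-success figure.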
13 pages, 2412 KB  
Article
AI-Based Brain Volumetry Without MPRAGE? Evaluation of Synthetic T1-MPRAGE from 2D T2/FLAIR
by Ludwig Singer, Tim Alexius Möhle, Angelika Mennecke, Konstantin Huhn, Veit Rothhammer, Manuel Alexander Schmidt, Arnd Doerfler and Stefan Lang
Diagnostics 2026, 16(2), 317; https://doi.org/10.3390/diagnostics16020317 - 19 Jan 2026
Abstract
Background: Automated AI-based brain volumetry is increasingly used in clinical practice. T1-weighted sequences (e.g., MPRAGE) are considered the current state of the art. However, owing to faster acquisition and higher in-plane resolution, 2D anisotropic sequences are often preferred in clinical routine, yet these sequences cannot be processed with currently available AI-volumetry software. We therefore aimed to evaluate volumetric data from synthetic MPRAGE-like sequences (mprAIge). Methods: We analyzed 412 datasets (206 conventional MPRAGE and 206 T2w/FLAIR) from healthy volunteers (n = 36) and patients with multiple sclerosis (n = 140). Synthetic mprAIge was generated using SynthSR-CNN and assessed via assemblyNET on the volBrain platform. Total brain volume (TBV), gray and white matter volume (GMV/WMV), and key substructures were compared between mprAIge and conventional MPRAGE; average volume differences (AVDs) and correlations were calculated. Results: Synthetic mprAIge was generated successfully in all 206 cases. Quantitative analysis demonstrated strong correlation and high agreement for key substructures. TBV showed excellent agreement (AVD: 2.75% for controls, 3.90% for MS patients; r = 0.99 and 0.97, respectively). White matter volume exhibited excellent agreement (AVD: −1.92% for controls, 0.28% for MS patients; r = 0.95). Hippocampal volume also demonstrated good to excellent agreement (AVD: 1.13% for controls, −1.92% for MS patients; r = 0.91 and 0.89, respectively). Conclusions: Synthetic mprAIge enables AI-volumetry software application without limitations. Its volumetric assessments align well with conventional MPRAGE, opening new opportunities for volumetric post-processing and mapping of disease progression.
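The agreement statistics above (signed average volume difference in percent, plus Pearson correlation) can be sketched as follows; the paired volumes are hypothetical illustrations, not study data:

```python
# Hedged sketch: average volume difference (AVD, in percent) and Pearson
# correlation between volumes from a synthetic sequence and a conventional
# MPRAGE reference. The volume pairs below are illustrative, not study data.
import math

def avd_percent(synthetic, reference):
    """Mean signed percentage difference of synthetic vs. reference volumes."""
    return 100.0 * sum(
        (s - r) / r for s, r in zip(synthetic, reference)
    ) / len(reference)

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical total brain volumes in mL, one pair per subject.
mprage  = [1180.0, 1250.0, 1100.0, 1320.0]
mpraige = [1212.0, 1284.0, 1131.0, 1355.0]

print(round(avd_percent(mpraige, mprage), 2), round(pearson_r(mpraige, mprage), 3))
```

A positive AVD indicates the synthetic sequence systematically overestimates volume relative to MPRAGE (as in the reported TBV values of 2.75% and 3.90%); a negative AVD indicates underestimation.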