Search Results (218)

Search Parameters:
Keywords = low-code/no-code platform

16 pages, 879 KB  
Article
Enhanced Exome Sequencing Improves the Genetic Diagnosis of Deafblindness
by Guadalupe A. Cifuentes, Marta Diñeiro, Alicia R. Huete, Raquel Capín, Adrián Santiago, Alberto A. R. Vargas, Dido Carrero, Julien Biscay, Esther López Martínez, Beatriz Aguiar, María Urbaniak, Beatriz Fernández-Vega, María Costales, Rocío González-Aguado, Rubén Cabanillas and Juan Cadiñanos
Genes 2026, 17(3), 344; https://doi.org/10.3390/genes17030344 - 19 Mar 2026
Abstract
Background/Objectives: The combination of hearing loss and visual impairment in a single patient strongly suggests a genetic aetiology. However, after conventional testing, a considerable proportion of deafblindness cases remain without a genetic diagnosis. The aim of this study was to address this diagnostic gap. Methods: We developed an enhanced exome strategy that uses a whole-exome backbone complemented by spike-in capture probes for (i) low-coverage coding segments and clinically validated, non-coding regions (including deep intronic splice-altering sites and untranslated exonic sequences) across 659 genes associated with hearing loss and/or visual impairment, and (ii) mitochondrial DNA. Results: With 66.6 million paired-end reads per sample, this methodology achieved coverage of at least 20 reads per base at 99.3% of target coding and non-coding positions of genes associated with deafness and/or blindness, as well as 98.8% of the whole exome. The enhanced exome approach correctly identified the genetic variants causative of deafness and/or blindness in 10 out of 10 cases with a previously known genetic cause, in 3 out of 10 additional cases that remained undiagnosed after extensive panel sequencing, and in 4 out of 4 cases that had not been genetically studied before. Comparison of the performance of two commercial bioinformatics platforms for enhanced exome interpretation revealed that eVAI consistently prioritised causative variants higher than, or as high as, VarSome Clinical, resulting in a tendency toward shorter interpretation times using the former. Both platforms offered the same diagnostic yield and both failed to correctly call one of the causative variants. Conclusions: In an era where many centres operate exome analysis through virtual panels, enhanced exome sequencing leverages the advantages of whole-exome and custom panel sequencing: it provides panel-like sensitivity for clinically actionable loci, while offering the flexibility to periodically reanalyse data and discover candidate genes. Full article
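The headline coverage figures reduce to a simple per-base computation. A minimal sketch, assuming a per-base depth array already extracted from the alignments (the simulated depths below are placeholders, not the study's data):

```python
import numpy as np

# Hypothetical per-base read depths over the target regions; in practice
# these would come from coverage tooling applied to the alignments.
depth = np.random.default_rng(0).poisson(lam=120, size=1_000_000)

MIN_DEPTH = 20  # the 20-reads-per-base threshold used in the paper

# Fraction of target positions covered at >= 20 reads
frac_covered = np.mean(depth >= MIN_DEPTH)
print(f"Positions at >= {MIN_DEPTH}x: {frac_covered:.1%}")
```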
(This article belongs to the Section Genetic Diagnosis)

25 pages, 3296 KB  
Article
Machine Learning for Building Code Waiver Assessment: A Predictive Analytics Framework from 197 Singapore BCA Cases (2021–2023)
by Samson Tan and Teik Toe Teoh
Appl. Sci. 2026, 16(6), 2772; https://doi.org/10.3390/app16062772 - 13 Mar 2026
Abstract
Building code waiver assessments in Singapore remain largely discretionary, relying on case officers’ subjective judgement with limited decision-support tooling. This study presents the first machine learning framework for predicting building code waiver outcomes, trained on 197 historically decided cases from the Building and Construction Authority (BCA) across five waiver categories: barrier-free accessibility (n = 45), ventilation (n = 61), staircase design (n = 37), safety provisions (n = 30), and structural modifications (n = 24), spanning 2021 to 2023. Fourteen engineered features, including documentation completeness, technical justification quality, and compliance history, were extracted through domain-expert annotation. Four models were evaluated: L2-regularised logistic regression, random forest, gradient boosting (XGBoost 2.0.1), and a weighted ensemble. The ensemble achieved the highest predictive accuracy of 83.7% (95% CI: 79.2–88.1%) with an area under the receiver operating characteristic curve (AUC) of 0.891 (95% CI: 0.854–0.928), significantly outperforming all individual models (McNemar’s test, p < 0.05). SHAP analysis revealed that documentation completeness and technical justification quality collectively account for 55% of prediction variance. A companion five-by-five risk assessment matrix, combining predicted rejection probability with consequence severity, stratified cases into actionable risk tiers correlating with observed approval rates ranging from 90.3% (very low risk) to 10.0% (very high risk; Spearman rho = −0.71, p < 0.001). Performance varied across waiver categories: ventilation waivers achieved the highest balanced accuracy (87.1%) while safety waivers proved most challenging (balanced accuracy 64.3%, sensitivity 40.0%). The framework offers a transparent, data-driven decision-support complement to regulatory judgement, learning patterns from historically decided applications within the 2021–2023 BCA context, and demonstrates feasibility for integration into Singapore’s Corenet X digital building submission platform. These five waiver categories serve as domain stratification variables. The machine learning target variable is the binary regulatory outcome: Approved (46.2% of cases) or Rejected (53.8%). Full article
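The weighted ensemble over the three base learners can be sketched with standard libraries; the synthetic data, hyperparameters, and voting weights below are illustrative stand-ins, not the paper's tuned configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in for the 197 annotated waiver cases with 14 engineered features.
X, y = make_classification(n_samples=197, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Weighted soft-voting ensemble over the three base models.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(penalty="l2", max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
    ],
    voting="soft",
    weights=[1, 2, 2],  # illustrative weights, not the paper's
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```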

22 pages, 4391 KB  
Article
Fuzzy Logic-Based LVRT Enhancement in Grid-Connected PV System for Sustainable Smart Grid Operation: A Unified Approach for DC-Link Voltage and Reactive Power Control
by Mokabbera Billah, Shameem Ahmad, Chowdhury Akram Hossain, Md. Rifat Hazari, Minh Quan Duong, Gabriela Nicoleta Sava and Emanuele Ogliari
Sustainability 2026, 18(5), 2448; https://doi.org/10.3390/su18052448 - 3 Mar 2026
Abstract
Low-voltage ride-through (LVRT) capability is essential for grid-connected photovoltaic (PV) systems, especially as rising renewable integration challenges grid stability during voltage disturbances. Existing LVRT methods often target isolated control functions, leading to limited system resilience. This paper presents a unified control strategy integrating DC-link voltage regulation, reactive power injection, and overvoltage mitigation using a coordinated fuzzy logic framework. The proposed architecture employs a cascaded control structure comprising an outer voltage loop and an inner current loop with feed-forward decoupling, synchronized via a Synchronous Reference Frame Phase-Locked Loop (SRF-PLL). At its core is a dual-input, single-output Fuzzy Logic Controller (FLC), featuring optimized membership functions and dynamic rule-based logic to manage multiple control objectives during grid faults. The proposed FLC-based unified LVRT controller for a grid-tied PV system was implemented and validated for both symmetrical and asymmetrical fault conditions on the MATLAB/Simulink 2023b platform. The proposed FLC-based LVRT controller achieves voltage sag compensation of 97.02% and 98.4% for symmetrical and asymmetrical faults, respectively, outperforming conventional PI control, which achieves 94.02% and 96.5%. The system maintains a stable DC-link voltage of 800 V and delivers up to 78% reactive power support during faults. Fault detection and recovery are completed within 200 ms, complying with Bangladesh grid code requirements. This integrated fuzzy logic approach offers a significant advancement for enhancing grid stability in high-renewable environments and supports reliable renewable utilization and more sustainable grid operation in developing regions. Full article
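A minimal sketch of a dual-input, single-output fuzzy step of the kind described; the membership functions, rule table, and consequents below are generic placeholders, not the authors' optimized design:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Membership sets (negative / zero / positive) on the normalised range [-1, 1]
SETS = {
    "N": lambda x: tri(x, -2.0, -1.0, 0.0),
    "Z": lambda x: tri(x, -1.0, 0.0, 1.0),
    "P": lambda x: tri(x, 0.0, 1.0, 2.0),
}

# Rule table: (voltage-error set, error-derivative set) -> crisp consequent
RULES = {
    ("N", "N"): -1.0, ("N", "Z"): -1.0, ("N", "P"): 0.0,
    ("Z", "N"): -0.5, ("Z", "Z"):  0.0, ("Z", "P"): 0.5,
    ("P", "N"):  0.0, ("P", "Z"):  1.0, ("P", "P"): 1.0,
}

def flc(v_err, dv_err):
    """Dual-input single-output Mamdani-style step: min as AND for rule
    firing strength, weighted average of singletons as defuzzification."""
    num = den = 0.0
    for (s1, s2), out in RULES.items():
        w = min(SETS[s1](v_err), SETS[s2](dv_err))
        num += w * out
        den += w
    return num / den if den else 0.0

print(flc(0.4, -0.1))  # modest positive corrective action
```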
(This article belongs to the Special Issue Sustainable Energy in Building and Built Environment)

15 pages, 5848 KB  
Article
A Software Defined Radio Implementation of Non-Orthogonal Multiple Access with Reliable Decoding via Error Correction
by Dipanjan Adhikary and Eirini Eleni Tsiropoulou
Future Internet 2026, 18(3), 128; https://doi.org/10.3390/fi18030128 - 2 Mar 2026
Abstract
Non-orthogonal multiple access (NOMA) has been identified as one of the key technologies for 6G capacity and latency gains. However, implementation challenges of the NOMA technique related to carrier, timing, and phase offsets, successive interference cancellation (SIC) error propagation, packet loss dynamics, and host-to-software-defined-radio processing jitter hinder its practical deployment. This paper bridges the gap between theory and hardware by introducing a complete two-user NOMA transmit–receive chain on a low-cost ADALM-Pluto software defined radio (SDR) platform. The proposed implementation integrates matched filtering, offset estimation and correction, SIC with waveform reconstruction and subtraction, and reliability reinforcement via rate-1/2 convolutional coding with Viterbi decoding. We performed a complete validation of the proposed design in both downlink and uplink modes, collecting packet-level and system-related metrics such as end-to-end latency, bit error rate (BER), and success rate. Moreover, we demonstrate uplink NOMA without the need for expensive GPS-disciplined oscillators by leveraging the Pluto Rev-C dual-transmit channels, which share a common oscillator. We present detailed experimental results at 915 MHz with BPSK modulation for the downlink performance, and also show a full implementation of the uplink NOMA. We observe excellent reliability for the downlink setup and good reliability for the uplink system. Full article
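The core SIC idea is compact enough to simulate. A minimal numeric sketch of two-user power-domain NOMA over AWGN, assuming an idealized unit-gain channel and a power split that is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
b1 = rng.integers(0, 2, n) * 2 - 1       # far user bits (BPSK), high power share
b2 = rng.integers(0, 2, n) * 2 - 1       # near user bits (BPSK), low power share

p1, p2 = 0.8, 0.2                        # power allocation, p1 + p2 = 1
x = np.sqrt(p1) * b1 + np.sqrt(p2) * b2  # power-domain superposition

snr_db = 15                              # unit-gain AWGN channel for brevity
sigma = np.sqrt(10 ** (-snr_db / 10))
y = x + rng.normal(0.0, sigma, n)

# Near-user SIC: detect the strong (far-user) signal, reconstruct and
# subtract it, then detect the weak signal from the residual.
b1_hat = np.where(y >= 0, 1, -1)
residual = y - np.sqrt(p1) * b1_hat
b2_hat = np.where(residual >= 0, 1, -1)

print("BER far user:", np.mean(b1_hat != b1))
print("BER near user (after SIC):", np.mean(b2_hat != b2))
```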
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2026–2027)

23 pages, 1936 KB  
Article
Performance of a Threshold-Based WDM and ACM for FSO Communication Between Mobile Platforms in Maritime Environments
by Sung Sik Nam, Duck Dong Hwang and Mohamed-Slim Alouini
Mathematics 2026, 14(4), 699; https://doi.org/10.3390/math14040699 - 16 Feb 2026
Abstract
In this study, we statistically analyze the performance of a threshold-based multiple optical signal selection scheme (TMOS) for wavelength division multiplexing (WDM) and adaptive coded modulation (ACM); this is achieved using free space optical (FSO) communication between mobile platforms in maritime environments with fog and 3D pointing errors. Specifically, we derive a new closed-form expression for a composite probability density function (PDF) that is more appropriate for applying various algorithms to FSO systems under the combined effects of fog and pointing errors. We then analyze the outage probability, average spectral efficiency (ASE), and bit error rate (BER) performance of the conventional detection techniques (i.e., heterodyne and intensity modulation/direct detection). The derived analytical results were cross-verified using Monte Carlo simulations. The results show that we can obtain a higher ASE performance by applying TMOS-based WDM and ACM and that the probability of the beam being detected in the photodetector increased at a low signal-to-noise ratio, contrary to conventional performance. Furthermore, it has been confirmed that applying WDM and ACM is suitable, particularly in maritime environments where channel conditions frequently change. Full article
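The Monte Carlo cross-check mentioned in the abstract can be sketched as follows; the fog and pointing-error distributions and all parameters below are illustrative assumptions, not the paper's derived composite PDF:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Illustrative composite channel: fog attenuation modelled as an
# exponential random variable in dB (a common thin-fog assumption),
# pointing error as a Rayleigh radial displacement through a Gaussian beam.
fog_db = rng.exponential(scale=3.0, size=n)   # fog loss in dB
r = rng.rayleigh(scale=0.3, size=n)           # radial pointing displacement
beam = np.exp(-2 * r**2 / 1.0**2)             # Gaussian beam gain, waist w = 1

h = 10 ** (-fog_db / 10) * beam               # composite channel gain
snr = 100 * h                                 # nominal electrical SNR of 20 dB

gamma_th = 10.0                               # outage threshold
print("outage probability:", np.mean(snr < gamma_th))
```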
(This article belongs to the Section E: Applied Mathematics)

34 pages, 7022 KB  
Article
Quantitative Perceptual Analysis of Feature-Space Scenarios in Network Media Evaluation Using Transformer-Based Deep Learning: A Case Study of Fuwen Township Primary School in China
by Yixin Liu, Zhimin Li, Lin Luo, Simin Wang, Ruqin Wang, Ruonan Wu, Dingchang Xia, Sirui Cheng, Zejing Zou, Xuanlin Li, Yujia Liu and Yingtao Qi
Buildings 2026, 16(4), 714; https://doi.org/10.3390/buildings16040714 - 9 Feb 2026
Cited by 1
Abstract
Against the dual backdrop of the rural revitalization strategy and the pursuit of high-quality, balanced urban–rural education, optimizing rural campus spaces has emerged as an important lever for addressing educational resource disparities and improving pedagogical quality. However, conventional evaluation of campus space optimization faces two systemic dilemmas. First, top-down decision-making often neglects the authentic needs of diverse stakeholders and place-based knowledge, resulting in spatial interventions that lose regional distinctiveness. Second, routine public participation is constrained by geographical barriers, time costs, and sample-size limitations, which can amplify professional cognitive bias and impede comprehensive feedback formation. The compounded effect of these challenges contributes to a disconnect between spatial optimization outcomes and perceived needs, thereby constraining the distinctive development of rural educational spaces. To address these constraints, this study proposes a novel method that integrates regional spatial feature recognition with digital media-based public perception assessment. At the data collection and ethical governance level, the study strictly adheres to platform compliance and academic ethics. A total of 12,800 preliminary comments were scraped from major social media platforms (e.g., Douyin, Dianping, and Xiaohongshu) and processed through a three-stage screening workflow (keyword screening, rule-based filtering, and manual verification) to yield 8616 valid records covering diverse public groups across China. All user-identifying information was fully anonymized to ensure lawful use and privacy protection. At the analytical modeling level, we develop a Transformer-based deep learning system that leverages multi-head attention mechanisms to capture implicit spatial-sentiment features and metaphorical expressions embedded in review texts. Evaluation on an independent test set indicates a classification accuracy of 89.2%, with balanced and stable scoring performance. Robustness is further strengthened by introducing an equal-weight alternative strategy and conducting stability checks that indicate the consistency of model outputs across weighting assumptions. At the scenario interpretation level, we combine grounded-theory coding with semantic network analysis to establish a three-tier spatial analysis framework of macro (landscape pattern/hydro-topological patterns), meso (architectural interface), and micro (teaching scenes/pedagogical scenarios) levels, and incorporate an interpretive stakeholder typology (tourists, residents, parents, and professional groups) to systematically identify and quantify key features shaping public spatial perception. Findings show that, at the macro level, naturally integrated scenarios such as “campus–farmland integration” and “mountain–water embeddedness” exhibit high affective association, aligning with the “mountain-water-field-village” spatial sequence logic and suggesting broad public endorsement of ecological campus concepts, whereas vernacular settlement-pattern scenarios receive relatively low attention due to cognitive discontinuities. At the meso level, innovative corridor strategies (e.g., framed vistas and expanded corridor spaces) strengthen the building–nature interaction and suggest latent value in stimulating exploratory spatial experience. At the micro level, place-based practice-oriented teaching scenes (e.g., intangible cultural heritage handcraft and creative workshops) achieve higher scores, aligning with the compatibility of vernacular education’s “differential esthetics,” while urban convergence-oriented interdisciplinary curriculum scenes suggest an interpretive gap relative to public expectations. These results indicate an embedded relationship between public perception and regional spatial features, which is further shaped by a multi-actor governance process, characterized by “Government + Influencers + Field Study,” that mediates how rural educational spaces are produced, communicated, and interpreted in digital environments. The study’s innovative value lies in integrating sociological theories (e.g., embeddedness) with deep learning techniques to fill the regional and multi-actor perspective gap in rural campus post-occupancy evaluation (POE) and to promote a methodological shift from “experience-based induction” toward a “data-theory” dual-drive model. The findings provide inferential evidence for rural campus renewal and optimization; the methodological pipeline is transferable to small-scale rural primary schools with media exposure and salient regional ecological characteristics, and it offers a new pathway for incorporating digital media-driven public perception feedback into planning and design practice. The research methodology consists of four sequential stages, implemented in a systematic and progressive manner. First, data collection: Python and the Octopus Collector were used to crawl online comment data related to Fuwen Township Central Primary School, strictly complying with the user agreements of the Douyin, Dianping, and Xiaohongshu platforms. Second, semantic preprocessing: the evaluation content was segmented to generate word frequency statistics and semantic networks; qualitative analysis was conducted using Origin software, and quantitative translation was realized via Sankey diagrams. Third, spatial scene coding: combined with a spatial characteristic identification system, a macro–meso–micro three-tier classification system for spatial scene characteristics was constructed to encode and quantitatively express the textual content. Finally, sentiment quantification and correlation analysis were implemented: a deep learning model based on the Transformer framework was employed to perform sentiment quantification scoring for each comment, and Sankey diagrams were used to quantitatively correlate spatial scenes with sentiment tendencies, thereby exploring the public’s perceptual associations with the architectural spatial environment of rural campuses. Full article
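The per-comment sentiment-scoring step can be approximated with an off-the-shelf Transformer classifier; the model named below is a public stand-in (downloaded on first run), not the study's fine-tuned network:

```python
from transformers import pipeline

# Public stand-in model with star-rating labels; the paper fine-tunes its
# own Transformer on annotated campus-review comments.
clf = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

comments = [
    "The campus blends beautifully into the surrounding farmland.",
    "The corridor spaces feel cramped and poorly lit.",
]
for c in comments:
    result = clf(c)[0]  # e.g. {'label': '5 stars', 'score': 0.61}
    print(f"{result['label']:>8}  {result['score']:.2f}  {c}")
```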
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

22 pages, 3543 KB  
Article
Benchmarking Post-Quantum Signatures and KEMs on General-Purpose CPUs Using a TCP Client–Server Testbed
by Jesus Algar-Fernandez, Andrea Villacís-Vanegas, Ysabel Amaro-Aular and Maria-Dolores Cano
Computers 2026, 15(2), 116; https://doi.org/10.3390/computers15020116 - 9 Feb 2026
Abstract
Quantum computing threatens widely deployed public-key cryptosystems, accelerating the adoption of Post-Quantum Cryptography (PQC) in practical systems. Beyond asymptotic security, the feasibility of PQC deployments depends on measured performance on real hardware and on implementation-level overheads. This paper presents an experimental evaluation of five post-quantum digital signature schemes (CRYSTALS-Dilithium, HAWK, SQISign, SNOVA, and SPHINCS+) and three key encapsulation mechanisms (Kyber, HQC, and BIKE) selected to cover multiple PQC design families and parameterizations used in practice. We implement a TCP client–server testbed in Python that invokes C implementations for each primitive—via standalone executables and, where provided, in-process dynamic libraries—and benchmarks key generation, encapsulation/decapsulation, and signature generation/verification on two Windows 11 commodity processors: an AMD Ryzen 7 4000 (8 cores, 16 threads, 1.8 GHz) and an Intel Core i5-1035G1 (4 cores, 8 threads, 1.0 GHz). Each operation is repeated ten times under a low-interference setup, and results are aggregated as mean (with 95% confidence intervals) timings over repeated runs. Across the evaluated configurations, lattice-based schemes (Kyber, Dilithium, HAWK) show the lowest computational cost, while code-based KEMs (HQC, BIKE), isogeny-based (SQISign), and multivariate (SNOVA) signatures incur higher overhead. Hash-based SPHINCS+ exhibits larger artifacts and higher signing latency depending on the parameterization. The AMD platform consistently outperforms the Intel platform, illustrating the impact of CPU characteristics on observed PQC overheads. These results provide comparative evidence to support primitive selection and capacity planning for quantum-resistant deployments, while motivating future end-to-end validation in protocol and web service settings. Full article
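The repeated-timing methodology (ten runs, mean with 95% CI) reduces to a small harness; the executable names below are hypothetical placeholders for the schemes' compiled C implementations:

```python
import statistics
import subprocess
import time

def bench(cmd, reps=10):
    """Time an external PQC executable over `reps` runs and report the
    mean with a normal-approximation 95% confidence interval (ms)."""
    times = []
    for _ in range(reps):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append((time.perf_counter() - t0) * 1e3)
    mean = statistics.mean(times)
    half = 1.96 * statistics.stdev(times) / len(times) ** 0.5
    print(f"{' '.join(cmd)}: {mean:.2f} ms ± {half:.2f} ms (95% CI)")

# Hypothetical binary names; the paper wraps the schemes' reference C code.
bench(["./kyber768_keygen"])
bench(["./dilithium3_sign", "message.bin"])
```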

15 pages, 2136 KB  
Article
Integrating Alteryx for Teaching Data Analytics in Low-Computing Programs
by Serkan Varol
Educ. Sci. 2026, 16(2), 265; https://doi.org/10.3390/educsci16020265 - 8 Feb 2026
Abstract
In response to the growing need for accessible data analytics education among low-computing disciplines, this study presents the design, implementation, and outcomes of a no-coding graduate-level data analytics course offered within the Engineering Management and Technology Department at the University of Tennessee at Chattanooga. The course utilizes Alteryx Designer 2025.2, an end-to-end, drag-and-drop analytics platform that enables students with minimal programming background to conduct complete data workflows, including data cleansing, transformation, and predictive modeling. Through a project-based learning (PBL) approach, students engage in real-world problem solving, developing data reasoning and interpretation skills rather than focusing on programming syntax. Course artifacts, student project outcomes, and instructional observations suggest that the use of a no-code platform, combined with hands-on assessment through video exercises and mentored projects, supports the development of analytical reasoning, engagement, and data interpretation skills. The paper concludes that GUI-based, no-code tools can effectively bridge the technical accessibility gap in data analytics education, making data-driven learning practical and scalable across low-computing academic programs. This paper is presented as a descriptive pedagogical case study, focusing on course design, instructional practices, and observed learning outcomes rather than a controlled empirical evaluation. Full article
(This article belongs to the Special Issue Theory and Research in Data Science Education)

25 pages, 45647 KB  
Article
A Novel FEC Implementation for VSAT Terminals Using High-Level Synthesis
by Najmeh Khosroshahi, Ron Mankarious and Mohammad Reza Soleymani
Aerospace 2026, 13(2), 155; https://doi.org/10.3390/aerospace13020155 - 6 Feb 2026
Abstract
This paper presents a hardware-efficient field-programmable gate array (FPGA) implementation of a layered two-dimensional corrected normalized min-sum (2D-CNMS) decoder for quasi-cyclic low-density parity-check (QC-LDPC) codes in very small aperture terminal (VSAT) satellite communication systems. The decoder is described in C++ and synthesized using the Xilinx Vitis high-level synthesis (HLS) 2025 (AMD Xilinx, San Jose, CA, USA) tool, and then packaged and integrated as an intellectual property (IP) core within the Vivado Design Suite 2024 (AMD Xilinx, San Jose, CA, USA), enabling rapid prototyping and portability across FPGA platforms. Unlike conventional normalized min-sum (NMS) and two-dimensional normalized min-sum (2D-NMS) architectures, the proposed 2D-CNMS scheme employs dyadic, multiplier-free normalization combined with two-level magnitude correction, achieving near sum-product performance with reduced complexity and latency. The design is implemented on a Zynq UltraScale+ multiprocessor system-on-chip (MPSoC) (AMD Xilinx, San Jose, CA, USA) and supports real-time operation with a throughput of 29–41 Mbps at 100 MHz, while using only 9.6–22.4 k look-up tables (LUTs), 2.1–5.9 k flip-flops (FFs), and no digital signal processing (DSP) slices or block random-access memories (BRAMs). Bit-error-rate (BER) simulations over an additive white Gaussian noise (AWGN) channel show no error floor down to a BER of 10⁻⁸. These results demonstrate that the proposed HLS-based 2D-CNMS IP core provides a resource-efficient, high-performance LDPC decoding solution as compared with existing LDPC implementation approaches. This LDPC solution targets performance enhancement in wireless communication systems and has been deployed on a multi-frequency time-division multiple-access (MF-TDMA) satellite link to assess its overall behavior, demonstrating improved performance with reduced resource usage. Full article
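A toy scalar model of a corrected normalized min-sum check-node update: the dyadic shift and correction constant below are illustrative, and the paper's layered 2D scheme applies row- and column-dependent factors that this sketch collapses into single values:

```python
import numpy as np

def cnms_check_update(msgs, shift=2, beta=0.125):
    """Toy corrected normalised min-sum check-node update.

    `msgs` are the incoming variable-to-check LLRs for one check node.
    Normalisation is dyadic (m * (1 - 2**-shift), realisable with shifts
    and no multipliers in hardware); `beta` is a small additive magnitude
    correction. Returns the extrinsic check-to-variable messages.
    """
    sign = np.sign(msgs)
    total_sign = np.prod(sign)
    mag = np.abs(msgs)
    out = np.empty_like(msgs, dtype=float)
    for i in range(len(msgs)):
        m = np.delete(mag, i).min()     # extrinsic minimum magnitude
        m = m - m / (1 << shift)        # dyadic normalisation
        m = max(m - beta, 0.0)          # magnitude correction
        out[i] = total_sign * sign[i] * m
    return out

print(cnms_check_update(np.array([1.5, -0.8, 2.1, -0.3])))
```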
(This article belongs to the Special Issue Advanced Satellite Communications for Engineers and Scientists)

21 pages, 3332 KB  
Article
MPC-Coder: A Dual-Knowledge Enhanced Multi-Agent System with Closed-Loop Verification for PLC Code Generation
by Yinggang Zhang, Weiyi Xia, Ben Zhao, Tongwen Yuan and Xianchuan Yu
Symmetry 2026, 18(2), 248; https://doi.org/10.3390/sym18020248 - 30 Jan 2026
Abstract
Industrial PLC programming faces persistent difficulties: lengthy development cycles, low fault tolerance, and cross-platform incompatibility among vendors. While LLMs show promise for automated code generation, their direct application is hindered by the gap between ambiguous natural language and the strict determinism required by control logic. This paper proposes MPC-Coder, a dual-knowledge enhanced multi-agent system that addresses this gap. The system combines a structured knowledge graph that imposes hard constraints on process parameters and equipment specifications with a vector database that offers implementation references such as code templates and function blocks. These two knowledge sources form a symmetric complementary architecture. A closed-loop “generation–verification–repair” mechanism leverages formal verification tools to iteratively refine the generated code. Experiments demonstrate that MPC-Coder achieves 100% syntactic correctness and 78% functional consistency, significantly outperforming general-purpose LLMs. The results indicate that the complementary fusion of domain knowledge and closed-loop verification effectively enhances the reliability of code generation, offering a viable technical pathway for the reliable application of LLMs in industrial control systems. Full article
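The generation–verification–repair loop can be outlined as follows; the generator and verifier bodies are toy stand-ins for the LLM agent and the formal-verification tooling:

```python
def generate(spec, feedback=None):
    # Toy stand-in for the LLM generation agent: emits Structured Text,
    # and only terminates the program block once the verifier complains.
    body = f"PROGRAM Main\n  (* {spec} *)\n"
    if feedback:                      # "repair" using the diagnostic
        body += "END_PROGRAM\n"
    return body

def verify(code):
    # Toy stand-in for the syntax / formal-verification check.
    if "END_PROGRAM" not in code:
        return False, "missing END_PROGRAM terminator"
    return True, ""

def closed_loop(spec, max_iters=5):
    """Generation-verification-repair: regenerate with the verifier's
    diagnostics folded back in until the code passes."""
    feedback = None
    for i in range(max_iters):
        code = generate(spec, feedback)
        ok, diag = verify(code)
        if ok:
            print(f"verified after {i + 1} iteration(s)")
            return code
        feedback = diag               # fold diagnostics into next round
    raise RuntimeError("still failing after max_iters")

print(closed_loop("start conveyor when sensor S1 is high"))
```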
(This article belongs to the Section Computer)

14 pages, 2030 KB  
Article
A Modular AI Workflow for Architectural Facade Style Transfer: A Deep-Style Synergy Approach Based on ComfyUI and Flux Models
by Chong Xu and Chongbao Qu
Buildings 2026, 16(3), 494; https://doi.org/10.3390/buildings16030494 - 25 Jan 2026
Abstract
This study focuses on the transfer of architectural facade styles. Using the node-based visual deep learning platform ComfyUI, the system integrates the Flux Redux and Flux Depth models to establish a modular workflow. This workflow achieved style transfer of building facades guided by deep perception, encompassing key stages such as style feature extraction, depth information extraction, positive prompt input, and style image generation. The core innovation of this study lies in two aspects: Methodologically, a modular low-code visual workflow has been established. Through the coordinated operation of different modules, it ensures the visual stability of architectural forms during style conversion. In response to the novel challenges posed by generative AI in altering architectural forms, the evaluation framework innovatively introduces a “semantic inheritance degree” assessment system. This elevates the evaluation perspective beyond traditional “geometric similarity” to a new level of “semantic and imagery inheritance.” It should be clarified that the framework proposed by this research primarily provides innovative tools for architectural education, early design exploration, and visualization analysis. This workflow introduces an efficient “style-space” cognitive and generative tool for teaching architectural design. Students can use this tool to rapidly conduct comparative experiments to generate multiple stylistic facades, intuitively grasping the intrinsic relationships among different styles and architectural volumes/spatial structures. This approach encourages bold formal exploration and deepens understanding of architectural formal language. Full article

23 pages, 1205 KB  
Article
RegRes: An Exploratory Study of Cross-Platform and Cross-Code Performance Prediction Using Regression Models
by Jie Li, Gui Zhao, Biqing Zeng, Jingxin Liu and Fangjie Chen
Appl. Sci. 2026, 16(2), 910; https://doi.org/10.3390/app16020910 - 15 Jan 2026
Abstract
In the era of heterogeneous computing environments, including diverse hardware platforms and programming paradigms, accurate performance prediction of software applications is essential for efficient resource allocation, cost optimization, and informed deployment decisions. However, traditional methods often require platform-specific measurements, which are resource-intensive and limited by data scarcity in low-performance or novel code contexts. This exploratory study addresses these challenges by leveraging transfer learning with regression models to enable cross-platform (from Intel x86 to ARM M1) and cross-code performance predictions. Using the Renaissance benchmark suite (renaissance-gpl-0.15.0.jar), we systematically evaluate eight traditional machine learning models and a deep neural network across scenarios of data transfer and fine-tuning. Key findings demonstrate that transfer learning significantly improves prediction accuracy, with tree-based models like Extra Trees achieving high R² scores (0.92) and outperforming DNNs in robustness, particularly under noisy or data-scarce conditions. The study provides empirical insights into model effectiveness, highlights the superiority of transfer settings, and offers practical guidance for software engineers to reduce measurement overheads and enhance optimization processes. Full article
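One simple form of the transfer setting (pooling source-platform labels with a few target measurements) can be sketched as below; the synthetic runtimes and the pooling strategy are illustrative, not the paper's evaluated variants:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-in: x86 runtimes and related (shifted/scaled) ARM runtimes.
X = rng.normal(size=(400, 8))
y_x86 = X @ rng.normal(size=8) + rng.normal(0, 0.1, 400)
y_arm = 0.7 * y_x86 + 0.3 * X[:, 0] + rng.normal(0, 0.1, 400)

# Plenty of source-platform data, only a handful of target measurements.
X_tgt_small, X_tgt_test, y_tgt_small, y_tgt_test = train_test_split(
    X, y_arm, train_size=30, random_state=0)

# "Transfer" here is crude instance pooling: fit on all source labels
# plus the few target labels (the paper evaluates richer variants).
X_pool = np.vstack([X, X_tgt_small])
y_pool = np.concatenate([y_x86, y_tgt_small])

model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X_pool, y_pool)
print("target R^2:", r2_score(y_tgt_test, model.predict(X_tgt_test)))
```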

19 pages, 6578 KB  
Article
High-Resolution Spatiotemporal-Coded Differential Eddy-Current Array Probe for Defect Detection in Metal Substrates
by Qi Ouyang, Yuke Meng, Lun Huang and Yun Li
Sensors 2026, 26(2), 537; https://doi.org/10.3390/s26020537 - 13 Jan 2026
Abstract
To address the problems of weak geometric features, low signal response amplitude, and insufficient spatial resolvability of near-surface defects in metal substrates, a high-resolution spatiotemporal-coded eddy-current array probe is proposed. The probe adopts an array topology with time-multiplexed excitation and adjacent differential reception, achieving a balance between high common-mode rejection ratio and high-density spatial sampling. First, a theoretical electromagnetic coupling model between the probe and the metal substrate is established, and finite-element simulations are conducted to investigate the evolution of the skin effect, eddy-current density distribution, and differential impedance response over an excitation frequency range of 1–10 MHz. Subsequently, a 64-channel M-DECA probe and an experimental testing platform are developed, and frequency-sweeping experiments are carried out under different excitation conditions. Experimental results indicate that, under a 50 kHz excitation frequency, the array eddy-current response achieves an optimal trade-off between signal amplitude and spatial geometric consistency. Furthermore, based on the pixel-to-physical coordinate mapping relationship, the lateral equivalent diameters of near-surface defects with different characteristic scales are quantitatively characterized, with relative errors of 6.35%, 4.29%, 3.98%, 3.50%, and 5.80%, respectively. Regression-based quantitative analysis reveals a power-law relationship between defect area and the amplitude of the differential eddy-current array response, with a coefficient of determination R² = 0.9034 for the bipolar peak-to-peak feature. The proposed M-DECA probe enables high-resolution imaging and quantitative characterization of near-surface defects in metal substrates, providing an effective solution for electromagnetic detection of near-surface, low-contrast defects. Full article
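The reported power-law relationship is a standard log-log regression; a minimal sketch with invented area/amplitude pairs (not the measured data):

```python
import numpy as np

# Illustrative defect areas (mm^2) and differential peak-to-peak amplitudes.
area = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
vpp = np.array([0.11, 0.19, 0.35, 0.62, 1.15])

# Fit the power law Vpp = a * area**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(area), np.log(vpp), 1)
pred = np.exp(log_a) * area ** b

# Coefficient of determination of the fitted power law.
ss_res = np.sum((vpp - pred) ** 2)
ss_tot = np.sum((vpp - vpp.mean()) ** 2)
print(f"b = {b:.3f}, a = {np.exp(log_a):.3f}, R^2 = {1 - ss_res / ss_tot:.4f}")
```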

17 pages, 710 KB  
Article
KD-SecBERT: A Knowledge-Distilled Bidirectional Encoder Optimized for Open-Source Software Supply Chain Security in Smart Grid Applications
by Qinman Li, Xixiang Zhang, Weiming Liao, Tao Dai, Hongliang Zheng, Beiya Yang and Pengfei Wang
Electronics 2026, 15(2), 345; https://doi.org/10.3390/electronics15020345 - 13 Jan 2026
Abstract
With the acceleration of digital transformation, open-source software has become a fundamental component of modern smart grids and other critical infrastructures. However, the complex dependency structures of open-source ecosystems and the continuous emergence of vulnerabilities pose substantial challenges to software supply chain security. In power information networks and cyber–physical control systems, vulnerabilities in open-source components integrated into Supervisory Control and Data Acquisition (SCADA), Energy Management System (EMS), and Distribution Management System (DMS) platforms and distributed energy controllers may propagate along the supply chain, threatening system security and operational stability. In such application scenarios, large language models (LLMs) often suffer from limited semantic accuracy when handling domain-specific security terminology, as well as deployment inefficiencies that hinder their practical adoption in critical infrastructure environments. To address these issues, this paper proposes KD-SecBERT, a domain-specific semantic bidirectional encoder optimized through multi-level knowledge distillation for open-source software supply chain security in smart grid applications. The proposed framework constructs a hierarchical multi-teacher ensemble that integrates general language understanding, cybersecurity-domain knowledge, and code semantic analysis, together with a lightweight student architecture based on depthwise separable convolutions and multi-head self-attention. In addition, a dynamic, multi-dimensional distillation strategy is introduced to jointly perform layer-wise representation alignment, ensemble knowledge fusion, and task-oriented optimization under a progressive curriculum learning scheme. Extensive experiments conducted on a multi-source dataset comprising National Vulnerability Database (NVD) and Common Vulnerabilities and Exposures (CVE) entries, security-related GitHub code, and Open Web Application Security Project (OWASP) test cases show that KD-SecBERT achieves an accuracy of 91.3%, a recall of 90.6%, and an F1-score of 89.2% on vulnerability classification tasks, indicating strong robustness in recognizing both common and low-frequency security semantics. These results demonstrate that KD-SecBERT provides an effective and practical solution for semantic analysis and software supply chain risk assessment in smart grids and other critical-infrastructure environments. Full article
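The classic logit-distillation objective underlying such frameworks can be sketched as below; the paper's multi-teacher, layer-wise strategy adds machinery that this single-teacher sketch omits:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Logit distillation: KL divergence between temperature-softened
    teacher and student distributions, blended with the hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T                      # rescale gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy shapes: batch of 8, 3 vulnerability classes
s = torch.randn(8, 3, requires_grad=True)
t = torch.randn(8, 3)
y = torch.randint(0, 3, (8,))
print(distillation_loss(s, t, y))
```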

25 pages, 7051 KB  
Article
Research on Multi-Source Dynamic Stress Data Analysis and Visualization Software for Structural Life Assessment
by Qiming Liu, Yu Chen and Zhiming Liu
Appl. Sci. 2026, 16(1), 556; https://doi.org/10.3390/app16010556 - 5 Jan 2026
Abstract
Dynamic stress data are essential for evaluating structural fatigue life. To address the challenges of complex test data formats, low data reading efficiency, and insufficient visualization, this study systematically analyzes the .raw and .sie file formats from IMC and HBM data acquisition systems and proposes a unified parsing approach. A lightweight .dac format is designed, featuring a “single-channel–single-file” storage strategy that enables rapid, independent retrieval of specific channels and seamless cross-platform sharing, effectively eliminating the inefficiency of the .sie format caused by multi-channel coupling. Based on Python v3.11, an automated format conversion tool and a PyQt5-based visualization platform are developed, integrating graphical plotting, interactive operations, and fatigue strength evaluation functions. The platform supports stress feature extraction, rainflow counting, Goodman correction, and full life-cycle fatigue damage assessment based on the Palmgren–Miner rule. Experimental results demonstrate that the proposed system accurately reproduces both time- and frequency-domain features, with equivalent stress deviations within 2% of nCode results, and achieves a 7–8× improvement in file loading speed compared with the original format. Furthermore, multi-channel scalability tests confirm a linear increase in conversion time (R² > 0.98) and stable throughput across datasets up to 10.20 GB, demonstrating strong performance consistency for large-scale engineering data. The proposed approach establishes a reliable data foundation and efficient analytical tool for fatigue life assessment of structures under complex operating conditions. Full article
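The Palmgren–Miner accumulation step can be sketched directly; the S-N constants and cycle counts below are invented, and rainflow counting plus Goodman correction are assumed to have happened upstream:

```python
import numpy as np

# Basquin-form S-N curve: N_f = C * S**(-m)  (illustrative constants)
C, m = 1e12, 3.0

def cycles_to_failure(stress_amplitude):
    return C * stress_amplitude ** (-m)

# Stress amplitudes (MPa) and counted cycles, e.g. from rainflow counting
# after Goodman mean-stress correction (values here are made up).
amps = np.array([40.0, 60.0, 80.0, 120.0])
counts = np.array([5e5, 2e5, 5e4, 1e3])

# Palmgren-Miner linear damage accumulation: failure predicted at D >= 1.
D = np.sum(counts / cycles_to_failure(amps))
print(f"accumulated damage D = {D:.3f}")
```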
(This article belongs to the Special Issue Advances and Applications in Mechanical Fatigue and Life Assessment)