Search Results (3,050)

Search Parameters:
Keywords = architectural visualization

13 pages, 2027 KB  
Article
An Improved Diffusion Model for Generating Images of a Single Category of Food on a Small Dataset
by Zitian Chen, Zhiyong Xiao, Dinghui Wu and Qingbing Sang
Foods 2026, 15(3), 443; https://doi.org/10.3390/foods15030443 - 26 Jan 2026
Abstract
In the era of the digital food economy, high-fidelity food images are critical for applications ranging from visual e-commerce presentation to automated dietary assessment. However, developing robust computer vision systems for food analysis is often hindered by data scarcity for long-tail or regional dishes. To address this challenge, we propose a novel high-fidelity food image synthesis framework as an effective data augmentation tool. Unlike generic generative models, our method introduces an Ingredient-Aware Diffusion Model based on the Masked Diffusion Transformer (MaskDiT) architecture. Specifically, we design a Label and Ingredients Encoding (LIE) module and a Cross-Attention (CA) mechanism to explicitly model the relationship between food composition and visual appearance, simulating the “cooking” process digitally. Furthermore, to stabilize training on limited data samples, we incorporate a linear interpolation strategy into the diffusion process. Extensive experiments on the Food-101 and VireoFood-172 datasets demonstrate that our method achieves state-of-the-art generation quality even in data-scarce scenarios. Crucially, we validate the practical utility of our synthetic images: utilizing them for data augmentation improved the accuracy of downstream food classification tasks from 95.65% to 96.20%. This study provides a cost-effective solution for generating diverse, controllable, and realistic food data to advance smart food systems. Full article
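
The abstract above mentions stabilizing diffusion training on limited data with a linear interpolation strategy. As a loose, generic illustration rather than the authors' MaskDiT/LIE model, the standard DDPM forward step can itself be read as a linear mix of a clean image and Gaussian noise; the sketch below (PyTorch, with an invented beta schedule and a toy stand-in denoiser) shows that mixing and the usual noise-prediction loss.

# Illustrative sketch only: a generic DDPM-style forward "noising" step viewed as a
# linear interpolation between a clean image x0 and Gaussian noise, plus the usual
# noise-prediction loss. This is NOT the paper's MaskDiT/LIE model; the schedule and
# the tiny denoiser below are hypothetical placeholders.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # assumed linear beta schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product, \bar{alpha}_t

def q_sample(x0, t, noise):
    """Forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise."""
    a = alphas_bar[t].sqrt().view(-1, 1, 1, 1)
    b = (1.0 - alphas_bar[t]).sqrt().view(-1, 1, 1, 1)
    return a * x0 + b * noise

denoiser = nn.Sequential(                        # toy stand-in; a real model also conditions on t
    nn.Conv2d(3, 32, 3, padding=1), nn.SiLU(), nn.Conv2d(32, 3, 3, padding=1)
)

x0 = torch.rand(4, 3, 64, 64)                    # a mini-batch of images
t = torch.randint(0, T, (4,))
noise = torch.randn_like(x0)
x_t = q_sample(x0, t, noise)
loss = nn.functional.mse_loss(denoiser(x_t), noise)  # predict the injected noise
loss.backward()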

20 pages, 6649 KB  
Article
The Learning Experience for Earthquake Awareness Program (LEAP): An Experiential Approach to Seismic Design for Young Students
by Danny A. Melo, Natividad Garcia-Troncoso, Sandra Villamizar, Gerardo Castañeda and Daniel Gomez
Sustainability 2026, 18(3), 1233; https://doi.org/10.3390/su18031233 - 26 Jan 2026
Abstract
In many developing countries, seismic vulnerability remains high due to the widespread presence of informally constructed buildings without professional design or technical supervision. In Colombia, where nearly 60% of structures are non-engineered, this issue is especially acute. The objective of this study is to design, implement, and quantitatively evaluate the Learning Experience for Earthquake Awareness Program (LEAP), an experiential educational strategy for young students that enhances seismic knowledge, promotes sustainable construction awareness, and contributes to disaster risk reduction as a component of social sustainability. To address this challenge, LEAP introduces students to basic principles of structural mechanics and seismic behavior through playful, hands-on activities combining theoretical instruction, practical experimentation, collaborative design, and the testing of model structures. An experimental design with pre- and post-surveys was implemented with 141 participants, including 80 secondary school students (grades 8–11) and 61 university students enrolled in engineering, architecture, and construction programs, using 3D-printed models, earthquake simulators, and interactive games. Statistical analysis using the Wilcoxon signed-rank test (p < 0.05) revealed significant improvements in conceptual understanding and perception, including gains in distinguishing between the hypocenter and epicenter (+45.39%, p = 5.10×10⁻⁸, r = 0.50), understanding seismic magnitude (+39.01%, p = 1.67×10⁻¹², r = 0.71), and visually identifying structural vulnerabilities (+25.50%, p = 4.50×10⁻², r = 0.41). Overall, LEAP contributes to disaster risk reduction and social sustainability by strengthening seismic awareness and responsible construction practices. The most significant results were observed among secondary school students, while university participants mainly reinforced applied and visual comprehension. Given its convenience sample, lack of control group, and immediate post-test, findings should be interpreted as exploratory and associative. Full article
(This article belongs to the Special Issue Advances in Engineering Education and Sustainable Development)
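
The LEAP evaluation above compares paired pre- and post-surveys with the Wilcoxon signed-rank test and reports effect sizes r. A minimal sketch of that kind of analysis, on synthetic scores rather than the study's data, and using the matched-pairs rank-biserial correlation as the effect size, could look like this:

# Generic sketch of a paired pre/post analysis with the Wilcoxon signed-rank test
# and a matched-pairs rank-biserial effect size. The scores below are synthetic.
import numpy as np
from scipy.stats import wilcoxon, rankdata

rng = np.random.default_rng(0)
pre = rng.normal(5.0, 2.0, size=141)            # knowledge scores before the workshop
post = pre + rng.normal(1.0, 1.5, size=141)     # some participants improve

stat, p = wilcoxon(post, pre)                   # two-sided test on paired differences
diff = post - pre
ranks = rankdata(np.abs(diff))                  # ranks of absolute differences
r_rb = (ranks[diff > 0].sum() - ranks[diff < 0].sum()) / ranks.sum()  # rank-biserial r

print(f"W = {stat:.1f}, p = {p:.3g}, effect size r = {r_rb:.2f}")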

22 pages, 4947 KB  
Article
CV-EEGNet: A Compact Complex-Valued Convolutional Network for End-to-End EEG-Based Emotion Recognition
by Wenhao Wang, Dongxia Yang, Yong Yang, Yuanlun Xie, Xiu Liu, Yue Yu and Kaibo Shi
Sensors 2026, 26(3), 807; https://doi.org/10.3390/s26030807 - 26 Jan 2026
Abstract
In electroencephalogram (EEG)-based emotion recognition tasks, existing end-to-end approaches predominantly rely on real-valued neural networks, which mainly operate in the time–amplitude domain. However, EEG signals are a type of wave, intrinsically including frequency, phase, and amplitude characteristics. Real-valued architectures may struggle to capture amplitude–phase coupling and spectral structures that are crucial for emotion decoding. To the best of our knowledge, this work is the first to introduce complex-valued neural networks for EEG-based emotion recognition, upon which we design a new end-to-end architecture named Complex-valued EEGNet (CV-EEGNet). Beginning with raw EEG signals, CV-EEGNet transforms them into complex-valued spectra via the Fast Fourier Transform, then sequentially applies complex-valued spectral, spatial, and depthwise-separable convolution modules to extract frequency structures, spatial topologies, and high-level semantic representations while preserving amplitude–phase relationships. Finally, a complex-valued, fully connected classifier generates complex logits, and the final emotion predictions are derived from their magnitudes. Experiments on the SEED (three-class) and SEED-IV (four-class) datasets validate the effectiveness of the proposed method, with t-SNE visualizations further confirming the discriminability of the learned representations. These results show the potential of complex-valued neural networks for raw-signal EEG emotion recognition. Full article
(This article belongs to the Section Biomedical Sensors)
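
CV-EEGNet, as summarized above, maps raw EEG to complex-valued spectra with the FFT, processes them with complex-valued layers, and takes magnitudes of complex logits for the final decision. The snippet below is a rough, assumption-laden sketch of those ingredients (not the paper's architecture): an FFT front end and one hand-rolled complex-valued linear map whose output magnitudes act as class scores; the channel count, window length, and class count are assumptions.

# Loose illustration (not the paper's CV-EEGNet): raw EEG -> complex spectrum via FFT,
# one hand-rolled complex-valued linear layer, magnitudes of complex "logits" as scores.
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """y = W z with W, z complex, implemented via separate real/imag weight matrices."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_re = nn.Linear(d_in, d_out, bias=False)
        self.w_im = nn.Linear(d_in, d_out, bias=False)

    def forward(self, z):                       # z: complex tensor
        re, im = z.real, z.imag
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return torch.complex(self.w_re(re) - self.w_im(im),
                             self.w_re(im) + self.w_im(re))

x = torch.randn(8, 62, 512)                     # batch, EEG channels, time samples (assumed)
spec = torch.fft.rfft(x, dim=-1)                # complex spectra, shape (8, 62, 257)
feat = spec.flatten(1)                          # naive flattening for this toy example
layer = ComplexLinear(feat.shape[1], 3)         # 3 emotion classes (e.g., three-class SEED)
logits = layer(feat).abs()                      # magnitudes of the complex logits
pred = logits.argmax(dim=1)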

23 pages, 17688 KB  
Article
A GIS-Based Platform for Efficient Governance of Illegal Land Use and Construction: A Case Study of Xiamen City
by Chuxin Li, Yuanrong He, Yuanmao Zheng, Yuantong Jiang, Xinhui Wu, Panlin Hao, Min Luo and Yuting Kang
Land 2026, 15(2), 209; https://doi.org/10.3390/land15020209 - 25 Jan 2026
Abstract
By addressing the challenges of management difficulties, insufficient integration of driver analysis, and single-dimensional analysis in the governance of illegal land use and illegal construction (collectively referred to as the “Two Illegalities”) under rapid urbanization, this study designs and implements a GIS-based governance system using Xiamen City as the study area. First, we propose a standardized data-processing workflow and construct a comprehensive management platform integrating multi-source data fusion, spatiotemporal visualization, intelligent analysis, and customized report generation, effectively lowering the barrier for non-professional users. Second, utilizing methods integrated into the platform, such as Moran’s I and centroid trajectory analysis, we deeply analyze the spatiotemporal evolution and driving mechanisms of “Two Illegalities” activities in Xiamen from 2018 to 2023. The results indicate that the distribution of “Two Illegalities” exhibits significant spatial clustering, with hotspots concentrated in urban–rural transition zones. The spatial morphology evolved from multi-core diffusion to the contraction of agglomeration belts. This evolution is essentially the result of the dynamic adaptation between regional economic development gradients, urbanization processes, and policy-enforcement synergy mechanisms. Through a modular, open technical architecture and a “Data-Technology-Enforcement” collaborative mechanism, the system significantly improves information management efficiency and the scientific basis of decision-making. It provides a replicable and scalable technical framework and practical paradigm for similar cities to transform “Two Illegalities” governance from passive disposal to active prevention and control. Full article
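
Among the analysis tools mentioned above is global Moran's I for detecting spatial clustering of “Two Illegalities” cases. Independent of the platform's own implementation, a bare-bones Moran's I over a row-standardized distance-band weights matrix can be computed as follows; the point pattern and counts are synthetic.

# Minimal global Moran's I with a row-standardized spatial weights matrix.
# The case locations and counts below are synthetic, not the platform's data.
import numpy as np

def morans_i(values, w):
    """values: (n,) attribute vector; w: (n, n) spatial weights with zero diagonal."""
    n = len(values)
    row_sums = w.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0               # guard against isolated points
    w = w / row_sums                            # row-standardize
    z = values - values.mean()
    return (n / w.sum()) * (w * np.outer(z, z)).sum() / (z ** 2).sum()

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))       # 50 case locations on a 10 x 10 km grid
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
w = ((d > 0) & (d < 2.0)).astype(float)         # binary neighbours within a 2 km band (assumed)
counts = rng.poisson(3, size=50).astype(float)  # violation counts per location
print("Moran's I =", morans_i(counts, w))       # values near +1 indicate clustering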

14 pages, 2030 KB  
Article
A Modular AI Workflow for Architectural Facade Style Transfer: A Deep-Style Synergy Approach Based on ComfyUI and Flux Models
by Chong Xu and Chongbao Qu
Buildings 2026, 16(3), 494; https://doi.org/10.3390/buildings16030494 - 25 Jan 2026
Abstract
This study focuses on the transfer of architectural facade styles. Using the node-based visual deep learning platform ComfyUI, the system integrates the Flux Redux and Flux Depth models to establish a modular workflow. This workflow achieved style transfer of building facades guided by deep perception, encompassing key stages such as style feature extraction, depth information extraction, positive prompt input, and style image generation. The core innovation of this study lies in two aspects: Methodologically, a modular low-code visual workflow has been established. Through the coordinated operation of different modules, it ensures the visual stability of architectural forms during style conversion. In response to the novel challenges posed by generative AI in altering architectural forms, the evaluation framework innovatively introduces a “semantic inheritance degree” assessment system. This elevates the evaluation perspective beyond traditional “geometric similarity” to a new level of “semantic and imagery inheritance.” It should be clarified that the framework proposed by this research primarily provides innovative tools for architectural education, early design exploration, and visualization analysis. This workflow introduces an efficient “style-space” cognitive and generative tool for teaching architectural design. Students can use this tool to rapidly conduct comparative experiments to generate multiple stylistic facades, intuitively grasping the intrinsic relationships among different styles and architectural volumes/spatial structures. This approach encourages bold formal exploration and deepens understanding of architectural formal language. Full article

22 pages, 954 KB  
Systematic Review
AI Sparring in Conceptual Architectural Design: A Systematic Review of Generative AI as a Pedagogical Partner (2015–2025)
by Mirko Stanimirovic, Ana Momcilovic Petronijevic, Branislava Stoiljkovic, Slavisa Kondic and Bojana Nikolic
Buildings 2026, 16(3), 488; https://doi.org/10.3390/buildings16030488 - 24 Jan 2026
Abstract
Over the past five years, generative AI has carved out a major role in architecture, especially in education and visual idea generation. Most of the time, the literature talks about AI as a tool, an assistant, or sometimes a co-creator, always highlighting efficiency and the end product in architectural design. There is a steady rise in empirical studies, yet the real impact on how young architects learn still lacks a solid theory behind it. In this systematic review, we dig into peer-reviewed work from 2015 to 2025, looking at how generative AI fits into architectural design education. Using PRISMA guidelines, we pull together findings from 40 papers across architecture, design studies, human–computer interaction and educational research. What stands out is a clear tension: on one hand, students crank out more creative work; on the other, their reflective engagement drops, especially when AI steps in as a replacement during early ideation instead of working alongside them. To address this, we introduce the idea of “AI sparring”. Here, generative AI is not just a helper—it becomes a provocateur, pushing students to think critically and develop stronger architectural concepts. Our review offers new ways to interpret AI’s role, moving beyond seeing it just as a productivity booster. Instead, we argue for AI as an active, reflective partner in education, and we lay out practical recommendations for studio-based teaching and future research. This paper is a theoretical review and conceptual proposal, and we urge future studies to test these ideas in practice. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
20 pages, 1385 KB  
Article
Development of an IoT System for Acquisition of Data and Control Based on External Battery State of Charge
by Aleksandar Valentinov Hristov, Daniela Gotseva, Roumen Ivanov Trifonov and Jelena Petrovic
Electronics 2026, 15(3), 502; https://doi.org/10.3390/electronics15030502 - 23 Jan 2026
Abstract
In the context of small, battery-powered systems, a lightweight, reusable architecture is needed for integrated measurement, visualization, and cloud telemetry that minimizes hardware complexity and energy footprint. Existing solutions require substantial resources, which limits their applicability in Internet of Things (IoT) devices with low power consumption. The present work demonstrates the design, implementation, and experimental evaluation of a single-cell lithium-ion battery monitoring prototype, intended for standalone operation or integration into other systems. The architecture is compact and energy efficient, reducing complexity and memory usage through a modular structure with clearly distinguished responsibilities, avoidance of unnecessary dynamic memory allocations, centralized error handling, and a low-power policy based on deep sleep mode. The data is stored in a cloud platform, while minimal storage is used locally. The developed system combines the functional requirements for an embedded external battery monitoring system: local voltage and current measurement, approximate estimation of the State of Charge (SoC) using a look-up table (LUT) based on the discharge characteristic, and visualization on a monochrome OLED display. The conducted experiments demonstrate the typical U(t) curve, the triggering of the indicator at low charge levels (LOW: SoC ≤ 20% and CRITICAL: SoC ≤ 5%) in real-world conditions, and the absence of unwanted switching of the state near the voltage thresholds. Full article
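
The prototype above estimates SoC from a look-up table built on the cell's discharge characteristic and raises LOW/CRITICAL indications without chattering near the thresholds. A small, generic sketch of that idea is shown below; the table values and the hysteresis band are invented for illustration, not the authors' calibration.

# Generic LUT-based SoC estimate plus hysteresis around the alarm thresholds, so the
# indicator does not toggle while the voltage hovers near a boundary.
# The discharge table below is invented for illustration.
import numpy as np

VOLTS = np.array([3.00, 3.30, 3.50, 3.60, 3.70, 3.80, 3.95, 4.10, 4.20])
SOC   = np.array([0,    5,    10,   20,   40,   60,   80,   95,   100])  # percent

def soc_from_voltage(v):
    return float(np.interp(v, VOLTS, SOC))       # linear interpolation in the LUT

class SocIndicator:
    """LOW at <= 20 %, CRITICAL at <= 5 %, with a small hysteresis band."""
    def __init__(self, hysteresis=2.0):
        self.state, self.h = "OK", hysteresis

    def update(self, soc):
        if soc <= 5:
            self.state = "CRITICAL"
        elif soc <= 20 and self.state != "CRITICAL":
            self.state = "LOW"
        elif soc > 20 + self.h and self.state == "LOW":
            self.state = "OK"
        elif soc > 5 + self.h and self.state == "CRITICAL":
            self.state = "LOW" if soc <= 20 else "OK"
        return self.state

ind = SocIndicator()
for v in (3.90, 3.62, 3.59, 3.61, 3.25, 3.31):   # voltages drifting around the thresholds
    s = soc_from_voltage(v)
    print(f"{v:.2f} V -> SoC ~ {s:4.1f} %  state: {ind.update(s)}")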

19 pages, 2721 KB  
Article
A Portable Extended-Gate FET Integrated Sensing System with Low-Noise Current Readout for On-Site Detection of Escherichia coli O157:H7
by Weilin Guo, Yanping Hu, Yunchao Cao, Hongbin Zhang and Hong Wang
Micromachines 2026, 17(2), 151; https://doi.org/10.3390/mi17020151 - 23 Jan 2026
Abstract
Field-effect transistor (FET) biosensors enable label-free and real-time electrical transduction; however, their practical deployment is often constrained by the need for bulky benchtop instrumentation to provide stable biasing, low-noise readout, and data processing. Here, we report a portable extended-gate FET (EG-FET) integrated sensing system that consolidates the sensing interface, analog front-end conditioning, embedded acquisition/control, and user-side visualization into an end-to-end prototype suitable for on-site operation. The system couples a screen-printed Au extended-gate electrode to a MOSFET and employs a low-noise signal-conditioning chain with microcontroller-based digitization and real-time data streaming to a host graphical interface. As a proof-of-concept, enterohemorrhagic Escherichia coli O157:H7 was selected as the target. A bacteria-specific immunosensing interface was constructed on the Au extended gate via covalent immobilization of monoclonal antibodies. Measurements in buffered samples produced concentration-dependent current responses, and a linear calibration was experimentally validated over 10⁴–10¹⁰ CFU/mL. In specificity evaluation against three common foodborne pathogens (Staphylococcus aureus, Salmonella typhimurium, and Listeria monocytogenes), the sensor showed a maximum interference response of only 13% relative to the target signal (ΔI/ΔI_max) with statistical significance (p < 0.001). Our work establishes a practical hardware–software architecture that mitigates reliance on benchtop instruments and provides a scalable route toward portable EG-FET sensing for rapid, point-of-need detection of foodborne pathogens and other biomarkers. Full article
(This article belongs to the Special Issue Next-Generation Biomedical Devices)
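
The calibration above relates the current response to bacterial concentration over 10⁴–10¹⁰ CFU/mL, which for such a wide range is normally fitted against log10 concentration. A generic sketch of building and inverting that kind of calibration curve, with fabricated response values rather than the paper's data:

# Generic log-linear calibration: current response vs. log10(concentration).
# The response values are fabricated for illustration only.
import numpy as np

conc = np.array([1e4, 1e5, 1e6, 1e7, 1e8, 1e9, 1e10])      # CFU/mL
d_current = np.array([0.8, 1.6, 2.3, 3.1, 3.9, 4.6, 5.4])  # ΔI in µA (made up)

slope, intercept = np.polyfit(np.log10(conc), d_current, 1)
pred = slope * np.log10(conc) + intercept
r2 = 1 - np.sum((d_current - pred) ** 2) / np.sum((d_current - d_current.mean()) ** 2)
print(f"ΔI ≈ {slope:.2f}·log10(C) + {intercept:.2f}   (R² = {r2:.3f})")

# Inverse use: estimate an unknown concentration from a measured response.
di_unknown = 3.5
print("estimated C ≈ %.2e CFU/mL" % 10 ** ((di_unknown - intercept) / slope))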

16 pages, 5308 KB  
Article
Patient-Level Classification of Rotator Cuff Tears on Shoulder MRI Using an Explainable Vision Transformer Framework
by Murat Aşçı, Sergen Aşık, Ahmet Yazıcı and İrfan Okumuşer
J. Clin. Med. 2026, 15(3), 928; https://doi.org/10.3390/jcm15030928 - 23 Jan 2026
Abstract
Background/Objectives: Diagnosing Rotator Cuff Tears (RCTs) via Magnetic Resonance Imaging (MRI) is clinically challenging due to complex 3D anatomy and significant interobserver variability. Traditional slice-centric Convolutional Neural Networks (CNNs) often fail to capture the necessary volumetric context for accurate grading. This study aims to develop and validate the Patient-Aware Vision Transformer (Pa-ViT), an explainable deep-learning framework designed for the automated, patient-level classification of RCTs (Normal, Partial-Thickness, and Full-Thickness). Methods: A large-scale retrospective dataset comprising 2447 T2-weighted coronal shoulder MRI examinations was utilized. The proposed Pa-ViT framework employs a Vision Transformer (ViT-Base) backbone within a Weakly-Supervised Multiple Instance Learning (MIL) paradigm to aggregate slice-level semantic features into a unified patient diagnosis. The model was trained using a weighted cross-entropy loss to address class imbalance and was benchmarked against widely used CNN architectures and traditional machine-learning classifiers. Results: The Pa-ViT model achieved a high overall accuracy of 91% and a macro-averaged F1-score of 0.91, significantly outperforming the standard VGG-16 baseline (87%). Notably, the model demonstrated superior discriminative power for the challenging Partial-Thickness Tear class (ROC AUC: 0.903). Furthermore, Attention Rollout visualizations confirmed the model’s reliance on genuine anatomical features, such as the supraspinatus footprint, rather than artifacts. Conclusions: By effectively modeling long-range dependencies, the Pa-ViT framework provides a robust alternative to traditional CNNs. It offers a clinically viable, explainable decision support tool that enhances diagnostic sensitivity, particularly for subtle partial-thickness tears. Full article
(This article belongs to the Section Orthopedics)
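
Pa-ViT, as described above, aggregates slice-level features into a single patient-level prediction under a weakly supervised multiple instance learning scheme and trains with a class-weighted cross-entropy. The sketch below shows a common attention-based MIL pooling pattern in PyTorch; it is a generic illustration, not the authors' exact aggregation head, and the feature size, slice count, and class weights are assumptions.

# Generic attention-based MIL pooling over per-slice features -> one patient-level
# logit vector, trained with class-weighted cross-entropy. Not the exact Pa-ViT head.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, d_feat=768, n_classes=3):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(d_feat, 128), nn.Tanh(), nn.Linear(128, 1))
        self.head = nn.Linear(d_feat, n_classes)

    def forward(self, slices):                        # slices: (n_slices, d_feat)
        a = torch.softmax(self.score(slices), dim=0)  # attention weight per slice
        patient_feat = (a * slices).sum(dim=0)        # weighted aggregation
        return self.head(patient_feat), a.squeeze(-1)

model = AttentionMIL()
slices = torch.randn(24, 768)                         # e.g., 24 coronal slices of ViT features (assumed)
logits, attn = model(slices)

# Class-weighted cross-entropy to counter class imbalance (weights are illustrative).
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.5, 2.0, 1.0]))
loss = loss_fn(logits.unsqueeze(0), torch.tensor([2]))  # target class index 2 (toy example)
loss.backward()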

23 pages, 6538 KB  
Article
Multi-Scale Graph-Decoupling Spatial–Temporal Network for Traffic Flow Forecasting in Complex Urban Environments
by Hongtao Li, Wenzheng Liu and Huaixian Chen
Electronics 2026, 15(3), 495; https://doi.org/10.3390/electronics15030495 - 23 Jan 2026
Abstract
Accurate traffic flow forecasting is a fundamental component of Intelligent Transportation Systems and proactive urban mobility management. However, the inherent complexity of urban traffic flow, characterized by non-stationary dynamics and multi-scale temporal dependencies, poses significant modeling challenges. Existing spatio-temporal models often struggle to reconcile the discrepancy between static physical road constraints and highly dynamic, state-dependent spatial correlations, while their reliance on fixed temporal receptive fields limits the capacity to disentangle overlapping periodicities and stochastic fluctuations. To bridge these gaps, this study proposes a novel Multi-scale Graph-Decoupling Spatial–temporal Network (MS-GSTN). MS-GSTN leverages a Hierarchical Moving Average decomposition module to recursively partition raw traffic flow signals into constituent patterns across diverse temporal resolutions, ranging from systemic daily trends to high-frequency transients. Subsequently, a Tri-graph Spatio-temporal Fusion module synergistically models scale-specific dependencies by integrating an adaptive temporal graph, a static spatial graph, and a data-driven dynamic spatial graph within a unified architecture. Extensive experiments on four large-scale real-world benchmark datasets demonstrate that MS-GSTN consistently achieves superior forecasting accuracy compared to representative state-of-the-art models. Quantitatively, the proposed framework yields an overall reduction in Mean Absolute Error of up to 6.2% and maintains enhanced stability across multiple forecasting horizons. Visualization analysis further confirms that MS-GSTN effectively identifies scale-dependent spatial couplings, revealing that long-term traffic flow trends propagate through global network connectivity while short-term variations are governed by localized interactions. Full article
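
MS-GSTN, as summarized above, first applies a hierarchical moving-average decomposition that recursively splits the traffic signal into trends at several temporal scales plus a high-frequency residual. A stripped-down version of that recursive split, on a synthetic flow series with assumed window lengths, might look like this:

# Stripped-down hierarchical moving-average decomposition: recursively peel off
# smoother and smoother trends, leaving a high-frequency residual. Synthetic signal;
# window lengths are assumptions, not the paper's settings.
import numpy as np

def moving_average(x, k):
    return np.convolve(x, np.ones(k) / k, mode="same")

def hierarchical_decompose(x, windows=(288, 36, 12)):    # daily, 3-hour, 1-hour scales for 5-min data
    components, residual = [], x.astype(float)
    for k in windows:
        trend = moving_average(residual, k)
        components.append(trend)
        residual = residual - trend
    components.append(residual)                          # high-frequency transients
    return components

t = np.arange(7 * 288)                                   # one week of 5-minute steps
flow = 200 + 80 * np.sin(2 * np.pi * t / 288) + np.random.default_rng(0).normal(0, 15, t.size)
parts = hierarchical_decompose(flow)
print([f"{p.std():.1f}" for p in parts])                 # spread captured at each scale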

28 pages, 26446 KB  
Article
Interpreting Multi-Branch Anti-Spoofing Architectures: Correlating Internal Strategy with Empirical Performance
by Ivan Viakhirev, Kirill Borodin, Mikhail Gorodnichev and Grach Mkrtchian
Mathematics 2026, 14(2), 381; https://doi.org/10.3390/math14020381 - 22 Jan 2026
Abstract
Multi-branch deep neural networks like AASIST3 achieve performance comparable to the state of the art in audio anti-spoofing, yet their internal decision dynamics remain opaque compared to traditional input-level saliency methods. While existing interpretability efforts largely focus on visualizing input artifacts, the way individual architectural branches cooperate or compete under different spoofing attacks is not well characterized. This paper develops a framework for interpreting AASIST3 at the component level. Intermediate activations from fourteen branches and global attention modules are modeled with covariance operators whose leading eigenvalues form low-dimensional spectral signatures. These signatures train a CatBoost meta-classifier to generate TreeSHAP-based branch attributions, which we convert into normalized contribution shares and confidence scores (C_b) to quantify the model’s operational strategy. By analyzing 13 spoofing attacks from the ASVspoof 2019 benchmark, we identify four operational archetypes—ranging from “Effective Specialization” (e.g., A09, Equal Error Rate (EER) 0.04%, C = 1.56) to “Ineffective Consensus” (e.g., A08, EER 3.14%, C = 0.33). Crucially, our analysis exposes a “Flawed Specialization” mode where the model places high confidence in an incorrect branch, leading to severe performance degradation for attacks A17 and A18 (EER 14.26% and 28.63%, respectively). These quantitative findings link internal architectural strategy directly to empirical reliability, highlighting specific structural dependencies that standard performance metrics overlook. Full article
(This article belongs to the Special Issue New Solutions for Multimedia and Artificial Intelligence Security)
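
The framework above summarizes each branch's intermediate activations by the leading eigenvalues of a covariance operator and concatenates these into spectral signatures for a downstream meta-classifier. The snippet below illustrates only that signature step, with random activations standing in for real AASIST3 branch outputs; the branch count, frame count, and dimensions are placeholders.

# Illustration of the "spectral signature" idea: per branch, model the activations with
# a covariance matrix and keep its leading eigenvalues. Activations here are random
# stand-ins; in the paper they would come from AASIST3's branches and attention modules.
import numpy as np

def spectral_signature(acts, k=8):
    """acts: (n_frames, d) activations of one branch -> top-k covariance eigenvalues."""
    cov = np.cov(acts, rowvar=False)                   # (d, d) covariance operator
    eig = np.linalg.eigvalsh(cov)                      # eigenvalues in ascending order
    return eig[::-1][:k]                               # leading k eigenvalues

rng = np.random.default_rng(0)
branches = [rng.normal(size=(200, 64)) for _ in range(14)]    # 14 toy branches
signature = np.concatenate([spectral_signature(a) for a in branches])
print(signature.shape)   # (14 * 8,) feature vector for a meta-classifier (e.g., CatBoost)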

62 pages, 4036 KB  
Systematic Review
Quantization of Deep Neural Networks for Medical Image Analysis: A Systematic Review and Meta-Analysis
by Edgar Fabián Rivera-Guzmán, Luis Fernando Guerrero-Vásquez and Vladimir Espartaco Robles-Bykbaev
Technologies 2026, 14(1), 76; https://doi.org/10.3390/technologies14010076 - 22 Jan 2026
Abstract
Neural network quantization has become established as a key strategy for transitioning medical imaging models from research environments to clinical devices and resource-constrained edge platforms; however, the available evidence remains fragmented and focused on highly heterogeneous use cases. This study presents a systematic review of 72 studies on quantization applied to medical images, following PRISMA guidelines, with the aim of characterizing the relationship among quantization technique, network architecture, imaging modality, and execution environment, as well as their impact on latency, memory footprint, and clinical deployment. Based on a structured variable matrix, we analyze—through tailored visualizations—usage patterns of Post-Training Quantization (PTQ), Quantization-Aware Training (QAT), mixed precision, and binary/low-bit schemes across frameworks such as PyTorch V 2.6.0, TensorFlow 2.19.0, and TensorFlow Lite, executed on server-class GPUs, edge/embedded devices, and specialized hardware. The results reveal a strong concentration of evidence in PyTorch/TensorFlow pipelines using INT8 or mixed precision on GPUs and edge platforms, contrasted with limited attention to PACS/RIS interoperability, model lifecycle management, energy consumption, cost, and regulatory traceability. We conclude that, although quantization can approximate real-time performance and reduce memory footprint, its clinical adoption remains constrained by integration challenges, model governance requirements, and the maturity of the hardware–software ecosystem. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)
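
Much of the evidence synthesized above involves INT8 post-training quantization in PyTorch or TensorFlow pipelines. As a minimal, generic PyTorch example (not drawn from any of the reviewed studies), dynamic post-training quantization of a toy model's linear layers looks like this:

# Minimal post-training dynamic quantization example in PyTorch (generic, not from any
# reviewed study): quantize the Linear layers of a toy classifier head to INT8.
import torch
import torch.nn as nn

model = nn.Sequential(                # toy stand-in for a classifier head
    nn.Linear(2048, 512), nn.ReLU(),
    nn.Linear(512, 2),
)
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 2048)
with torch.no_grad():
    print(model(x), quantized(x))     # outputs should be close; Linear weights are now INT8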

15 pages, 5081 KB  
Article
A Study on Super-Low-Energy Building Design Strategies Based on the Quantification of Passive Climate Adaptation Mechanisms
by Jiaohua Cheng, Yuanyi Zhang, Xiaohuan Liu and Rui Ding
Buildings 2026, 16(2), 456; https://doi.org/10.3390/buildings16020456 - 22 Jan 2026
Abstract
In response to the urgent need for developing super-low-energy buildings (SLEBs) under extreme climatic conditions, a critical research gap lies in scientifically quantifying the passive climate adaptation mechanisms of vernacular architecture and translating them into modern design strategies. To this end, this study proposes a multidimensional “Monitoring–Visualization–Quantification” analytical method. Using the Aijing Zhuang building in central Fujian, China, as a case study, this method systematically analyzed its passive regulatory performance through high-frequency monitoring and spatial-interpolation techniques. This research revealed a distinct “Gradient-Buffering-and-Dynamic-Adjustment” mechanism: a maximum indoor–outdoor temperature difference of 5.7 °C was achieved, with indoor temperature variability reduced by 62%. The courtyard, functioning as a “Thermal Buffer” and “Ventilation Hub”, orchestrated the internal climatic gradients. This study provides systematic quantitative evidence for the modern translation of traditional wisdom, and the revealed mechanism can be directly transformed into design strategies for SLEBs adapted to extreme climates. Full article
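
Two of the headline metrics above, the maximum indoor-outdoor temperature difference and the reduction in indoor temperature variability, are straightforward to compute from paired monitoring series. The sketch below does so on synthetic hourly data, not the Aijing Zhuang measurements.

# Computing two of the abstract's headline metrics from monitoring time series:
# maximum indoor-outdoor temperature difference and relative variability reduction.
# The series below are synthetic, not the Aijing Zhuang measurements.
import numpy as np

hours = np.arange(24 * 14)                                   # two weeks, hourly
rng = np.random.default_rng(2)
t_out = 30 + 6 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1.0, hours.size)
t_in = 29 + 2 * np.sin(2 * np.pi * (hours - 3) / 24) + rng.normal(0, 0.4, hours.size)

max_diff = np.max(np.abs(t_in - t_out))
variability_reduction = 1 - t_in.std() / t_out.std()
print(f"max |ΔT| = {max_diff:.1f} °C, indoor variability reduced by {variability_reduction:.0%}")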

13 pages, 717 KB  
Article
Gaining Understanding of Neural Networks with Programmatically Generated Data
by Eric O’Sullivan, Ken Kennedy and Jean Mohammadi-Aragh
Math. Comput. Appl. 2026, 31(1), 16; https://doi.org/10.3390/mca31010016 - 22 Jan 2026
Abstract
The performance of convolutional neural networks (CNNs) depends not only on model architecture but also on the structure and quality of the training data. While most artificial network interpretability methods focus on explaining trained models, less attention has been given to understanding how dataset composition itself shapes learning outcomes. This work introduces a novel framework that uses programmatically generated synthetic datasets to isolate and control visual features, enabling systematic evaluation of their contribution to CNN performance. Guided by principles from set theory, Shapley values, and the Apriori algorithm, we formalize an equivalence between CNN kernel weights and pattern frequency counts, showing that feature overlap across datasets predicts model generalization. Methods include the construction of four synthetic digit datasets with controlled object and background features, training lightweight CNNs under K-fold validation, and statistical evaluation of cross-dataset performance. The results show that internal object patterns significantly improve accuracy and F1 scores compared to non-object background features, and that a dataset similarity prediction algorithm achieves near-perfect correlation (ρ=0.97) between the predicted and observed performance. The conclusions highlight that dataset feature composition can be treated as a measurable proxy for model behavior, offering a new path for dataset evaluation, pruning, and design optimization. This approach provides a principled framework for predicting CNN performance without requiring full-scale model training. Full article
(This article belongs to the Special Issue Applied Optimization in Automatic Control and Systems Engineering)
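
A central claim above is that the overlap of controlled visual features between datasets predicts cross-dataset generalization. A toy, set-theoretic version of that overlap score is shown below; the feature labels are hypothetical, and the paper's own similarity-prediction algorithm is more involved.

# Toy set-theoretic dataset-similarity score: Jaccard overlap of controlled feature
# labels between a training and a test dataset. Feature names are hypothetical.
def feature_overlap(train_features, test_features):
    train_features, test_features = set(train_features), set(test_features)
    return len(train_features & test_features) / len(train_features | test_features)

ds_a = {"solid_digit", "stroke_texture", "plain_background"}
ds_b = {"solid_digit", "stroke_texture", "noise_background"}
ds_c = {"outline_digit", "noise_background"}

print(feature_overlap(ds_a, ds_b))   # 0.5 -> expect better transfer A -> B
print(feature_overlap(ds_a, ds_c))   # 0.0 -> expect poor transfer A -> C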

16 pages, 5052 KB  
Article
Automated Collateral Classification on CT Angiography in Acute Ischemic Stroke: Performance Trends Across Hyperparameter Combinations
by Chi-Ming Ku and Tzong-Rong Ger
Bioengineering 2026, 13(1), 124; https://doi.org/10.3390/bioengineering13010124 - 21 Jan 2026
Abstract
Collateral status is an important therapeutic indicator for acute ischemic stroke (AIS), yet visual collateral grading remains subjective and suffers from inter-observer variability. To address this limitation, this study automatically extracted binarized vascular morphological features from CTA images and developed a convolutional neural network (CNN) for automated collateral classification. Performance trends were systematically analyzed across diverse hyperparameter combinations to meet different clinical decision needs. A total of 157 AIS patients (median age 65 [57–74] years; 61.8% were male) were retrospectively enrolled and stratified by Menon score into good (3–5, n = 117) and poor (0–2, n = 40) collateral groups. A total of 192 architectures were established, and three representative model tendencies emerged: a sensitivity-oriented model (AUC = 0.773; sensitivity = 87.18%; specificity = 65.00%), a balanced model (AUC = 0.768; sensitivity = 72.65%; specificity = 77.50%), and a specificity-oriented model (AUC = 0.753; sensitivity = 63.25%; specificity = 85.00%). These results demonstrate that kernel size, the number of filters in the first layer, and the number of convolutional layers are key determinants of performance directionality, allowing tailored model selection depending on clinical requirements. This work highlights the feasibility of CTA-based automated collateral classification and provides a systematic framework for developing models optimized for sensitivity, specificity, or balanced decision-making. The findings may serve as a reference for clinical model deployment and have potential for integration into multi-objective AI systems for endovascular thrombectomy patient triage. Full article
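
The study above sweeps kernel size, first-layer filter count, and convolutional depth across 192 candidate architectures. A compact way to enumerate such a grid of small CNNs is sketched below in PyTorch; the grid values, input size, and channel count are illustrative, not the authors' exact search space.

# Enumerating a small CNN architecture grid over (kernel size, first-layer filters,
# number of conv layers). Values are illustrative, not the study's exact search space.
import itertools
import torch
import torch.nn as nn

def build_cnn(kernel, filters, depth, n_classes=2):
    layers, in_ch = [], 1                                   # single-channel binarized vascular maps (assumed)
    for i in range(depth):
        out_ch = filters * (2 ** i)
        layers += [nn.Conv2d(in_ch, out_ch, kernel, padding=kernel // 2),
                   nn.ReLU(), nn.MaxPool2d(2)]
        in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, n_classes)]
    return nn.Sequential(*layers)

grid = itertools.product((3, 5, 7), (8, 16, 32), (2, 3, 4))
models = {cfg: build_cnn(*cfg) for cfg in grid}
print(len(models), "candidate architectures")               # 27 in this toy grid
x = torch.randn(1, 1, 128, 128)
print(models[(3, 16, 3)](x).shape)                           # torch.Size([1, 2])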