Search Results (123)

Search Parameters:
Keywords = normed module

18 pages, 14590 KB  
Article
VTC-Net: A Semantic Segmentation Network for Ore Particles Integrating Transformer and Convolutional Block Attention Module (CBAM)
by Yijing Wu, Weinong Liang, Jiandong Fang, Chunxia Zhou and Xiaolu Sun
Sensors 2026, 26(3), 787; https://doi.org/10.3390/s26030787 - 24 Jan 2026
Viewed by 190
Abstract
In mineral processing, visual-based online particle size analysis systems depend on high-precision image segmentation to accurately quantify ore particle size distribution, thereby optimizing crushing and sorting operations. However, due to multi-scale variations, severe adhesion, and occlusion within ore particle clusters, existing segmentation models often exhibit undersegmentation and misclassification, leading to blurred boundaries and limited generalization. To address these challenges, this paper proposes a novel semantic segmentation model named VTC-Net. The model employs VGG16 as the backbone encoder, integrates Transformer modules in deeper layers to capture global contextual dependencies, and incorporates a Convolutional Block Attention Module (CBAM) at the fourth stage to enhance focus on critical regions such as adhesion edges. BatchNorm layers are used to stabilize training. Experiments on ore image datasets show that VTC-Net outperforms mainstream models such as UNet and DeepLabV3 in key metrics, including MIoU (89.90%) and pixel accuracy (96.80%). Ablation studies confirm the effectiveness and complementary role of each module. Visual analysis further demonstrates that the model identifies ore contours and adhesion areas more accurately, significantly improving segmentation robustness and precision under complex operational conditions. Full article
(This article belongs to the Section Sensing and Imaging)
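
A minimal sketch of the CBAM attention named in the title above, in the standard channel-then-spatial form of Woo et al. (2018). The abstract does not give VTC-Net's exact configuration, so the reduction ratio, spatial kernel size, and the stage-4 feature shape below are illustrative assumptions rather than the paper's settings.

```python
# Minimal CBAM sketch (channel attention followed by spatial attention).
# Reduction ratio, kernel size, and the example feature shape are assumed.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # global average-pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))             # global max-pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)             # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)              # channel-wise max map
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class CBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

# Example: refine a stage-4 feature map of a VGG16-style encoder.
feat = torch.randn(1, 512, 32, 32)
print(CBAM(512)(feat).shape)   # torch.Size([1, 512, 32, 32])
```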

22 pages, 5431 KB  
Article
Active Fault-Tolerant Method for Navigation Sensor Faults Based on Frobenius Norm–KPCA–SVM–BiLSTM
by Zexia Huang, Bei Xu, Guoyang Ye, Pu Yang and Chunli Shao
Actuators 2026, 15(1), 64; https://doi.org/10.3390/act15010064 - 19 Jan 2026
Viewed by 110
Abstract
Aiming to address the safety and stability issues caused by typical faults of Unmanned Aerial Vehicle (UAV) navigation sensors, a novel fault-tolerant method is proposed, which can capture the temporal dependencies of fault feature evolution, and complete the classification, prediction, and data reconstruction of fault data. In this fault-tolerant method, the feature extraction module adopts the FNKPCA method—integrating the Frobenius Norm (F-norm) with Kernel Principal Component Analysis (KPCA)—to optimize the kernel function’s ability to capture signal features, and enhance the system reliability. By combining FNKPCA with Support Vector Machine (SVM) and Bidirectional Long Short-Term Memory (BiLSTM), an active fault-tolerant processing method, namely FNKPCA–SVM–BiLSTM, is obtained. This study conducts comparative experiments on public datasets, and verifies the effectiveness of the proposed method under different fault states. The proposed approach has the following advantages: (1) It achieves a detection accuracy of 98.64% for sensor faults, with an average false alarm rate of only 0.15% and an average missed detection rate of 1.16%, demonstrating excellent detection performance. (2) Compared with the Long Short-Term Memory (LSTM)-based method, the proposed fault-tolerant method can reduce the RMSE metrics of Global Positioning System (GPS), Inertial Measurement Unit (IMU), and Ultra-Wide-Band (UWB) sensors by 77.80%, 14.30%, and 75.00%, respectively, exhibiting a significant fault-tolerant effect. Full article
(This article belongs to the Section Actuators for Manufacturing Systems)
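
The abstract combines kernel PCA feature extraction with an SVM classifier ahead of the BiLSTM stage. A rough sketch of that KPCA-to-SVM portion with scikit-learn follows; the Frobenius-norm kernel optimization (FNKPCA) and the BiLSTM reconstruction are the paper's own contributions and are not reproduced, and the synthetic data and hyperparameters are placeholders.

```python
# Generic KPCA -> SVM pipeline: extract nonlinear fault features with kernel
# PCA, then classify the fault type with an SVM. All data and parameters are
# placeholders; the paper's F-norm weighting is not reproduced here.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))            # placeholder sensor feature windows
y = rng.integers(0, 3, size=600)          # placeholder fault classes (0 = healthy)

clf = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=6, kernel="rbf", gamma=0.1),
    SVC(kernel="rbf", C=10.0),
)
clf.fit(X[:500], y[:500])
print("held-out accuracy:", clf.score(X[500:], y[500:]))
```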

21 pages, 11032 KB  
Article
Scale Calibration and Pressure-Driven Knowledge Distillation for Image Classification
by Jing Xie, Penghui Guan, Han Li, Chunhua Tang, Li Wang and Yingcheng Lin
Symmetry 2026, 18(1), 177; https://doi.org/10.3390/sym18010177 - 18 Jan 2026
Viewed by 136
Abstract
Knowledge distillation achieves model compression by training a lightweight student network to mimic the output distribution of a larger teacher network. However, when the teacher becomes overconfident, its sharply peaked logits break the scale symmetry of supervision and induce high-variance gradients, leading to unstable optimization. Meanwhile, research that focuses only on final-logit alignment often fails to utilize intermediate semantic structure effectively. This causes weak discrimination of student representations, especially under class imbalance. To address these issues, we propose Scale Calibration and Pressure-Driven Knowledge Distillation (SPKD): a one-stage framework comprising two lightweight, complementary mechanisms. First, a dynamic scale calibration module normalizes the teacher’s logits to a consistent magnitude, reducing gradient variance. Secondly, an adaptive pressure-driven mechanism refines student learning by preventing feature collapse and promoting intra-class compactness and inter-class separability. Extensive experiments on CIFAR-100 and ImageNet demonstrate that SPKD achieves superior performance to distillation baselines across various teacher–student combinations. For example, SPKD achieves a score of 74.84% on CIFAR-100 for the homogeneous architecture VGG13-VGG8. Additional evidence from logit norm and gradient variance statistics, as well as representation analyses, proves the fact that SPKD stabilizes optimization while learning more discriminative and well-structured features. Full article
(This article belongs to the Section Computer)
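
The abstract's scale calibration normalizes the teacher's logits to a consistent magnitude before the mimicry loss. One simple reading of that idea, rescaling each logit vector to a fixed L2 norm before a temperature-scaled KL distillation loss, is sketched below; SPKD's actual calibration rule and its pressure-driven feature term are not described in the abstract, so treat this purely as an illustration.

```python
# Illustrative scale-calibrated logit distillation: each logit vector is
# rescaled to a common L2 norm (an assumption, not SPKD's exact rule) before
# the usual temperature-scaled KL mimicry loss.
import torch
import torch.nn.functional as F

def calibrate(logits, target_norm=10.0, eps=1e-8):
    """Rescale each row of logits to a common L2 norm."""
    norms = logits.norm(dim=-1, keepdim=True)
    return logits * (target_norm / (norms + eps))

def kd_loss(student_logits, teacher_logits, T=4.0):
    s = calibrate(student_logits)
    t = calibrate(teacher_logits)
    return F.kl_div(
        F.log_softmax(s / T, dim=-1),
        F.softmax(t / T, dim=-1),
        reduction="batchmean",
    ) * T * T

student_logits = torch.randn(8, 100, requires_grad=True)
teacher_logits = 5.0 * torch.randn(8, 100)   # deliberately "overconfident" scale
loss = kd_loss(student_logits, teacher_logits)
loss.backward()
print(float(loss))
```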

20 pages, 3982 KB  
Article
AI-Driven Decimeter-Level Indoor Localization Using Single-Link Wi-Fi: Adaptive Clustering and Probabilistic Multipath Mitigation
by Li-Ping Tian, Chih-Min Yu, Li-Chun Wang and Zhizhang (David) Chen
Sensors 2026, 26(2), 642; https://doi.org/10.3390/s26020642 - 18 Jan 2026
Viewed by 147
Abstract
This paper presents an Artificial Intelligence (AI)-driven framework for high-precision indoor localization using single-link Wi-Fi channel state information (CSI), targeting real-time deployment in complex multipath environments. To overcome challenges such as signal distortion and environmental dynamics, the proposed system integrates adaptive and unsupervised intelligence modules into the localization pipeline. A refined two-stage time-of-flight (TOF) estimation method is introduced, combining a minimum-norm algorithm with a probability-weighted refinement mechanism that improves ranging accuracy under non-line-of-sight (NLOS) conditions. Simultaneously, an adaptive parameter-tuned DBSCAN algorithm is applied to angle-of-arrival (AOA) sequences, enabling unsupervised spatio-temporal clustering for stable direction estimation without requiring prior labels or environmental calibration. These AI-enabled components allow the system to dynamically suppress multipath interference, eliminate positioning ambiguity, and maintain robustness across diverse indoor layouts. Comprehensive experiments conducted on the Widar2.0 dataset demonstrate that the proposed method achieves decimeter-level accuracy with an average localization error of 0.63 m, outperforming existing methods such as “Widar2.0” and “Dynamic-MUSIC” in both accuracy and efficiency. This intelligent and lightweight architecture is fully compatible with commodity Wi-Fi hardware and offers significant potential for real-time human tracking, smart building navigation, and other location-aware AI applications. Full article
(This article belongs to the Special Issue Sensors for Indoor Positioning)
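
A rough sketch of the unsupervised AOA clustering step: DBSCAN applied to (time, angle) pairs so that a temporally stable direction forms one dense cluster while multipath outliers fall out as noise. The adaptive parameter tuning that the paper contributes is not reproduced; the eps/min_samples values and the synthetic AOA trace below are assumptions.

```python
# DBSCAN over a synthetic angle-of-arrival (AOA) time series: the stable
# direct-path direction forms one dense cluster; sporadic multipath outliers
# are labelled as noise (-1). Scaling and DBSCAN parameters are placeholders.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
t = np.arange(200) * 0.1                              # time stamps (s)
aoa = 35.0 + rng.normal(0, 1.5, size=200)             # direct-path AOA (deg) + noise
aoa[::7] += rng.normal(40, 10, size=aoa[::7].shape)   # sporadic multipath outliers

X = np.column_stack([t / t.max(), aoa / 90.0])        # crude scaling of both axes
labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(X)

main = labels == np.bincount(labels[labels >= 0]).argmax()
print("estimated AOA:", aoa[main].mean(), "deg")
```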

23 pages, 91075 KB  
Article
Improved Lightweight Marine Oil Spill Detection Using the YOLOv8 Algorithm
by Jianting Shi, Tianyu Jiao, Daniel P. Ames, Yinan Chen and Zhonghua Xie
Appl. Sci. 2026, 16(2), 780; https://doi.org/10.3390/app16020780 - 12 Jan 2026
Viewed by 198
Abstract
Marine oil spill detection using Synthetic Aperture Radar (SAR) is crucial but challenged by dynamic marine conditions, diverse spill scales, and limitations in existing algorithms regarding model size and real-time performance. To address these challenges, we propose LSFE-YOLO, a YOLOv8s-optimized (You Only Look Once version 8) lightweight model with an original, domain-tailored synergistic integration of FasterNet, GN-LSC Head (GroupNorm Lightweight Shared Convolution Head), and C2f_MBE (C2f Mobile Bottleneck Enhanced). FasterNet serves as the backbone (25% neck width reduction), leveraging partial convolution (PConv) to minimize memory access and redundant computations—overcoming traditional lightweight backbones’ high memory overhead—laying the foundation for real-time deployment while preserving feature extraction. The proposed GN-LSC Head replaces YOLOv8’s decoupled head: its shared convolutions reduce parameter redundancy by approximately 40%, and GroupNorm (Group Normalization) ensures stable accuracy under edge computing’s small-batch constraints, outperforming BatchNorm (Batch Normalization) in resource-limited scenarios. The C2f_MBE module integrates EffectiveSE (Effective Squeeze and Excitation)-optimized MBConv (Mobile Inverted Bottleneck Convolution) into C2f: MBConv’s inverted-residual design enhances multi-scale feature capture, while lightweight EffectiveSE strengthens discriminative oil spill features without extra computation, addressing the original C2f’s scale variability insufficiency. Additionally, an SE (Squeeze and Excitation) attention mechanism embedded upstream of SPPF (Spatial Pyramid Pooling Fast) suppresses background interference (e.g., waves, biological oil films), synergizing with FasterNet and C2f_MBE to form a cascaded feature optimization pipeline that refines representations throughout the model. Experimental results show that LSFE-YOLO improves mAP (mean Average Precision) by 1.3% and F1 score by 1.7% over YOLOv8s, while achieving substantial reductions in model size (81.9%), parameter count (82.9%), and computational cost (84.2%), alongside a 20 FPS (Frames Per Second) increase in detection speed. LSFE-YOLO offers an efficient and effective solution for real-time marine oil spill detection. Full article
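
The abstract argues that GroupNorm is preferable to BatchNorm in the detection head under edge devices' small-batch constraint. The short check below illustrates the underlying property: GroupNorm's statistics are computed per sample, so an image is normalized identically whether it is processed alone or in a batch, whereas BatchNorm in training mode normalizes with batch statistics and therefore depends on the batch companions. The tensor sizes are placeholders, not the LSFE-YOLO configuration.

```python
# GroupNorm vs. BatchNorm under small batches (illustrative sizes only).
import torch
import torch.nn as nn

x1 = torch.randn(1, 64, 40, 40)
x2 = torch.randn(1, 64, 40, 40)
batch = torch.cat([x1, x2], dim=0)

bn = nn.BatchNorm2d(64).train()
gn = nn.GroupNorm(num_groups=16, num_channels=64)

# GroupNorm: per-sample statistics, so alone vs. in a batch gives the same output.
print(torch.allclose(gn(x1), gn(batch)[:1], atol=1e-6))   # True
# BatchNorm (training mode): statistics come from the whole batch, so the same
# image is normalized differently depending on its batch companions.
print(torch.allclose(bn(x1), bn(batch)[:1], atol=1e-6))   # generally False
```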

20 pages, 707 KB  
Article
Beyond Native Norms: A Perceptually Grounded and Fair Framework for Automatic Speech Assessment
by Mewlude Nijat, Yang Wei, Shuailong Li, Abdusalam Dawut and Askar Hamdulla
Appl. Sci. 2026, 16(2), 647; https://doi.org/10.3390/app16020647 - 8 Jan 2026
Viewed by 214
Abstract
Pronunciation assessment is central to computer-assisted pronunciation training (CAPT) and speaking tests, yet most systems still adopt a native norm, treating deviations from canonical L1 pronunciations as errors. In contrast, rating rubrics and psycholinguistic evidence emphasize intelligibility for a target listener population and show that listeners rapidly adapt their phonetic categories to new accents. We argue that automatic assessment should likewise be referenced to the target learner group. We build a Transformer-based mispronunciation detection (MD) model that computationally mimics listener adaptation: it is first pre-trained on multi-speaker Librispeech, then fine-tuned on the non-native L2-ARCTIC corpus that represents a specific learner population. Fine-tuning, using either synthetic or human MD labels, constrains updates to the phonetic space (i.e., the representation space used to encode phone-level distinctions, the learned phone/phonetic embedding space, and its alignment with acoustic representations), which means that only the phonetic module is updated while the rest of the model stays fixed. Relative to the pre-trained model, L2 adaptation substantially improves MD recall and F1, increasing ROC–AUC from 0.72 to 0.85. The results support a target-population norm and inform the design of perception-aligned, fairer automatic pronunciation assessment systems. Full article
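
The abstract's constrained fine-tuning updates only the phonetic module while the rest of the pre-trained model stays fixed. A generic sketch of that regime in PyTorch, freezing all parameters and re-enabling gradients for one sub-module, follows; the module names (encoder, phonetic_head) and sizes are placeholders rather than the paper's architecture.

```python
# Generic "update only one sub-module" fine-tuning sketch. Module names and
# dimensions are placeholders, not the paper's MD model.
import torch
import torch.nn as nn

class MDModel(nn.Module):
    def __init__(self, dim=256, n_phones=40):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.phonetic_head = nn.Linear(dim, n_phones)  # phone-level embedding space

    def forward(self, x):
        return self.phonetic_head(self.encoder(x))

model = MDModel()

# Freeze everything, then re-enable gradients for the phonetic module only.
for p in model.parameters():
    p.requires_grad = False
for p in model.phonetic_head.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable params")
```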

15 pages, 2072 KB  
Article
A Ceramic Rare Defect Amplification Method Based on TC-CycleGAN
by Zhiqiang Zeng, Changying Dang, Zebing Ma, Jiansu Li and Zhonghua Li
Sensors 2026, 26(2), 395; https://doi.org/10.3390/s26020395 - 7 Jan 2026
Viewed by 260
Abstract
The ceramic defect detection technology based on deep learning suffers from the problems of scarce rare defect samples and class imbalance. However, the current deep generative image augmentation techniques are limited when applied to the task of augmenting rare ceramic defects due to issues such as uneven image brightness and insufficient features of small-sized defects, resulting in poor image quality and limited improvement in detection results. This paper proposes a ceramic rare defect image augmentation method based on TC-CycleGAN. TC-CycleGAN is based on the CycleGAN framework and optimizes the generator and discriminator structures to make them more suitable for ceramic defect features, thereby improving the quality of generated images. The generator is TC-UNet, which introduces the scSE and DehazeFormer modules on the basis of UNet, effectively enhancing the model’s ability to learn the subtle defect features on the ceramic surface; the discriminator is the TC-PatchGAN architecture, which replaces the original BatchNorm module with the ContraNorm module, effectively increasing the discriminator’s sensitivity to the representation of tiny ceramic defect features and enhancing the diversity of generated images. The image quality assessment experiments show that the method proposed in this paper significantly improves the quality of generated defective images. For the concave type images, the FID and KID values have decreased by 49% and 73%, respectively, while for the smoke stains type images, the FID and KID values have decreased by 57% and 63% respectively. The further defect detection experiments results show that when using the data set expanded by the method in this paper for training, the recognition accuracy of the detection model for rare defects has significantly improved. The detection accuracy of the concave and smoke stains types of defects has increased by 1.2% and 3.9% respectively. Full article
(This article belongs to the Section Sensing and Imaging)

24 pages, 18607 KB  
Article
Robust Object Detection in Adverse Weather Conditions: ECL-YOLOv11 for Automotive Vision Systems
by Zhaohui Liu, Jiaxu Zhang, Xiaojun Zhang and Hongle Song
Sensors 2026, 26(1), 304; https://doi.org/10.3390/s26010304 - 2 Jan 2026
Viewed by 620
Abstract
The rapid development of intelligent transportation systems and autonomous driving technologies has made visual perception a key component in ensuring safety and improving efficiency in complex traffic environments. As a core task in visual perception, object detection directly affects the reliability of downstream modules such as path planning and decision control. However, adverse weather conditions (e.g., fog, rain, and snow) significantly degrade image quality—causing texture blurring, reduced contrast, and increased noise—which in turn weakens the robustness of traditional detection models and raises potential traffic safety risks. To address this challenge, this paper proposes an enhanced object detection framework, ECL-YOLOv11 (Edge-enhanced, Context-guided, and Lightweight YOLOv11), designed to improve detection accuracy and real-time performance under adverse weather conditions, thereby providing a reliable solution for in-vehicle perception systems. The ECL-YOLOv11 architecture integrates three key modules: (1) a Convolutional Edge-enhancement (CE) module that fuses edge features extracted by Sobel operators with convolutional features to explicitly retain boundary and contour information, thereby alleviating feature degradation and improving localization accuracy under low-visibility conditions; (2) a Context-guided Multi-scale Fusion Network (AENet) that enhances perception of small and distant objects through multi-scale feature integration and context modeling, improving semantic consistency and detection stability in complex scenes; and (3) a Lightweight Shared Convolutional Detection Head (LDHead) that adopts shared convolutions and GroupNorm normalization to optimize computational efficiency, reduce inference latency, and satisfy the real-time requirements of on-board systems. Experimental results show that ECL-YOLOv11 achieves mAP@50 and mAP@50–95 values of 62.7% and 40.5%, respectively, representing improvements of 1.3% and 0.8% over the baseline YOLOv11, while the Precision reaches 73.1%. The model achieves a balanced trade-off between accuracy and inference speed, operating at 237.8 FPS on standard hardware. Ablation studies confirm the independent effectiveness of each proposed module in feature enhancement, multi-scale fusion, and lightweight detection, while their integration further improves overall performance. Qualitative visualizations demonstrate that ECL-YOLOv11 maintains high-confidence detections across varying motion states and adverse weather conditions, avoiding category confusion and missed detections. These results indicate that the proposed framework provides a reliable and adaptable foundation for all-weather perception in autonomous driving systems, ensuring both operational safety and real-time responsiveness. Full article
(This article belongs to the Section Sensing and Imaging)
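
A sketch of a Sobel-based edge-enhancement block in the spirit of the CE module described above: fixed depthwise Sobel kernels extract gradient maps, which are fused with ordinary learned convolutional features. The fusion by simple addition and the layer sizes are assumptions; the paper's exact design is not given in the abstract.

```python
# Fixed Sobel gradients fused with learned convolutional features
# (illustrative design; not the paper's exact CE module).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdgeEnhance(nn.Module):
    def __init__(self, channels):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # One depthwise Sobel filter pair per channel, kept fixed (not trained).
        kernel = torch.stack([gx, gy]).repeat(channels, 1, 1).unsqueeze(1)
        self.register_buffer("kernel", kernel)            # (2*channels, 1, 3, 3)
        self.channels = channels
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        edges = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        return self.conv(x) + self.fuse(edges)            # learned features + edge cues

x = torch.randn(1, 32, 64, 64)
print(SobelEdgeEnhance(32)(x).shape)   # torch.Size([1, 32, 64, 64])
```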

36 pages, 7233 KB  
Article
Deep Learning for Tumor Segmentation and Multiclass Classification in Breast Ultrasound Images Using Pretrained Models
by K. E. ArunKumar, Matthew E. Wilson, Nathan E. Blake, Tylor J. Yost and Matthew Walker
Sensors 2025, 25(24), 7557; https://doi.org/10.3390/s25247557 - 12 Dec 2025
Viewed by 729
Abstract
Early detection of breast cancer commonly relies on imaging technologies such as ultrasound, mammography and MRI. Among these, breast ultrasound is widely used by radiologists to identify and assess lesions. In this study, we developed image segmentation techniques and multiclass classification artificial intelligence (AI) tools based on pretrained models to segment lesions and detect breast cancer. The proposed workflow includes both the development of segmentation models and development of a series of classification models to classify ultrasound images as normal, benign or malignant. The pretrained models were trained and evaluated on the Breast Ultrasound Images (BUSI) dataset, a publicly available collection of grayscale breast ultrasound images with corresponding expert-annotated masks. For segmentation, images and ground-truth masks were used to pretrained encoder (ResNet18, EfficientNet-B0 and MobileNetV2)–decoder (U-Net, U-Net++ and DeepLabV3) models, including the DeepLabV3 architecture integrated with a Frequency-Domain Feature Enhancement Module (FEM). The proposed FEM improves spatial and spectral feature representations using Discrete Fourier Transform (DFT), GroupNorm, dropout regularization and adaptive fusion. For classification, each image was assigned a label (normal, benign or malignant). Optuna, an open-source software framework, was used for hyperparameter optimization and for the testing of various pretrained models to determine the best encoder–decoder segmentation architecture. Five different pretrained models (ResNet18, DenseNet121, InceptionV3, MobielNetV3 and GoogleNet) were optimized for multiclass classification. DeepLabV3 outperformed other segmentation architectures, with consistent performance across training, validation and test images, with Dice Similarity Coefficient (DSC, a metric describing the overlap between predicted and true lesion regions) values of 0.87, 0.80 and 0.83 on training, validation and test sets, respectively. ResNet18:DeepLabV3 achieved an Intersection over Union (IoU) score of 0.78 during training, while ResNet18:U-Net++ achieved the best Dice coefficient (0.83) and IoU (0.71) and area under the curve (AUC, 0.91) scores on the test (unseen) dataset when compared to other models. However, the proposed Resnet18: FrequencyAwareDeepLabV3 (FADeepLabV3) achieved a DSC of 0.85 and an IoU of 0.72 on the test dataset, demonstrating improvements over standard DeepLabV3. Notably, the frequency-domain enhancement substantially improved the AUC from 0.90 to 0.98, indicating enhanced prediction confidence and clinical reliability. For classification, ResNet18 produced an F1 score—a measure combining precision and recall—of 0.95 and an accuracy of 0.90 on the training dataset, while InceptionV3 performed best on the test dataset, with an F1 score of 0.75 and accuracy of 0.83. We demonstrate a comprehensive approach to automate the segmentation and multiclass classification of breast cancer ultrasound images into benign, malignant or normal transfer learning models on an imbalanced ultrasound image dataset. Full article
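
A speculative sketch of a frequency-domain enhancement block along the lines of the FEM described above: take a 2-D DFT of the feature map, modulate the spectrum with a learned filter, transform back, and fuse with the spatial branch through GroupNorm, dropout, and an adaptive weight. The actual FEM design is not specified in the abstract, so every detail below is an assumption.

```python
# Learned spectral filtering of a feature map via rFFT, with GroupNorm,
# dropout, and adaptive fusion. All design choices here are assumptions.
import torch
import torch.nn as nn

class FrequencyEnhance(nn.Module):
    def __init__(self, channels, height, width, p_drop=0.1):
        super().__init__()
        # Learned filter over the rFFT spectrum (real and imaginary parts).
        self.weight = nn.Parameter(torch.ones(channels, height, width // 2 + 1, 2))
        self.norm = nn.GroupNorm(num_groups=8, num_channels=channels)
        self.drop = nn.Dropout2d(p_drop)
        self.alpha = nn.Parameter(torch.tensor(0.5))      # adaptive fusion weight

    def forward(self, x):
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = spec * torch.view_as_complex(self.weight)
        freq = torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")
        freq = self.drop(self.norm(freq))
        return self.alpha * freq + (1 - self.alpha) * x

x = torch.randn(2, 64, 32, 32)
print(FrequencyEnhance(64, 32, 32)(x).shape)   # torch.Size([2, 64, 32, 32])
```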

15 pages, 2333 KB  
Article
A High-Precision Segmentation Method for Photovoltaic Modules Integrating Transformer and Improved U-Net
by Kesheng Jin, Sha Gao, Hui Yu and Ji Zhang
Processes 2025, 13(12), 4013; https://doi.org/10.3390/pr13124013 - 11 Dec 2025
Viewed by 318
Abstract
To address the challenges of insufficient robustness and limited feature extraction in photovoltaic module image segmentation under complex scenarios, we propose a high-precision PV module segmentation model (Pv-UNet) that integrates Transformer and improved U-Net architecture. The model introduces a MultiScale Transformer in the encoding path to achieve cross-scale feature correlation and semantic enhancement, combines residual structure with dynamic channel adaptation mechanism in the DoubleConv module to improve feature transfer stability, and incorporates an Attention Gate module in the decoding path to suppress complex background interference. Experimental data were obtained from UAV visible light images of a photovoltaic power station in Yuezhe Town, Qiubei County, Yunnan Province. Compared with U-Net, BatchNorm-UNet, and Seg-UNet, Pv-UNet achieved significant improvements in IoU, Dice, and Precision metrics to 97.69%, 93.88%, and 97.99% respectively, while reducing the Loss value to 0.0393. The results demonstrate that our method offers notable advantages in both accuracy and robustness for PV module segmentation, providing technical support for automated inspection and intelligent monitoring of photovoltaic power stations. Full article
(This article belongs to the Section Environmental and Green Processes)
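
A sketch of an additive attention gate on a U-Net skip connection in the standard form of Oktay et al. (2018). Pv-UNet's exact gate design and insertion points are not given in the abstract, so the channel sizes below are illustrative.

```python
# Standard additive attention gate for a U-Net skip connection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # gate: coarser decoder feature, upsampled to the skip resolution
        g = F.interpolate(gate, size=skip.shape[-2:], mode="bilinear",
                          align_corners=False)
        att = torch.sigmoid(self.psi(torch.relu(self.w_x(skip) + self.w_g(g))))
        return skip * att          # background regions suppressed before concatenation

skip = torch.randn(1, 128, 64, 64)      # encoder skip feature
gate = torch.randn(1, 256, 32, 32)      # decoder gating feature
print(AttentionGate(128, 256, 64)(skip, gate).shape)   # torch.Size([1, 128, 64, 64])
```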

40 pages, 1231 KB  
Review
Quaternionic and Octonionic Frameworks for Quantum Computation: Mathematical Structures, Models, and Fundamental Limitations
by Johan Heriberto Rúa Muñoz, Jorge Eduardo Mahecha Gómez and Santiago Pineda Montoya
Quantum Rep. 2025, 7(4), 55; https://doi.org/10.3390/quantum7040055 - 26 Nov 2025
Viewed by 761
Abstract
We develop detailed quaternionic and octonionic frameworks for quantum computation grounded on normed division algebras. Our central result is to prove the polynomial computational equivalence of quaternionic and complex quantum models: Computation over H is polynomially equivalent to the standard complex quantum circuit model and hence captures the same complexity class BQP up to polynomial reductions. Over H, we construct a complete model—quaternionic qubits on right H-modules with quaternion-valued inner products, unitary dynamics, associative tensor products, and universal gate sets—and establish polynomial equivalence with the standard complex model; routes for implementation at fidelities exceeding 99% via pulse-level synthesis on current hardware are discussed. Over O, non-associativity yields path-dependent evolution, ambiguous adjoints/inner products, non-associative tensor products, and possible failure of energy conservation outside associative sectors. We formalize these obstructions and systematize four mitigation strategies: Confinement to associative subalgebras, G2-invariant codes, dynamical decoupling of associator terms, and a seven-factor algebraic decomposition for gate synthesis. The results delineate the feasible quaternionic regime from the constrained octonionic landscape and point to applications in symmetry-protected architectures, algebra-aware simulation, and hypercomplex learning. Full article
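
For orientation, the standard normed-division-algebra facts underlying the quaternionic model (not specific to the paper's construction): the quaternion norm is multiplicative, which is what makes norm-preserving, unitary-style quaternionic dynamics consistent; over the octonions the norm is likewise multiplicative, but multiplication fails to be associative, which is the source of the obstructions listed in the abstract.

```latex
% Quaternions as a normed division algebra.
\[
  q = a + b\,\mathbf{i} + c\,\mathbf{j} + d\,\mathbf{k}, \qquad
  \mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{i}\mathbf{j}\mathbf{k} = -1,
\]
\[
  \lVert q \rVert = \sqrt{q\bar{q}} = \sqrt{a^2 + b^2 + c^2 + d^2}, \qquad
  \lVert pq \rVert = \lVert p \rVert\,\lVert q \rVert
  \quad \text{for all } p, q \in \mathbb{H}.
\]
```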

18 pages, 15265 KB  
Article
Community Action: An Architecture and Design Pedagogy
by Torange Khonsari
Architecture 2025, 5(4), 115; https://doi.org/10.3390/architecture5040115 - 20 Nov 2025
Cited by 1 | Viewed by 507
Abstract
As architectural educators interested in community engagement and learning about everyday practices in the city, we recognize that teaching community engagement in a practical rather than abstract way is key. This paper presents community-engaged architecture and design pedagogy as potential methods for informing the shift in the role of the architect from top-down to ground-up. This paper presents the author’s pedagogical experimentation based on 25 years of teaching live projects in socially engaged architecture and activism. It describes how a pedagogy combining architecture and activism resulted in the development of an interdisciplinary commons curriculum. The curricula aimed to increase the influence of design practitioners in the development of deliberatively democratic neighborhoods by creating new design practices and outputs. Teaching the political role of the architect from the ground-up rather than from the traditional top-down perspective is challenging, as only a few historical case studies can legitimize and inform its development. This paper describes the content of two pedagogical formats. The ‘Architecture and Activism’ postgraduate architecture and design studio and the following ‘Design for Cultural Commons’ interdisciplinary design postgraduate program. They were both designed to have real-world influence. The ‘Design for Cultural Commons’ postgraduate program enabled the development of a curriculum ranging from modules in social science, art and politics to systems thinking, which is required knowledge for complex neighborhood practices. The city was used as a field of study to discover new knowledge through students’ community engagements. Various theoretical frameworks were employed to develop new forms of emancipatory pedagogy, helping the author unlearn the norms of conventional architectural education. The practice of recalibrating architectural canons and values into a common-based curriculum development is discussed through the framing of learning commons. Full article
(This article belongs to the Special Issue Spaces and Practices of Everyday Community Resilience)

21 pages, 953 KB  
Article
OS-Denseformer: A Lightweight End-to-End Noise-Robust Method for Chinese Speech Recognition
by Shiqi Que, Liping Qian, Mingqing Li and Qian Wang
Appl. Sci. 2025, 15(22), 12096; https://doi.org/10.3390/app152212096 - 14 Nov 2025
Viewed by 1132
Abstract
Automatic speech recognition (ASR) technology faces the dual challenges of model complexity and noise robustness when deployed on terminal devices (e.g., mobile devices, embedded systems). To meet the demand for lightweight and high-performance models in terminal devices, we propose a lightweight end-to-end speech recognition model, OS-Denseformer (Omni-Scale-Denseformer). The core of this model lies in its lightweight design and noise adaptability: multi-scale acoustic features are efficiently extracted through a multi-sampling structure to enhance noise robustness; the proposed OS-Conv module improves local feature extraction capability while significantly reducing the number of parameters, enhancing computational efficiency, and lowering model complexity; the proposed normalization function, ExpNorm, normalizes the model output, facilitating more accurate parameter optimization during model training. Finally, we employ distinct loss functions across different training stages, using Minimum Bayes Risk (MBR) joint optimization to determine the optimal weighting scheme that directly minimizes the character error rate (CER). Experimental results on public datasets such as AISHELL-1 demonstrate that, under a high-noise environment of −15 dB, the CER of the OS-Denseformer model is reduced by 9.95%, 7.97%, and 4.85% compared to the benchmark models Squeezeformer, Conformer, and Zipformer, respectively. Additionally, the model parameter count is reduced by 53.35%, 10.27%, and 27.66%, while the giga floating-point operations per second (GFLOPs) are decreased by 67.51%, 66.51%, and 13.82%, respectively. Deployment on resource-constrained mobile devices demonstrates that, compared to Conformer, OS-Denseformer reduced memory usage by 10.79% and decreased inference latency by 61.62%. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

13 pages, 274 KB  
Article
K-g-Fusion Frames on Cartesian Products of Two Hilbert C*-Modules
by Sanae Touaiher, Maryam G. Alshehri and Mohamed Rossafi
Mathematics 2025, 13(22), 3576; https://doi.org/10.3390/math13223576 - 7 Nov 2025
Viewed by 504
Abstract
In this paper, we introduce and investigate the concept of K-g-fusion frames in the Cartesian product of two Hilbert C*-modules over the same unital C*-algebra. Our main result establishes that the Cartesian product of two K-g-fusion frames remains a K-g-fusion frame for the direct-sum module. We give explicit formulae for the associated synthesis, analysis, and frame operators and prove natural relations (direct-sum decomposition of the frame operator). Furthermore, we prove a perturbation theorem showing that small perturbations of the component families, measured in the operator or norm sense, still yield a K-g-fusion frame for the product module, with explicit new frame bounds obtained. Full article
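
For reference, the K-g-fusion frame condition in a Hilbert C*-module H over a unital C*-algebra A is usually stated as below, where K is an adjointable operator on H, the W_j are closed submodules with projections P_{W_j}, the Λ_j are adjointable operators into auxiliary modules, and the v_j > 0 are weights; the paper's precise conventions may differ slightly from this standard form.

```latex
% Usual form of the K-g-fusion frame inequality in a Hilbert C*-module.
\[
  A\,\langle K^{*}x, K^{*}x\rangle
  \;\le\; \sum_{j \in J} v_j^{2}\,
     \langle \Lambda_j P_{W_j} x,\; \Lambda_j P_{W_j} x\rangle
  \;\le\; B\,\langle x, x\rangle
  \qquad \text{for all } x \in H,\ \text{with } 0 < A \le B.
\]
```
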
30 pages, 354 KB  
Article
Reconceptualizing Human Authorship in the Age of Generative AI: A Normative Framework for Copyright Thresholds
by Fernando A. Ramos-Zaga
Laws 2025, 14(6), 84; https://doi.org/10.3390/laws14060084 - 7 Nov 2025
Viewed by 4044
Abstract
The emergence of generative artificial intelligence has unsettled traditional legal conceptions of authorship and originality by challenging the foundational premise of copyright, namely, the requirement of human intervention as a precondition for protection. Such disruption exposes the anthropocentric limits of existing regulatory frameworks and underscores the absence of coherent, harmonized responses across jurisdictions. The study proposes a normative framework for determining the minimum threshold of human creativity necessary for works produced with the assistance of artificial intelligence to qualify for legal protection. Through comparative and doctrinal analysis, it advances the criterion of substantial creative direction, defined through three essential elements: effective control over the generative process, verifiable creative input, and identifiable expressive intent. On this basis, a graduated model of copyright protection is suggested, modulating the scope of rights according to the degree of human intervention and complemented by procedural reforms aimed at enabling its administrative implementation. The proposal seeks to reorient copyright toward an adaptive paradigm that safeguards technological innovation while preserving the centrality of human creativity as the normative foundation of the system, thereby ensuring a balanced relationship between regulatory flexibility and legal certainty. Full article