Search Results (595)

Search Parameters:
Keywords = large-sized blocks

17 pages, 771 KB  
Article
MSA-Net: A Deep Learning Network with Multi-Axial Hadamard Attention and Pyramid Pooling for Stroke Microwave Imaging
by Bo Han, Dongliang Li, Xuhui Zhu, Mingshuai Zhang and Peng Li
Algorithms 2026, 19(4), 276; https://doi.org/10.3390/a19040276 - 2 Apr 2026
Viewed by 201
Abstract
Microwave imaging is emerging as an alternative to conventional medical diagnostic techniques. Traditional analytical and numerical methods fail to adequately address the fundamental challenges involved: they often rely on strict linear approximations or simplified physical models, leading to low reconstruction accuracy, poor robustness, and limited generalization ability in complex clinical scenarios. As a result, they cannot meet the high-precision requirements of practical stroke microwave imaging. To improve the accuracy of microwave imaging algorithms in recognizing stroke regions and solving the backscattering problem, this study combines conventional methods with deep learning and presents the Multi-Scale Attention Network (MSA-Net) for microwave imaging. The network is based on the EGE-UNet structure with improved multi-axis Hadamard attention, incorporating null-space pyramid pooling and introducing a deep supervision mechanism to further improve performance. To combine microwave imaging with deep learning, a large amount of microwave data is first simulated in HFSS using a human-brain stroke model. The simulated microwave data are then converted into tensor format and fed into MSA-Net, which generates a binary mask image indicating the size and location of the stroke. The study also speeds up convergence by sparsifying the microwave data, improving training efficiency. The method was tested on simulation data, and in comparison experiments with other networks, MSA-Net detects the location and size of the bleed more accurately. The proposed model achieves a 1.08 improvement in peak signal-to-noise ratio and a 0.017 reduction in learned perceptual image patch similarity (LPIPS), validating the effectiveness of the structural optimization strategy proposed in this paper. Full article
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis: 3rd Edition)
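
The pipeline the abstract describes (simulated scattering data in, binary stroke mask out) can be summarized in a short sketch. The network below is a toy stand-in, not the actual MSA-Net architecture, and all tensor shapes are hypothetical:

```python
# Minimal sketch of the described data flow: simulated microwave
# S-parameters -> tensor -> segmentation network -> binary stroke mask.
import torch
import torch.nn as nn

class ToySegNet(nn.Module):
    """Placeholder for MSA-Net: maps a multi-channel measurement
    tensor to a single-channel stroke-probability map."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return torch.sigmoid(self.body(x))

# Hypothetical shape: real/imag channels of simulated data on a 64x64 grid.
s_params = torch.randn(1, 2, 64, 64)        # stand-in for HFSS output
mask = ToySegNet(in_ch=2)(s_params) > 0.5   # binary stroke mask
print(mask.shape, mask.dtype)               # torch.Size([1, 1, 64, 64]) torch.bool
```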

24 pages, 3759 KB  
Article
Variation in Seed Traits, Germination Performance, and Seedling Morphology of Cotinus coggygria (Scop.) in Relation to Provenance and Seed Size
by Askin Gokturk and Asiye Surmeli
Horticulturae 2026, 12(4), 426; https://doi.org/10.3390/horticulturae12040426 - 1 Apr 2026
Viewed by 260
Abstract
Cotinus coggygria (Scop.) is a medicinally valuable species naturally distributed in the Artvin region of Türkiye. However, information on its seed traits, germination behavior, and seedling morphology in relation to seed size and provenance remains limited. This study aimed to evaluate the effects of seed size and provenance on the seed characteristics, germination, and seedling morphological traits of C. coggygria. Seeds were collected from four provenances (Seyitler, Tepekoy, Eskikale, and Tortum) and classified into large and small size groups using a 2 mm sieve. The seed traits considered were length, diameter, thickness, sphericity, volume, and thousand-seed weight. To break seed dormancy, the seeds were subjected to sulfuric acid scarification and cold stratification treatments. Germination trials were conducted under nursery conditions using 45-cell trays in a randomized block design with four replicates. The mean germination time was significantly affected by provenance, whereas seed size and pretreatment combinations had no significant effects. Seed size did not significantly influence seedling morphology, whereas provenance caused significant differences. Seedlings originating from Eskikale exhibited greater height and root collar diameter, with root mass fractions ranging from 80.25% to 82.78%. These results indicate that provenance, rather than seed size, is the key factor influencing germination and seedling morphology. Full article
(This article belongs to the Section Propagation and Seeds)

19 pages, 9863 KB  
Article
Analysis of Slope Braking Adaptability of Copper-Based Powder Metallurgy Brake Pads for High-Speed Trains Based on Full-Scale Bench Tests
by Xueqian Geng
Lubricants 2026, 14(4), 146; https://doi.org/10.3390/lubricants14040146 - 31 Mar 2026
Viewed by 223
Abstract
With the opening of complex service routes, the service performance of brake pads under long-slope braking conditions is becoming increasingly important, and the slope braking adaptability of current brake pad products needs to be analyzed. This work takes the copper-based powder metallurgy brake pads of an in-service high-speed train as the research object and conducts friction and wear behavior tests of the brake pads on a full-scale brake test bench. Through microscopic observation and damage analysis, the differences in friction and wear behavior of the brake pads under stop braking and slope braking conditions are compared, revealing the wear mechanism and damage evolution characteristics of the brake pads. The results show that under the impact of high speed, high braking force, and severe thermal load in stop braking, uneven wear of the brake pads is pronounced, and the eccentric wear of the friction blocks is affected by both the friction radius and the friction direction. The friction surface exhibits numerous large damage features, and the stability of the friction interface is poor. The brake pad exhibits a composite wear mechanism dominated by abrasive wear and brittle-fracture-induced exfoliation. In slope braking, under low speed, low braking force, and long-term stable thermal load, uneven wear of the brake pads is relatively low, surface damage is small, and the friction blocks show eccentric wear only along the friction direction. Cracks initiate mainly along the interfaces between the pad components and propagate parallel to the friction surface, exhibiting a progressive delamination and flaking exfoliation mechanism with a low wear rate. Although the friction interface of the brake pad is relatively stable under slope braking conditions, the cumulative delamination wear of the brake pads under long-term braking action needs further attention. Full article

17 pages, 2574 KB  
Article
Recursive Weight Sharing for Parameter-Efficient Deep Convolutional Networks: Application to Skin Lesion Classification
by Ali Belkhiri, My Abdelouahed Sabri and Abdellah Aarab
Appl. Syst. Innov. 2026, 9(4), 69; https://doi.org/10.3390/asi9040069 - 25 Mar 2026
Viewed by 323
Abstract
Modern deep convolutional neural networks achieve remarkable performance but require substantial computational resources due to their large parameter counts, limiting their suitability for resource-constrained environments. We propose Tiny Recursive ResNet-50, a parameter-efficient architecture that reduces model complexity through recursive feature refinement with weight sharing across reasoning cycles. The proposed design combines lightweight bottleneck blocks, iterative latent state accumulation, and deep supervision to enhance representation quality without increasing parameter count. Extensive experiments are conducted on melanoma classification using the HAM10000 dataset as the primary training and evaluation benchmark. Results demonstrate that the proposed recursive architecture maintains competitive accuracy while reducing parameters by approximately 49%, confirming its efficiency under constrained settings. To assess robustness under limited data and acquisition variability, we additionally validate on the PH2 dataset (200 images). Due to the small dataset size and class imbalance, evaluation is performed using 5-fold stratified cross-validation, and performance metrics are reported as mean ± standard deviation. This validation confirms that recursive refinement with moderate cycle depth improves stability and generalization in small-data regimes. Full article
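
As background for the recursive weight sharing the abstract describes, a minimal sketch follows: one residual bottleneck block is reused across several "reasoning cycles", so effective depth grows while the parameter count stays fixed. The block layout and cycle count are illustrative assumptions, not the paper's exact Tiny Recursive ResNet-50 design:

```python
# One set of block weights reused for several cycles: parameters stay
# constant while the effective computation depth multiplies.
import torch
import torch.nn as nn

class RecursiveBlock(nn.Module):
    def __init__(self, ch: int, cycles: int = 3):
        super().__init__()
        self.cycles = cycles
        self.block = nn.Sequential(          # single set of shared weights
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
            nn.Conv2d(ch // 4, ch // 4, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch // 4, ch, 1),
        )
    def forward(self, x):
        state = x
        for _ in range(self.cycles):         # iterative latent-state refinement
            state = state + self.block(state)  # residual update, same weights
        return state

x = torch.randn(1, 64, 32, 32)
m = RecursiveBlock(64, cycles=3)
print(m(x).shape)  # one block's parameters, 3x the compute depth
```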

29 pages, 7118 KB  
Article
Improving Document Layout Analysis Using Synthetic Data Generation and Convolutional Models
by Olha Pronina, Tao Xia, Kyrylo Sheliah, Olena Piatykop, Vasily Efremenko and Elena Balalayeva
Appl. Sci. 2026, 16(6), 3089; https://doi.org/10.3390/app16063089 - 23 Mar 2026
Viewed by 305
Abstract
Document Layout Analysis (DLA) is a critical step in intelligent document processing and is essential for accurately reconstructing the hierarchical structure of pages. While modern convolutional neural networks exhibit high performance, their effectiveness heavily depends on the quality and representativeness of training data, limiting their application in scenarios where labeled datasets are scarce. This paper proposes a method for enhancing DLA through synthetic generation of training data. A formalized mathematical model for generating document layouts has been developed, allowing control over element placement density, sizes, and spatial distribution. An experimental study investigated the impact of various data generation strategies on the training of the YOLO11m model, including median and threshold-based element splitting as well as different block sampling schemes. The experiments showed that employing median element splitting combined with random sampling from a large shuffled pool of synthetic data yields consistent improvements of 2–4% across all key metrics: precision, recall, mAP@50, and mAP@50:95, as compared with simple data generation strategies. These results demonstrate that targeted optimization of the data preparation process can enhance the performance of convolutional models in DLA tasks without increasing architectural complexity. The practical applicability of the method is validated through integration into the MinerU system. Future research will focus on extending the proposed model to complex layouts in scientific journals, technical reports, and handwritten documents. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
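
A minimal sketch of the synthetic-layout idea: rectangles standing in for page elements are placed with controllable density and size ranges, and the block pool is split at the median element area. Page dimensions, size ranges, and the exact split rule are assumptions for illustration, not the paper's formal model:

```python
# Illustrative synthetic layout generator with a median split over the
# generated element pool.
import random

def generate_layout(page_w=800, page_h=1000, n_blocks=8, seed=0):
    rng = random.Random(seed)
    boxes = []
    for _ in range(n_blocks):
        w = rng.randint(page_w // 8, page_w // 2)   # element size range
        h = rng.randint(page_h // 20, page_h // 6)
        x = rng.randint(0, page_w - w)              # spatial placement
        y = rng.randint(0, page_h - h)
        boxes.append((x, y, w, h))
    return boxes

def median_split(boxes):
    """Split the element pool at the median area (one possible reading
    of 'median element splitting')."""
    areas = sorted(b[2] * b[3] for b in boxes)
    med = areas[len(areas) // 2]
    small = [b for b in boxes if b[2] * b[3] < med]
    large = [b for b in boxes if b[2] * b[3] >= med]
    return small, large

small, large = median_split(generate_layout())
print(len(small), len(large))
```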

13 pages, 685 KB  
Article
Performance of XL Sizes of Myval Balloon-Expandable Valve in Real-World Patients with Extremely Large Aortic Annuli
by Kasparas Briedis, Kristina Morkūnaitė, Norvydas Zapustas, Evelina Zarambaitė, Žilvinas Krivickas, Sandra Kmitaitė, Agnė Rimkutė, Klaudija Tvaronavičiūtė, Kamilija Briedė, Urtė Lukauskaitė, Monika Biesevičienė, Tsung-Ying Tsai, Ali Aldujeli, Jurgita Plisienė, Ramūnas Unikas, Remigijus Žaliūnas and Lina Bardauskienė
Medicina 2026, 62(3), 585; https://doi.org/10.3390/medicina62030585 - 20 Mar 2026
Viewed by 302
Abstract
Background and Objectives: Transcatheter aortic valve replacement (TAVR) in large aortic annuli poses challenges due to limited valve-size options and increased complication risks. The aim was to evaluate the safety and performance of the XL sizes (30.5 mm and 32 mm) of the Myval transcatheter heart valve (THV) for treating patients with severe aortic stenosis and large aortic annuli. Materials and Methods: This retrospective observational study included consecutive patients undergoing TAVR with XL sizes of the Myval THV between December 2023 and December 2024 at a single centre. During this period, 146 TAVR procedures were performed, of which 15 patients (10.3%) with large aortic annuli (mean systolic annular area 786.5 ± 48.2 mm²) received XL valves and were included in the present analysis. Patients were followed up at discharge, 3–6 months, and 1 year. Patient evaluation included echocardiography and clinical assessments following the Valve Academic Research Consortium-3 criteria. Results: All patients were male, with a mean age of 79.1 ± 5.9 years. Technical success was achieved in 100% of cases. At discharge, none of the patients had moderate or greater paravalvular leakage (PVL); 11 patients had no PVL, while 1 had trace and 3 had mild PVL. The mean effective orifice area (EOA) improved from 0.75 ± 0.15 cm² at baseline to 2.31 ± 0.21 cm² at discharge (p < 0.0001). At the 12-month follow-up, the mean EOA was 2.4 ± 0.3 cm², and no moderate or severe PVL or major adverse clinical outcomes were observed. One patient required permanent pacemaker implantation due to an atrioventricular block. Conclusions: The XL sizes of the Myval THV showed both safety and efficacy in patients with large aortic annuli, demonstrating acceptable hemodynamic performance and low complication rates. However, large-scale studies with longer follow-up are needed to validate these findings in diverse populations. Full article
(This article belongs to the Special Issue Aortic Stenosis: Diagnosis and Clinical Management)

14 pages, 2308 KB  
Article
Route-Aware Adaptive Variable-Resolution Storage of Gridded Meteorological Data: A Case Study Using Weather Radar Data
by Jie Li, Xi Chen, Xiaojian Hu, Yungang Tian, Qileng He and Yuxin Hu
Atmosphere 2026, 17(3), 300; https://doi.org/10.3390/atmos17030300 - 16 Mar 2026
Viewed by 217
Abstract
The increasing availability of high-resolution gridded meteorological data poses significant challenges for efficient storage and rapid data access. This study proposes a route-aware adaptive variable-resolution storage (AVRS) strategy for gridded meteorological datasets. The spatial domain is partitioned into fixed-size blocks, and storage resolution is dynamically assigned based on radar reflectivity characteristics and air-route traffic density, prioritizing aviation-relevant regions while reducing redundancy elsewhere. Composite radar reflectivity (CREF) data are used as a case study to evaluate storage efficiency, reconstruction accuracy, and query performance. Experimental results indicate that the AVRS approach reduces storage volume while maintaining high reconstruction fidelity and preserving key convective structures. In addition, route-oriented point-based queries are significantly accelerated compared with conventional uniform-resolution storage. The proposed AVRS framework provides a scalable and aviation-oriented storage solution for large-scale gridded meteorological data, with potential benefits for atmospheric research and air traffic operations. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
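
A minimal sketch of the adaptive variable-resolution idea under stated assumptions: the grid is split into fixed-size blocks, and each block's storage stride is chosen from its peak reflectivity and mean route density. The thresholds, block size, and strided downsampling rule are illustrative, not the paper's actual assignment scheme:

```python
# Per-block resolution assignment driven by reflectivity and route density.
import numpy as np

def choose_resolution(block_refl, route_density,
                      refl_thresh=35.0, route_thresh=0.5):
    """Return a downsampling stride: 1 = full resolution."""
    if block_refl.max() >= refl_thresh or route_density >= route_thresh:
        return 1          # aviation-relevant or convective: keep full detail
    return 4              # quiet block: store coarsely

def store_adaptive(field, routes, block=16):
    stored = {}
    for i in range(0, field.shape[0], block):
        for j in range(0, field.shape[1], block):
            b = field[i:i+block, j:j+block]
            s = choose_resolution(b, routes[i:i+block, j:j+block].mean())
            stored[(i, j)] = b[::s, ::s]       # variable-resolution block
    return stored

refl = np.random.uniform(0, 60, (64, 64))      # stand-in CREF field (dBZ)
route = (np.random.rand(64, 64) > 0.9).astype(float)
blocks = store_adaptive(refl, route)
print(sum(b.size for b in blocks.values()), "stored values vs", refl.size)
```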

18 pages, 1286 KB  
Article
Performance Evaluation of Advanced Encryption Standard and Blowfish Encryption on WearOS: Implications for Wearable Device Security
by Sirapat Boonkrong and Papitchaya Kaensawan
J. Cybersecur. Priv. 2026, 6(2), 50; https://doi.org/10.3390/jcp6020050 - 7 Mar 2026
Viewed by 541
Abstract
In this study, we evaluated the performance of the Advanced Encryption Standard (AES)-128, AES-256, and Blowfish algorithms on WearOS for messages ranging from 8 to 128 bytes, which are typical message sizes for contemporary smartwatch applications. Using a WearOS emulator, we measured encryption time, memory usage, central processing unit (CPU) utilization, and battery consumption across 16 message sizes with 10 repetitions per configuration. The AES-128 algorithm consistently outperformed the others, with approximately 1.0 ms of encryption time at 128 bytes, less than 6 KB of memory, and less than 39% peak CPU utilization. The AES-256 algorithm added 25–30% processing overhead and higher energy consumption with negligible extra memory cost. The Blowfish algorithm consumed approximately three times more memory and exhibited the highest battery consumption per operation; it also scales poorly due to its 64-bit block size and expensive key schedule. All performance differences were highly statistically significant (p < 0.001). Given the widespread hardware AES acceleration on WearOS devices and memory constraints, AES-128 is recommended as the default symmetric encryption algorithm for confidentiality in smartwatch applications. Full article
(This article belongs to the Section Cryptography and Cryptology)
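
The benchmark loop has a simple shape that can be reproduced on a desktop as a sanity check. Below is a hedged sketch using the Python cryptography package with AES-CTR; the mode choice, key handling, and timing harness are assumptions, since the paper measured encryption on a WearOS emulator:

```python
# Time AES-128 encryption across the abstract's message sizes
# (16 sizes, 8..128 bytes, 10 repetitions each).
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)

def encrypt_once(msg: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(msg) + enc.finalize()

for size in range(8, 129, 8):
    msg = os.urandom(size)
    t0 = time.perf_counter()
    for _ in range(10):                    # 10 repetitions per size
        encrypt_once(msg)
    dt = (time.perf_counter() - t0) / 10
    print(f"{size:3d} B: {dt * 1e6:7.1f} us/op")
```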

13 pages, 1064 KB  
Article
Interatrial Conduction Block in Pediatric Patients with Ostium Secundum Atrial Septal Defect
by Silvia Garibaldi, Fabiana Lucà, Francesca Valeria Contini, Alessandra Pizzuto, Gianluca Mirizzi, Massimiliano Cantinotti, Martina Nesti, Luca Panchetti, Umberto Startari, Marcello Piacenti, Nadia Assanta, Andrea Rossi, Federico Landra and Giuseppe Santoro
J. Clin. Med. 2026, 15(5), 1916; https://doi.org/10.3390/jcm15051916 - 3 Mar 2026
Viewed by 273
Abstract
Background: Atrial arrhythmias represent a frequent long-term complication in patients with atrial septal defects (ASDs). Interatrial block (IAB), reflecting delayed or impaired conduction across Bachmann’s bundle, has been proposed as an electrophysiological substrate predisposing to atrial arrhythmogenesis. However, evidence regarding its prevalence and clinical correlates in pediatric patients with ASD remains limited. The present study aimed to characterize interatrial conduction patterns and assess the occurrence of IAB in children with large secundum ASD undergoing percutaneous closure. Methods: Between January 2020 and March 2024, 37 consecutive pediatric patients (median age 6 years, range 5–11) with large ostium secundum ASD were included in a retrospective analysis of a prospectively maintained institutional database. Standard 12-lead electrocardiograms were recorded before and within 24 h after defect closure. P-wave morphology and duration were systematically analyzed, and IAB was classified according to the Bayés de Luna criteria. Results: The median Qp/Qs ratio was 1.69 (1.32–2.24), with a mean pulmonary artery pressure of 19 mmHg (17–22). IAB was identified in 24.3% of patients before the procedure, predominantly as first-degree IAB. Following device implantation, IAB prevalence (29.7%) and P-wave parameters remained unchanged, with no significant differences compared with baseline. No associations were observed between IAB and defect size, hemodynamic burden, or device characteristics, whereas anthropometric variables, including weight, height, and body surface area, showed a significant correlation with IAB occurrence. During a median follow-up of 199 days, no atrial arrhythmias were documented. Conclusions: In this pediatric cohort with large ASD, IAB was present in approximately one quarter of patients and appeared unrelated to anatomical or procedural factors, supporting the hypothesis of an underlying congenital conduction abnormality. Early recognition of IAB may therefore have implications for long-term arrhythmic risk stratification in this population. Full article
(This article belongs to the Section Cardiovascular Medicine)

34 pages, 9228 KB  
Article
Analyzing the Impact of Kernel Fusion on GPU Tensor Operation Performance: A Systematic Performance Study
by Matija Dodović, Milica Veselinović and Marko Mišić
Electronics 2026, 15(5), 1034; https://doi.org/10.3390/electronics15051034 - 2 Mar 2026
Viewed by 1072
Abstract
Large numbers of small tensor kernels are executed by GPUs in modern deep learning frameworks, where total performance is frequently constrained by memory bandwidth and kernel launch overheads. Systems such as TensorFlow XLA, PyTorch JIT, and cuDNN often use kernel fusion, which is defined as combining many tensor operations into a single GPU kernel, to reduce intermediate memory transfers and boost efficiency. Nevertheless, it is difficult to measure the true performance impact of fusion on both isolated tensor operations and end-to-end model execution. An experimental investigation of kernel fusion on three different NVIDIA GPUs is presented in this work. For four sample tensor operations: element-wise addition, fused multiply–add, linear transformation with ReLU activation, and map-reduce, we build fused and unfused CUDA kernels using FP32, FP16, and mixed-precision arithmetics. We measure execution time, speedup, and effective memory bandwidth across a range of input sizes. For memory-bound and activation-heavy workloads, fusion yields consistent speedups between 1.5× and 3.13×, particularly for small and medium inputs where kernel launch overhead is significant. For operations dominated by atomic updates, the benefit is limited to between 1.01× and 1.44×. When the reduction strategy is reformulated using block-level shared-memory aggregation, kernel fusion becomes effective again, achieving speedups of up to 2× by eliminating global synchronization bottlenecks. We further evaluate the effect of fusion on image classification models using PyTorch 2.10.0 JIT, achieving 1.54× to 1.83× faster inference. Our results provide practical guidelines on when kernel fusion is most effective. Full article
(This article belongs to the Special Issue Advances in High-Performance and Parallel Computing)
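
The fusion concept itself is easy to demonstrate: an element-wise chain either runs as separate kernels, each round-tripping an intermediate through memory, or is compiled into one. A minimal sketch using torch.jit.script as the fusing compiler; this is a stand-in, since the paper benchmarks hand-written CUDA kernels and the PyTorch 2.x JIT:

```python
# Eager execution launches one kernel per op and materializes the
# intermediates; a scripted function lets the JIT fuse the chain.
import torch

def fma_relu_eager(x, y, z):
    t = x * y                 # kernel 1 writes an intermediate
    t = t + z                 # kernel 2 reads it back
    return torch.relu(t)      # kernel 3

fma_relu_fused = torch.jit.script(fma_relu_eager)  # fusable compiled version

x, y, z = (torch.randn(1 << 20) for _ in range(3))
assert torch.allclose(fma_relu_eager(x, y, z), fma_relu_fused(x, y, z))
```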

19 pages, 1368 KB  
Article
Evaluation of Different Mechanized Wheat Harvesting Systems in Egypt: Case Study Within the EU KAFI Programme
by Galal Aboelasaad, Luigi Pari, Massimo Brambilla, Simone Bergonzoli, Luca Cozzolino, Francesco Giovanni Ceglie, Ahmed Fawzy Elkot, Yousry Shaban and Hamada Morgan
AgriEngineering 2026, 8(3), 87; https://doi.org/10.3390/agriengineering8030087 - 2 Mar 2026
Viewed by 626
Abstract
The mechanization of wheat harvesting in Egypt is a critical step towards enhancing food security. This study evaluated the operational performance, grain loss, and economic viability of four wheat harvesting systems for the ‘Sakha 95’ variety in the Nile Delta. To evaluate and rank the different systems based on multiple criteria, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was employed. A Randomized Complete Block Design (RCBD) with three replicates was used to test three self-propelled combine harvesters (Claas [4.2 m], Field King [2.0 m], Daedong [1.4 m]) alongside one semi-mechanized system (reaper–binder + stationary thresher). The TOPSIS analysis identified the Field King combine as the most recommended system (Rank 1), providing the optimal balance between operational efficiency and cost. It achieved the lowest direct harvesting cost (3386.66 EGP ha⁻¹) with a minimal grain loss of only 0.05%. The Claas combine secured Rank 2: while it reached the highest effective field capacity (1.18 ha h⁻¹) and near-total grain recovery (0.005% loss), its ranking was influenced by its high initial purchase price and fuel consumption. The reaper–binder system (Rank 3) and Daedong combine (Rank 4) followed. Despite having the highest operational cost (7371.42 EGP ha⁻¹) and higher grain losses (0.72%), the reaper–binder remains a scientifically justified choice for integrated crop–livestock systems, as its ability to produce ready-to-use “soft straw” provides a net economic advantage for smallholders. The study concludes that while large combines are ideal for the “New Lands,” mid-sized units like the Field King are best suited for scaling through cooperatives in fragmented landscapes. Full article
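
TOPSIS itself is a short algorithm: normalize the decision matrix, weight it, and rank alternatives by their relative closeness to the ideal best and worst points. A minimal NumPy sketch with made-up criteria values and equal weights (placeholders, not the study's data):

```python
# TOPSIS: rank alternatives by relative closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j]: True if higher is better."""
    m = matrix / np.linalg.norm(matrix, axis=0)               # vector normalization
    m = m * weights                                           # weighted matrix
    best  = np.where(benefit, m.max(axis=0), m.min(axis=0))   # ideal solution
    worst = np.where(benefit, m.min(axis=0), m.max(axis=0))   # anti-ideal
    d_best  = np.linalg.norm(m - best,  axis=1)
    d_worst = np.linalg.norm(m - worst, axis=1)
    return d_worst / (d_best + d_worst)                       # relative closeness

# Toy criteria per system: [field capacity (ha/h), grain loss (%), cost (EGP/ha)]
systems = np.array([[1.18, 0.005, 9500.0],    # placeholder values only
                    [0.80, 0.050, 3400.0],
                    [0.30, 0.720, 7400.0],
                    [0.45, 0.300, 5200.0]])
closeness = topsis(systems, weights=np.array([1, 1, 1]) / 3,
                   benefit=np.array([True, False, False]))
print(np.argsort(-closeness) + 1)   # system indices (1-based), best first
```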

23 pages, 5282 KB  
Article
IoT-SBIdM: A Privacy-Preserving Stateless Blockchain-Based Identity Management for Trustworthy Internet of Things IoT Ecosystems
by Eman Alatawi, Anoud Alhawiti, Doaa Albalawi and Umar Albalawi
Mathematics 2026, 14(4), 715; https://doi.org/10.3390/math14040715 - 18 Feb 2026
Viewed by 569
Abstract
The rapid expansion of the Internet of Things (IoT) has led to billions of interconnected devices generating and exchanging sensitive data across diverse domains, which introduces challenges in identity management (IdM) regarding privacy, scalability, and verifiability. While blockchain technology provides decentralization and tamper resistance, its transparency and increasing on-chain storage demands make it unsuitable for large-scale IoT identity ecosystems. To overcome these challenges, IoT-SBIdM is proposed as a lightweight, privacy-preserving, and stateless blockchain-based identity management framework designed for IoT environments. This framework incorporates Elliptic Curve Cryptography (ECC)-based accumulators and Zero-Knowledge Proofs (ZKPs) to facilitate selective disclosure, enabling entities to prove credential authenticity without exposing sensitive identity information. Furthermore, the framework adopts W3C-compliant Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) to promote interoperability and user-controlled identity ownership. The experimental results indicate that IoT-SBIdM achieves efficient smart contract execution by reducing gas costs through optimized registry logic. Moreover, the system maintains a compact block size of only 45 MB at higher block heights, outperforming comparable schemes with a 55% reduction in storage relative to recent models and an approximately 94% reduction relative to older systems, demonstrating the scalability and storage efficiency needed for identity management in IoT environments. Full article
(This article belongs to the Special Issue Applied Cryptography and Blockchain Security, 2nd Edition)
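
As context for the identity layer, here is what a W3C-style DID document for an IoT device typically looks like. All field values are hypothetical, and the actual IoT-SBIdM credential format is not shown in this listing:

```python
# Illustrative W3C-style DID document for an IoT device.
import json

did_doc = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:iot-device-123",          # example DID method
    "verificationMethod": [{
        "id": "did:example:iot-device-123#key-1",
        "type": "EcdsaSecp256k1VerificationKey2019",
        "controller": "did:example:iot-device-123",
        "publicKeyMultibase": "zQ3sh...",        # elided key material
    }],
    "authentication": ["did:example:iot-device-123#key-1"],
}
print(json.dumps(did_doc, indent=2))
```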

27 pages, 3230 KB  
Article
Enhanced MQTT Protocol for Securing Big Data/Hadoop Data Management
by Ferdaous Kamoun-Abid and Amel Meddeb-Makhlouf
J. Sens. Actuator Netw. 2026, 15(1), 22; https://doi.org/10.3390/jsan15010022 - 16 Feb 2026
Viewed by 729
Abstract
Big data has significantly transformed data processing and analytics across various domains. However, ensuring security and data confidentiality in distributed platforms such as Hadoop remains a challenging task, and distributed environments face major security issues, particularly in the management and protection of large-scale data. In this article, we focus on the cost of secure information transmission, implementation complexity, and scalability. Furthermore, we address the confidentiality of information stored in Hadoop by analyzing different AES encryption modes and examining their potential to enhance Hadoop security. At the application layer, we operate within our Hadoop environment using an extended, secure version of the widely used MQTT protocol for large-scale data communication. This approach implements MQTT over TLS and, before connection, adds hash-based verification of the DataNodes' identities together with JWT transmission. The protocol uses TCP at the transport layer; TCP's reliability and small header size make it particularly suitable for big data environments. This work proposes a triple-layer protection framework. The first layer assesses the performance of existing AES encryption modes (CTR, CBC, and GCM) with different key sizes to optimize data confidentiality and processing efficiency in large-scale Hadoop deployments. The second layer evaluates the integrity of DataNodes using a novel verification mechanism that employs SHA-3-256 hashing to authenticate nodes and prevent unauthorized access during cluster initialization. At the third layer, the integrity of data blocks within Hadoop is ensured using SHA-3-256. Through extensive performance testing and security validation, we demonstrate the practicality of the proposed integration. Full article
(This article belongs to the Section Network Security and Privacy)
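
The DataNode identity check reduces to a hash comparison against a registered digest. A minimal sketch using Python's standard hashlib follows; the identity fields and registry shown are hypothetical, not the paper's exact protocol:

```python
# SHA-3-256 identity verification for a DataNode before it joins the cluster.
import hashlib

def node_digest(node_id: str, host: str, pubkey_pem: str) -> str:
    material = f"{node_id}|{host}|{pubkey_pem}".encode()
    return hashlib.sha3_256(material).hexdigest()

# Digest registered at cluster initialization (hypothetical values).
registry = {"dn-01": node_digest("dn-01", "10.0.0.5", "PUBKEY-PEM...")}

def verify_node(node_id: str, host: str, pubkey_pem: str) -> bool:
    expected = registry.get(node_id)
    return expected is not None and expected == node_digest(node_id, host, pubkey_pem)

print(verify_node("dn-01", "10.0.0.5", "PUBKEY-PEM..."))  # True
print(verify_node("dn-01", "10.0.0.6", "PUBKEY-PEM..."))  # False: host changed
```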

14 pages, 728 KB  
Article
PBBQ: Plug-In Balanced Binary Quantization for LLMs
by Zhangming Li, Weifan Guan, Zhengwei Chang, Linghao Zhang and Qinghao Hu
Electronics 2026, 15(4), 819; https://doi.org/10.3390/electronics15040819 - 13 Feb 2026
Viewed by 322
Abstract
In recent years, the expansion of large-model parameters has substantially increased storage and inference overhead. Consequently, post-training quantization has become a key technique for reducing model size and inference-time energy consumption. However, we observe that, under extremely low bit-width settings, mainstream error-compensation-based algorithms tend to overfit the calibration data. To mitigate this issue, we propose Plug-in Balanced Binary Quantization for LLMs (PBBQ), which reduces the excessive emphasis on subsequent channels via block-wise dropout and layer-wise reordering. PBBQ can be integrated into GPTQ-style frameworks and ultra-low-bit methods such as BiLLM and ARB-LLM. Experimental results show that PBBQ significantly improves the performance of multiple error-compensation quantization algorithms. When combined with the state-of-the-art methods BiLLM and ARB-LLM, the perplexity (ppl) on WikiText-2 is reduced by 21.46% (from 32.48 to 25.51) and 22.02% (from 16.44 to 12.82), respectively. Full article
(This article belongs to the Special Issue Emerging Computing Paradigms for Efficient Edge AI Acceleration)
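
For context on the base quantizer PBBQ plugs into: classic binary weight quantization keeps sign(W) and a per-row scale alpha = mean(|W|), the L2-optimal choice. A minimal sketch follows; PBBQ's block-wise dropout and layer-wise reordering are not reproduced here:

```python
# Base binary quantizer: W ~ alpha * sign(W), with alpha = mean(|W|) per row.
import torch

def binarize(weight: torch.Tensor):
    """Per-output-channel binarization with the L2-optimal scale."""
    alpha = weight.abs().mean(dim=1, keepdim=True)   # optimal scaling factor
    w_bin = torch.sign(weight)
    w_bin[w_bin == 0] = 1.0                          # avoid sign(0) = 0
    return alpha, w_bin

w = torch.randn(4, 8)
alpha, w_bin = binarize(w)
w_hat = alpha * w_bin                                # dequantized approximation
print(f"relative error: {torch.norm(w - w_hat) / torch.norm(w):.3f}")
```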

15 pages, 6379 KB  
Article
A Spheroid-Based In Vitro Model to Generate the Zonal Organisation of the Tendon-to-Bone Enthesis
by Vinothini Prabhakaran and Jennifer Z. Paxton
Organoids 2026, 5(1), 7; https://doi.org/10.3390/organoids5010007 - 10 Feb 2026
Cited by 1 | Viewed by 1024
Abstract
The tendon-to-bone enthesis is a multiphasic structure with four structurally continuous and compositionally distinct regions: tendon, uncalcified fibrocartilage, calcified fibrocartilage and bone. Our study aimed to develop 3D scaffold-free in vitro spheroids and macro-tissues of the enthesis as experimental tools to understand the development and repair of enthesis injury. This study hypothesises that integrating tendon and bone cell spheroids with bone marrow mesenchymal stem cell spheroids will facilitate the production of a fibrocartilaginous interface. 3D spheroids: Biphasic (tendon–bone) and triphasic (tendon–stem cell–bone) co-cultures of spheroids in growth media and chondrogenic media were investigated to establish fusion kinetics and to characterize, via histology and immunohistochemistry, the cellular and ECM components produced. Complete fusion between spheroids occurred within 6 to 8 days in biphasic co-culture, and 15 to 20 days in triphasic co-culture. Compared with the biphasic co-culture, the triphasic co-culture in chondrogenic media showed a continuous interface connecting the tendon and bone regions. The presence of collagen I, sulphated proteoglycans and collagen type II in the interface region of the triphasic co-culture indicates fibrochondrogenic differentiation. 3D macro-tissues: A modular tissue engineering strategy was used to produce enthesis macro-tissues using spheroids as building blocks. Spheroids were bio-assembled in the triphasic arrangement (12 tendon spheroids, 12 stem cell spheroids and 8 bone spheroids) in custom-designed, 3D-printed temporary supports (Formlabs Clear Resin®) using a customised spheroid bio-assembly system. The spheroids fused by day 8 after bio-assembly and were then removed from the temporary supports and cultured in scaffold-free conditions. Although the bio-assembly methodology succeeded in producing fused scaffold-free macro-tissues, histological analysis revealed an extensive necrotic core due to the large size of the constructs. To conclude, the findings support the hypothesis that a triphasic co-culture has the potential to produce a structurally continuous fibrocartilaginous interface but requires further optimisation to produce macro-tissues with anatomical morphologies and reduced necrotic cores. Full article
