Search Results (3,594)

Search Parameters:
Keywords = information and computer technologies

8 pages, 272 KB  
Article
A Perturbation Subsampling Method for Massive Censored Data
by Yan Tian and Jiaxin Song
Entropy 2026, 28(4), 476; https://doi.org/10.3390/e28040476 - 20 Apr 2026
Abstract
With the advancement of information technology, large-scale data have become increasingly common. Subsampling methods for the statistical analysis of such data require computing the sampling probability for each observation, a process that can be computationally intensive. In this paper, we extend the perturbed subsampling approach to the Cox proportional hazards model, a model widely used in survival analysis, in order to address the statistical analysis of large-scale survival data. Specifically, we propose a perturbed subsampling algorithm for this model. The effectiveness of the proposed method is evaluated through simulation studies and real-data analysis.
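The core move, replacing carefully computed per-observation sampling probabilities with random perturbation weights on cheap subsamples, can be sketched in a few lines. The following is a minimal illustration, not the authors' algorithm: it assumes the lifelines library, placeholder column names T (time) and E (event), and i.i.d. Exp(1) perturbation weights on each subsample.

```python
import numpy as np
from lifelines import CoxPHFitter

def perturbed_subsample_cox(df, duration_col="T", event_col="E",
                            n_reps=20, frac=0.01, seed=0):
    """Fit weighted Cox models on random subsamples with i.i.d. Exp(1)
    perturbation weights, then average the coefficient estimates."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_reps):
        sub = df.sample(frac=frac, random_state=int(rng.integers(1 << 31)))
        sub = sub.assign(w=rng.exponential(1.0, size=len(sub)))  # perturbation weights
        cph = CoxPHFitter()
        cph.fit(sub, duration_col=duration_col, event_col=event_col,
                weights_col="w", robust=True)  # robust=True for non-integer weights
        coefs.append(cph.params_.to_numpy())
    return np.mean(coefs, axis=0)
```

Averaging the per-replicate estimates is one simple aggregation choice; the paper's estimator and weighting scheme may differ.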

17 pages, 1528 KB  
Review
Integrative Computational Approaches to Prostate Cancer with Conditional Reprogramming and AI-Driven Precision Medicine
by Ahmed Fadiel, Punit Malpani, Kenneth D. Eichenbaum, Frederick Naftolin, Aya Hassouneh, Geralyn Chong and Kunle Odunsi
Cells 2026, 15(8), 700; https://doi.org/10.3390/cells15080700 - 15 Apr 2026
Viewed by 328
Abstract
Prostate cancer, particularly metastatic castration-resistant prostate cancer (mCRPC), presents therapeutic challenges rooted in adaptive lineage plasticity and neuroendocrine transdifferentiation. Conventional genome-based models fail to account for the divergent clinical trajectories observed among tumors that share identical driver mutations. This limitation requires reconceptualizing cancer as a dynamic system in which tumor cells can execute context-dependent molecular programs governed by epigenetic and transcriptional network remodeling. This review critically evaluates three convergent technological pillars reshaping prostate cancer research and clinical care. First, conditional reprogramming (CR) enables the rapid generation of patient-derived models that preserve genomic fidelity, intratumoral heterogeneity, and reversible phenotypic plasticity without genetic manipulation. Second, single-cell and spatial multi-omics approaches have clarified the cellular trajectories underlying luminal-to-neuroendocrine transdifferentiation, identifying a therapeutically actionable intermediate state; they have also revealed the hierarchical transcription factor network (FOXA2–NKX2-1–p300/CBP) that orchestrates chromatin remodeling during this lethal transition. Third, physics-informed machine learning and digital twin architectures aim to move beyond correlative risk prediction toward mechanistically sound forecasting of tumor evolution, treatment response, and resistance emergence. We address unresolved challenges in prospective clinical validation, spatial heterogeneity capture, regulatory pathways for functional diagnostics, and the imperative for causal, as opposed to associative, inference from perturbational datasets. The integration of these three domains through closed-loop experimental–computational feedback cycles represents a paradigm shift from reactive to anticipatory precision oncology.

23 pages, 1845 KB  
Article
A Hybrid Transformer–Graph Framework for Curriculum Sequencing and Prerequisite Optimization in Computer Science Education with Explainable AI
by Ritika Awasthi, Abhinav Shukla, Ayush Kumar Agrawal, Parul Dubey and R Kanesaraj Ramasamy
Algorithms 2026, 19(4), 308; https://doi.org/10.3390/a19040308 - 14 Apr 2026
Viewed by 171
Abstract
Curriculum redesign in Computer Science and Information Technology has become increasingly complex due to rapid technological advancements, interdisciplinary knowledge requirements, and evolving industry expectations. Recent progress in artificial intelligence, particularly Transformer-based language models, offers new opportunities for data-driven and scalable curriculum analysis. This study utilizes syllabus-level textual datasets collected from multiple universities, comprising structured and unstructured course descriptions across diverse CS and IT programs. The dataset enables semantic representation learning and prerequisite inference while supporting cross-institutional curriculum analysis. We propose a hybrid framework that combines Transformer-based semantic encoding with graph-based prerequisite optimization and constraint-aware curriculum sequencing. The novelty of this work lies in integrating semantic prerequisite discovery, optimization-driven curriculum structuring, and explainable AI within a unified decision-support framework. Experimental results demonstrate that the proposed approach consistently outperforms existing machine learning and deep learning baselines, achieving higher prerequisite prediction accuracy, improved curriculum feasibility, and more coherent course sequencing, thereby offering a scalable and interpretable solution for evidence-based curriculum redesign in higher education.
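As a rough illustration of how semantic encoding and graph-based sequencing can fit together, the sketch below proposes prerequisite edges from embedding similarity (subject to a course-level constraint) and then topologically sorts the courses. The embed() function, the levels mapping, and the 0.7 threshold are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np
from graphlib import TopologicalSorter

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def sequence_courses(courses, embed, levels, threshold=0.7):
    """courses: {name: description}; levels: {name: int} (e.g., year of study).
    Propose an edge lower-level -> higher-level when descriptions are similar,
    then return a feasible course order."""
    vecs = {c: embed(text) for c, text in courses.items()}
    graph = {c: set() for c in courses}  # node -> set of prerequisites
    for a in courses:
        for b in courses:
            if levels[a] < levels[b] and cosine(vecs[a], vecs[b]) >= threshold:
                graph[b].add(a)  # a is a candidate prerequisite of b
    return list(TopologicalSorter(graph).static_order())
```

Restricting edges to run from lower to higher levels keeps the proposed graph acyclic, so the topological sort always yields a feasible sequence.
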
21 pages, 1611 KB  
Article
Bring Your Own Battery: An Ideal-Storage-Based Optimization Metric for Cost-Informed Generation and Storage Planning
by Wen-Chi Cheng, Gabriel Jose Soto, Dylan James McDowell, Paul Talbot, Takanori Kajihara, Jakub Toman and Jason Marcinkoski
Metrics 2026, 3(2), 8; https://doi.org/10.3390/metrics3020008 - 14 Apr 2026
Viewed by 183
Abstract
The rapid growth of artificial intelligence (AI) workloads and data center infrastructure is driving a surge in electricity demand, underscoring the need for robust metrics to evaluate energy generation and storage strategies. This study introduces the Bring Your Own Battery (BYOBattery) metric, a region-specific, temporally resolved indicator designed to quantify the ideal energy storage capacity required to mitigate generation-demand mismatches. The BYOBattery metric is computed as the minimum ideal battery storage required to eliminate generation-demand imbalances over a given time window, and is extended to incorporate curtailment via a convex optimization formulation to better manage peak generation and storage requirements. We applied the BYOBattery metric to wind, solar, and nuclear generation technologies across three major U.S. grid regions: the California Independent System Operator (CAISO), the Electric Reliability Council of Texas (ERCOT), and the Pennsylvania–New Jersey–Maryland Interconnection (PJM), using operational data from 2021 to 2024. Key findings are: (1) nuclear consistently requires the least storage to meet demand (i.e., one equivalent load hour compared with 10–25 h for wind and solar); (2) wind storage requirements decrease with increased capacity, whereas solar necessitates consistent levels of storage; and (3) the 30-year non-discounted cost per kWh for nuclear ($0.10/kWh) is lower than that of wind or solar by a factor of 1–4 across all studied regions. The BYOBattery metric enables comparative benchmarking of generation technologies under dynamic demand conditions and supports cost-informed planning for energy systems. This work contributes a reproducible, interpretable, and computationally efficient tool for energy system analyses and broader performance evaluations.
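A stripped-down version of the ideal-storage idea can be computed directly: assuming a lossless battery that must absorb every surplus and cover every deficit within the window, the required capacity is the peak-to-trough range of the cumulative generation-minus-demand series. This sketch omits the paper's curtailment-aware convex formulation.

```python
import numpy as np

def ideal_storage_capacity(generation, demand):
    """generation, demand: equal-length arrays of hourly energy (e.g., MWh).
    Returns the smallest lossless-battery capacity that absorbs every surplus
    and covers every deficit, assuming total generation covers total demand."""
    net = np.asarray(generation, dtype=float) - np.asarray(demand, dtype=float)
    soc = np.concatenate(([0.0], np.cumsum(net)))  # state of charge over time
    return float(soc.max() - soc.min())            # peak-to-trough swing

# Example: flat 1 MWh/h demand against a solar-like daytime profile.
gen = np.array([0, 0, 0, 0, 0, 0, 2, 3, 3, 3, 3, 3, 3, 3, 2,
                0, 0, 0, 0, 0, 0, 0, 0, 0])
print(ideal_storage_capacity(gen, np.ones(24)))    # -> 16.0
```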

28 pages, 3232 KB  
Article
Fisher-DARTS: A Neural Architecture Search Framework with Fisher Information Optimization
by Yu Zhang and Changyuan Wang
Appl. Sci. 2026, 16(8), 3808; https://doi.org/10.3390/app16083808 - 14 Apr 2026
Viewed by 356
Abstract
Differentiable Neural Architecture Search (NAS) has emerged as a powerful paradigm for automated network design, yet it suffers from a fundamental optimization inconsistency problem: architectures optimized under continuous relaxation often fail to maintain their performance after discretization. To address this challenge, we propose Fisher-DARTS, a Fisher information-driven differentiable NAS framework. The proposed method introduces three key innovations: (1) a Fisher information-based momentum update mechanism that guides architectural parameters toward statistically significant operations, aligning the search objective with discrete deployment; (2) a progressive three-region pruning strategy that adaptively eliminates redundant operations with low Fisher information, ensuring architectural compactness; and (3) a cell-weighted fusion module that preserves multi-scale features across stacked cells. Additionally, the search space is expanded by incorporating attention mechanisms to enhance feature representation capability. The proposed framework is generic and applicable to a wide range of vision tasks. To validate its effectiveness, we apply it to gaze estimation, a core technology in multimodal human–computer interaction. Experimental results on three public datasets, MPIIFaceGaze, RT-GENE, and ETH-XGaze, demonstrate that Fisher-DARTS achieves mean angular errors of 3.22°, 5.45°, and 4.12°, respectively, outperforming hand-designed networks and existing NAS-based gaze estimation models. These results validate the effectiveness of the proposed Fisher-driven NAS framework and its generalization capability across diverse scenarios.
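The first innovation, a Fisher-information-weighted momentum update on the architecture parameters, might look roughly like the sketch below, where an exponential moving average of squared gradients serves as a diagonal Fisher estimate and also drives pruning of low-information operations. Names and hyperparameters are illustrative assumptions, not Fisher-DARTS itself.

```python
import numpy as np

class FisherMomentum:
    """Momentum update for architecture params alpha, rescaled by a
    diagonal Fisher estimate (EMA of squared gradients)."""
    def __init__(self, shape, lr=3e-4, beta=0.9, rho=0.99, eps=1e-8):
        self.m = np.zeros(shape)   # momentum buffer
        self.f = np.zeros(shape)   # EMA of grad**2 ~ diagonal Fisher
        self.lr, self.beta, self.rho, self.eps = lr, beta, rho, eps

    def step(self, alpha, grad):
        self.f = self.rho * self.f + (1 - self.rho) * grad ** 2
        scale = self.f / (self.f.mean() + self.eps)   # emphasize informative ops
        self.m = self.beta * self.m + (1 - self.beta) * grad * scale
        return alpha - self.lr * self.m

    def prune_mask(self, quantile=0.2):
        # keep operations whose Fisher mass is above the lowest quantile
        return self.f >= np.quantile(self.f, quantile)
```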

34 pages, 6776 KB  
Review
Emerging Trends in Interactive Space: A Scientometric Analysis
by Jiazhen Zhang, Nan Yang, Wenhan Zhang, Jingwen Liu and Jeremy Cenci
Buildings 2026, 16(8), 1514; https://doi.org/10.3390/buildings16081514 - 13 Apr 2026
Viewed by 165
Abstract
With the advent of the Fourth Industrial Revolution and the rise of new forms of productive forces, the ways humans interact with space, objects, and information are being profoundly reshaped, bringing unprecedented possibilities for upgrading interactive spaces (human settlements that integrate physical and digital environments). Against this background, this study takes the literature on interactive space research in the Web of Science (WoS) Core Collection between 1990 and 2025 as its data source and employs CiteSpace to generate scientific knowledge maps, analyzing the historical development, hotspots, and trends of interactive space research and providing both theoretical and data support. A total of 458 papers were collected, showing a consistent year-on-year increase. The research spans multiple fields, including computer science, architecture, ecology, physics, design, and behavioral science. Results indicate that research hotspots in interactive spaces include collaborative governance, social coexistence, and sustainable renewal, all of which are highly relevant to activating human settlements. The vitality of interactive spaces can be constructed across multiple dimensions (for instance, enhancement based on the ecology, environment, and culture of the space). However, research on interactive spaces still suffers from a lack of interdisciplinary collaboration, multi-domain integration, and dynamic response mechanisms; it is therefore essential to strengthen cooperation among relevant fields. Based on these findings, this study uses visual analysis to reveal the research hotspots and evolutionary trajectory of interactive spaces and proposes a "technology–humanism–governance" trinity framework, with technology as the means, humanism as the guiding principle, and effective governance as the goal. The framework aims to leverage the service-oriented and convenient nature of technology in interactive spaces to deepen human-centric design and thereby drive the optimization of systems. Future research on interactive spaces should shift its design philosophy to be more human-centric, establish a multidisciplinary research system, draw on local empirical cases, and develop scalable, applicable theories to construct harmonious, open spaces, enhance human–environment relationships, and offer practical solutions to other countries undergoing urbanization.

25 pages, 2504 KB  
Article
Teaching Strategies and Methods in a Complex Education Process: Use Case of Multi-Level Computer-Assisted Exercises on Constructive Simulation Systems
by Miro Čolić and Mirko Sužnjević
Appl. Sci. 2026, 16(8), 3692; https://doi.org/10.3390/app16083692 - 9 Apr 2026
Viewed by 162
Abstract
This study develops a new concept of computer-assisted exercises (CAX) on constructive simulation systems and examines how the proposed concept affects teaching strategies and methods. The current state of affairs in the field of defense and security, both in Europe and worldwide, requires the acquisition of competencies (European Qualifications Framework (EQF): knowledge, skills, independence, and responsibility), i.e., the education and training of a significantly larger number of personnel in the field of defense and security than has been the case in the last 70 years. An important feature of the present day is that students must also acquire competencies that were almost unknown until recently. Most of these competencies are the result of the rapid development of technology, which has significantly changed human life in all areas. To respond to the modern requirements of conducting operations, in which the transfer of information both horizontally and vertically is exponentially accelerated, current concepts for preparing and implementing education and training, of which exercises are often the most important part, need to be replaced with new ones; one such concept is developed in this paper. The new information introduced mostly concerns newly fielded weapons (unmanned systems, hypersonic missiles, microwave- and laser-based weapons, etc.), all of which necessitate changes to the traditional approach to conducting war, i.e., tactics, techniques, and procedures (TTP). The novel exercise concept allows training to be implemented simultaneously for up to three or four hierarchical levels (e.g., TF Div, brigade, battalion, and company) in one exercise, whereas in most countries, including the NATO alliance, such exercises are still conducted according to a concept that is over 20 years old and, as a rule, focused on one or two hierarchical levels. This approach allows key personnel from the headquarters of units at four hierarchical levels to be simulated in real time, which current concepts for preparing and conducting exercises do not provide. The new concept was applied as a multi-level computer-assisted exercise (CAX) on constructive simulation systems. Significant advantages of the new concept also lie in its flexibility and adaptability: it can be applied not only in operational units but also in training institutions such as academies and higher education institutions. In addition, the new concept requires a shorter planning period and fewer total resources for the preparation and implementation of the exercise. The management, organizational, and technological components of the proposed exercise concept are implemented in the CAX model. The hypotheses in this paper were tested in an applied study evaluated by an external evaluation body. The implemented CAX model was tested in Croatia through exercises at the Croatian Defense Academy.
(This article belongs to the Special Issue Applications of Smart Learning in Education)

18 pages, 1606 KB  
Article
Multi-Scale Dynamic Perception and Context Guidance Modulation for Efficient Deepfake Detection
by Yuanqing Ding, Fanliang Bu and Hanming Zhai
Electronics 2026, 15(8), 1569; https://doi.org/10.3390/electronics15081569 - 9 Apr 2026
Viewed by 286
Abstract
Deepfake technology poses significant threats to information authenticity and social trust, necessitating effective detection methods. However, existing detection approaches predominantly rely on high-complexity network architectures that, while accurate in controlled environments, suffer from prohibitive computational costs that hinder deployment in resource-constrained scenarios such as social media platforms. To address this efficiency-accuracy dilemma, we propose a lightweight face forgery detection method that systematically learns multi-scale forgery traces. Our approach features a four-stage lightweight architecture that hierarchically extracts features from local textures to global semantics, mimicking the human visual system. Within each stage, a multi-scale dynamic perception mechanism divides feature channels into parallel groups equipped with lightweight attention modules to capture forgery cues spanning pixel-level anomalies, local structures, regional patterns, and semantic inconsistencies. Furthermore, rather than relying on conventional feature fusion that risks suppressing subtle artifacts, we introduce a novel Context-Guided Dynamic Convolution. This mechanism uses mid-level spatial anomalies as active anchors to dynamically modulate high-level semantic filters, with the goal of mitigating the disconnect between semantic content and forgery evidence. Our model achieves strong performance, yielding an AUC of 91.98% on FaceForensics++ and 93.50% on DeepFake Detection Challenge, outperforming current state-of-the-art lightweight methods. Furthermore, compared to heavy Vision Transformers, our model achieves a superior performance-efficiency trade-off, requiring only 3.06 M parameters and 1.36 G FLOPs, making it highly suitable for real-time, resource-constrained deployment.
(This article belongs to the Section Electronic Multimedia)
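The multi-scale dynamic perception mechanism, parallel channel groups with lightweight attention, can be approximated by a sketch like the following; the group count, kernel sizes, and SE-style gate are assumptions for illustration rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiScaleGroupBlock(nn.Module):
    """Split channels into parallel groups; each group gets a depthwise conv
    at a different kernel size plus a lightweight channel gate."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7, 9)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0
        self.g = channels // len(kernel_sizes)
        self.convs = nn.ModuleList(
            nn.Conv2d(self.g, self.g, k, padding=k // 2, groups=self.g)
            for k in kernel_sizes)

    def forward(self, x):
        outs = []
        for i, conv in enumerate(self.convs):
            xi = x[:, i * self.g:(i + 1) * self.g]
            fi = conv(xi)                                            # scale-specific cues
            gate = torch.sigmoid(fi.mean(dim=(2, 3), keepdim=True))  # SE-style gate
            outs.append(fi * gate)
        return torch.cat(outs, dim=1)
```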

26 pages, 8769 KB  
Article
A Dual-Form Spiral-like Microwave Sensor for Non-Invasive Glucose Monitoring: From Planar Design to Wearable Implementation
by Zaid A. Abdul Hassain, Malik J. Farhan and Taha A. Elwi
Electronics 2026, 15(8), 1567; https://doi.org/10.3390/electronics15081567 - 9 Apr 2026
Viewed by 296
Abstract
In this paper, a novel multiband microwave resonator is proposed and investigated for non-invasive glucose sensing applications. The structure is based on a compact, planar spiral-like geometry fed by a coplanar waveguide (CPW) transmission line, designed to support multiple resonant modes through nested concentric rings. A full electromagnetic model was developed to predict the resonance behavior analytically, achieving excellent agreement with Computer Simulation Technology (CST) simulations across four resonant frequencies (2.7, 6.44, 8.0, and 12.8 GHz). The sensor demonstrated high glucose sensitivity at multiple frequencies, with peak values reaching 0.05 dB/mg/dL and 0.038 dB/mg/dL at 10.1 GHz and 6.22 GHz, respectively. To enhance conformability and skin contact, the antenna was further transformed into a semi-cylindrical flexible form suitable for finger-wrapping. Despite the mechanical deformation, the structure preserved its resonance while offering enhanced near-field interaction with biological tissues. The folded sensor achieved a sensitivity of 0.032 dB/mg/dL at 5.25 GHz and a peak gain of 6.05 dB, validating its robustness for wearable deployment. The clear correlation between reflection magnitude and glucose level (with R > 0.99) confirms the sensor's potential as a passive, multiband, and non-invasive glucose monitoring platform. A physics-informed residual deep learning framework further enhances prediction accuracy, achieving an RMSE of 0.28 mg/dL and a MARD of 0.13%, and confining 100% of both training and holdout predictions within the <5% ISO-like risk region, thereby ensuring robust and clinically reliable non-invasive glucose estimation.
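The reported figures of merit, sensitivity in dB/mg/dL and correlation R, can be estimated from measured reflection magnitudes by a simple linear fit, as in the small illustration below. The data points are made-up placeholders, not measurements from the paper.

```python
import numpy as np

# Hypothetical measurements: glucose concentration vs. |S11| at one resonance.
glucose = np.array([70.0, 120.0, 180.0, 250.0, 320.0])   # mg/dL (placeholder)
s11_db  = np.array([-18.2, -19.8, -21.7, -23.9, -26.1])  # dB (placeholder)

slope, intercept = np.polyfit(glucose, s11_db, 1)  # slope = sensitivity
r = np.corrcoef(glucose, s11_db)[0, 1]
print(f"sensitivity ~ {abs(slope):.3f} dB/mg/dL, R = {abs(r):.3f}")
```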

35 pages, 30864 KB  
Article
A Robot Path Planning Method Based on a Key Point Encoding Genetic Algorithm
by Chuanyu Yang, Zhenxue He, Xiaojun Zhao, Yijin Wang and Xiaodan Zhang
Algorithms 2026, 19(4), 285; https://doi.org/10.3390/a19040285 - 7 Apr 2026
Viewed by 295
Abstract
Path planning is a key technology in robot navigation and has long attracted significant attention. However, in scenarios with high-density or unstructured obstacle distributions, path planning methods based on swarm intelligence optimization still face low computational efficiency and poor path quality, limiting their performance in real-time applications. To address these challenges, this paper defines path key points and proposes a path planning method based on the Key-Points Encoding Genetic Algorithm (KEGA). First, an encoding scheme is designed to map key-point sequences into binary encodings, guiding the population to explore efficiently. Then, a new path generation module is integrated that uses target-point direction, local environment, and historical path information to generate high-quality key-point sequences, thereby improving path quality. Additionally, by evaluating key-point sequences as a proxy for full path evaluation, only one precise path construction is required per iteration, significantly reducing computational overhead. Experiments were conducted on four simulated maps with diverse obstacle distribution characteristics and eight real-world street maps to validate the method's robustness and generalizability. The results show that, compared to existing state-of-the-art robot path planning methods, the proposed method achieves an average runtime saving of 75.40%, a path length reduction of 35.65%, and a path smoothness improvement of 68%.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
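The key-point binary encoding can be illustrated as packing each key point's grid coordinates into fixed-width bit fields, giving the GA a flat bit string to crossover and mutate. The field width and grid size below are illustrative assumptions, not KEGA's exact scheme.

```python
import numpy as np

BITS = 6  # 6 bits per coordinate -> a 64 x 64 grid (illustrative)

def encode(key_points):
    """key_points: list of (x, y) ints in [0, 2**BITS). Returns a bit string."""
    bits = []
    for x, y in key_points:
        bits += [int(b) for b in f"{x:0{BITS}b}"]
        bits += [int(b) for b in f"{y:0{BITS}b}"]
    return np.array(bits, dtype=np.uint8)

def decode(chrom):
    """Inverse of encode: unpack fixed-width bit fields back to key points."""
    pts, step = [], 2 * BITS
    for i in range(0, len(chrom), step):
        x = int("".join(map(str, chrom[i:i + BITS])), 2)
        y = int("".join(map(str, chrom[i + BITS:i + step])), 2)
        pts.append((x, y))
    return pts

chrom = encode([(3, 40), (17, 52), (60, 63)])
assert decode(chrom) == [(3, 40), (17, 52), (60, 63)]
```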

29 pages, 7604 KB  
Article
Shading and Geometric Constraint Neural Radiance Field for DSM Reconstruction from Multi-View Satellite Images
by Zhihua Hu, Zhiwen Chen, Yushun Li, Yuxuan Liu, Kao Zhang, Chenguang Zhao and Yongxian Zhang
Remote Sens. 2026, 18(7), 1091; https://doi.org/10.3390/rs18071091 - 5 Apr 2026
Viewed by 303
Abstract
With the continued development of spatial information technologies, Digital Surface Models (DSMs) have become fundamental data products for urban planning, virtual reality, geographic information systems, and digital-earth applications. Neural Radiance Fields (NeRFs) have achieved remarkable success in multi-view 3D reconstruction in computer vision, but their application to DSM generation from satellite imagery remains challenging because of differences in imaging geometry, complex surface structure, and varying illumination conditions. To address these issues, this paper proposes a Shading and Geometric Constraint (SGC) method tailored to satellite photogrammetry and designed to integrate with existing NeRF-based frameworks such as Sat-NeRF and EO-NeRF. First, a physical imaging model based on Lambertian reflectance and spherical harmonics is introduced to represent the complex illumination variations in satellite images. Synthetic images generated by this model provide auxiliary supervision that improves robustness to illumination inconsistency. Second, inspired by classical shading-based refinement methods, we introduce a bilateral edge-preserving geometric constraint. Unlike standard smoothness terms, this constraint uses photometric discrepancies to weight geometric smoothing, thereby preserving sharp building boundaries while smoothing flat surfaces. We integrate the method into two state-of-the-art baselines, Sat-NeRF and EO-NeRF. EO-NeRF+SGC achieves up to a 57.93% reduction in elevation MAE relative to EO-NeRF, the largest relative MAE reduction reported in this study. The method also recovers finer structural details and sharper edges than recently published NeRF-based DSM reconstruction methods.
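The bilateral edge-preserving constraint follows the familiar edge-aware smoothness pattern: penalize depth gradients, but downweight the penalty where the image gradient is strong. A generic PyTorch sketch of that pattern is shown below; the paper's exact weighting may differ.

```python
import torch

def edge_aware_smoothness(depth, image):
    """depth: (B,1,H,W); image: (B,3,H,W). Penalize depth gradients,
    attenuated where the image itself changes sharply (likely edges)."""
    d_dx = (depth[..., :, 1:] - depth[..., :, :-1]).abs()
    d_dy = (depth[..., 1:, :] - depth[..., :-1, :]).abs()
    i_dx = (image[..., :, 1:] - image[..., :, :-1]).abs().mean(1, keepdim=True)
    i_dy = (image[..., 1:, :] - image[..., :-1, :]).abs().mean(1, keepdim=True)
    # strong photometric change -> small weight -> building edges survive
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()
```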

19 pages, 551 KB  
Article
SCAFormer: Side-Channel Analysis Based on a Transformer with Focal Modulation
by Longde Yan, Aidong Chen, Wenwen Chen, Jiawang Huang, Yanlong Zhang, Shuo Wang and Jing Zhou
Math. Comput. Appl. 2026, 31(2), 55; https://doi.org/10.3390/mca31020055 - 4 Apr 2026
Viewed by 344
Abstract
With the rapid development of Internet technology, information security has become increasingly important. Cryptographic analysis techniques, especially side-channel analysis (SCA), pose a significant threat to security systems. Modern SCA mainly exploits the physical leakage signals generated during the operation of encryption devices, such as power consumption, temperature, and electromagnetic radiation. These signals carry the physical characteristics of the device, which are related to the encryption algorithm; among them, the power consumption trace remains the main target of modern SCA research. However, such traces are often difficult to analyze: the sequences are very long, the informative feature points are sparsely distributed, and the internal relationships in the data are complex. While Transformer architectures excel at capturing long-range dependencies in sequential data, their high computational complexity limits practical deployment. To address this, we propose replacing the self-attention (SA) module in Transformers with a focal modulation module. This modification significantly reduces computational complexity during feature extraction, enabling efficient and accurate side-channel attacks. Experimental results on benchmark datasets (ASCAD, AES_RD, AES_HD, DPAv4) demonstrate the superiority of our approach. The proposed method reduces training time compared to standard Transformer models and achieves superior key recovery performance, outperforming existing state-of-the-art models.
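Focal modulation replaces pairwise attention with hierarchical depthwise convolutions whose gated outputs modulate a query projection elementwise, giving cost linear in trace length. The 1-D sketch below follows the general focal modulation recipe; layer sizes and level count are illustrative assumptions, not SCAFormer's configuration.

```python
import torch
import torch.nn as nn

class FocalModulation1d(nn.Module):
    """Linear-cost stand-in for self-attention on (B, L, D) traces."""
    def __init__(self, dim, levels=3, kernel=3):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.ctx_gates = nn.Linear(dim, dim + levels + 1)  # context proj + gates
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
            for _ in range(levels))
        self.proj = nn.Linear(dim, dim)
        self.act = nn.GELU()
        self.levels = levels

    def forward(self, x):                          # x: (B, L, D)
        q = self.q(x)
        cg = self.ctx_gates(x)
        ctx, gates = cg[..., :-self.levels - 1], cg[..., -self.levels - 1:]
        ctx = ctx.transpose(1, 2)                  # (B, D, L) for conv1d
        agg = 0
        for i, conv in enumerate(self.convs):
            ctx = self.act(conv(ctx))              # grow the receptive field
            agg = agg + ctx * gates[..., i:i + 1].transpose(1, 2)
        # final level: gated global average context
        agg = agg + ctx.mean(-1, keepdim=True) * gates[..., -1:].transpose(1, 2)
        return self.proj(q * agg.transpose(1, 2))  # elementwise modulation
```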

14 pages, 1388 KB  
Article
Table-Aware Row-Level RAG for Classical Chinese Understanding
by Zhihao Liu and Waiyie Leong
Computers 2026, 15(4), 221; https://doi.org/10.3390/computers15040221 - 2 Apr 2026
Viewed by 396
Abstract
The classical Chinese language is characterized by a high density of meaning, wide use of polysemy, and strong dependence on history and culture, all of which pose challenges to existing large language models (LLMs). Retrieval-augmented generation (RAG) has become a prevailing option for addressing these issues without retraining the model, but most existing RAG systems treat structured tables as unstructured text, encoding a whole table into one vector. Such a schema usually hides row-level semantic information and raises the reasoning cost for LLMs. In this study, we propose a new table-aware row-wise retrieval system in which each row of a table is treated as an individual semantic unit, so that reasoning at generation time is explicit rather than implicit. We organize the table into row-level vector representations, which makes retrieval more deterministic and semantically interpretable, particularly for pedagogical or philological datasets. Built on LangChain and integrated with Qwen LLMs, the system is evaluated experimentally on classical Chinese learning tasks; we find that compared with traditional RAG systems, it improves retrieval performance, semantic consistency, and explainability, with no model training or extra computation time required.
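Row-level retrieval reduces to serializing each row as "header: value" text and embedding rows independently, so the retriever returns rows rather than whole tables. The sketch below uses a placeholder embed() function and plain cosine similarity; it is not the paper's LangChain pipeline.

```python
import numpy as np

def rows_to_docs(headers, rows):
    """Serialize each table row into its own retrievable text unit."""
    return ["; ".join(f"{h}: {v}" for h, v in zip(headers, row)) for row in rows]

def retrieve(query, docs, embed, k=3):
    """embed: any text -> vector function (placeholder). Returns top-k rows."""
    q = embed(query)
    doc_vecs = np.stack([embed(d) for d in docs])
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(-sims)[:k]]  # rows to pass as context
```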

20 pages, 34702 KB  
Article
rePPG: Relighting Photoplethysmography Signal to Video
by Seunghyun Kim, Yeongje Park, Byeongseon An and Eui Chul Lee
Biomimetics 2026, 11(4), 230; https://doi.org/10.3390/biomimetics11040230 - 1 Apr 2026
Viewed by 488
Abstract
Remote photoplethysmography (rPPG) extracts physiological signals from facial videos by analyzing subtle skin color variations caused by blood flow. While this technology enables contactless health monitoring, it also raises privacy concerns because facial videos reveal both identity and sensitive biometric information. Existing privacy-preserving techniques, such as blurring or pixelation, degrade visual quality and are unsuitable for practical rPPG applications. This paper presents rePPG, a framework that inserts a desired rPPG signal into facial videos while preserving the original facial appearance. The proposed method disentangles facial appearance and physiological features, enabling replacement of the physiological signal without altering facial identity or visual quality. Skin segmentation restricts modifications to skin regions, and a cycle-consistency mechanism ensures that the injected rPPG signal can be reliably recovered from the generated video. Importantly, the extracted rPPG signals are evaluated against the injected target physiological signals rather than the subject's original physiological state, ensuring that the evaluation measures signal rewriting accuracy. Experiments on the PURE and UBFC datasets show that rePPG successfully embeds target PPG signals, achieving 1.10 BPM MAE and 95.00% PTE6 on PURE while preserving visual quality (PSNR 24.61 dB, SSIM 0.638). Heart rate metrics are computed using a 5-second temporal window to ensure a consistent evaluation protocol.
(This article belongs to the Special Issue Bio-Inspired Signal Processing on Image and Audio Data)
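The cycle-consistency mechanism can be summarized as two terms: the injected signal must be recoverable from the generated video, and non-skin pixels must stay unchanged. The sketch below uses stand-in generator and extractor callables, not the paper's networks.

```python
import torch.nn.functional as F

def reppg_losses(generator, extractor, video, target_ppg, skin_mask):
    """generator: (video, ppg) -> video; extractor: video -> ppg (stand-ins).
    skin_mask: (B,1,T,H,W) in {0,1} restricting edits to skin regions."""
    fake = generator(video, target_ppg)
    cycle = F.l1_loss(extractor(fake), target_ppg)  # injected signal is recoverable
    appearance = F.l1_loss(fake * (1 - skin_mask),  # non-skin pixels unchanged
                           video * (1 - skin_mask))
    return cycle + appearance
```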

20 pages, 1367 KB  
Review
Deep Learning Decoding of Steady-State Visual Evoked Potential (SSVEP) for Real-Time Mobile Brain–Computer Interfaces: A Narrative Review from Laboratory Settings to Lightweight Engineering Applications
by Hanzhen Zhang and Chunjing Tao
Brain Sci. 2026, 16(4), 387; https://doi.org/10.3390/brainsci16040387 - 31 Mar 2026
Viewed by 601
Abstract
Background/Objectives: SSVEP-BCI has broad application potential in mobile human–computer interaction due to its high information transfer rate and stable signal characteristics. The introduction of deep learning technology has significantly advanced SSVEP decoding performance, offering novel approaches for processing short-duration signals and tackling complex classification tasks. The establishment of the Tsinghua Benchmark dataset provides a standardized benchmark for evaluating algorithm performance, accelerating the development of deep learning-based SSVEP decoding. However, a summary of SSVEP deep learning decoding technologies for real-time mobile applications is lacking. Methods: We conducted a comprehensive literature review of SSVEP deep learning decoding studies published since 2023, using the Tsinghua Benchmark dataset. This review focuses on technical developments targeting real-time performance, low computational complexity, and high robustness. Results: We summarize the key technologies developed for real-time mobile SSVEP decoding. Our analysis thoroughly examines how these techniques address core challenges in the engineering implementation of mobile brain–computer interfaces, including real-time processing requirements, resource constraints, and environmental robustness. Conclusions: This review provides a comprehensive overview of SSVEP deep learning decoding technologies for mobile applications, establishing a technical foundation to advance mobile brain–computer interfaces from laboratory settings to practical deployment.
(This article belongs to the Special Issue Trends and Challenges in Neuroengineering)
