Search Results (3)

Search Parameters:
Keywords = SCANN

13 pages, 333 KB  
Review
Scaling Entity Resolution with K-Means: A Review of Partitioning Techniques
by Dimitrios Karapiperis and Vassilios S. Verykios
Electronics 2025, 14(18), 3605; https://doi.org/10.3390/electronics14183605 - 11 Sep 2025
Viewed by 665
Abstract
Entity resolution (ER) is a fundamental data integration process hindered by its quadratic computational complexity, making naive comparisons infeasible for large datasets. Blocking (or partitioning) is the foundational strategy to overcome this, traditionally using methods like K-Means clustering to group similar records. However, with the rise of deep learning and high-dimensional vector embeddings, the ER task has evolved into a vector similarity search problem. This review traces the evolution of K-Means from a direct, standalone blocking algorithm into a core partitioning engine within modern Approximate Nearest Neighbor (ANN) indexes. We analyze how its role has been adapted and optimized in partition-based systems like the Inverted File (IVF) index and Google's SCANN, which are now central to scalable, embedding-based ER. By examining the architectural principles and trade-offs of these methods and contrasting them with non-partitioning alternatives like HNSW, this paper provides a coherent narrative on the journey of K-Means from a simple clustering tool to a critical component for scaling modern ER workflows.
(This article belongs to the Special Issue Advanced Research in Technology and Information Systems, 2nd Edition)
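To make the partition-then-probe idea described in the abstract concrete, the following is a minimal illustrative sketch (not code from the review): record embeddings are clustered with K-Means, each cluster acts as an inverted list in an IVF-style index, and a query is compared only against records in the nprobe nearest partitions. The embedding data, cluster count, and nprobe value are arbitrary placeholders.

# Illustrative sketch only (not from the review): K-Means as the partitioning
# engine of an IVF-style index for embedding-based entity resolution.
# Assumes records are already encoded as dense vectors by some embedding model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
embeddings = normalize(rng.normal(size=(10_000, 64)))  # placeholder record embeddings

# 1) Partition ("block") the records with K-Means; each cluster is an inverted list.
n_partitions = 100
kmeans = KMeans(n_clusters=n_partitions, n_init=10, random_state=0).fit(embeddings)
inverted_lists = {c: np.where(kmeans.labels_ == c)[0] for c in range(n_partitions)}

def candidates(query_vec, nprobe=5):
    """Return candidate record ids from the nprobe partitions closest to the query."""
    dists = np.linalg.norm(kmeans.cluster_centers_ - query_vec, axis=1)
    probed = np.argsort(dists)[:nprobe]
    return np.concatenate([inverted_lists[c] for c in probed])

# 2) Compare a query record only against its candidates, not all 10,000 records.
query = normalize(rng.normal(size=(1, 64)))[0]
cand = candidates(query, nprobe=5)
best_match = cand[np.argmin(np.linalg.norm(embeddings[cand] - query, axis=1))]

Raising nprobe trades speed for recall, the same accuracy/throughput knob that production IVF implementations expose (e.g., nprobe in FAISS).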

13 pages, 5853 KB  
Article
SCANN: Side Channel Analysis of Spiking Neural Networks
by Karthikeyan Nagarajan, Rupshali Roy, Rasit Onur Topaloglu, Sachhidh Kannan and Swaroop Ghosh
Cryptography 2023, 7(2), 17; https://doi.org/10.3390/cryptography7020017 - 27 Mar 2023
Cited by 10 | Viewed by 3807
Abstract
Spiking neural networks (SNNs) are quickly gaining traction as a viable alternative to deep neural networks (DNNs). Compared to DNNs, SNNs are computationally more powerful and energy efficient. The design metrics (synaptic weights, membrane threshold, etc.) chosen for such SNN architectures are often proprietary and constitute confidential intellectual property (IP). Our study indicates that SNN architectures implemented using conventional analog neurons are susceptible to side-channel attack (SCA). Unlike conventional SCAs, which aim to leak private keys from cryptographic implementations, SCANN (side-channel analysis of spiking neural networks) can reveal the sensitive IP implemented within the SNN through the power side channel. We demonstrate eight unique SCANN attacks by taking a common analog neuron (axon hillock neuron) as the test case. We chose this particular model since it is biologically plausible and hence a good fit for SNNs. Simulation results indicate that different synaptic weights, neurons per layer, neuron membrane thresholds, and neuron capacitor sizes (the building blocks of an SNN) yield distinct power and spike-timing signatures, making them vulnerable to SCA. We show that an adversary can use templates (from foundry-calibrated simulations or by fabricating known design parameters in test chips) and analysis to identify the specifications of the implemented SNN.
(This article belongs to the Special Issue Feature Papers in Hardware Security II)
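As a rough illustration of the template-based analysis the abstract mentions (and not the authors' actual SCANN tooling or neuron simulations), the sketch below matches a measured power trace against simulated candidate templates and reports the best-scoring design parameter; all traces here are synthetic stand-ins.

# Conceptual sketch only: generic template matching on power traces.
# In SCANN-style attacks the templates would come from foundry-calibrated
# simulations or test chips; here they are random stand-in arrays.
import numpy as np

def correlate(trace, template):
    """Pearson correlation between a measured trace and a candidate template."""
    t = (trace - trace.mean()) / trace.std()
    p = (template - template.mean()) / template.std()
    return float(np.mean(t * p))

def identify_parameter(measured_trace, templates):
    """Return the candidate design parameter whose template best matches the trace."""
    scores = {param: correlate(measured_trace, tmpl) for param, tmpl in templates.items()}
    return max(scores, key=scores.get), scores

# Toy usage: three hypothetical synaptic-weight settings as candidates.
rng = np.random.default_rng(1)
templates = {w: rng.normal(size=1000) for w in (0.2, 0.5, 0.8)}
measured = templates[0.5] + 0.3 * rng.normal(size=1000)  # noisy observation of the 0.5 device
print(identify_parameter(measured, templates)[0])        # expected to recover 0.5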

11 pages, 992 KB  
Article
Pure and Hybrid SCAN, rSCAN, and r²SCAN: Which One Is Preferred in KS- and HF-DFT Calculations, and How Does D4 Dispersion Correction Affect This Ranking?
by Golokesh Santra and Jan M. L. Martin
Molecules 2022, 27(1), 141; https://doi.org/10.3390/molecules27010141 - 27 Dec 2021
Cited by 18 | Viewed by 5529
Abstract
Using the large and chemically diverse GMTKN55 dataset, we have tested the performance of pure and hybrid KS-DFT and HF-DFT functionals constructed from three variants of the SCAN meta-GGA exchange-correlation functional: original SCAN, rSCAN, and r²SCAN. Without any dispersion correction involved, HF-SCANn outperforms the two other HF-DFT functionals. In contrast, among the self-consistent variants, SCANn and r²SCANn offer essentially the same performance at lower percentages of HF exchange, while at higher percentages, SCANn marginally outperforms r²SCANn and rSCANn. However, with the D4 dispersion correction included, all three HF-DFT-D4 variants perform similarly, and among the self-consistent counterparts, r²SCANn-D4 outperforms the other two variants across the board. In view of the much milder grid dependence of r²SCAN vs. SCAN, r²SCAN is to be preferred across the board, including in HF-DFT and hybrid KS-DFT contexts.
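For readers who want to try the functionals being compared, here is a minimal sketch, assuming a PySCF installation whose bundled libxc exposes r²SCAN under the keyword 'r2scan' (an assumption, as are the toy geometry and basis set): it runs a self-consistent KS-DFT calculation and then evaluates the same functional on the Hartree-Fock density (HF-DFT). The D4 dispersion correction discussed in the abstract is not applied here.

# Minimal sketch, assuming PySCF with libxc support for r2SCAN ('r2scan').
# Geometry and basis are placeholders; no D4 correction is applied.
from pyscf import gto, scf, dft

mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587", basis="def2-svp")

# Self-consistent KS-DFT with r2SCAN.
ks = dft.RKS(mol)
ks.xc = "r2scan"
e_ks = ks.kernel()

# HF-DFT (density-corrected DFT): evaluate the same functional
# non-self-consistently on the Hartree-Fock density matrix.
hf = scf.RHF(mol).run()
e_hf_dft = dft.RKS(mol).set(xc="r2scan").energy_tot(dm=hf.make_rdm1())

print(f"KS-r2SCAN : {e_ks:.6f} Eh")
print(f"HF-r2SCAN : {e_hf_dft:.6f} Eh")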
