Special Issue "Sensor Data Summarization: Theory, Applications, and Systems"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 15 January 2022.

Special Issue Editor

Prof. Dan Feldman
Guest Editor
Robotics & Big Data Lab, Computer Science Department, University of Haifa, Haifa 3498838, Israel
Interests: machine learning; big data; computer vision and robotics

Special Issue Information

Dear Colleagues,

Research on data summarization techniques such as coreset and sketch constructions is growing rapidly, not only in its original communities of theoretical computer science and mathematics, but also in more modern fields that use huge amounts of sensor data, such as machine and deep learning. In recent years, we have also seen more and more papers in applied fields such as robotics, graphics, and computer vision, as well as new related theory in areas such as differential privacy, cryptography, compressed sensing, and signal processing.

Due to this multidisciplinary research, results sometimes fall through the cracks: theory-oriented readers may not appreciate experimental results, while practitioners may not be interested in, or able to follow, tedious mathematical proofs. Often, by the time reviews are available and the journal version is published, the results have already been improved upon and become obsolete.

This Special Issue is dedicated to all aspects of sensor data summarization, including new provable constructions, related approximation algorithms, applications to streaming and parallel computation, software implementations, and systems that are based on such techniques.

Our goal is an exciting Special Issue with interesting, high-quality results and a reviewing process that is both professional and relatively fast.

Prof. Dan Feldman
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • coreset
  • sketch
  • streaming
  • parallel computations
  • compressed sensing
  • dimension reduction
  • sparsification
  • sampling

Published Papers (1 paper)

Research

Article
No Fine-Tuning, No Cry: Robust SVD for Compressing Deep Networks
Sensors 2021, 21(16), 5599; https://doi.org/10.3390/s21165599 - 19 Aug 2021
Abstract
A common technique for compressing a neural network is to compute, via SVD, the rank-k ℓ₂ approximation A_k of the matrix A ∈ ℝ^(n×d) that corresponds to a fully connected layer (or embedding layer). Here, d is the number of input neurons in the layer, n is the number in the next one, and A_k is stored in O((n+d)k) memory instead of O(nd). Then, a fine-tuning step is used to improve this initial compression. However, end users may not have the required computational resources, time, or budget to run this fine-tuning stage. Furthermore, the original training set may not be available. In this paper, we provide an algorithm for compressing neural networks using a similar initial compression time (to common techniques) but without the fine-tuning step. The main idea is replacing the rank-k ℓ₂ approximation with an ℓₚ one, for p ∈ [1,2], which is known to be less sensitive to outliers but much harder to compute. Our main technical result is a practical and provable approximation algorithm to compute it for any p ≥ 1, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark for compressing the networks BERT, DistilBERT, XLNet, and RoBERTa confirm this theoretical advantage.
(This article belongs to the Special Issue Sensor Data Summarization: Theory, Applications, and Systems)
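
As a concrete illustration of the baseline scheme described in the abstract, the sketch below (our own illustration under stated assumptions, not the authors' code) compresses a fully connected layer by replacing its weight matrix with the classical rank-k ℓ₂ approximation computed via SVD, stored as two factors of total size O((n+d)k) instead of O(nd). The paper's actual contribution, replacing the ℓ₂ objective with an ℓₚ one for p ∈ [1,2], requires the approximation algorithm developed there and is not reproduced here; the layer dimensions and function name below are hypothetical.

```python
# Minimal sketch: rank-k SVD compression of a fully connected layer.
# This is the l2 baseline the paper improves upon, not the paper's l_p method.
import numpy as np

def compress_layer(A: np.ndarray, k: int):
    """Return factors (U_k, V_k) such that U_k @ V_k is the best rank-k
    l2 (Frobenius-norm) approximation of A, by the Eckart-Young theorem."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_k = U[:, :k] * s[:k]   # shape (n, k): left singular vectors scaled by singular values
    V_k = Vt[:k, :]          # shape (k, d): top-k right singular vectors
    return U_k, V_k

# Hypothetical layer dimensions: d input neurons, n output neurons.
n, d, k = 1024, 768, 64
A = np.random.randn(n, d)
U_k, V_k = compress_layer(A, k)

# The compressed layer stores (n + d) * k floats instead of n * d,
# and its forward pass is two smaller products: x -> U_k @ (V_k @ x).
x = np.random.randn(d)
y = U_k @ (V_k @ x)
print(U_k.shape, V_k.shape, np.linalg.norm(A - U_k @ V_k, 'fro'))
```

In this toy setting, the factored layer uses (1024 + 768) · 64 ≈ 115K parameters instead of 1024 · 768 ≈ 786K. The paper's point is that when the fine-tuning step is skipped, choosing the factors under an ℓₚ objective with p ∈ [1,2] is more robust to outliers than the ℓ₂ choice above.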