Special Issue "Data Compression, Communication Processing and Security 2016"

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (31 January 2017).

Special Issue Editor

Guest Editor
Dr. Bruno Carpentieri
Associate Professor, Dipartimento di Informatica ed Applicazioni "Renato M. Capocelli", Università di Salerno, Via Ponte Don Melillo, 84084 Fisciano (SA), Italy
Phone: +39-089969500
Fax: +39-089969500
Interests: data compression; information theory; algorithms; parallel computing

Special Issue Information

Dear Colleagues,

Data compression, data communication, data processing, and data security are closely linked. This is evidenced by the growing interest in the efficient compression, communication, and processing techniques that new media demand, and by the increasing demand for data security.

This Special Issue is devoted to the many facets of this relationship, and explores the current state of the art of research in compression, communication, and security.

The topics of interest to this Special Issue cover the scope of the CCPS 2016 Conference (http://ccps2016.di.unisa.it/CCPS_2016/Home.html).

Extended versions of papers presented at CCPS 2016 are sought, but this call for papers is fully open to anyone who wishes to contribute a relevant research manuscript.

Prof. Dr. Bruno Carpentieri
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data compression
  • data communication
  • data processing
  • data security

Published Papers (5 papers)


Research

Open Access Article
Design and Implementation of a Multi-Modal Biometric System for Company Access Control
Algorithms 2017, 10(2), 61; https://doi.org/10.3390/a10020061 - 27 May 2017
Cited by 1
Abstract
This paper is about the design, implementation, and deployment of a multi-modal biometric system to grant access to a company structure and to internal zones in the company itself. Face and iris have been chosen as biometric traits. Face is feasible for non-intrusive checking with minimum cooperation from the subject, while iris supports a very accurate recognition procedure at a higher grade of invasiveness. The recognition of the face trait is based on Local Binary Pattern histograms, and Daugman's method is implemented for the analysis of the iris data. The recognition process may require either the acquisition of the user's face only or the serial acquisition of both the user's face and iris, depending on the confidence level of the decision with respect to the set of security levels and requirements, stated in a formal way in the Service Level Agreement at a negotiation phase. The quality of the decision depends on the setting of proper, different thresholds in the decision modules for the two biometric traits. Any time the quality of the decision is not good enough, the system activates proper rules, which ask for new acquisitions (and decisions), possibly with different threshold values, resulting in a system that has no fixed, predefined behaviour, but one which complies with the actual acquisition context. Rules are formalized as deduction rules and grouped together to represent "response behaviors" according to the previous analysis. Therefore, there are different possible working flows, since the actual response of the recognition process depends on the output of the decision-making modules that compose the system. Finally, the deployment phase is described, together with the results from testing, based on the AT&T Face Database and the UBIRIS database.
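The serial face-then-iris decision flow described in the abstract can be sketched as below. This is a hypothetical illustration: the function name, scores, and threshold values are illustrative choices, not taken from the paper, which states its thresholds and rules formally in the Service Level Agreement and deduction rules.

```python
# Hypothetical sketch of a serial multi-modal decision: the face trait
# decides alone when its matching score is confidently high or low;
# otherwise the system escalates to iris acquisition. All thresholds
# are illustrative, not the paper's.

def decide(face_score, iris_score=None,
           face_accept=0.90, face_reject=0.40, iris_accept=0.80):
    """Return 'accept', 'reject', or 'need_iris'."""
    if iris_score is None:
        if face_score >= face_accept:
            return "accept"
        if face_score <= face_reject:
            return "reject"
        return "need_iris"          # confidence too low: acquire the iris
    # Serial fusion: the iris verdict resolves the uncertain face case.
    return "accept" if iris_score >= iris_accept else "reject"

print(decide(0.95))        # face alone suffices -> accept
print(decide(0.60))        # uncertain -> need_iris
print(decide(0.60, 0.85))  # iris confirms -> accept
```

The paper's system goes further: rules can re-trigger acquisitions with different thresholds depending on the acquisition context, rather than using fixed values as here.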
(This article belongs to the Special Issue Data Compression, Communication Processing and Security 2016)

Open Access Article
Adaptive Vector Quantization for Lossy Compression of Image Sequences
Algorithms 2017, 10(2), 51; https://doi.org/10.3390/a10020051 - 09 May 2017
Cited by 4
Abstract
In this work, we present a scheme for the lossy compression of image sequences, based on the Adaptive Vector Quantization (AVQ) algorithm. The AVQ algorithm is a lossy compression algorithm for grayscale images, which processes the input data in a single pass, using the properties of vector quantization to approximate the data. First, we review the key aspects of the AVQ algorithm and, subsequently, outline the basic concepts and design choices behind the proposed scheme. Finally, we report experimental results, which highlight an improvement in compression performance when our scheme is compared with the AVQ algorithm.
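The core vector-quantization idea that AVQ builds on can be illustrated with a minimal fixed-size quantizer over small pixel blocks. This is only a sketch of plain VQ: the adaptive, single-pass, growing-dictionary behaviour of AVQ itself is not reproduced here, and the codebook and blocks are made-up example data.

```python
# Minimal fixed-codebook vector quantization of 2x2 grayscale blocks.
# Each block is replaced by the index of its nearest codebook vector;
# the index list is the lossy code, decoded by codebook lookup.

def quantize_blocks(blocks, codebook):
    """Map each block to the index of its nearest codebook vector
    (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: sqdist(b, codebook[i]))
            for b in blocks]

def reconstruct(indices, codebook):
    """Lossy decoding: substitute each index with its codebook vector."""
    return [codebook[i] for i in indices]

# Illustrative 2x2 blocks flattened to 4-tuples (dark, mid, bright).
codebook = [(0, 0, 0, 0), (128, 128, 128, 128), (255, 255, 255, 255)]
blocks = [(10, 5, 0, 12), (120, 130, 140, 125), (250, 255, 240, 251)]
codes = quantize_blocks(blocks, codebook)
print(codes)  # [0, 1, 2]
```

AVQ instead learns variable-size blocks on the fly while scanning the image, which is what makes it single-pass and adaptive.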

Open Access Article
Towards Efficient Positional Inverted Index †
Algorithms 2017, 10(1), 30; https://doi.org/10.3390/a10010030 - 22 Feb 2017
Abstract
We address the problem of positional indexing in the natural language domain. The positional inverted index contains the information of the word positions. Thus, it is able to recover the original text file, which implies that it is not necessary to store the original file. Our Positional Inverted Self-Index (PISI) stores the word position gaps encoded by variable byte code. Inverted lists of single terms are combined into one inverted list that represents the backbone of the text file, since it stores the sequence of the indexed words of the original file. The inverted list is synchronized with a presentation layer that stores separators, stop words, as well as variants of the indexed words. Huffman coding is used to encode the presentation layer. The space complexity of the PISI inverted list is $O\left((N-n)\log_{2^b} N + \left(\frac{N-n}{\alpha} + n\right)\times\left(\log_{2^b} n + 1\right)\right)$, where $N$ is the number of stems, $n$ is the number of unique stems, $\alpha$ is the step/period of the back pointers in the inverted list, and $b$ is the size of the word of computer memory given in bits. The space complexity of the presentation layer is $O\left(-\sum_{i=1}^{N}\log_2 p_{in(i)} - \sum_{j=1}^{\bar N}\log_2 p_j + \bar N\right)$, with respect to $p_{in(i)}$ as the probability of a stem variant at position $i$, $p_j$ as the probability of a separator or stop word at position $j$, and $\bar N$ as the number of separators and stop words.
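The variable byte code named in the abstract is a standard integer code, and the gap transform on word positions is straightforward to sketch. The byte layout below (7-bit groups, high bit flagging the final byte of each number) is one common convention; the paper's exact byte format may differ.

```python
# Variable byte coding of word-position gaps: each gap is split into
# 7-bit groups; the high bit of a byte marks the last byte of a number.

def vbyte_encode(gaps):
    out = bytearray()
    for g in gaps:
        chunk = []
        while True:
            chunk.append(g & 0x7F)   # low 7 bits
            g >>= 7
            if g == 0:
                break
        chunk[0] |= 0x80             # flag the terminating (low-order) byte
        out.extend(reversed(chunk))  # emit high-order groups first
    return bytes(out)

def vbyte_decode(data):
    gaps, n = [], 0
    for byte in data:
        if byte & 0x80:              # final byte of this number
            gaps.append((n << 7) | (byte & 0x7F))
            n = 0
        else:
            n = (n << 7) | byte
    return gaps

positions = [3, 7, 140, 145]                       # word positions of a term
gaps = [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]
assert vbyte_decode(vbyte_encode(gaps)) == gaps    # round trip on [3, 4, 133, 5]
```

Gaps are smaller than absolute positions, so most fit in a single byte, which is what makes the encoded inverted list compact.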

Open Access Article
Concurrent vs. Exclusive Reading in Parallel Decoding of LZ-Compressed Files
Algorithms 2017, 10(1), 21; https://doi.org/10.3390/a10010021 - 28 Jan 2017
Abstract
Broadcasting a message from one to many processors in a network corresponds to concurrent reading on a random-access shared-memory parallel machine. Computing the trees of a forest, the level of each node in its tree, and the path between two nodes are problems that can easily be solved with concurrent reading in time logarithmic in the maximum height of a tree. Solving such problems with exclusive reading requires time logarithmic in the number of nodes, implying message passing between disjoint pairs of processors on a distributed system. Allowing concurrent reading in parallel algorithm design for distributed computing might be advantageous in practice if these problems are faced on shallow trees with some specific constraints. We show an application to LZC (Lempel-Ziv-Compress)-compressed file decoding, whose parallelization employs these computations on such trees for realistic data. On the other hand, zipped files do not have this advantage, since they are compressed by the Lempel-Ziv sliding-window technique.
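The "level of each node in time logarithmic in the maximum height" computation mentioned in the abstract is classically done by pointer jumping, where many nodes read the same ancestor cell concurrently. The sequential simulation below is an illustrative sketch of that pattern, not the paper's algorithm: each round, every node jumps to its ancestor's ancestor and accumulates the distance, so the number of rounds is O(log h) for maximum tree height h.

```python
# Pointer jumping over a forest stored as a parent array (parent[v] == v
# marks a root). Each round simulates one synchronous concurrent-read step:
# all nodes read current ancestors, then all update. After O(log h) rounds
# every node's accumulated distance equals its level (depth) in its tree.

def levels_by_pointer_jumping(parent):
    n = len(parent)
    anc = list(parent)                               # current known ancestor
    level = [0 if parent[v] == v else 1 for v in range(n)]
    changed = True
    while changed:                                   # O(log h) iterations
        changed = False
        new_anc, new_level = list(anc), list(level)
        for v in range(n):                           # all reads before writes
            if anc[v] != anc[anc[v]]:
                new_level[v] = level[v] + level[anc[v]]
                new_anc[v] = anc[anc[v]]
                changed = True
        anc, level = new_anc, new_level
    return level

# A path 0 <- 1 <- 2 <- 3 plus a second, single-node tree rooted at 4.
print(levels_by_pointer_jumping([0, 0, 1, 2, 4]))    # [0, 1, 2, 3, 0]
```

On a PRAM each `v` would be a processor, and the reads of `anc[anc[v]]` by many nodes sharing an ancestor are exactly the concurrent reads whose cost the paper compares against exclusive-read simulations.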
Open Access Article
Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery
Algorithms 2017, 10(1), 7; https://doi.org/10.3390/a10010007 - 06 Jan 2017
Cited by 2
Abstract
This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields a smoothing of the solution and runs into prematurity. To add back more details, the BAIST method backtracks to the previous noisy image using L2-norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. Also, BAIST adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
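The base IST iteration that BAIST extends alternates a gradient step on the data term with a soft-thresholding (shrinkage) step. The sketch below runs plain IST on a toy sparse-recovery problem y = Ax with an l1 penalty; the matrix, step size, and penalty weight are illustrative, and neither the backtracking step nor the nonlocal adaptive regularization of BAIST is reproduced.

```python
# Plain iterative shrinkage-thresholding (ISTA) on a toy problem:
# minimize 0.5*||Ax - y||^2 + lam*||x||_1 for a random A and a sparse x.

import random

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 -- the shrinkage step of IST."""
    return [max(abs(x) - t, 0.0) * (1 if x >= 0 else -1) for x in v]

def ist(A, y, lam=0.05, step=0.1, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - y, gradient of data term = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - step * grad[j] for j in range(n)],
                           step * lam)
    return x

random.seed(0)
n, m = 8, 6                                       # underdetermined system
x_true = [0.0] * n
x_true[2], x_true[5] = 1.0, -0.7                  # 2-sparse signal
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]
print([round(v, 2) for v in ist(A, y)])           # approximately sparse
```

BAIST's backtracking would, per the abstract, additionally pull each iterate back toward the previous, less-smoothed solution via an L2 distance term, to recover details that repeated shrinkage erases.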
