Special Issue "Entropy in Image Analysis"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (31 March 2019)

Special Issue Editor

Guest Editor
Dr. Amelia Carolina Sparavigna

Department of Applied Science and Technology, Polytechnic University of Turin, Turin, Italy
Interests: general physics and mathematics; optics; software; image processing applied to microscopy and satellite imagery

Special Issue Information

Dear Colleagues,

Image analysis is a fundamental task for extracting information from images acquired with a range of different devices. This analysis often requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and remote sensing, where the results of the processing may be of vital importance.

Because it is involved in numerous applications requiring reliable quantitative results, image analysis has produced a large number of approaches and algorithms, sometimes limited to specific functions in a small range of tasks, and sometimes generic enough to be applied to a wide range of tasks. In this framework, a key role can be played by entropy, in the form of the Shannon entropy or of a generalized entropy, used directly in the processing methods or in the evaluation of the results, to maximize the success of a final decision support system.
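As a concrete illustration of the Shannon entropy mentioned here, a minimal sketch computing it from the grey-level histogram of an image (illustrative only, not tied to any particular contribution in this issue):

```python
import math

def shannon_entropy(pixels, levels=256):
    """Shannon entropy (in bits) of a grey-level image, from its histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return sum(-(c / n) * math.log2(c / n) for c in hist if c > 0)

# A flat image carries no information; a histogram spread uniformly over
# all 256 levels reaches the 8-bit maximum.
flat = [128] * 1024
varied = list(range(256)) * 4
print(shannon_entropy(flat))    # 0.0
print(shannon_entropy(varied))  # 8.0
```

Generalized entropies (Tsallis, Rényi, Arimoto) replace the logarithmic weighting with a parametrized one, as several papers below illustrate.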

Since active research in image processing is still searching for methods truly comparable to human vision, I solicit your contribution to this Special Issue of the journal, devoted to the use of entropy in extracting information from images and in the decision processes related to image analysis.

Dr. Amelia Carolina Sparavigna
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Image entropy
  • Shannon entropy
  • Tsallis entropy
  • Generalized entropies
  • Image processing
  • Image segmentation
  • Retinex methods
  • Medical imaging
  • Remote sensing
  • Security


Published Papers (24 papers)


Editorial

Jump to: Research

Open Access Editorial
Entropy in Image Analysis
Entropy 2019, 21(5), 502; https://doi.org/10.3390/e21050502
Received: 13 May 2019 / Accepted: 15 May 2019 / Published: 17 May 2019
PDF Full-text (167 KB) | HTML Full-text | XML Full-text
Abstract
Image analysis plays an essential role in numerous research areas in science and technology, ranging from medical imaging to the computer science of automatic vision [...] Full article
(This article belongs to the Special Issue Entropy in Image Analysis)

Research

Jump to: Editorial

Open Access Article
Large-Scale Person Re-Identification Based on Deep Hash Learning
Entropy 2019, 21(5), 449; https://doi.org/10.3390/e21050449
Received: 31 March 2019 / Revised: 27 April 2019 / Accepted: 28 April 2019 / Published: 30 April 2019
Cited by 1 | PDF Full-text (1746 KB) | HTML Full-text | XML Full-text
Abstract
Person re-identification in the image processing domain has been a challenging research topic due to the influence of pedestrian posture, background, lighting, and other factors. In this paper, the method of hash learning is applied to person re-identification, and we propose a person re-identification method based on deep hash learning. Improving on the conventional method, the method proposed in this paper uses an easy-to-optimize shallow convolutional neural network to learn the inherent implicit relationships of the image and then extracts the deep features of the image. Then, a hash layer with a three-step calculation is incorporated into the fully connected layer of the network. The hash function is learned and mapped into a hash code through the connections between the network layers. The generation of the hash code satisfies the requirement of minimizing the sum of the quantization loss and the Softmax regression cross-entropy loss, which achieves end-to-end generation of the hash code in the network. After obtaining the hash code through the network, the distance between the hash code of the pedestrian image to be retrieved and the hash codes in the pedestrian image library is calculated to implement the person re-identification. Experiments conducted on multiple standard datasets show that our deep hashing network achieves comparable performance and outperforms other hashing methods by large margins in Rank-1 and mAP identification rates in pedestrian re-identification. Moreover, our method is more efficient in training and retrieval than other pedestrian re-identification algorithms. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
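The retrieval step described in the abstract, comparing a query hash code against a gallery of codes, reduces to ranking by Hamming distance. A minimal sketch with hypothetical 8-bit codes and identity labels (the paper's codes come from a CNN hash layer and are much longer):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary hash codes (ints)."""
    return bin(a ^ b).count("1")

def rank_gallery(query_code, gallery):
    """Sort gallery (label, code) pairs by Hamming distance to the query."""
    return sorted(gallery, key=lambda item: hamming(query_code, item[1]))

# Toy 8-bit codes; "id_b" differs from the query in a single bit.
gallery = [("id_a", 0b10110100), ("id_b", 0b10110110), ("id_c", 0b01001011)]
query = 0b10110111
print(rank_gallery(query, gallery)[0][0])  # id_b
```

Ranking by Hamming distance on compact binary codes is what makes hash-based retrieval fast: the XOR and popcount operations are cheap compared to floating-point feature distances.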

Open Access Article
A q-Extension of Sigmoid Functions and the Application for Enhancement of Ultrasound Images
Entropy 2019, 21(4), 430; https://doi.org/10.3390/e21040430
Received: 14 March 2019 / Revised: 14 April 2019 / Accepted: 17 April 2019 / Published: 23 April 2019
Cited by 1 | PDF Full-text (1463 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes the q-sigmoid functions, which are variations of the sigmoid expressions, and analyzes their application to the process of enhancing regions of interest in digital images. These new functions are based on the non-extensive Tsallis statistics, arising in the field of statistical mechanics through the use of q-exponential functions. The potential of q-sigmoids for image processing is demonstrated in tasks of region enhancement in ultrasound images, which are highly affected by speckle noise. Before demonstrating the results in real images, we study the asymptotic behavior of these functions and the effect of the obtained expressions when processing synthetic images. In both experiments, the q-sigmoids outperformed the original sigmoid functions, as well as two other well-known methods for the enhancement of regions of interest: slicing and histogram equalization. These results show that q-sigmoids can be used as a preprocessing step in pipelines including segmentation, as demonstrated for the Otsu algorithm, and in deep learning approaches for further feature extraction and analysis. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
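The q-sigmoid is built on the Tsallis q-exponential. The sketch below shows one plausible parametrization (the paper's exact form may differ), which recovers the ordinary logistic sigmoid as q approaches 1:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential [1 + (1-q)x]^(1/(1-q)); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_sigmoid(x, q):
    """A q-deformed sigmoid: recovers the logistic function at q = 1."""
    return 1.0 / (1.0 + q_exp(-x, q))

print(q_sigmoid(0.0, 1.0))  # 0.5
```

Varying q changes how sharply the transfer function saturates, which is what makes it useful for stretching the intensity range of a region of interest.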

Open Access Article
A Chaotic Electromagnetic Field Optimization Algorithm Based on Fuzzy Entropy for Multilevel Thresholding Color Image Segmentation
Entropy 2019, 21(4), 398; https://doi.org/10.3390/e21040398
Received: 1 April 2019 / Revised: 11 April 2019 / Accepted: 12 April 2019 / Published: 15 April 2019
Cited by 1 | PDF Full-text (24338 KB) | HTML Full-text | XML Full-text
Abstract
Multilevel thresholding segmentation of color images is an important technology in various applications and has received increasing attention in recent years. The process of determining the optimal threshold values with traditional methods is time-consuming. In order to mitigate this problem, meta-heuristic algorithms have been employed in this field during the past few years to search for the optima. In this paper, an effective technique based on the Electromagnetic Field Optimization (EFO) algorithm and a fuzzy entropy criterion is proposed; in addition, a novel chaotic strategy is embedded into EFO to develop a new algorithm named CEFO. To evaluate the robustness of the proposed algorithm, competitive algorithms such as the Artificial Bee Colony (ABC), Bat Algorithm (BA), Wind Driven Optimization (WDO), and Bird Swarm Algorithm (BSA) are compared using fuzzy entropy as the fitness function. Furthermore, the proposed segmentation method is also compared with the most widely used approaches of Otsu's variance and Kapur's entropy to verify its segmentation accuracy and efficiency. Experiments are conducted on ten Berkeley benchmark images, and the results are presented in terms of peak signal to noise ratio (PSNR), mean structural similarity (MSSIM), feature similarity (FSIM), and computational time (CPU Time) at threshold levels of 4, 6, 8, and 10 for each test image. The experiments clearly demonstrate the superior performance of the proposed technique, which handles multilevel thresholding color image segmentation excellently. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
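The abstract does not detail the chaotic strategy embedded into EFO; as a generic illustration only, a logistic map is a common way for such algorithms to generate diverse initial candidates. The sketch below (not the authors' actual scheme) maps a chaotic orbit to candidate threshold values:

```python
def logistic_sequence(x0, n, r=4.0):
    """Logistic map x_{k+1} = r * x_k * (1 - x_k); chaotic for r = 4."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_thresholds(x0, levels, lo=0, hi=255):
    """Map a chaotic sequence in (0, 1) to sorted candidate grey thresholds."""
    return sorted(int(lo + (hi - lo) * x) for x in logistic_sequence(x0, levels))

print(chaotic_thresholds(0.7, 4))  # [5, 137, 214, 253]
```

Because the logistic orbit is sensitive to its seed and covers (0, 1) densely, it spreads candidate solutions over the search space without a pseudorandom generator.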

Open Access Article
Image Encryption Based on Pixel-Level Diffusion with Dynamic Filtering and DNA-Level Permutation with 3D Latin Cubes
Entropy 2019, 21(3), 319; https://doi.org/10.3390/e21030319
Received: 12 March 2019 / Revised: 19 March 2019 / Accepted: 21 March 2019 / Published: 24 March 2019
Cited by 2 | PDF Full-text (13252 KB) | HTML Full-text | XML Full-text
Abstract
Image encryption is one of the essential tasks in image security. In this paper, we propose a novel approach that integrates a hyperchaotic system, pixel-level Dynamic Filtering, DNA computing, and operations on 3D Latin Cubes, namely DFDLC, for image encryption. Specifically, the approach consists of five stages: (1) a newly proposed 5D hyperchaotic system with two positive Lyapunov exponents is applied to generate a pseudorandom sequence; (2) for each pixel in an image, a filtering operation with different templates, called dynamic filtering, is conducted to diffuse the image; (3) DNA encoding is applied to the diffused image, and then the DNA-level image is transformed into several 3D DNA-level cubes; (4) a Latin cube operation is applied to each DNA-level cube; and (5) all the DNA cubes are integrated and decoded to a 2D cipher image. Extensive experiments are conducted on public testing images, and the results show that the proposed DFDLC can achieve state-of-the-art results in terms of several evaluation criteria. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)

Open Access Article
Kapur’s Entropy for Color Image Segmentation Based on a Hybrid Whale Optimization Algorithm
Entropy 2019, 21(3), 318; https://doi.org/10.3390/e21030318
Received: 14 March 2019 / Accepted: 21 March 2019 / Published: 23 March 2019
Cited by 2 | PDF Full-text (96400 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a new hybrid whale optimization algorithm (WOA) called WOA-DE is proposed to better balance the exploitation and exploration phases of optimization. Differential evolution (DE) is adopted as a local search strategy with the purpose of enhancing exploitation capability. The WOA-DE algorithm is then utilized to solve the problem of multilevel color image segmentation, which can be considered a challenging optimization task. Kapur's entropy is used to obtain an efficient image segmentation method. In order to evaluate the performance of the proposed algorithm, different images are selected for the experiments, including natural images, satellite images, and magnetic resonance (MR) images. The experimental results are compared with state-of-the-art meta-heuristic algorithms as well as conventional approaches. Several performance measures have been used, such as average fitness values, standard deviation (STD), peak signal to noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), Wilcoxon's rank sum test, and the Friedman test. The experimental results indicate that the WOA-DE algorithm is superior to the other meta-heuristic algorithms. In addition, to show the effectiveness of the proposed technique, the Otsu method is used for comparison. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
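Kapur's entropy, used here as the segmentation criterion, selects the threshold that maximizes the summed entropies of the resulting classes. A single-threshold sketch on a toy histogram (the paper optimizes several thresholds at once with WOA-DE):

```python
import math

def kapur_entropy(hist, t):
    """Kapur's criterion for one threshold t: the sum of the entropies of
    the two classes [0, t) and [t, L), each normalized by its mass."""
    n = sum(hist)
    p = [h / n for h in hist]
    total = 0.0
    for lo, hi in ((0, t), (t, len(hist))):
        w = sum(p[lo:hi])
        if w > 0:
            total -= sum(pi / w * math.log(pi / w) for pi in p[lo:hi] if pi > 0)
    return total

def best_threshold(hist):
    """Exhaustive search; metaheuristics replace this for multilevel cases."""
    return max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))

# Symmetric bimodal toy histogram: peaks at bins 1 and 6, valley in the middle.
hist = [10, 40, 10, 1, 1, 10, 40, 10]
print(best_threshold(hist))  # 4
```

Exhaustive search is feasible for one threshold, but the search space grows combinatorially with the number of thresholds, which is why metaheuristics such as WOA-DE are used.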

Open Access Article
Primality, Fractality, and Image Analysis
Entropy 2019, 21(3), 304; https://doi.org/10.3390/e21030304
Received: 22 February 2019 / Revised: 17 March 2019 / Accepted: 18 March 2019 / Published: 21 March 2019
Cited by 5 | PDF Full-text (735 KB) | HTML Full-text | XML Full-text
Abstract
This paper deals with the hidden structure of prime numbers. Previous numerical studies have already indicated a fractal-like behavior of prime-indexed primes. The construction of binary images enables us to generalize this result. In fact, two integer sequences can easily be converted into a two-color image. In particular, the resulting method shows that the coprimality condition and the Ramanujan primes resemble the Minkowski island and the Cantor set, respectively. Furthermore, a comparison between prime-indexed primes and Ramanujan primes is introduced and discussed. Thus, the Cantor set plays a relevant role in the fractal-like description of prime numbers. The results confirm the feasibility of the method based on binary images. The link between fractal sets and chaotic dynamical systems may allow the characterization of the Hénon map purely in terms of prime numbers. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
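The binary-image construction described here can be reproduced for the coprimality condition in a few lines: pixel (i, j) is set when gcd(i, j) = 1. This is a generic sketch of the idea, not the authors' code:

```python
from math import gcd

def coprimality_image(n):
    """Two-color n x n image: pixel (i, j) is 1 when i and j are coprime."""
    return [[1 if gcd(i, j) == 1 else 0 for j in range(1, n + 1)]
            for i in range(1, n + 1)]

# Render a small patch; the self-similar pattern emerges at larger sizes.
img = coprimality_image(6)
for row in img:
    print("".join("#" if v else "." for v in row))
```

As n grows, the density of set pixels approaches 6/pi^2 (the probability that two random integers are coprime), and the pattern exhibits the self-similarity the paper compares to the Minkowski island.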

Open Access Article
On Structural Entropy and Spatial Filling Factor Analysis of Colonoscopy Pictures
Entropy 2019, 21(3), 256; https://doi.org/10.3390/e21030256
Received: 31 December 2018 / Revised: 19 February 2019 / Accepted: 27 February 2019 / Published: 6 March 2019
Cited by 1 | PDF Full-text (14378 KB) | HTML Full-text | XML Full-text
Abstract
Colonoscopy is the standard procedure for diagnosing colorectal cancer, which develops from small lesions on the bowel wall called polyps. The Rényi entropies-based structural entropy and spatial filling factor are two scale- and resolution-independent quantities that characterize the shape of a probability distribution with the help of characteristic curves of the structural entropy–spatial filling factor map. This alternative definition of structural entropy is easy to calculate, is independent of the image resolution, and does not require the calculation of neighbor statistics, unlike other graph-based structural entropies. The distant goal of this study was to help computer-aided diagnosis in finding colorectal polyps by making the Rényi entropy-based structural entropy better understood. The direct goal was to determine characteristic curves that can differentiate between polyps and other structures in the picture. After analyzing the distribution of the colonoscopy picture color channels, the typical structures were modeled with simple geometrical functions, and the structural entropy–spatial filling factor characteristic curves were determined for these model structures for various parameter sets. An image analyzing method, i.e., the line- or column-wise scanning of the picture, was also tested, with satisfactory matching of the characteristic curve and the image. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
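One standard convention (Pipek–Varga) expresses the structural entropy as the difference of the Rényi entropies of order 1 and 2, with the spatial filling factor derived from the order-2 entropy; the paper's exact convention may differ. A sketch under that assumption:

```python
import math

def structural_entropy(p):
    """Pipek-Varga-style structural entropy S_str = S1 - S2 and spatial
    filling factor q = exp(S2) / N, for a normalized distribution p."""
    s1 = -sum(pi * math.log(pi) for pi in p if pi > 0)   # Shannon (Renyi-1)
    s2 = -math.log(sum(pi * pi for pi in p))             # Renyi-2
    return s1 - s2, math.exp(s2) / len(p)

# Uniform distribution: no structure (S_str = 0), full filling (q = 1).
uniform = [0.25] * 4
print(structural_entropy(uniform))  # ≈ (0.0, 1.0)
```

Since Rényi entropies decrease with their order, S_str is non-negative, and localized (peaked) distributions give larger S_str with smaller filling factor, which is what the characteristic curves plot.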

Open Access Article
Breaking an Image Encryption Algorithm Based on DNA Encoding and Spatiotemporal Chaos
Entropy 2019, 21(3), 246; https://doi.org/10.3390/e21030246
Received: 3 February 2019 / Revised: 26 February 2019 / Accepted: 26 February 2019 / Published: 5 March 2019
Cited by 1 | PDF Full-text (1743 KB) | HTML Full-text | XML Full-text
Abstract
Recently, an image encryption algorithm based on DNA encoding and spatiotemporal chaos (IEA-DESC) was proposed. In IEA-DESC, pixel diffusion, DNA encoding, DNA-base permutation, and DNA decoding are performed successively to generate cipher-images from the plain-images. Some security analyses and simulation results were given to prove that it can withstand various common attacks. However, in this paper, it is found that IEA-DESC has some inherent security defects: (1) the pixel diffusion is invalid against attackers from the perspective of cryptanalysis; (2) the combination of DNA encoding and DNA decoding is equivalent to a bitwise complement; (3) the DNA-base permutation is actually a fixed position shuffling operation for quaternary elements, which has been proved to be insecure. In summary, IEA-DESC is essentially a combination of a fixed DNA-base position permutation and a bitwise complement. Therefore, IEA-DESC can be equivalently represented in a simplified form, and its security solely depends on the equivalent secret key. This equivalent secret key can be recovered using chosen-plaintext and chosen-ciphertext attacks, respectively. Theoretical analysis and experimental results show that both attack methods are effective and efficient. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
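The chosen-plaintext attack on this class of cipher can be illustrated on a toy cipher of the same structure: a fixed secret permutation followed by XOR with a plaintext-independent keystream. This is a simplified stand-in for IEA-DESC's equivalent form, not the actual algorithm:

```python
import random

def encrypt(plain, perm, keystream):
    """Toy cipher: fixed permutation, then XOR with a fixed keystream."""
    permuted = [plain[i] for i in perm]
    return [p ^ k for p, k in zip(permuted, keystream)]

def recover_equivalent_key(oracle, n):
    """Chosen-plaintext attack: the all-zero image leaks the keystream,
    and one impulse image per position leaks the permutation."""
    keystream = oracle([0] * n)                      # 0 ^ k = k
    perm = [0] * n
    for pos in range(n):
        impulse = [0] * n
        impulse[pos] = 1
        c = oracle(impulse)
        j = next(i for i in range(n) if c[i] ^ keystream[i] == 1)
        perm[j] = pos
    return perm, keystream

random.seed(1)
n = 8
secret_perm = random.sample(range(n), n)
secret_ks = [random.randrange(256) for _ in range(n)]
oracle = lambda img: encrypt(img, secret_perm, secret_ks)

perm, ks = recover_equivalent_key(oracle, n)
target = [17, 3, 99, 250, 0, 42, 7, 128]
assert encrypt(target, perm, ks) == oracle(target)  # equivalent key works
```

The attack needs only n + 1 chosen plaintexts, which is why a keystream independent of the plaintext is fatal for permutation-plus-XOR designs.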

Open Access Article
Entropy and Contrast Enhancement of Infrared Thermal Images Using the Multiscale Top-Hat Transform
Entropy 2019, 21(3), 244; https://doi.org/10.3390/e21030244
Received: 29 December 2018 / Revised: 20 February 2019 / Accepted: 25 February 2019 / Published: 4 March 2019
Cited by 1 | PDF Full-text (2091 KB) | HTML Full-text | XML Full-text
Abstract
Discrete entropy is used to measure the content of an image, where a higher value indicates an image with richer details. Infrared images are capable of revealing important hidden targets. The disadvantage of this type of image is that its low contrast and level of detail are not consistent with human visual perception. These problems can be caused by variations in the environment or by limitations of the cameras that capture the images. In this work, we propose a method that improves the details of infrared images, increasing their entropy, preserving their natural appearance, and enhancing contrast. The proposed method extracts multiple features of brightness and darkness from the infrared image by means of the multiscale top-hat transform. To improve the infrared image, the bright features at multiple scales are added and the dark features at multiple scales are subtracted. The method was tested on 450 infrared thermal images from a public database. Evaluation of the experimental results shows that the proposed method improves the details of the image by increasing entropy, while preserving the natural appearance and enhancing the contrast of infrared thermal images. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
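The enhancement rule described here, adding bright features and subtracting dark features extracted by top-hat transforms at several scales, can be sketched in 1-D (one image row) with flat structuring elements. This is illustrative only; the paper works on 2-D images:

```python
def min_filter(s, w):
    r = w // 2
    return [min(s[max(0, i - r):i + r + 1]) for i in range(len(s))]

def max_filter(s, w):
    r = w // 2
    return [max(s[max(0, i - r):i + r + 1]) for i in range(len(s))]

def opening(s, w):  return max_filter(min_filter(s, w), w)   # erosion, dilation
def closing(s, w):  return min_filter(max_filter(s, w), w)   # dilation, erosion

def multiscale_tophat_enhance(s, scales=(3, 5, 7)):
    """Add white top-hats (bright detail) and subtract black top-hats
    (dark detail) at every scale, as in multiscale top-hat enhancement."""
    out = list(s)
    for w in scales:
        op, cl = opening(s, w), closing(s, w)
        out = [o + (v - a) - (b - v) for o, v, a, b in zip(out, s, op, cl)]
    return out

row = [10, 10, 50, 10, 10, 10, 5, 10]   # one bright spike, one dark dip
print(multiscale_tophat_enhance(row))
```

The white top-hat (signal minus opening) isolates bright features narrower than the structuring element, and the black top-hat (closing minus signal) isolates dark ones; summing them over scales amplifies both kinds of detail at once.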

Open Access Article
Study on Asphalt Pavement Surface Texture Degradation Using 3-D Image Processing Techniques and Entropy Theory
Entropy 2019, 21(2), 208; https://doi.org/10.3390/e21020208
Received: 4 February 2019 / Revised: 17 February 2019 / Accepted: 18 February 2019 / Published: 21 February 2019
Cited by 1 | PDF Full-text (10466 KB) | HTML Full-text | XML Full-text
Abstract
Surface texture is a very important factor affecting the anti-skid performance of pavements. In this paper, entropy theory is introduced to study the decay behavior of the three-dimensional macrotexture and microtexture of road surfaces in service based on the field test data collected over more than 2 years. Entropy is found to be feasible for evaluating the three-dimensional macrotexture and microtexture of an asphalt pavement surface. The complexity of the texture increases with the increase of entropy. Under the polishing action of the vehicle load, the entropy of the surface texture decreases gradually. The three-dimensional macrotexture decay characteristics of asphalt pavement surfaces are significantly different for different mixture designs. The macrotexture decay performance of asphalt pavement can be improved by designing appropriate mixtures. Compared with the traditional macrotexture parameter Mean Texture Depth (MTD) index, entropy contains more physical information and has a better correlation with the pavement anti-skid performance index. It has significant advantages in describing the relationship between macrotexture characteristics and the anti-skid performance of asphalt pavement. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)

Open Access Article
Nonrigid Medical Image Registration Using an Information Theoretic Measure Based on Arimoto Entropy with Gradient Distributions
Entropy 2019, 21(2), 189; https://doi.org/10.3390/e21020189
Received: 12 December 2018 / Revised: 2 February 2019 / Accepted: 14 February 2019 / Published: 18 February 2019
Cited by 1 | PDF Full-text (3221 KB) | HTML Full-text | XML Full-text
Abstract
This paper introduces a new nonrigid registration approach for medical images applying an information theoretic measure based on Arimoto entropy with gradient distributions. A normalized dissimilarity measure based on Arimoto entropy is presented, which is employed to measure the independence between two images. In addition, a regularization term is integrated into the cost function to obtain a smooth elastic deformation. To take the spatial information between voxels into account, the distance of gradient distributions is constructed. The goal of nonrigid alignment is to find the optimal solution of a cost function including a dissimilarity measure, a regularization term, and a distance term between the gradient distributions of the two images to be registered, which achieves a minimum value when two misaligned images are perfectly registered, using the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) optimization scheme. To evaluate our presented algorithm for non-rigid medical image registration, experiments on simulated three-dimensional (3D) brain magnetic resonance (MR) images, real 3D thoracic computed tomography (CT) volumes, and 3D cardiac CT volumes were carried out using the elastix package. Comparison studies, including mutual information (MI) and the approach without spatial information, were conducted. The results demonstrate a slight improvement in the accuracy of non-rigid registration. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
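The Arimoto entropy underlying the dissimilarity measure has, in one common convention, the closed form sketched below; it reduces to the Shannon entropy (in nats) as the order alpha tends to 1. This is a sketch of the entropy itself, not of the registration cost:

```python
import math

def arimoto_entropy(p, alpha):
    """Arimoto entropy H_a = a/(1-a) * ((sum p_i^a)^(1/a) - 1); tends to
    the Shannon entropy (in nats) as a -> 1."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    s = sum(pi ** alpha for pi in p) ** (1.0 / alpha)
    return alpha / (1.0 - alpha) * (s - 1.0)

p = [0.5, 0.25, 0.25]
print(arimoto_entropy(p, 1.0))       # Shannon limit: 1.5 * ln 2 ≈ 1.0397
print(arimoto_entropy(p, 1.000001))  # ~ the same, confirming the limit
```

The free order parameter alpha is what a registration measure built on Arimoto entropy can tune, in the same spirit as Rényi- or Tsallis-based measures generalize mutual information.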

Open Access Article
Objective 3D Printed Surface Quality Assessment Based on Entropy of Depth Maps
Entropy 2019, 21(1), 97; https://doi.org/10.3390/e21010097
Received: 20 December 2018 / Revised: 16 January 2019 / Accepted: 18 January 2019 / Published: 21 January 2019
Cited by 1 | PDF Full-text (3497 KB) | HTML Full-text | XML Full-text
Abstract
The rapid development and growing popularity of additive manufacturing technology lead to new challenging tasks: reliably monitoring not only the progress of the 3D printing process but also the quality of the printed objects. The automatic objective assessment of the surface quality of 3D printed objects proposed in this paper, based on the analysis of depth maps, allows determining the quality of surfaces during printing for devices equipped with built-in 3D scanners. When low quality is detected, corrections can be made, or the printing process may be aborted to save filament, time, and energy. The entropy analysis of the 3D scans allows evaluating the surface regularity independently of the color of the filament, in contrast to many other possible methods based on the analysis of visible-light images. The results obtained using the proposed approach are encouraging, and a further combination of the proposed approach with camera-based methods might be possible as well. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)

Open Access Article
Reconstruction of PET Images Using Cross-Entropy and Field of Experts
Entropy 2019, 21(1), 83; https://doi.org/10.3390/e21010083
Received: 17 December 2018 / Revised: 7 January 2019 / Accepted: 14 January 2019 / Published: 18 January 2019
Cited by 1 | PDF Full-text (598 KB) | HTML Full-text | XML Full-text
Abstract
The reconstruction of positron emission tomography (PET) data is a difficult task, particularly at low count rates, because Poisson noise has a significant influence on the statistical uncertainty of PET measurements. Prior information is frequently used to improve image quality. In this paper, we propose the use of a field of experts to model a priori structure and capture anatomical spatial dependencies of the PET images, to address the problems of noise and low count data, which make the reconstruction of the image difficult. We reconstruct PET images by using a modified MXE algorithm, which minimizes an objective function with the cross-entropy as a fidelity term, while the field-of-experts model is incorporated as a regularizing term. Comparisons with the expectation maximization algorithm and an iterative method with a prior penalizing relative differences showed that the proposed method can lead to accurate estimation of the image, especially with acquisitions at low count rates. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)

Open Access Article
Uncertainty Assessment of Hyperspectral Image Classification: Deep Learning vs. Random Forest
Entropy 2019, 21(1), 78; https://doi.org/10.3390/e21010078
Received: 16 December 2018 / Revised: 10 January 2019 / Accepted: 10 January 2019 / Published: 16 January 2019
Cited by 3 | PDF Full-text (3969 KB) | HTML Full-text | XML Full-text
Abstract
Uncertainty assessment techniques have been extensively applied as an estimate of accuracy to compensate for weaknesses of traditional approaches. Traditional approaches to mapping accuracy assessment have been based on a confusion matrix, and hence are not only dependent on the availability of test data but also incapable of capturing the spatial variation in classification error. Here, we apply and compare two uncertainty assessment techniques that do not rely on test data availability and enable the spatial characterisation of classification accuracy before the validation phase, promoting the assessment of error propagation within the classified imagery products. We compared the performance of an emerging deep neural network (DNN) with the popular random forest (RF) technique. Uncertainty assessment was implemented by calculating the Shannon entropy of class probabilities predicted by DNN and RF for every pixel. The classification uncertainties of DNN and RF were quantified for two different hyperspectral image datasets—Salinas and Indian Pines. We then compared the uncertainty against the classification accuracy of the techniques, represented by a modified root mean square error (RMSE). The results indicate that, considering modified RMSE values for various sample sizes of both datasets, the entropy derived from the DNN algorithm is a better estimate of classification accuracy and hence provides a superior uncertainty estimate at the pixel level. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
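The per-pixel uncertainty measure described in the abstract — the Shannon entropy of predicted class probabilities — can be sketched in a few lines. This is not the authors' code; the 3-class probability array below is an invented toy example:

```python
import numpy as np

def pixelwise_entropy(probs):
    """Shannon entropy (in bits) of per-pixel class probabilities.

    probs: array of shape (H, W, n_classes); each pixel's probabilities
    sum to 1 (e.g. a DNN softmax output or RF class-vote fractions).
    """
    p = np.clip(probs, 1e-12, 1.0)          # avoid log(0)
    return -(p * np.log2(p)).sum(axis=-1)   # (H, W) uncertainty map

# A confident pixel has near-zero entropy; a maximally uncertain one
# reaches log2(n_classes).
probs = np.array([[[1.0, 0.0, 0.0],        # certain prediction
                   [1/3, 1/3, 1/3]]])      # uniform prediction
ent = pixelwise_entropy(probs)
```

The resulting map can be inspected before any test data exist, which is the point of the technique.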

Open Access Article
An Image Encryption Algorithm Based on Time-Delay and Random Insertion
Entropy 2018, 20(12), 974; https://doi.org/10.3390/e20120974
Received: 19 November 2018 / Revised: 9 December 2018 / Accepted: 13 December 2018 / Published: 15 December 2018
Abstract
This paper presents an image encryption algorithm based on a chaotic map. Unlike traditional methods based on the permutation–diffusion structure, the keystream here depends on both the secret keys and the pre-processed image. In particular, in the permutation stage, a middle parameter is designed to revise the outputs of the chaotic map, yielding a time-delay phenomenon. A diffusion operation is then applied after a group of random numbers is inserted into the permuted image. The gray-level distribution is thereby changed and differs from that of the plain image; this insertion acts as a one-time pad. Moreover, the keystream for the diffusion operation is designed to be influenced by secret keys assigned in the permutation stage. As a result, the two stages are mixed together to strengthen the algorithm as a whole. Experimental tests also suggest that our algorithm, permutation–insertion–diffusion (PID), performs well for secure image communication.
(This article belongs to the Special Issue Entropy in Image Analysis)
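The permutation–insertion–diffusion pipeline can be illustrated with a toy version. This is a generic sketch, not the paper's algorithm: a logistic map stands in for the paper's chaotic map, and the middle-parameter/time-delay mechanism is omitted:

```python
import numpy as np

def logistic_stream(x0, n, r=3.99):
    """Logistic-map chaotic sequence (a stand-in for the paper's map)."""
    xs = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1 - x0)
        xs[i] = x0
    return xs

def encrypt(img, key=0.3567):
    flat = img.flatten().astype(np.uint8)
    # Permutation stage: the sort order of a chaotic sequence scrambles pixels.
    perm = np.argsort(logistic_stream(key, flat.size))
    permuted = flat[perm]
    # Random insertion: fresh random bytes are prepended (one-time-pad-like),
    # changing the gray distribution of the data fed to diffusion.
    inserted = np.concatenate(
        [np.random.randint(0, 256, 8, dtype=np.uint8), permuted])
    # Diffusion stage: chained XOR so each output byte depends on its predecessor.
    ks = (logistic_stream(key / 2, inserted.size) * 256).astype(np.uint8)
    out = np.empty_like(inserted)
    prev = 0
    for i, b in enumerate(inserted):
        out[i] = b ^ ks[i] ^ prev
        prev = out[i]
    return out, perm

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
np.random.seed(0)
ct, perm = encrypt(img)
```

The ciphertext is 8 bytes longer than the plaintext because of the inserted random block.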

Open Access Article
A Novel Multi-Exposure Image Fusion Method Based on Adaptive Patch Structure
Entropy 2018, 20(12), 935; https://doi.org/10.3390/e20120935
Received: 29 October 2018 / Revised: 3 December 2018 / Accepted: 3 December 2018 / Published: 6 December 2018
Abstract
Multi-exposure image fusion methods are often applied to fuse low-dynamic-range images taken of the same scene at different exposure levels. The fused images not only contain more color and detail information, but also reproduce visual effects close to those perceived by the human eye. This paper proposes a novel multi-exposure image fusion (MEF) method based on an adaptive patch structure. The proposed algorithm combines image cartoon–texture decomposition, image patch structure decomposition, and the structural similarity index to improve the local contrast of the image. Moreover, the proposed method can capture more detailed information from the source images and produce more vivid high-dynamic-range (HDR) images. Specifically, image texture entropy values are used to evaluate local image information for the adaptive selection of the image patch size. An intermediate fused image is obtained by the proposed structure patch decomposition algorithm and then optimized using the structural similarity index to obtain the final fused HDR image. The results of comparative experiments show that the proposed method obtains high-quality HDR images with better visual effects and more detailed information.
(This article belongs to the Special Issue Entropy in Image Analysis)
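The entropy-driven patch-size selection can be sketched as follows. The sizes and thresholds here are invented for illustration, not taken from the paper:

```python
import numpy as np

def patch_entropy(patch, bins=256):
    """Shannon entropy of a patch's gray-level histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def adaptive_patch_size(patch, sizes=(21, 11, 5), thresholds=(2.0, 5.0)):
    """Smooth (low-entropy) regions get large patches; highly textured
    (high-entropy) regions get small ones to preserve detail."""
    h = patch_entropy(patch)
    if h < thresholds[0]:
        return sizes[0]
    if h < thresholds[1]:
        return sizes[1]
    return sizes[2]

np.random.seed(0)
smooth = np.full((16, 16), 128, dtype=np.uint8)                 # flat region
textured = np.random.randint(0, 256, (16, 16), dtype=np.uint8)  # noisy region
```

A flat patch has zero histogram entropy and so receives the largest patch size.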

Open Access Article
Ultrasound Entropy Imaging of Nonalcoholic Fatty Liver Disease: Association with Metabolic Syndrome
Entropy 2018, 20(12), 893; https://doi.org/10.3390/e20120893
Received: 17 October 2018 / Revised: 18 November 2018 / Accepted: 20 November 2018 / Published: 22 November 2018
Abstract
Nonalcoholic fatty liver disease (NAFLD) is the leading cause of advanced liver diseases. Fat accumulation in the liver changes the hepatic microstructure and the corresponding statistics of ultrasound backscattered signals. Acoustic structure quantification (ASQ) is a typical model-based method for analyzing backscattered statistics. Shannon entropy, initially proposed in information theory, has been demonstrated to be a more flexible solution for imaging and describing backscattered statistics, as it makes no assumption about the data distribution. Since NAFLD is a hepatic manifestation of metabolic syndrome (MetS), we investigated the association between ultrasound entropy imaging of NAFLD and MetS, for comparison with that obtained from ASQ. A total of 394 participants were recruited to undergo physical examinations and blood tests to diagnose MetS. Abdominal ultrasound screening of the liver was then performed to calculate the ultrasonographic fatty liver indicator (US-FLI) as a measure of NAFLD severity. ASQ analysis and ultrasound entropy parametric imaging were further applied to the raw image data to calculate the focal disturbance (FD) ratio and the entropy value, respectively. Tertiles were used to split the FD ratio and entropy data into three groups for statistical analysis; the correlation coefficient r, probability value p, and odds ratio (OR) were calculated. With an increase in the US-FLI, the entropy value increased (r = 0.713; p < 0.0001) and the FD ratio decreased (r = −0.630; p < 0.0001). In addition, both the entropy value and the FD ratio correlated with metabolic indices (p < 0.0001). After adjustment for confounding factors, entropy imaging (OR = 7.91, 95% confidence interval (CI): 0.96–65.18 for the second tertile; OR = 20.47, 95% CI: 2.48–168.67 for the third tertile; p = 0.0021) still provided a more significant link to the risk of MetS than did the FD ratio obtained from ASQ (OR = 0.55, 95% CI: 0.27–1.14 for the second tertile; OR = 0.42, 95% CI: 0.15–1.17 for the third tertile; p = 0.13). Thus, ultrasound entropy imaging can provide information on hepatic steatosis. In particular, it can describe the risk of MetS for individuals with NAFLD and is superior to the conventional ASQ technique.
(This article belongs to the Special Issue Entropy in Image Analysis)
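Entropy parametric imaging amounts to sliding a window over the backscattered envelope and mapping each position to the Shannon entropy of the local amplitude histogram. A simplified sketch (window size, bin count, and the random "envelope" are illustrative assumptions, not the study's settings):

```python
import numpy as np

def entropy_map(envelope, win=8, bins=16):
    """Map each win x win window position to the Shannon entropy of the
    local amplitude histogram (simplified entropy parametric imaging)."""
    H, W = envelope.shape
    lo, hi = float(envelope.min()), float(envelope.max())
    out = np.zeros((H - win + 1, W - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            hist, _ = np.histogram(envelope[i:i+win, j:j+win],
                                   bins=bins, range=(lo, hi))
            p = hist[hist > 0] / hist.sum()
            out[i, j] = -(p * np.log2(p)).sum()
    return out

np.random.seed(1)
envelope = np.abs(np.random.randn(32, 32))   # stand-in for an echo envelope
emap = entropy_map(envelope)
```

Each map value is bounded by log2(bins), here 4 bits.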

Open Access Article
Blind Image Quality Assessment of Natural Scenes Based on Entropy Differences in the DCT Domain
Entropy 2018, 20(11), 885; https://doi.org/10.3390/e20110885
Received: 29 October 2018 / Revised: 15 November 2018 / Accepted: 16 November 2018 / Published: 17 November 2018
Abstract
Blind/no-reference image quality assessment aims to accurately evaluate the perceptual quality of a distorted image without prior information from a reference image. In this paper, an effective blind image quality assessment approach for natural images, based on entropy differences in the discrete cosine transform (DCT) domain, is proposed. Information entropy is an effective measure of the amount of information in an image. We find that the DCT coefficient distribution of distorted natural images exhibits a pulse-shape phenomenon, which directly affects the entropy differences. A Weibull model is then used to fit the distributions of natural and distorted images, because it sufficiently approximates the pulse-shape phenomenon as well as the sharp-peak and heavy-tail phenomena described by natural scene statistics. Four features related to entropy differences and the human visual system are extracted from the Weibull model at three image scales. Image quality is assessed by support vector regression on the extracted features. This blind Weibull statistics algorithm is thoroughly evaluated on three widely used databases: LIVE, TID2008, and CSIQ. The experimental results show that the proposed method is highly consistent with human visual perception and outperforms state-of-the-art blind and full-reference image quality assessment methods in most cases.
(This article belongs to the Special Issue Entropy in Image Analysis)
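The core fitting step — modeling DCT coefficient magnitudes with a Weibull distribution — can be sketched as below. This is a minimal illustration, not the paper's feature pipeline (which extracts four entropy-related features at three scales):

```python
import numpy as np
from scipy.fft import dctn
from scipy.stats import weibull_min

def weibull_dct_features(img):
    """Fit a Weibull model to the magnitudes of an image's 2-D DCT
    coefficients (DC term excluded) and return (shape, scale)."""
    coeffs = dctn(img.astype(float), norm='ortho')
    mags = np.abs(coeffs).ravel()[1:]      # coeffs[0, 0] is the DC term
    mags = mags[mags > 1e-8]               # drop numerically zero coefficients
    c, loc, scale = weibull_min.fit(mags, floc=0)   # location fixed at 0
    return c, scale

np.random.seed(2)
img = np.random.rand(32, 32)               # stand-in for a natural image patch
c, scale = weibull_dct_features(img)
```

Distortions shift the fitted shape and scale, which is what the extracted features pick up.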

Open Access Article
Improved Cryptanalysis and Enhancements of an Image Encryption Scheme Using Combined 1D Chaotic Maps
Entropy 2018, 20(11), 843; https://doi.org/10.3390/e20110843
Received: 4 October 2018 / Revised: 29 October 2018 / Accepted: 31 October 2018 / Published: 3 November 2018
Abstract
This paper presents an improved cryptanalysis of a chaos-based image encryption scheme that integrates permutation, diffusion, and linear transformation processes. We found that the equivalent keystreams and all the unknown parameters of the cryptosystem can be recovered by our chosen-plaintext attack algorithm; both a theoretical analysis and an experimental validation are given in detail. Based on the analysis of the defects in the original cryptosystem, an improved color image encryption scheme was further developed. By using an image-content-related approach to generate the diffusion arrays and by interweaving diffusion and confusion, the security of the cryptosystem is enhanced. The experimental results and security analysis demonstrate the superior security of the improved cryptosystem.
(This article belongs to the Special Issue Entropy in Image Analysis)
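The principle behind a chosen-plaintext keystream recovery can be shown on a deliberately weak toy cipher (not the attacked scheme): when the keystream does not depend on the plaintext, encrypting an all-zero image leaks it directly.

```python
import numpy as np

def xor_cipher(img, keystream):
    """A deliberately weak cipher: the keystream never changes with the
    plaintext, which is exactly the flaw a chosen-plaintext attack exploits."""
    return img ^ keystream

np.random.seed(7)
keystream = np.random.randint(0, 256, 64, dtype=np.uint8)

# Chosen plaintext: an all-zero image leaks the keystream, since 0 XOR k = k.
recovered = xor_cipher(np.zeros(64, dtype=np.uint8), keystream)

# The recovered keystream then decrypts any other ciphertext.
secret = np.random.randint(0, 256, 64, dtype=np.uint8)
plain_again = xor_cipher(xor_cipher(secret, keystream), recovered)
```

Making the keystream depend on the image content, as the improved scheme does, blocks this attack.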

Open Access Article
Encryption Algorithm of Multiple-Image Using Mixed Image Elements and Two Dimensional Chaotic Economic Map
Entropy 2018, 20(10), 801; https://doi.org/10.3390/e20100801
Received: 15 September 2018 / Revised: 15 October 2018 / Accepted: 16 October 2018 / Published: 18 October 2018
Abstract
To enhance encryption proficiency and support the protected transmission of multiple images, this work introduces an encryption algorithm for multiple images that combines mixed image elements (MIES) with a two-dimensional chaotic economic map. Firstly, the original images are grouped into one big image, which is split into many pure image elements (PIES); secondly, the logistic map is used to shuffle the PIES; thirdly, they are confused with the sequence produced by the two-dimensional economic map to obtain the MIES; finally, the MIES are gathered into a big encrypted image, which is split into many images of the same size as the originals. The proposed algorithm has a very large key space, which makes it secure against brute-force attack. Moreover, its encryption results outperform those of existing algorithms in the literature; a comparison between the proposed algorithm and similar algorithms is provided. The analysis of the experimental results shows that the proposed algorithm is efficient and secure.
(This article belongs to the Special Issue Entropy in Image Analysis)
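The group-shuffle-split structure can be sketched as below. This is a simplified illustration: a seeded pseudo-random permutation of rows stands in for the chaotic maps, and the confusion step is omitted.

```python
import numpy as np

def shuffle_elements(big, seed):
    """Permute the rows ('image elements') of the grouped image with a
    key-seeded permutation; argsort of the permutation inverts it."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(big.shape[0])
    return big[perm], perm

def unshuffle_elements(shuffled, perm):
    return shuffled[np.argsort(perm)]

# Group two 4x8 images side by side into one big image, then shuffle rows.
a = np.arange(32, dtype=np.uint8).reshape(4, 8)
b = np.arange(32, 64, dtype=np.uint8).reshape(4, 8)
big = np.concatenate([a, b], axis=1)
enc, perm = shuffle_elements(big, seed=123)
dec = unshuffle_elements(enc, perm)
```

Because the permutation is invertible, the receiver with the same key recovers the grouped image exactly before splitting it back into the originals.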

Open Access Article
Video Summarization for Sign Languages Using the Median of Entropy of Mean Frames Method
Entropy 2018, 20(10), 748; https://doi.org/10.3390/e20100748
Received: 4 September 2018 / Revised: 27 September 2018 / Accepted: 27 September 2018 / Published: 29 September 2018
Abstract
Multimedia information requires large repositories of audio-video data. The retrieval and delivery of video content is a very time-consuming process and a great challenge for researchers. Video summarization is an efficient approach for faster browsing of large video collections and for more efficient content indexing and access; compressing the data by extracting keyframes addresses these challenges. A keyframe is a frame representative of the salient features of the video, and the output frames must represent the original video in temporal order. The proposed research presents a method of keyframe extraction using the mean of k consecutive frames of video data. A sliding window of size k/2 is then employed to select the frame that matches the median entropy value of the window. This is called the Median of Entropy of Mean Frames (MME) method: mean-based keyframe selection using the median of the entropy within a sliding window. The method was tested on more than 500 videos of sign language gestures and showed satisfactory results.
(This article belongs to the Special Issue Entropy in Image Analysis)
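The MME pipeline — average runs of k frames, then pick the mean frame closest to the median entropy in each window of size k/2 — can be sketched as follows. This is a reconstruction from the abstract, with invented bin counts and toy frames, not the authors' implementation:

```python
import numpy as np

def frame_entropy(frame, bins=64):
    """Shannon entropy of a frame's gray-level histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def mme_keyframes(frames, k=4):
    """Average each run of k frames, then slide a window of k // 2 mean
    frames and keep the one whose entropy is closest to the window median."""
    means = [np.mean(frames[i:i+k], axis=0)
             for i in range(0, len(frames) - k + 1, k)]
    ents = [frame_entropy(m) for m in means]
    w = max(k // 2, 1)
    keys = []
    for i in range(0, len(means) - w + 1, w):
        window = ents[i:i+w]
        med = np.median(window)
        keys.append(i + int(np.argmin([abs(e - med) for e in window])))
    return keys   # indices into the mean-frame sequence, in temporal order

np.random.seed(4)
frames = np.random.randint(0, 256, (20, 8, 8)).astype(float)
keys = mme_keyframes(frames, k=4)
```

Selected indices stay ordered, so the summary preserves the video's temporal order.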

Open Access Article
A New Image Encryption Algorithm Based on Chaos and Secure Hash SHA-256
Entropy 2018, 20(9), 716; https://doi.org/10.3390/e20090716
Received: 23 August 2018 / Revised: 16 September 2018 / Accepted: 17 September 2018 / Published: 19 September 2018
Abstract
To overcome the difficulty of key management in "one-time pad" encryption schemes and to resist chosen-plaintext attacks, a new image encryption algorithm based on chaos and SHA-256 is proposed in this paper. The architecture of confusion and diffusion is adopted. Firstly, the plaintext image is surrounded by a sequence generated from the SHA-256 hash value of the plaintext, ensuring that each encryption result is different. Secondly, the image is scrambled according to a random sequence obtained by adding a plaintext-dependent disturbance term to the chaotic sequence. Thirdly, a ciphertext (plaintext) feedback mechanism with a dynamic index is adopted in the diffusion stage; that is, the location index of the ciphertext (plaintext) used for feedback is dynamic. These measures ensure that the algorithm can resist chosen-plaintext attacks and overcome the difficulty of key management in "one-time pad" encryption schemes. Experimental results, including key space analysis, key sensitivity analysis, differential analysis, histograms, information entropy, and correlation coefficients, show that the image encryption algorithm is safe and reliable, with high application potential.
(This article belongs to the Special Issue Entropy in Image Analysis)
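The key idea of tying the keystream to the plaintext via SHA-256 can be sketched as below. The derivation of the initial condition and the logistic-map keystream are illustrative choices, not the paper's exact construction:

```python
import hashlib
import numpy as np

def x0_from_plaintext(img_bytes, user_key=b'secret'):
    """Derive a logistic-map initial condition from the SHA-256 hash of the
    plaintext, so every image produces a different keystream."""
    digest = hashlib.sha256(user_key + img_bytes).digest()
    x0 = int.from_bytes(digest[:8], 'big') / 2**64   # map 8 bytes into [0, 1)
    return min(max(x0, 1e-9), 1 - 1e-9)             # keep strictly inside (0, 1)

def keystream(x0, n, r=3.99):
    """Byte keystream from iterating the logistic map."""
    out = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

img = bytes(range(64))
x0 = x0_from_plaintext(img)
ks = keystream(x0, len(img))
```

Because the hash enters the key derivation, two different plaintexts yield different chaotic trajectories, which is what defeats chosen-plaintext attacks on a fixed keystream.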

Open Access Article
An Adaptive Weight Method for Image Retrieval Based Multi-Feature Fusion
Entropy 2018, 20(8), 577; https://doi.org/10.3390/e20080577
Received: 23 June 2018 / Revised: 30 July 2018 / Accepted: 31 July 2018 / Published: 6 August 2018
Abstract
With the rapid development of information storage technology and the spread of the Internet, large-capacity image databases with diverse content have been generated, making it imperative to establish automatic and efficient image retrieval systems. This paper proposes a novel adaptive weighting method based on entropy theory and relevance feedback. Firstly, we obtain the trust of each single feature by relevance feedback (supervised) or entropy (unsupervised). Then, we construct a transfer matrix based on the trust values. Finally, based on the transfer matrix, we obtain the weight of each single feature through several iterations. The method has three outstanding advantages: (1) the retrieval system combines the performance of multiple features and has better retrieval accuracy and generalization ability than a single-feature retrieval system; (2) in each query, the weight of each single feature is updated dynamically with the query image, allowing the retrieval system to make full use of the performance of the individual features; (3) the method can be applied in both supervised and unsupervised settings. The experimental results show that our method significantly outperforms previous approaches: the top-20 retrieval accuracy is 97.09%, 92.85%, and 94.42% on the Wang, UC Merced Land Use, and RSSCN7 datasets, respectively, and the Mean Average Precision is 88.45% on the Holidays dataset.
(This article belongs to the Special Issue Entropy in Image Analysis)
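The unsupervised branch — deriving a per-feature "trust" from entropy and normalizing trusts into fusion weights — can be sketched as below. This is a simplified single-pass version; the transfer matrix and its iterations are omitted, and the trust formula is an illustrative assumption:

```python
import numpy as np

def entropy_trust(dists, bins=16):
    """Unsupervised 'trust' for one feature: a concentrated (low-entropy)
    distance distribution suggests a more discriminative feature."""
    hist, _ = np.histogram(dists, bins=bins)
    p = hist[hist > 0] / hist.sum()
    h = -(p * np.log2(p)).sum()
    return 1.0 / (1.0 + h)          # illustrative mapping: lower entropy -> higher trust

def fuse_weights(per_feature_dists):
    """Normalize per-feature trusts into fusion weights summing to 1."""
    trusts = np.array([entropy_trust(d) for d in per_feature_dists])
    return trusts / trusts.sum()

np.random.seed(3)
color_d = np.random.rand(100)       # spread-out distances: high entropy
texture_d = np.full(100, 0.5)       # concentrated distances: zero entropy
w = fuse_weights([color_d, texture_d])
```

The fused distance for a query is then the weighted sum of the single-feature distances, with weights recomputed per query.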

Entropy EISSN 1099-4300 Published by MDPI AG, Basel, Switzerland