Application of Information Theory to Computer Vision and Image Processing II

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 25 February 2025 | Viewed by 10807

Special Issue Editors


Dr. Wendy Flores-Fuentes
Guest Editor
Facultad de Ingeniería, Universidad Autónoma de Baja California, Mexicali 21376, Mexico
Interests: fourth industrial revolution; artificial intelligence; cybersystems

Dr. Oleg Sergiyenko
Guest Editor
Department of Applied Physics, Autonomous University of Baja California, Mexicali 21100, Mexico
Interests: automated metrology; 3D coordinate measurement; robotic navigation; machine vision; simulation of robotic swarm behaviour

Prof. Dr. Julio Cesar Rodríguez-Quiñonez
Guest Editor
Engineering Faculty, Universidad Autónoma de Baja California, Mexicali 21100, Mexico
Interests: machine vision; stereo vision; laser systems; scanner control; digital image processing

Dr. Jesús Elías Miranda-Vega
Guest Editor
Department of Computer Systems, Tecnológico Nacional de México, IT de Mexicali, Mexicali 21376, Mexico
Interests: machine vision; stereo vision; laser systems; scanner control; analog and digital processing

Special Issue Information

Dear Colleagues,

We are pleased to announce that, following the success of “Application of Information Theory to Computer Vision and Image Processing”, a new Special Issue titled “Application of Information Theory to Computer Vision and Image Processing II” is now open to continue collecting relevant papers on related topics.

The application of information theory to computer vision and image processing has contributed significantly to advancing the understanding and capabilities of computer science. Mathematical methods are applied to signal and image processing to quantify and extract accurate information with ever greater efficiency. Information theory provides valuable tools and techniques for developing intelligent and adaptive machine vision systems that measure and analyze the amount of information contained in a signal or an image. Entropy, in particular, estimates the average amount of uncertainty or randomness in a dataset: high entropy indicates a higher level of unpredictability, while low entropy suggests a more predictable and structured dataset.
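As a concrete illustration of the entropy measure described above, here is a minimal Python sketch that estimates the Shannon entropy of an 8-bit grayscale image from its intensity histogram. It is provided for orientation only and is not part of the Special Issue text; the array `img` and the example images are assumptions.

```python
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy (in bits) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()          # intensity probabilities
    p = p[p > 0]                   # empty bins contribute 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)))

# A uniform-noise image approaches the 8-bit maximum of 8 bits,
# while a constant image has zero entropy.
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
flat = np.zeros((64, 64), dtype=np.uint8)
print(shannon_entropy(noisy))  # close to 8 bits: highly unpredictable
print(shannon_entropy(flat))   # 0 bits: fully predictable
```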

This Special Issue aims to publish work on information theory, measurement methods, data processing, and the tools and techniques used in the design and instrumentation of machine vision systems through computer vision and image processing, for analyzing, processing, and understanding visual data based on principles of information content, redundancy, and statistical properties.

Dr. Wendy Flores-Fuentes
Dr. Oleg Sergiyenko
Prof. Dr. Julio Cesar Rodríguez-Quiñonez
Dr. Jesús Elías Miranda-Vega
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • information theory
  • entropy and coding theory (data compression, watermark, minimizing data loss, visual information in a more compact form, transmission, storage)
  • computer vision (identify relevant features and patterns)
  • machine vision (data analysis and understanding, segmentation, registration, denoising and restoration, object recognition, classification and tracking)
  • cyber-physical systems
  • instrumentation
  • signal and image processing
  • measurements (3D spatial coordinates, redundancy, statistical properties)
  • artificial intelligence
  • applications (navigation, surveillance, facial recognition, medicine, robotics, entertainment, and more)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (8 papers)


Research

19 pages, 9600 KiB  
Article
A Hierarchical Neural Network for Point Cloud Segmentation and Geometric Primitive Fitting
by Honghui Wan and Feiyu Zhao
Entropy 2024, 26(9), 717; https://doi.org/10.3390/e26090717 - 23 Aug 2024
Viewed by 580
Abstract
Automated generation of geometric models from point cloud data holds significant importance in the field of computer vision and has expansive applications, such as shape modeling and object recognition. However, prevalent methods exhibit accuracy issues. In this study, we introduce a novel hierarchical neural network that utilizes recursive PointConv operations on nested subdivisions of point sets. This network effectively extracts features, segments point clouds, and accurately identifies and computes parameters of regular geometric primitives with notable resilience to noise. On fine-grained primitive detection, our approach outperforms Supervised Primitive Fitting Network (SPFN) by 18.5% and Cascaded Primitive Fitting Network (CPFN) by 11.2%. Additionally, our approach consistently maintains low absolute errors in parameter prediction across varying noise levels in the point cloud data. Our experiments validate the robustness of our proposed method and establish its superiority relative to other methodologies in the extant literature.
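The nested subdivision of point sets mentioned in this abstract can be pictured with a simple farthest-point-sampling split. The NumPy sketch below is a generic illustration only, not the authors' PointConv-based network; all function names and the sample cloud are assumptions.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """Pick k well-spread seed points from an (N, 3) cloud; returns their indices."""
    n = points.shape[0]
    chosen = np.zeros(k, dtype=int)
    dist = np.full(n, np.inf)
    for i in range(1, k):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[i - 1]], axis=1))
        chosen[i] = int(np.argmax(dist))
    return chosen

def nested_subdivision(points: np.ndarray, k: int) -> list[np.ndarray]:
    """Assign every point to its nearest seed, yielding k local subsets."""
    seeds = points[farthest_point_sampling(points, k)]
    labels = np.argmin(np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=2), axis=1)
    return [points[labels == j] for j in range(k)]

# Example: split a random cloud into 4 local neighbourhoods, which a
# hierarchical network could then process recursively at decreasing resolution.
cloud = np.random.default_rng(1).random((1024, 3))
subsets = nested_subdivision(cloud, 4)
print([s.shape for s in subsets])
```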

19 pages, 43879 KiB  
Article
3D Data Processing and Entropy Reduction for Reconstruction from Low-Resolution Spatial Coordinate Clouds in a Technical Vision System
by Ivan Y. Alba Corpus, Wendy Flores-Fuentes, Oleg Sergiyenko, Julio C. Rodríguez-Quiñonez, Jesús E. Miranda-Vega, Wendy Garcia-González and José A. Núñez-López
Entropy 2024, 26(8), 646; https://doi.org/10.3390/e26080646 - 30 Jul 2024
Viewed by 781
Abstract
This paper proposes an advancement in the application of a Technical Vision System (TVS), which integrates a laser scanning mechanism with a single light sensor to measure 3D spatial coordinates. In this application, the system is used to scan and digitalize objects using a rotating table to explore the potential of the system for 3D scanning at reduced resolutions. The experiments undertaken searched for optimal scanning windows and used statistical data filtering techniques and regression models to find a method to generate a 3D scan that was still recognizable with the least amount of 3D points, balancing the number of points scanned and time, while at the same time reducing effects caused by the particularities of the TVS, such as noise and entropy in the form of natural distortion in the resulting scans. The evaluation of the experimentation results uses 3D point registration methods, joining multiple faces from the original volume scanned by the TVS and aligning it to the ground truth model point clouds, which are based on a commercial 3D camera to verify that the reconstructed 3D model retains substantial detail from the original object. This research finds it is possible to reconstruct sufficiently detailed 3D models obtained from the TVS, which contain coarsely scanned data or scans that initially lack high definition or are too noisy.
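As an aside on the statistical data filtering step mentioned here, the following sketch removes points whose mean distance to their nearest neighbours is anomalously large. It is a generic SciPy/NumPy illustration with assumed parameters (`k`, `std_ratio`), not the actual TVS processing pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points: np.ndarray, k: int = 8, std_ratio: float = 2.0) -> np.ndarray:
    """Drop points whose mean k-nearest-neighbour distance exceeds mean + std_ratio * std."""
    tree = cKDTree(points)
    # query returns the point itself as the first neighbour, so ask for k + 1
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

# Example: a dense noisy scan plus a few far-away spurious points.
rng = np.random.default_rng(0)
scan = np.vstack([rng.normal(0, 0.05, (2000, 3)), rng.uniform(5, 6, (20, 3))])
clean = remove_statistical_outliers(scan)
print(scan.shape, "->", clean.shape)
```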

18 pages, 5022 KiB  
Article
(HTBNet)Arbitrary Shape Scene Text Detection with Binarization of Hyperbolic Tangent and Cross-Entropy
by Zhao Chen
Entropy 2024, 26(7), 560; https://doi.org/10.3390/e26070560 - 29 Jun 2024
Viewed by 631
Abstract
The existing segmentation-based scene text detection methods mostly need complicated post-processing, and the post-processing operation is separated from the training process, which greatly reduces the detection performance. The previous method, DBNet, successfully simplified post-processing and integrated post-processing into a segmentation network. However, the training process of the model took a long time for 1200 epochs and the sensitivity to texts of various scales was lacking, leading to some text instances being missed. Considering the above two problems, we design the text detection Network with Binarization of Hyperbolic Tangent (HTBNet). First of all, we propose the Binarization of Hyperbolic Tangent (HTB), optimized along with which the segmentation network can expedite the initial convergent speed by reducing the number of epochs from 1200 to 600. Because features of different channels in the same scale feature map focus on the information of different regions in the image, to better represent the important features of all objects in the image, we devise the Multi-Scale Channel Attention (MSCA). Meanwhile, considering that multi-scale objects in the image cannot be simultaneously detected, we propose a novel module named Fused Module with Channel and Spatial (FMCS), which can fuse the multi-scale feature maps from channel and spatial dimensions. Finally, we adopt cross-entropy as the loss function, which measures the difference between predicted values and ground truths. The experimental results show that HTBNet, compared with lightweight models, has achieved competitive performance and speed on Total-Text (F-measure:86.0%, FPS:30) and MSRA-TD500 (F-measure:87.5%, FPS:30).
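To give a feel for the hyperbolic-tangent binarization idea in isolation, the PyTorch sketch below applies a steep tanh to the difference between a probability map and a threshold map and scores the result with binary cross-entropy. The steepness `k`, tensor shapes, and threshold value are assumptions; this is not the HTBNet implementation.

```python
import torch
import torch.nn.functional as F

def tanh_binarization(prob_map: torch.Tensor, thresh_map: torch.Tensor, k: float = 10.0) -> torch.Tensor:
    """Soft binarization: squashes (prob - thresh) through a steep tanh into (0, 1)."""
    return 0.5 * (1.0 + torch.tanh(k * (prob_map - thresh_map)))

# Toy tensors: predicted text-probability map, threshold map, and ground-truth mask.
prob = torch.rand(1, 1, 64, 64, requires_grad=True)
thresh = torch.full((1, 1, 64, 64), 0.3)
gt = (torch.rand(1, 1, 64, 64) > 0.5).float()

binary = tanh_binarization(prob, thresh)    # stays differentiable, unlike a hard threshold
loss = F.binary_cross_entropy(binary, gt)   # cross-entropy against the ground truth
loss.backward()                             # gradients flow back through the binarization
print(float(loss))
```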

35 pages, 6001 KiB  
Article
Lossless and Near-Lossless Compression Algorithms for Remotely Sensed Hyperspectral Images
by Amal Altamimi and Belgacem Ben Youssef
Entropy 2024, 26(4), 316; https://doi.org/10.3390/e26040316 - 5 Apr 2024
Cited by 2 | Viewed by 1826
Abstract
Rapid and continuous advancements in remote sensing technology have resulted in finer resolutions and higher acquisition rates of hyperspectral images (HSIs). These developments have triggered a need for new processing techniques brought about by the confined power and constrained hardware resources aboard satellites. This article proposes two novel lossless and near-lossless compression methods, employing our recent seed generation and quadrature-based square rooting algorithms, respectively. The main advantage of the former method lies in its acceptable complexity utilizing simple arithmetic operations, making it suitable for real-time onboard compression. In addition, this near-lossless compressor could be incorporated for hard-to-compress images offering a stabilized reduction at nearly 40% with a maximum relative error of 0.33 and a maximum absolute error of 30. Our results also show that a lossless compression performance, in terms of compression ratio, of up to 2.6 is achieved when testing with hyperspectral images from the Corpus dataset. Further, an improvement in the compression rate over the state-of-the-art k2-raster technique is realized for most of these HSIs by all four variations of our proposed lossless compression method. In particular, a data reduction enhancement of up to 29.89% is realized when comparing their respective geometric mean values.
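For context on the near-lossless setting referred to in this abstract, the NumPy sketch below quantizes prediction residuals under a user-chosen maximum absolute error and estimates the resulting compression ratio from the residual entropy. The predictor, error bound, and toy band are assumptions; this is not the authors' seed-generation or quadrature-based square-rooting method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 12-bit band: a smooth horizontal ramp with mild sensor noise.
x = np.linspace(0, 4095, 128)
band = (np.tile(x, (128, 1)) + rng.normal(0, 5, (128, 128))).clip(0, 4095).astype(np.uint16)

max_abs_error = 3
step = 2 * max_abs_error + 1

# Previous-pixel prediction (open loop here for brevity; a real codec predicts
# from already-reconstructed pixels so the decoder can follow along).
prediction = np.roll(band.astype(np.int32), 1, axis=1)
prediction[:, 0] = 0
residual = band.astype(np.int32) - prediction

# Uniform residual quantization guarantees |reconstruction - original| <= max_abs_error.
q = np.round(residual / step).astype(np.int32)
reconstruction = prediction + q * step
print("max abs error:", np.abs(reconstruction - band.astype(np.int32)).max())

# Rough compression-ratio estimate from the entropy of the quantized residuals.
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
bits_per_sample = float(-(p * np.log2(p)).sum())
print("approx. compression ratio vs. 16-bit samples:", 16 / bits_per_sample)
```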

18 pages, 7216 KiB  
Article
Style-Enhanced Transformer for Image Captioning in Construction Scenes
by Kani Song, Linlin Chen and Hengyou Wang
Entropy 2024, 26(3), 224; https://doi.org/10.3390/e26030224 - 1 Mar 2024
Viewed by 1395
Abstract
Image captioning is important for improving the intelligence of construction projects and assisting managers in mastering construction site activities. However, there are few image-captioning models for construction scenes at present, and the existing methods do not perform well in complex construction scenes. According to the characteristics of construction scenes, we label a text description dataset based on the MOCS dataset and propose a style-enhanced Transformer for image captioning in construction scenes, simply called SETCAP. Specifically, we extract the grid features using the Swin Transformer. Then, to enhance the style information, we not only use the grid features as the initial detail semantic features but also extract style information by style encoder. In addition, in the decoder, we integrate the style information into the text features. The interaction between the image semantic information and the text features is carried out to generate content-appropriate sentences word by word. Finally, we add the sentence style loss into the total loss function to make the style of generated sentences closer to the training set. The experimental results show that the proposed method achieves encouraging results on both the MSCOCO and the MOCS datasets. In particular, SETCAP outperforms state-of-the-art methods by 4.2% CIDEr scores on the MOCS dataset and 3.9% CIDEr scores on the MSCOCO dataset, respectively.
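The general idea of injecting style information into a captioning decoder's text features can be sketched as adding a learned style embedding to the token embeddings before a standard Transformer decoder. The snippet below is an illustrative assumption (class name, sizes, and the omission of a causal mask included), not the SETCAP architecture.

```python
import torch
import torch.nn as nn

class StyleConditionedDecoder(nn.Module):
    """Toy captioning decoder that adds a learned style embedding to every token embedding."""
    def __init__(self, vocab_size=10000, d_model=512, num_styles=4):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.style = nn.Embedding(num_styles, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, image_features, style_id):
        # Inject style: every text token embedding is shifted by the style vector.
        x = self.tok(tokens) + self.style(style_id).unsqueeze(1)
        x = self.decoder(x, memory=image_features)   # cross-attend to image grid features
        return self.out(x)                           # per-token vocabulary logits

# Toy forward pass: 2 captions of 12 tokens, 16 image grid features each, one style index per sample.
model = StyleConditionedDecoder()
logits = model(torch.randint(0, 10000, (2, 12)), torch.randn(2, 16, 512), torch.tensor([0, 3]))
print(logits.shape)  # (2, 12, 10000)
```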

16 pages, 27918 KiB  
Article
Adaptive Dual Aggregation Network with Normalizing Flows for Low-Light Image Enhancement
by Hua Wang, Jianzhong Cao and Jijiang Huang
Entropy 2024, 26(3), 184; https://doi.org/10.3390/e26030184 - 22 Feb 2024
Viewed by 1259
Abstract
Low-light image enhancement (LLIE) aims to improve the visual quality of images taken under complex low-light conditions. Recent works focus on carefully designing Retinex-based methods or end-to-end networks based on deep learning for LLIE. However, these works usually utilize pixel-level error functions to optimize models and have difficulty effectively modeling the real visual errors between the enhanced images and the normally exposed images. In this paper, we propose an adaptive dual aggregation network with normalizing flows (ADANF) for LLIE. First, an adaptive dual aggregation encoder is built to fully explore the global properties and local details of the low-light images for extracting illumination-robust features. Next, a reversible normalizing flow decoder is utilized to model real visual errors between enhanced and normally exposed images by mapping images into underlying data distributions. Finally, to further improve the quality of the enhanced images, a gated multi-scale information transmitting module is leveraged to introduce the multi-scale information from the adaptive dual aggregation encoder into the normalizing flow decoder. Extensive experiments on paired and unpaired datasets have verified the effectiveness of the proposed ADANF.
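For readers unfamiliar with normalizing flows, the PyTorch sketch below implements a single RealNVP-style affine coupling layer, the invertible building block such flow decoders are typically assembled from. It is a generic illustration with assumed layer sizes, not the ADANF model.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible coupling layer: half the channels transform the other half."""
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(
            nn.Conv2d(half, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * half, 3, padding=1),
        )

    def forward(self, x):
        xa, xb = x.chunk(2, dim=1)
        log_s, t = self.net(xa).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                 # keep scales well-behaved
        yb = xb * torch.exp(log_s) + t
        log_det = log_s.flatten(1).sum(dim=1)     # contribution to the log-likelihood
        return torch.cat([xa, yb], dim=1), log_det

    def inverse(self, y):
        ya, yb = y.chunk(2, dim=1)
        log_s, t = self.net(ya).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        return torch.cat([ya, (yb - t) * torch.exp(-log_s)], dim=1)

# Round-trip check on a toy feature map: the layer is exactly invertible.
layer = AffineCoupling(8)
x = torch.randn(1, 8, 32, 32)
y, log_det = layer(x)
print(torch.allclose(layer.inverse(y), x, atol=1e-5), log_det.shape)
```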

21 pages, 5149 KiB  
Article
A Real-Time and Robust Neural Network Model for Low-Measurement-Rate Compressed-Sensing Image Reconstruction
by Pengchao Chen, Huadong Song, Yanli Zeng, Xiaoting Guo and Chaoqing Tang
Entropy 2023, 25(12), 1648; https://doi.org/10.3390/e25121648 - 12 Dec 2023
Viewed by 1163
Abstract
Compressed sensing (CS) is a popular data compression theory for many computer vision tasks, but the high reconstruction complexity for images prevents it from being used in many real-world applications. Existing end-to-end learning methods achieved real time sensing but lack theory guarantee for robust reconstruction results. This paper proposes a neural network called RootsNet, which integrates the CS mechanism into the network to prevent error propagation. So, RootsNet knows what will happen if some modules in the network go wrong. It also implements real-time and successfully reconstructed extremely low measurement rates that are impossible for traditional optimization-theory-based methods. For qualitative validation, RootsNet is implemented in two real-world measurement applications, i.e., a near-field microwave imaging system and a pipeline inspection system, where RootsNet easily saves 60% more measurement time and 95% more data compared with the state-of-the-art optimization-theory-based reconstruction methods. Without losing generality, comprehensive experiments are performed on general datasets, including evaluating the key components in RootsNet, the reconstruction uncertainty, quality, and efficiency. RootsNet has the best uncertainty performance and efficiency, and achieves the best reconstruction quality under super low-measurement rates.
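As background on the compressed-sensing setup and the notion of a measurement rate, the NumPy sketch below builds the linear measurement model y = Φx for a sparse signal. The dimensions and the Gaussian sensing matrix are assumptions for illustration; recovering x from y is the reconstruction problem that learned methods such as the one described here aim to solve.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1024          # signal dimension (e.g., a flattened 32x32 image block)
m = 102           # number of linear measurements -> measurement rate m / n = 0.1
phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random Gaussian sensing matrix

# A k-sparse test signal: only k of the n coefficients are non-zero.
k = 20
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

y = phi @ x       # compressed measurements; reconstruction must recover x from y
print("measurement rate:", m / n, "| measurements:", y.shape[0])
```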

17 pages, 5045 KiB  
Article
Part-Aware Point Cloud Completion through Multi-Modal Part Segmentation
by Fuyang Yu, Runze Tian, Xuanjun Wang and Xiaohui Liang
Entropy 2023, 25(12), 1588; https://doi.org/10.3390/e25121588 - 27 Nov 2023
Viewed by 1400
Abstract
Point cloud completion aims to generate high-resolution point clouds using incomplete point clouds as input and is the foundational task for many 3D visual applications. However, most existing methods suffer from issues related to rough localized structures. In this paper, we attribute these problems to the lack of attention to local details in the global optimization methods used for the task. Thus, we propose a new model, called PA-NET, to guide the network to pay more attention to local structures. Specifically, we first use textual embedding to assist in training a robust point assignment network, enabling the transformation of global optimization into the co-optimization of local and global aspects. Then, we design a novel plug-in module using the assignment network and introduce a new loss function to guide the network’s attention towards local structures. Numerous experiments were conducted, and the quantitative results demonstrate that our method achieves novel performance on different datasets. Additionally, the visualization results show that our method efficiently resolves the issue of poor local structures in the generated point cloud.
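Completion quality for point clouds is commonly reported with the Chamfer distance between the predicted and ground-truth clouds. The SciPy sketch below computes that standard metric for context; it is not the part-aware loss proposed in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two (N, 3) point clouds."""
    d_ab, _ = cKDTree(b).query(a)   # nearest neighbour in b for every point of a
    d_ba, _ = cKDTree(a).query(b)   # nearest neighbour in a for every point of b
    return float((d_ab ** 2).mean() + (d_ba ** 2).mean())

# Example: a completed cloud versus a slightly perturbed ground truth.
rng = np.random.default_rng(0)
gt = rng.random((2048, 3))
pred = gt + rng.normal(0, 0.01, gt.shape)
print(chamfer_distance(pred, gt))
```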
