Search Results (53)

Search Parameters:
Keywords = JPEG standard

7 pages, 1414 KiB  
Proceeding Paper
Improved Low Complexity Predictor for Block-Based Lossless Image Compression
by Huang-Chun Hsu, Jian-Jiun Ding and De-Yan Lu
Eng. Proc. 2025, 92(1), 38; https://doi.org/10.3390/engproc2025092038 - 30 Apr 2025
Viewed by 283
Abstract
Lossless image compression has been studied and widely applied, particularly in medicine, space exploration, aerial photography, and satellite communication. In this study, we proposed an improved low-complexity lossless compression for images (LOCO-I) predictor based on the Joint Photographic Experts Group lossless standard (JPEG-LS). We analyzed the behavior of the LOCO-I predictor and proposed possible improvements. The improved LOCO-I outperformed LOCO-I by a reduction of 2.26% in entropy for the full image size and reductions of 2.70%, 2.81%, and 2.89% for 32 × 32, 16 × 16, and 8 × 8 block-based compression, respectively. In addition, we suggested vertical/horizontal flips for block-based compression, which require extra bits to record but further decrease the entropy. Compared with other state-of-the-art (SOTA) lossless image compression predictors, the proposed method has low computational complexity, as it is multiplication- and division-free. The model is also better suited for hardware implementation. As the predictor exploits no inter-block relations, it enables parallel processing and random access if encoded with fixed-length coding (FLC). Full article
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
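For context, the baseline this entry improves on is the standard LOCO-I / JPEG-LS median edge detector (MED) predictor, which is multiplication- and division-free. A minimal sketch of that baseline (not the paper's improved variant):

```python
def loco_i_predict(a, b, c):
    """JPEG-LS median edge detector (MED) predictor.

    a = left neighbor, b = above neighbor, c = upper-left neighbor.
    Multiplication- and division-free, matching the complexity
    claim in the abstract.
    """
    if c >= max(a, b):
        return min(a, b)   # likely an edge above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar extrapolation

def med_residuals(img):
    """Raster-scan an image (list of rows) and return the prediction
    residuals; out-of-bounds neighbors are treated as 0."""
    res = []
    for y in range(len(img)):
        for x in range(len(img[0])):
            a = img[y][x - 1] if x > 0 else 0
            b = img[y - 1][x] if y > 0 else 0
            c = img[y - 1][x - 1] if x > 0 and y > 0 else 0
            res.append(img[y][x] - loco_i_predict(a, b, c))
    return res
```

On a flat image every residual after the first pixel is zero, which is what makes the residuals cheap to entropy-code.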

14 pages, 1039 KiB  
Communication
Using Compressed JPEG and JPEG2000 Medical Images in Deep Learning: A Review
by Ilona Anna Urbaniak
Appl. Sci. 2024, 14(22), 10524; https://doi.org/10.3390/app142210524 - 15 Nov 2024
Cited by 4 | Viewed by 1983
Abstract
Machine Learning (ML), particularly Deep Learning (DL), has become increasingly integral to medical imaging, significantly enhancing diagnostic processes and treatment planning. By leveraging extensive datasets and advanced algorithms, ML models can analyze medical images with exceptional precision. However, their effectiveness depends on large datasets, which require extended training times for accurate predictions. With the rapid increase in data volume due to advancements in medical imaging technology, managing the data has become increasingly challenging. Consequently, irreversible compression of medical images has become essential for efficiently handling the substantial volume of data. Extensive research has established recommended compression ratios tailored to specific anatomies and imaging modalities, and these guidelines have been widely endorsed by government bodies and professional organizations globally. This work investigates the effects of irreversible compression on DL models by reviewing the relevant literature. It is crucial to understand how DL models respond to image compression degradations, particularly those introduced by JPEG and JPEG2000—both of which are the only permissible irreversible compression techniques in the most commonly used medical image format—the Digital Imaging and Communications in Medicine (DICOM) standard. This study provides insights into how DL models react to such degradations, focusing on the loss of high-frequency content and its implications for diagnostic interpretation. The findings suggest that while existing studies offer valuable insights, future research should systematically explore varying compression levels based on modality and anatomy, and consider developing strategies for integrating compressed images into DL model training for medical image analysis. Full article

7 pages, 1148 KiB  
Proceeding Paper
A Novel Method to Improve the Efficiency and Performance of Cloud-Based Visual Simultaneous Localization and Mapping
by Omar M. Salih, Hussam Rostum and József Vásárhelyi
Eng. Proc. 2024, 79(1), 78; https://doi.org/10.3390/engproc2024079078 - 11 Nov 2024
Cited by 2 | Viewed by 655
Abstract
Since Visual Simultaneous Localization and Mapping (VSLAM) inherently requires intensive computational operations and consumes many hardware resources, these limitations pose challenges to implementing the entire VSLAM architecture within limited processing power and battery capacity. This paper proposes a novel solution to improve the efficiency and performance of exchanging data between the unmanned aerial vehicle (UAV) and the cloud server. First, an adaptive ORB (oriented FAST and rotated BRIEF) method is proposed for precise tracking, mapping, and re-localization. Second, efficient visual data encoding and decoding methods are proposed for exchanging the data between the edge device and the UAV. The results show an improvement in the trajectory RMSE and accurate tracking using the adaptive ORB-SLAM. Furthermore, the proposed visual data encoding and decoding showed an outstanding performance compared with the most used standard JPEG-based system over high quantization ratios. Full article
(This article belongs to the Proceedings of The Sustainable Mobility and Transportation Symposium 2024)

27 pages, 2390 KiB  
Article
Visualizing Plant Responses: Novel Insights Possible Through Affordable Imaging Techniques in the Greenhouse
by Matthew M. Conley, Reagan W. Hejl, Desalegn D. Serba and Clinton F. Williams
Sensors 2024, 24(20), 6676; https://doi.org/10.3390/s24206676 - 17 Oct 2024
Cited by 1 | Viewed by 1528
Abstract
Efficient and affordable plant phenotyping methods are an essential response to global climatic pressures. This study demonstrates the continued potential of consumer-grade photography to capture plant phenotypic traits in turfgrass and derive new calculations. Yet the effects of image corrections on individual calculations are often unreported. Turfgrass lysimeters were photographed over 8 weeks using a custom lightbox and consumer-grade camera. Subsequent imagery was analyzed for area of cover, color metrics, and sensitivity to image corrections. Findings were compared to active spectral reflectance data and previously reported measurements of visual quality, productivity, and water use. Results confirm that Red–Green–Blue imagery effectively measures plant treatment effects. Notable correlations were observed for corrected imagery, including between yellow fractional area with human visual quality ratings (r = −0.89), dark green color index with clipping productivity (r = 0.61), and an index combination term with water use (r = −0.60). The calculation of green fractional area correlated with Normalized Difference Vegetation Index (r = 0.91), and its RED reflectance spectra (r = −0.87). A new chromatic ratio correlated with Normalized Difference Red-Edge index (r = 0.90) and its Red-Edge reflectance spectra (r = −0.74), while a new calculation correlated strongest to Near-Infrared (r = 0.90). Additionally, the combined index term significantly differentiated between the treatment effects of date, mowing height, deficit irrigation, and their interactions (p < 0.001). Sensitivity and statistical analyses of typical image file formats and corrections that included JPEG, TIFF, geometric lens distortion correction, and color correction were conducted. Findings highlight the need for more standardization in image corrections and to determine the biological relevance of the new image data calculations. Full article
(This article belongs to the Special Issue Feature Papers in Sensing and Imaging 2024)

19 pages, 59781 KiB  
Article
A Watermark-Based Scheme for Authenticating JPEG 2000 Image Integrity That Complies with JPEG Privacy and Security
by Jinhee Lee, Oh-Jin Kwon, Yaseen and Seungcheol Choi
Appl. Sci. 2024, 14(18), 8428; https://doi.org/10.3390/app14188428 - 19 Sep 2024
Viewed by 1049
Abstract
Network development has made it easier to access multimedia material and to alter it by allowing the collection, modification, and transmission of digital data. This has also led to a rise in malicious use, including unauthorized data distribution and copying. As malicious activities increase, security issues such as unauthorized use and image forgery are growing. While security solutions for JPEG-1 images are widely available, there remains a significant gap in protection for JPEG 2000 images. In this paper, we propose a new watermark-based forgery detection method that complies with the JPEG Privacy and Security standard and authenticates JPEG 2000 image integrity in the discrete wavelet transform (DWT) domain. The proposed method divides JPEG 2000 images into groups of blocks (GOBs) within the DWT domain. The watermark is generated from the neighboring GOBs and embedded in each GOB, which makes the scheme resistant to collage attacks. Experimental results using various sample images show the superiority of the proposed method, with negligible visual differences between the original and watermarked JPEG 2000 images. Full article

19 pages, 3651 KiB  
Article
Aspects of Occlusal Recordings Performed with the T-Scan System and with the Medit Intraoral Scanner
by Angelica Diana Popa, Diana Elena Vlăduțu, Adina Andreea Turcu, Daniel Adrian Târtea, Mihaela Ionescu, Cătălin Păunescu, Răzvan Sabin Stan and Veronica Mercuț
Diagnostics 2024, 14(13), 1457; https://doi.org/10.3390/diagnostics14131457 - 8 Jul 2024
Cited by 4 | Viewed by 2936
Abstract
Introduction: Dental occlusion refers to the static and dynamic relationships established between the teeth of the two arches and is an important factor in the homeostasis of the dento-maxillary system. The objective of the present study was to compare two digital occlusal analysis systems: the T-Scan III system and the Medit I600 intraoral scanner. Materials and Methods: The study was carried out on 20 students from the Faculty of Dental Medicine Craiova, whose dental occlusion was assessed with the T-Scan III system and with the Medit I600 intraoral scanner. Dental occlusion was assessed in the maximum intercuspation position, the edge-to-edge protrusion position, and the edge-to-edge position in right and left laterotrusion. The images of the 2D occlusal contact areas obtained by both methods were converted to .jpeg format and then transferred to Adobe Photoshop CS6 2021 (Adobe Systems, San Jose, CA, USA) for comparison. The recorded data were statistically processed. Results: The T-Scan III system provided data on the amplitude of the occlusal forces, the surface on which they were distributed (the contact surface), the dynamics of the occlusal contacts, and the proportion in which they were distributed between the two hemiarches. The Medit I600 intraoral scanner evaluated the occlusal interface of the two arches, highlighting the extent of the contact areas and the degree of overlap of the occlusal components. Although both methods recorded the highest values for the maximum intercuspation position, the results could not be directly compared. Conclusions: The two digital systems provide different data in occlusal analysis. As the T-Scan III system is considered the gold standard for occlusal analysis, more studies are needed to understand the data provided by the Medit I600 intraoral scanner and their significance. Full article
(This article belongs to the Special Issue Advances in Oral and Maxillofacial Radiology)

19 pages, 27425 KiB  
Article
N-DEPTH: Neural Depth Encoding for Compression-Resilient 3D Video Streaming
by Stephen Siemonsma and Tyler Bell
Electronics 2024, 13(13), 2557; https://doi.org/10.3390/electronics13132557 - 29 Jun 2024
Cited by 1 | Viewed by 1765
Abstract
Recent advancements in 3D data capture have enabled the real-time acquisition of high-resolution 3D range data, even in mobile devices. However, this type of high bit-depth data remains difficult to efficiently transmit over a standard broadband connection. The most successful techniques for tackling this data problem thus far have been image-based depth encoding schemes that leverage modern image and video codecs. To our knowledge, no published work has directly optimized the end-to-end losses of a depth encoding scheme sandwiched around a lossy image compression codec. We present N-DEPTH, a compression-resilient neural depth encoding method that leverages deep learning to efficiently encode depth maps into 24-bit RGB representations that minimize end-to-end depth reconstruction errors when compressed with JPEG. N-DEPTH’s learned robustness to lossy compression expands to video codecs as well. Compared to an existing state-of-the-art encoding method, N-DEPTH achieves smaller file sizes and lower errors across a large range of compression qualities, in both image (JPEG) and video (H.264) formats. For example, reconstructions from N-DEPTH encodings stored with JPEG had dramatically lower error while still offering 29.8%-smaller file sizes. When H.264 video was used to target a 10 Mbps bit rate, N-DEPTH reconstructions had 85.1%-lower root mean square error (RMSE) and 15.3%-lower mean absolute error (MAE). Overall, our method offers an efficient and robust solution for emerging 3D streaming and 3D telepresence applications, enabling high-quality 3D depth data storage and transmission. Full article
(This article belongs to the Special Issue Recent Advances in Image Processing and Computer Vision)

13 pages, 2194 KiB  
Article
A Low-Bit-Rate Image Semantic Communication System Based on Semantic Graph
by Jiajun Dong, Tianfeng Yan and Wenhao Sun
Electronics 2024, 13(12), 2366; https://doi.org/10.3390/electronics13122366 - 17 Jun 2024
Viewed by 1468
Abstract
In the progress of research in the field of semantic communication, most efforts have been focused on optimizing the signal-to-noise ratio (SNR), while the design and optimization of the bit rate required for transmission have been relatively neglected. To address this issue, this study introduces an innovative low-bit-rate image semantic communication system model, which aims to reconstruct images through the exchange of semantic information rather than traditional symbol transmission. This model employs an image feature extraction and optimization reconstruction framework, achieving visually satisfactory and semantically consistent reconstruction performance at extremely low bit rates (below 0.03 bits per pixel (bpp)). Unlike previous methods that used pixel accuracy as the standard for distortion measurement, this research introduces multiple perceptual metrics to train and evaluate the proposed image semantic encoding model, aligning more closely with the fundamental purpose of semantic communication. Experimental results demonstrate that, compared to WebP, JPEG, and deep learning-based image codecs, our model not only provides a more visually pleasing reconstruction effect but also significantly reduces the required bit rate while maintaining semantic consistency. Full article
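To put the quoted operating point in perspective, bits per pixel (bpp) relates compressed size to resolution directly; the helper names below are illustrative, not from the paper:

```python
def bpp(num_bytes, width, height):
    """Bit rate of a compressed image in bits per pixel."""
    return num_bytes * 8 / (width * height)

def bytes_at_bpp(rate, width, height):
    """Compressed size (in bytes) implied by a target bpp rate."""
    return rate * width * height / 8

# At the paper's 0.03 bpp operating point, a 512x512 image must fit
# in under a kilobyte: bytes_at_bpp(0.03, 512, 512) gives 983.04.
```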

16 pages, 1568 KiB  
Article
A Neural-Network-Based Watermarking Method Approximating JPEG Quantization
by Shingo Yamauchi and Masaki Kawamura
J. Imaging 2024, 10(6), 138; https://doi.org/10.3390/jimaging10060138 - 6 Jun 2024
Cited by 3 | Viewed by 1669
Abstract
We propose a neural-network-based watermarking method that introduces a quantized activation function approximating the quantization of JPEG compression. Many neural-network-based watermarking methods have been proposed; conventional methods acquire robustness against various attacks by introducing an attack simulation layer between the embedding network and the extraction network, in which the quantization process of JPEG compression is replaced by a noise addition process. In this paper, we propose a quantized activation function that can simulate JPEG quantization as standardized, in order to improve robustness against JPEG compression. Our quantized activation function consists of several hyperbolic tangent functions and is applied as an activation function for neural networks. Our network was introduced into the attack layer of ReDMark, proposed by Ahmadi et al., for comparison with their method; that is, the embedding and extraction networks had the same structure. We compared ordinary JPEG-compressed images with images processed by the quantized activation function. The results showed that a network with quantized activation functions can approximate JPEG compression with high accuracy. We also compared the bit error rates (BERs) of watermarks estimated by our network with those estimated by ReDMark and found that our network produced estimated watermarks with lower BERs. Therefore, our network outperformed the conventional method with respect to image quality and BER. Full article
(This article belongs to the Special Issue Robust Deep Learning Techniques for Multimedia Forensics and Security)
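The core trick, a staircase function summed from hyperbolic tangents, can be sketched in a few lines. This is a generic illustration of the idea only, not the authors' exact parameterization; the step size, level count, and slope `alpha` are placeholder values:

```python
import math

def soft_quantize(x, step=1.0, levels=8, alpha=50.0):
    """Differentiable staircase summed from hyperbolic tangents.

    Approximates hard quantization step * round(x / step) for
    x in [0, levels * step]; unlike round(), it has a well-defined
    gradient everywhere, so it can sit inside a trained attack layer.
    """
    return step * sum(
        0.5 * (1.0 + math.tanh(alpha * (x - (k - 0.5) * step)))
        for k in range(1, levels + 1)
    )
```

With a large slope the sum is numerically indistinguishable from hard rounding, while the gradient stays nonzero near each step boundary.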

17 pages, 15053 KiB  
Article
Encryption Method for JPEG Bitstreams for Partially Disclosing Visual Information
by Mare Hirose, Shoko Imaizumi and Hitoshi Kiya
Electronics 2024, 13(11), 2016; https://doi.org/10.3390/electronics13112016 - 22 May 2024
Cited by 1 | Viewed by 1580
Abstract
In this paper, we propose a novel encryption method for JPEG bitstreams in which the encrypted data preserve the JPEG file format with the same size as the unencrypted data. Accordingly, data encrypted with the method can be decoded by a standard JPEG decoder without any modification of the header information. In addition, the method offers two capabilities that conventional bitstream-level encryption methods do not: spatially partial encryption and block-permutation-based encryption. To achieve this, we propose, for the first time, using restart markers for encryption; these codes can be inserted at regular intervals between minimum coded units (MCUs). This allows us to define extended blocks separated by restart markers, which makes both capabilities possible. In experiments, the effectiveness of the method is verified in terms of file size preservation and the visibility of encrypted images. Full article
(This article belongs to the Special Issue Modern Computer Vision and Image Analysis)
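Restart markers are the two-byte codes 0xFFD0 through 0xFFD7 defined by the baseline JPEG standard; because a literal 0xFF inside entropy-coded data is always stuffed as 0xFF 0x00, they can be located by a plain byte scan. A minimal sketch:

```python
def find_restart_markers(jpeg_bytes):
    """Return (offset, n) pairs for every RSTn marker (0xFFD0-0xFFD7).

    Within entropy-coded data a literal 0xFF byte is always stuffed
    as 0xFF 0x00, so any 0xFF followed by 0xD0..0xD7 is a genuine
    restart marker and a direct byte scan is sufficient.
    """
    return [
        (i, jpeg_bytes[i + 1] - 0xD0)
        for i in range(len(jpeg_bytes) - 1)
        if jpeg_bytes[i] == 0xFF and 0xD0 <= jpeg_bytes[i + 1] <= 0xD7
    ]
```

The offsets returned are exactly the boundaries between MCU runs, i.e., the "extended blocks" the paper manipulates.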

35 pages, 10230 KiB  
Review
Unveiling the Future of Human and Machine Coding: A Survey of End-to-End Learned Image Compression
by Chen-Hsiu Huang and Ja-Ling Wu
Entropy 2024, 26(5), 357; https://doi.org/10.3390/e26050357 - 24 Apr 2024
Cited by 3 | Viewed by 4536
Abstract
End-to-end learned image compression codecs have notably emerged in recent years. These codecs have demonstrated superiority over conventional methods, showcasing remarkable flexibility and adaptability across diverse data domains while supporting new distortion losses. Despite challenges such as computational complexity, learned image compression methods inherently align with learning-based data processing and analytic pipelines due to their well-suited internal representations. The concept of Video Coding for Machines has garnered significant attention from both academic researchers and industry practitioners. This concept reflects the growing need to integrate data compression with computer vision applications. In light of these developments, we present a comprehensive survey and review of lossy image compression methods. Additionally, we provide a concise overview of two prominent international standards, MPEG Video Coding for Machines and JPEG AI. These standards are designed to bridge the gap between data compression and computer vision, catering to practical industry use cases. Full article
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)

15 pages, 10740 KiB  
Article
Security Improvements of JPEG Images Using Image De-Identification
by Ho-Seok Kang, Seongjun Cha and Sung-Ryul Kim
Electronics 2024, 13(7), 1332; https://doi.org/10.3390/electronics13071332 - 2 Apr 2024
Viewed by 1250
Abstract
Today, as data are easily exposed through various channels, such as storage in cloud services or exchange through an SNS (Social Network Service), related privacy issues are receiving a significant amount of attention. For data that are more sensitive to personal information, such as medical images, even more attention should be paid to privacy protection. De-identification is a common method for privacy protection. Typically, it deletes or masks individual identifiers and omits quasi-identifiers such as birth dates. In the case of images, de-identification is performed by mosaic processing or by applying various effects. In this paper, we present a method of de-identifying an image by encrypting only some of the data in the JPEG (Joint Photographic Experts Group) image format, one of the most common image compression formats, so that the entire image cannot be recognized. The purpose is to protect images by encrypting only small parts, not the entire image, which makes the approach suitable for the fast and safe transmission and verification of high-capacity images. We have shown that images can be de-identified by encrypting data from the DHT (Define Huffman Table) segment among the JPEG header segments. Through experiments, we confirmed that these images could not be identified after encrypting only a minimal portion, compared to previous studies that encrypted entire images, and that the encryption and decryption speeds were also faster than in previous studies. A model was implemented to de-identify images by applying AES-256 (Advanced Encryption Standard-256), a symmetric-key encryption algorithm, to the Huffman tables of JPEG headers, rendering entire images unidentifiable quickly and effectively. Full article
(This article belongs to the Section Computer Science & Engineering)
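The idea of targeting only the DHT segment can be illustrated by walking the header segments for the 0xFFC4 marker. The sketch below uses a toy XOR keystream purely as a stand-in for the AES-256 step the paper actually uses, and `find_segments` / `toy_scramble_dht` are hypothetical helper names:

```python
def find_segments(jpeg_bytes, marker=0xC4):
    """Return (payload_offset, payload_len) for each header segment
    with the given marker byte (0xC4 = DHT, Define Huffman Table).
    Stops at SOS (0xFFDA), where entropy-coded data begins."""
    segs, i = [], 2          # skip the SOI marker (0xFF 0xD8)
    while i + 3 < len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        m = jpeg_bytes[i + 1]
        if m == 0xDA:
            break
        length = (jpeg_bytes[i + 2] << 8) | jpeg_bytes[i + 3]
        if m == marker:
            segs.append((i + 4, length - 2))  # length includes itself
        i += 2 + length
    return segs

def toy_scramble_dht(jpeg_bytes, key=0x5A):
    """XOR the DHT payloads in place of the paper's AES-256 step
    (illustration only; XOR is NOT secure encryption)."""
    out = bytearray(jpeg_bytes)
    for start, length in find_segments(jpeg_bytes):
        for j in range(start, start + length):
            out[j] ^= key
    return bytes(out)
```

Because only the segment payload is altered, the file keeps its size and marker structure, which is what lets a standard decoder still parse (but not meaningfully render) the result.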

16 pages, 8787 KiB  
Communication
JPEG Image Enhancement with Pre-Processing of Color Reduction and Smoothing
by Akane Shoda, Tomo Miyazaki and Shinichiro Omachi
Sensors 2023, 23(21), 8861; https://doi.org/10.3390/s23218861 - 31 Oct 2023
Cited by 3 | Viewed by 2512
Abstract
JPEG is the international standard for still image encoding and is the most widely used compression algorithm because of its simple encoding process and low computational complexity. Recently, many methods have been developed to improve the quality of JPEG images by using deep learning. However, these methods require the use of high-performance devices since they need to perform neural network computation for decoding images. In this paper, we propose a method to generate high-quality images using deep learning without changing the decoding algorithm. The key idea is to reduce and smooth colors and gradient regions in the original images before JPEG compression. The reduction and smoothing can suppress red block noise and pseudo-contour in the compressed images. Furthermore, high-performance devices are unnecessary for decoding. The proposed method consists of two components: a color transformation network using deep learning and a pseudo-contour suppression model using signal processing. The experimental results showed that the proposed method outperforms standard JPEG in quality measurements correlated with human perception. Full article
(This article belongs to the Special Issue Image Processing in Sensors and Communication Systems)

14 pages, 3390 KiB  
Article
Computer Network Redundancy Reduction Using Video Compression
by Shabana Habib, Waleed Albattah, Mohammed F. Alsharekh, Muhammad Islam, Mohammad Munawar Shees and Hammad I. Sherazi
Symmetry 2023, 15(6), 1280; https://doi.org/10.3390/sym15061280 - 19 Jun 2023
Cited by 2 | Viewed by 2587
Abstract
Due to the strong correlation between symmetric frames, video signals have a high degree of temporal redundancy. Motion estimation techniques are computationally expensive and time-consuming processes used in symmetric video compression to reduce temporal redundancy. Among the different motion estimation and compensation techniques, the block-matching technique is the most popular and efficient. Motion compensation based on block matching generally minimizes either the mean square error (MSE) or the mean absolute difference (MAD) to find the appropriate motion vector. This paper proposes to remove the highly temporally redundant information contained in each block of the video signal using the removing temporal redundancy (RTR) technique in order to improve the data rate and efficiency of the video signal. A comparison between the PSNR values of this technique and those of the JPEG video compression standard is made. As a result of its moderate memory and computation requirements, the algorithm was found to be suitable for mobile networks and embedded devices. Based on a detailed set of testing scenarios and the obtained results, it is evident that the RTR compression technique achieved a compression ratio of 22.71 and a 95% bit-rate reduction while maintaining acceptable signal quality with minimal information loss. Full article
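The MSE/MAD block-matching search described above can be sketched as an exhaustive (full) search over a small window; this is a generic illustration of block matching, not the paper's RTR technique:

```python
def mad(p, q):
    """Mean absolute difference between two equal-sized blocks."""
    h, w = len(p), len(p[0])
    return sum(abs(p[y][x] - q[y][x])
               for y in range(h) for x in range(w)) / (h * w)

def best_motion_vector(ref, block, top, left, search=2):
    """Full search over a +/-`search` window in the reference frame
    for the displacement (dy, dx) minimizing MAD against `block`."""
    bh, bw = len(block), len(block[0])
    best_cost, best_mv = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = top + dy, left + dx
            if y0 < 0 or x0 < 0 or y0 + bh > len(ref) or x0 + bw > len(ref[0]):
                continue  # candidate block falls outside the frame
            cand = [row[x0:x0 + bw] for row in ref[y0:y0 + bh]]
            cost = mad(cand, block)
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv
```

The full search is what makes block matching expensive; practical codecs replace it with pruned search patterns, which is the cost the abstract alludes to.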

13 pages, 825 KiB  
Article
Canine Mammary Tumor Histopathological Image Classification via Computer-Aided Pathology: An Available Dataset for Imaging Analysis
by Giovanni P. Burrai, Andrea Gabrieli, Marta Polinas, Claudio Murgia, Maria Paola Becchere, Pierfranco Demontis and Elisabetta Antuofermo
Animals 2023, 13(9), 1563; https://doi.org/10.3390/ani13091563 - 6 May 2023
Cited by 7 | Viewed by 6016
Abstract
Histopathology, the gold-standard technique for classifying canine mammary tumors (CMTs), is a time-consuming process affected by high inter-observer variability. Digital pathology (DP) and computer-aided pathology (CAD) are emergent fields that will improve overall classification accuracy. In this study, the ability of CAD systems to distinguish benign from malignant CMTs was explored on a dataset, namely CMTD, of 1056 hematoxylin and eosin JPEG images from 20 benign and 24 malignant CMTs, with three different CAD systems based on the combination of a convolutional neural network (VGG16, Inception v3, or EfficientNet), which acts as a feature extractor, and a classifier (support vector machine (SVM) or stochastic gradient boosting (SGB)) placed on top of the neural net. After validation on a human breast cancer dataset (BreakHis; accuracy from 0.86 to 0.91), our models were applied to the CMT dataset, showing accuracy from 0.63 to 0.85 across all architectures. The EfficientNet framework coupled with SVM yielded the best performance, with an accuracy from 0.82 to 0.85. The encouraging results obtained by the use of DP and CAD systems in CMTs provide an interesting perspective on the integration of artificial intelligence and machine learning technologies in cancer-related research. Full article
(This article belongs to the Special Issue New Insights in Veterinary and Comparative Reproductive Pathology)
