Search Results (27)

Search Parameters:
Keywords = encoding and decoding binary information

25 pages, 1263 KB  
Article
LFTD: Transformer-Enhanced Diffusion Model for Realistic Financial Time-Series Data Generation
by Gyumun Choi, Donghyeon Jo, Wonho Song, Hyungjong Na and Hyungjoon Kim
AI 2026, 7(2), 60; https://doi.org/10.3390/ai7020060 - 5 Feb 2026
Viewed by 75
Abstract
Firm-level financial statement data form multivariate annual time series with strong cross-variable dependencies and temporal dynamics, yet publicly available panels are often short and incomplete, limiting the generalization of predictive models. We present Latent Financial Time-Series Diffusion (LFTD), a structure-aware augmentation framework that synthesizes realistic firm-level financial time series in a compact latent space. LFTD first learns information-preserving representations with a dual encoder: an FT-Transformer that captures within-year interactions across financial variables and a Time Series Transformer (TST) that models long-horizon evolution across years. On this latent sequence, we train a Transformer-based denoising diffusion model whose reverse process is FiLM-conditioned on the diffusion step as well as year, firm identity, and firm age, enabling controllable generation aligned with firm- and time-specific context. A TST-based Cross-Decoder then reconstructs continuous and binary financial variables for each year. Empirical evaluation on Korean listed-firm data from 2011 to 2023 shows that augmenting training sets with LFTD-generated samples consistently improves firm-value prediction for market-to-book and Tobin’s Q under both static (same-year) and dynamic (τ + 1) forecasting settings and outperforms conventional generative augmentation baselines and ablated variants. These results suggest that domain-conditioned latent diffusion is a practical route to reliable augmentation for firm-level financial time series. Full article
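FiLM conditioning, as used in the reverse diffusion process above, amounts to scaling and shifting each hidden feature channel with parameters produced from a condition vector. A minimal NumPy sketch (all shapes, names, and the linear conditioning maps are illustrative, not the paper's implementation):

```python
import numpy as np

def film(h, gamma, beta):
    """Feature-wise Linear Modulation: scale and shift each feature channel."""
    return gamma * h + beta

# Hypothetical conditioning network: maps a condition vector (e.g. diffusion
# step, year, firm age) to per-channel scale and shift parameters.
rng = np.random.default_rng(0)
d_cond, d_hidden = 4, 8
W_gamma = rng.normal(size=(d_cond, d_hidden))
W_beta = rng.normal(size=(d_cond, d_hidden))

cond = rng.normal(size=d_cond)          # condition vector
h = rng.normal(size=(5, d_hidden))      # latent sequence: 5 years, 8 channels

gamma = cond @ W_gamma                  # per-channel scale
beta = cond @ W_beta                    # per-channel shift
h_mod = film(h, gamma, beta)            # broadcast over the sequence axis
```

In practice the conditioning maps would be learned jointly with the denoiser; the point here is only that the same (gamma, beta) pair modulates every position in the latent sequence.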
21 pages, 1574 KB  
Article
Watershed Encoder–Decoder Neural Network for Nuclei Segmentation of Breast Cancer Histology Images
by Vincent Majanga, Ernest Mnkandla, Donatien Koulla Moulla, Sree Thotempudi and Attipoe David Sena
Bioengineering 2026, 13(2), 154; https://doi.org/10.3390/bioengineering13020154 - 28 Jan 2026
Viewed by 155
Abstract
Recently, deep learning methods have seen major advancements and are preferred for medical image analysis. Clinically, deep learning techniques for cancer image analysis are among the main applications for early diagnosis, detection, and treatment. Consequently, segmentation of breast histology images is a key step towards diagnosing breast cancer. However, the use of deep learning methods for image analysis is constrained by challenging features in the histology images. These challenges include poor image quality, complex microscopic tissue structures, topological intricacies, and boundary/edge inhomogeneity. Furthermore, this leads to a limited number of images required for analysis. The U-Net model was introduced and gained significant traction for its ability to produce high-accuracy results with very few input images. Many modifications of the U-Net architecture exist. Therefore, this study proposes the watershed encoder–decoder neural network (WEDN) to segment cancerous lesions in supervised breast histology images. Pre-processing of supervised breast histology images via augmentation is introduced to increase the dataset size. The augmented dataset is further enhanced and segmented into the region of interest. Data enhancement methods such as thresholding, opening, dilation, and distance transform are used to highlight foreground and background pixels while removing unwanted parts from the image. Consequently, further segmentation via the connected component analysis method is used to combine image pixel components with similar intensity values and assign them their respective labeled binary masks. The watershed filling method is then applied to these labeled binary mask components to separate and identify the edges/boundaries of the regions of interest (cancerous lesions). This resultant image information is sent to the WEDN model network for feature extraction and learning via training and testing. 
Residual convolutional block layers of the WEDN model are the learnable layers that extract the region of interest (ROI), which is the cancerous lesion. The method was evaluated on an augmented dataset of 3000 image–watershed mask pairs. The model was trained on 2400 training set images and tested on 600 testing set images. This proposed method produced significant results of 98.53% validation accuracy, 96.98% validation dice coefficient, and 97.84% validation intersection over union (IoU) metric scores. Full article
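The connected component analysis step described above groups foreground pixels of the binary mask into separately labeled regions before watershed filling. A minimal pure-Python sketch of 4-connected labelling (the function and example mask are illustrative, not the paper's code):

```python
from collections import deque

def label_components(mask):
    """4-connected component labelling of a binary mask (list of lists of 0/1).
    Returns a label grid and the number of components found."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1                    # start a new component
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                    # flood-fill its pixels
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, n = label_components(mask)  # two separate foreground regions
```

Each labeled region would then receive its own binary mask for the subsequent watershed boundary separation.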

23 pages, 1267 KB  
Article
Huffman Tree and Binary Conversion for Efficient and Secure Data Encryption and Decryption
by Suchart Khummanee, Thanapat Cheawchanwattana, Chanwit Suwannapong, Sarutte Atsawaraungsuk and Kritsanapong Somsuk
J. Cybersecur. Priv. 2026, 6(1), 1; https://doi.org/10.3390/jcp6010001 - 22 Dec 2025
Viewed by 424
Abstract
This study proposes Huffman Tree and Binary Conversion (HTB), a preprocessing algorithm that transforms the Huffman tree into a binary representation before the encryption process. HTB improves the structural readiness of plaintext by combining the Huffman code with a deterministic binary representation of the Huffman tree. In addition, the binary representation of the Huffman tree and the compressed information are encrypted by standard cryptographic algorithms. Six datasets, divided into two groups (short and long texts), were chosen to evaluate compression behavior and processing cost. Moreover, AES and RSA are combined with the proposed method to analyze the encryption and decryption cycles. The experimental results show that HTB introduces a small linear-time overhead; that is, it is slightly slower than applying only the Huffman code. Across these datasets, HTB maintained a consistently low processing cost, with processing time below one millisecond for both encoding and decoding. For long texts, the structural conversion cost is amortized across larger encoded messages, and the reduction in plaintext size leads to fewer encryption blocks for both AES and RSA. The reduced plaintext size lowers the number of AES encryption blocks by approximately 30–45% and decreases the number of encryption and decryption rounds in RSA. The encrypted binary representation of the Huffman tree also decreases structural ambiguity and reduces the potential exposure of frequency-related metadata. Although HTB does not replace cryptographic security, it enhances the structural consistency of compression. Therefore, the proposed method demonstrates scalability, predictable overhead, and improved suitability for cryptographic workflows. Full article
(This article belongs to the Section Cryptography and Cryptology)
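The core idea above — Huffman-coding the plaintext and serializing the tree itself into a deterministic bit string so that both can be handed to a standard cipher — can be sketched as follows. This is a generic illustration, not the HTB algorithm itself; all function names and the serialization format are hypothetical:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table {symbol: bitstring} and return the tree.
    Leaves are symbols; internal nodes are (left, right) tuples."""
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)  # unique tie-breaker so tuples never compare nodes
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes, heap[0][2]

def serialize_tree(node):
    """Pre-order binary serialization: '0' marks an internal node,
    '1' plus an 8-bit symbol marks a leaf."""
    if isinstance(node, tuple):
        return "0" + serialize_tree(node[0]) + serialize_tree(node[1])
    return "1" + format(ord(node), "08b")

codes, tree = huffman_codes("binary information")
encoded = "".join(codes[ch] for ch in "binary information")
tree_bits = serialize_tree(tree)  # deterministic binary form, ready for encryption
```

Both `encoded` and `tree_bits` are plain bit strings, so a block cipher such as AES can encrypt them without any knowledge of the tree structure.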

20 pages, 3698 KB  
Article
Lightweight Neural Network for Holographic Reconstruction of Pseudorandom Binary Data
by Mikhail K. Drozdov, Dmitry A. Rymov, Andrey S. Svistunov, Pavel A. Cheremkhin, Anna V. Shifrina, Semen A. Kiriy, Evgenii Yu. Zlokazov, Elizaveta K. Petrova, Vsevolod A. Nebavskiy, Nikolay N. Evtikhiev and Rostislav S. Starikov
Technologies 2025, 13(10), 474; https://doi.org/10.3390/technologies13100474 - 19 Oct 2025
Viewed by 1155
Abstract
Neural networks are a state-of-the-art technology for fast and accurate holographic image reconstruction. However, at present, neural network-based reconstruction methods are predominantly applied to objects with simple, homogeneous spatial structures: blood cells, bacteria, microparticles in solutions, etc. For objects with high-contrast details, however, the reconstruction needs to be as precise as possible to successfully extract details and parameters. In this paper we investigate the use of neural networks in holographic reconstruction of spatially inhomogeneous binary data containers (QR codes). Two modified lightweight convolutional neural networks (which we named HoloLightNet and HoloLightNet-Mini) with an encoder–decoder architecture have been used for image reconstruction. These neural networks enable high-quality reconstruction, guaranteeing the successful decoding of QR codes (both in demonstrated numerical and optical experiments). In addition, they perform reconstruction two orders of magnitude faster than more traditional architectures. In optical experiments with a liquid crystal spatial light modulator, the obtained bit error rate was equal to only 1.2%. These methods can be used for practical applications such as high-density data transmission in coherent systems, development of reliable digital information storage and memory techniques, secure optical information encryption and retrieval, and real-time precise reconstruction of complex objects. Full article
(This article belongs to the Section Information and Communication Technologies)
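The bit error rate reported above (1.2%) is simply the fraction of decoded bits that differ from the transmitted ones. A minimal sketch with illustrative bit vectors:

```python
def bit_error_rate(sent, received):
    """Fraction of bit positions that differ between two equal-length sequences."""
    if len(sent) != len(received):
        raise ValueError("sequences must have equal length")
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

sent     = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 0, 0, 1, 0, 0, 1, 1]  # two flipped bits out of eight
ber = bit_error_rate(sent, received)
```

For QR codes, a BER this low matters because the code's built-in Reed–Solomon error correction can absorb only a limited fraction of flipped modules.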

22 pages, 4895 KB  
Article
Machine Learning-Assisted Secure Random Communication System
by Areeb Ahmed and Zoran Bosnić
Entropy 2025, 27(8), 815; https://doi.org/10.3390/e27080815 - 29 Jul 2025
Viewed by 1156
Abstract
Machine learning techniques have revolutionized physical layer security (PLS) and provided opportunities for optimizing the performance and security of modern communication systems. In this study, we propose the first machine learning-assisted random communication system (ML-RCS). It comprises a pretrained decision tree (DT)-based receiver that extracts binary information from the transmitted random noise carrier signals. The ML-RCS employs skewed alpha-stable (α-stable) noise as a random carrier to encode the incoming binary bits securely. The DT model is pretrained on an extensively developed dataset encompassing all the selected parameter combinations to generate and detect the α-stable noise signals. The legitimate receiver leverages the pretrained DT and a predetermined key, specifically the pulse length of a single binary information bit, to securely decode the hidden binary bits. The performance evaluations included the single-bit transmission, confusion matrices, and a bit error rate (BER) analysis via Monte Carlo simulations. The fact that the BER reached 10⁻³ confirms the ability of the proposed system to establish successful secure communication between a transmitter and legitimate receiver. Additionally, the ML-RCS provides an increased data rate compared to previous random communication systems. From the perspective of security, the confusion matrices and computed false negative rate of 50.2% demonstrate the failure of an eavesdropper to decode the binary bits without access to the predetermined key and the private dataset. These findings highlight the potential ability of unconventional ML-RCSs to promote the development of secure next-generation communication devices with built-in PLSs. Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)
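Skewed α-stable noise of the kind used as the random carrier above can be drawn with the Chambers–Mallows–Stuck method (valid for α ≠ 1). A toy sketch of bit encoding via the skewness sign — the parameters and the one-sample-per-bit scheme are illustrative, not the paper's:

```python
import math, random

def stable_sample(alpha, beta, rng=random):
    """One draw from a standard skewed alpha-stable distribution via the
    Chambers-Mallows-Stuck method (valid for alpha != 1)."""
    V = rng.uniform(-math.pi / 2, math.pi / 2)
    W = rng.expovariate(1.0)
    t = beta * math.tan(math.pi * alpha / 2)
    B = math.atan(t) / alpha
    S = (1 + t * t) ** (1 / (2 * alpha))
    return (S * math.sin(alpha * (V + B)) / math.cos(V) ** (1 / alpha)
            * (math.cos(V - alpha * (V + B)) / W) ** ((1 - alpha) / alpha))

# Encode each bit by flipping the skewness of the noise carrier:
random.seed(42)
bits = [1, 0, 1, 1, 0]
carrier = [stable_sample(alpha=1.5, beta=0.9 if b else -0.9) for b in bits]
```

A receiver without the key (the pulse length) cannot tell where one bit's noise burst ends and the next begins, which is what the 50.2% eavesdropper false-negative rate above reflects.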

16 pages, 833 KB  
Article
Research on Data Transmission of Laser Sensors for Reading Ruler
by Bailin Fan, JianWei Zhao, Rong Wang, Chen Lei, XiaoWu Li, ChaoYang Sun and Dazhi Zhang
Appl. Sci. 2025, 15(12), 6615; https://doi.org/10.3390/app15126615 - 12 Jun 2025
Viewed by 704
Abstract
A coding ruler is a device that marks position information in the form of encoded signals, and a code reader is a device that decodes the signals on the coding ruler and converts them into digital signals. The code reader and encoder ruler are key devices in ensuring the positioning accuracy of coke oven locomotives and the safety of coke production. They are common information transmission and positioning detection devices that can provide accurate monitoring and information feedback for the position and speed of coke oven locomotives. Four encoding methods were studied, namely, binary encoding, Gray code encoding, shift continuous encoding, and hybrid encoding. The application scenarios and encoding characteristics of each encoding method are summarized in this paper. Hybrid encoding combines the advantages of two different encoding methods, absolute and incremental encoding, to achieve higher accuracy and stability. Hybrid coding has high positioning accuracy on the long-range coke oven tamping tracks, ensuring the accuracy and high efficiency of the tamping operation. A certain number of opposing laser sensors are installed inside the code reader to obtain 0/1 encoding and read the movement displacement of the code reader on the ruler. In order to effectively detect the swing of the coding ruler, a certain number of distance sensors are installed on both sides and on the same side of the code reader. Ruler swing is accurately detected by the sensors, which output and process corresponding signals. Timely adjustment and correction measures are taken on the production line according to the test results, which not only improves detection accuracy but also enhances the stability and reliability of the system. Full article
(This article belongs to the Topic Micro-Mechatronic Engineering, 2nd Edition)
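Of the encoding methods listed above, Gray code is the easiest to illustrate: adjacent codewords differ in exactly one bit, so a single misread sensor can shift the decoded position by at most one step. A minimal sketch:

```python
def binary_to_gray(n):
    """Gray code of a non-negative integer: adjacent values differ in one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the Gray code by repeatedly folding the high bits down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Eight consecutive ruler positions in Gray code:
codes = [binary_to_gray(i) for i in range(8)]
```

Plain binary encoding lacks this property (e.g. 3 → 4 flips three bits at once), which is why absolute encoders favor Gray code.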

24 pages, 8074 KB  
Article
MMRAD-Net: A Multi-Scale Model for Precise Building Extraction from High-Resolution Remote Sensing Imagery with DSM Integration
by Yu Gao, Huiming Chai and Xiaolei Lv
Remote Sens. 2025, 17(6), 952; https://doi.org/10.3390/rs17060952 - 7 Mar 2025
Viewed by 1457
Abstract
High-resolution remote sensing imagery (HRRSI) presents significant challenges for building extraction tasks due to its complex terrain structures, multi-scale features, and rich spectral and geometric information. Traditional methods often face limitations in effectively integrating multi-scale features while maintaining a balance between detailed and global semantic information. To address these challenges, this paper proposes an innovative deep learning network, Multi-Source Multi-Scale Residual Attention Network (MMRAD-Net). This model is built upon the classical encoder–decoder framework and introduces two key components: the GCN OA-SWinT Dense Module (GSTDM) and the Res DualAttention Dense Fusion Block (R-DDFB). Additionally, it incorporates Digital Surface Model (DSM) data, presenting a novel feature extraction and fusion strategy. Specifically, the model enhances building extraction accuracy and robustness through hierarchical feature modeling and a refined cross-scale fusion mechanism, while effectively preserving both detail information and global semantic relationships. Furthermore, we propose a Hybrid Loss, which combines Binary Cross-Entropy Loss (BCE Loss), Dice Loss, and an edge-sensitive term to further improve the precision of building edges and foreground reconstruction capabilities. Experiments conducted on the GF-7 and WHU datasets validate the performance of MMRAD-Net, demonstrating its superiority over traditional methods in boundary handling, detail recovery, and adaptability to complex scenes. On the GF-7 Dataset, MMRAD-Net achieved an F1-score of 91.12% and an IoU of 83.01%. On the WHU Building Dataset, the F1-score and IoU were 94.04% and 88.99%, respectively. Ablation studies and transfer learning experiments further confirm the rationality of the model design and its strong generalization ability. 
These results highlight that innovations in multi-source data fusion, multi-scale feature modeling, and detailed feature fusion mechanisms have enhanced the accuracy and robustness of building extraction. Full article
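The Hybrid Loss described above combines per-pixel binary cross-entropy with region-overlap Dice. A simplified NumPy sketch that omits the edge-sensitive term (the weights and example arrays are illustrative, not the paper's):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over all pixels."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def dice_loss(pred, target, eps=1e-7):
    """1 - Dice coefficient; penalizes poor region overlap rather than
    per-pixel error, which helps with foreground/background imbalance."""
    inter = np.sum(pred * target)
    return 1 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def hybrid_loss(pred, target, w_bce=0.5, w_dice=0.5):
    # Edge-sensitive term omitted; equal weights are illustrative.
    return w_bce * bce_loss(pred, target) + w_dice * dice_loss(pred, target)

target = np.array([[0.0, 1.0], [1.0, 1.0]])   # toy building mask
good = np.array([[0.1, 0.9], [0.8, 0.9]])     # confident, mostly correct
bad = np.array([[0.9, 0.1], [0.2, 0.1]])      # confident, mostly wrong
l_good = hybrid_loss(good, target)
l_bad = hybrid_loss(bad, target)
```

Mixing the two terms lets BCE drive per-pixel calibration while Dice keeps small building footprints from being swamped by the background class.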

13 pages, 2178 KB  
Article
A Novel Method Combining U-Net with LSTM for Three-Dimensional Soil Pore Segmentation Based on Computed Tomography Images
by Lei Liu, Qiaoling Han, Yue Zhao and Yandong Zhao
Appl. Sci. 2024, 14(8), 3352; https://doi.org/10.3390/app14083352 - 16 Apr 2024
Cited by 5 | Viewed by 2561
Abstract
The non-destructive study of soil micromorphology via computed tomography (CT) imaging has yielded significant insights into the three-dimensional configuration of soil pores. Precise pore analysis is contingent on the accurate transformation of CT images into binary image representations. Notably, segmentation of 2D CT images frequently harbors inaccuracies. This paper introduces a novel three-dimensional pore segmentation method, BDULSTM, which integrates U-Net with convolutional long short-term memory (CLSTM) networks to harness sequence data from CT images and enhance the precision of pore segmentation. The BDULSTM method employs an encoder–decoder framework to holistically extract image features, utilizing skip connections to further refine the segmentation accuracy of soil structure. Specifically, the CLSTM component, critical for analyzing sequential information in soil CT images, is strategically positioned at the juncture of the encoder and decoder within the U-shaped network architecture. The validation of our method confirms its efficacy in advancing the accuracy of soil pore segmentation beyond that of previous deep learning techniques, such as U-Net and CLSTM independently. Indeed, BDULSTM exhibits superior segmentation capabilities across a diverse array of soil conditions. In summary, BDULSTM represents a state-of-the-art artificial intelligence technology for the 3D segmentation of soil pores and offers a promising tool for analyzing pore structure and soil quality. Full article
(This article belongs to the Special Issue New Insights into Digital Image Processing and Denoising)

26 pages, 4583 KB  
Article
An Overlay Accelerator of DeepLab CNN for Spacecraft Image Segmentation on FPGA
by Zibo Guo, Kai Liu, Wei Liu, Xiaoyao Sun, Chongyang Ding and Shangrong Li
Remote Sens. 2024, 16(5), 894; https://doi.org/10.3390/rs16050894 - 2 Mar 2024
Cited by 8 | Viewed by 3920
Abstract
Due to the absence of communication and coordination with external spacecraft, non-cooperative spacecraft present challenges for the servicing spacecraft in acquiring information about their pose and location. The accurate segmentation of non-cooperative spacecraft components in images is a crucial step in autonomously sensing the pose of non-cooperative spacecraft. This paper presents a novel overlay accelerator of DeepLab Convolutional Neural Networks (CNNs) for spacecraft image segmentation on an FPGA. First, several software–hardware co-design aspects are investigated: (1) A CNNs-domain COD instruction set (Control, Operation, Data Transfer) is presented based on a Load–Store architecture to enable the implementation of accelerator overlays. (2) An RTL-based prototype accelerator is developed for the COD instruction set. The accelerator incorporates dedicated units for instruction decoding and dispatch, scheduling, memory management, and operation execution. (3) A compiler is designed that leverages tiling and operation fusion techniques to optimize the execution of CNNs, generating binary instructions for the optimized operations. Our accelerator is implemented on a Xilinx Virtex-7 XC7VX690T FPGA at 200 MHz. Experiments demonstrate that with INT16 quantization our accelerator achieves an accuracy (mIoU) of 77.84%, experiencing only a 0.2% degradation compared to that of the original full-precision model, in accelerating the segmentation model of DeepLabv3+ ResNet18 on the spacecraft component images (SCIs) dataset. The accelerator boasts a performance of 184.19 GOPS/s and a computational efficiency (Runtime Throughput/Theoretical Roof Throughput) of 88.72%. Compared to previous work, our accelerator improves performance by 1.5× and computational efficiency by 43.93%, all while consuming similar hardware resources. 
Additionally, in terms of instruction encoding, our instructions reduce the size by 1.5× to 49× when compiling the same model compared to previous work. Full article
(This article belongs to the Special Issue Remote Sensing Image Classification and Semantic Segmentation)

17 pages, 2095 KB  
Article
Link Prediction for Temporal Heterogeneous Networks Based on the Information Lifecycle
by Jiaping Cao, Jichao Li and Jiang Jiang
Mathematics 2023, 11(16), 3541; https://doi.org/10.3390/math11163541 - 16 Aug 2023
Cited by 3 | Viewed by 2223
Abstract
Link prediction for temporal heterogeneous networks is an important task in the field of network science, and it has a wide range of real-world applications. Traditional link prediction methods are mainly based on static homogeneous networks, which do not distinguish between different types of nodes in the real world and do not account for network structure evolution over time. To address these issues, in this paper, we study the link prediction problem in temporal heterogeneous networks and propose a link prediction method for temporal heterogeneous networks (LP-THN) based on the information lifecycle, which is an end-to-end encoder–decoder structure. The information lifecycle accounts for the active, decay and stable states of edges. Specifically, we first introduce the meta-path augmented residual information matrix to preserve the structure evolution mechanism and semantics in HINs, using it as input to the encoder to obtain a low-dimensional embedding representation of the nodes. Finally, the link prediction problem is considered a binary classification problem, and the decoder is utilized for link prediction. Our prediction process accounts for both network structure and semantic changes using meta-path augmented residual information matrix perturbations. Our experiments demonstrate that LP-THN outperforms other baselines in both prediction effectiveness and prediction efficiency. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)

27 pages, 5182 KB  
Article
Output Feedback Control of Sine-Gordon Chain over the Limited Capacity Digital Communication Channel
by Boris Andrievsky, Yury Orlov and Alexander L. Fradkov
Electronics 2023, 12(10), 2269; https://doi.org/10.3390/electronics12102269 - 17 May 2023
Cited by 3 | Viewed by 1456
Abstract
With the digitalization of mechatronic systems in the conditions of a shortage of available bandwidth of digital communication channels, the problem of ensuring the transfer of information between various components of the system can arise. This problem can be especially challenging in the observation and control of spatially distributed objects due to the complexity of their dynamics, wide frequency band, and other factors. In such cases, a useful approach is to employ smart sensors, in which the measurement results are encoded for transmission over a digital communication channel. Specifically, the article is focused on the transmission of measurement data for the control of energy for a spatially-distributed sine-Gordon chain. The procedures for binary coding of measurements by first- and full-order coder-decoder pairs are proposed and numerically investigated, for each of which the use of stationary and adaptive coding procedures is studied. The procedures for estimating the state of the circuit when measuring outputs are studied, and for each of them, the accuracy of not only estimating the state but also controlling the system by output with the help of an observer is considered. The results of comparative modeling are presented, demonstrating the dependence of the accuracy of estimation and control on the data transfer rate. Full article
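A stationary binary coder of the kind discussed above can be as simple as a fixed-range uniform quantizer that turns each measurement into a fixed-width codeword; adaptive variants rescale the range over time. A minimal sketch (the range, bit width, and sample value are illustrative):

```python
def encode(x, lo, hi, bits):
    """Quantize x in [lo, hi] to a fixed-width binary codeword."""
    levels = (1 << bits) - 1
    x = min(max(x, lo), hi)                  # clamp to the coder's range
    q = round((x - lo) / (hi - lo) * levels)
    return format(q, f"0{bits}b")

def decode(word, lo, hi):
    """Map the codeword back to its reconstruction level."""
    levels = (1 << len(word)) - 1
    return lo + int(word, 2) / levels * (hi - lo)

# A smart sensor transmits only the codeword over the digital channel:
word = encode(0.731, -1.0, 1.0, 8)   # illustrative measurement in [-1, 1]
xhat = decode(word, -1.0, 1.0)       # receiver-side reconstruction
```

The bit width directly trades channel rate against quantization error (at most half a quantization step), which is the rate–accuracy dependence the comparative modeling above examines.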

19 pages, 4572 KB  
Article
Neutralization Method of Ransomware Detection Technology Using Format Preserving Encryption
by Jaehyuk Lee, Sun-Young Lee, Kangbin Yim and Kyungroul Lee
Sensors 2023, 23(10), 4728; https://doi.org/10.3390/s23104728 - 13 May 2023
Cited by 8 | Viewed by 2748
Abstract
Ransomware is one type of malware that involves restricting access to files by encrypting files stored on the victim’s system and demanding money in return for file recovery. Although various ransomware detection technologies have been introduced, existing ransomware detection technologies have certain limitations and problems that affect their detection ability. Therefore, there is a need for new detection technologies that can overcome the problems of existing detection methods and minimize the damage from ransomware. A technology has been proposed that detects files infected by ransomware by measuring the entropy of files. However, from an attacker’s point of view, neutralization technology can bypass detection through neutralization using entropy. A representative neutralization method is one that involves decreasing the entropy of encrypted files by using an encoding technology such as base64. This technology also makes it possible to detect files that are infected by ransomware by measuring entropy after decoding the encoded files, which, in turn, means the failure of the ransomware detection-neutralization technology. Therefore, this paper derives three requirements for a more sophisticated ransomware detection-neutralization method from the perspective of an attacker. These requirements are (1) it must not be decoded; (2) it must support encryption using secret information; and (3) the entropy of the generated ciphertext must be similar to that of plaintext. The proposed neutralization method satisfies these requirements, supports encryption without decoding, and applies format-preserving encryption that can adjust the input and output lengths. 
To overcome the limitations of neutralization technology using the encoding algorithm, we utilized format-preserving encryption, which could allow the attacker to manipulate the entropy of the ciphertext as desired by changing the expression range of numbers and controlling the input and output lengths in a very free manner. To apply format-preserving encryption, Byte Split, BinaryToASCII, and Radix Conversion methods were evaluated, and an optimal neutralization method was derived based on the experimental results of these three methods. As a result of the comparative analysis of the neutralization performance with existing studies, when the entropy threshold value was 0.5 in the Radix Conversion method, which was the optimal neutralization method derived from the proposed study, the neutralization accuracy was improved by 96% based on the PPTX file format. The results of this study provide clues for future studies to derive a plan to counter the technology that can neutralize ransomware detection technology. Full article
(This article belongs to the Special Issue Network Security and IoT Security)
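The entropy-based detection idea above rests on a simple fact: ciphertext looks uniformly random (near 8 bits per byte), whereas base64 output draws from only 64 symbols and so cannot exceed about 6 bits per byte. A minimal sketch of the Shannon entropy measurement:

```python
import base64, math, os
from collections import Counter

def byte_entropy(data):
    """Shannon entropy in bits per byte (maximum 8 for uniform random data)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random_bytes = os.urandom(65536)          # stands in for ransomware ciphertext
encoded = base64.b64encode(random_bytes)  # 64-symbol alphabet caps the entropy

high = byte_entropy(random_bytes)   # close to 8 bits/byte
low = byte_entropy(encoded)         # close to 6 bits/byte
```

This gap is exactly what a defender exploits after decoding, and what format-preserving encryption (as proposed above) avoids by letting the attacker shape the ciphertext's symbol range directly.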

26 pages, 1939 KB  
Review
A Survey of Full-Cycle Cross-Modal Retrieval: From a Representation Learning Perspective
by Suping Wang, Ligu Zhu, Lei Shi, Hao Mo and Songfu Tan
Appl. Sci. 2023, 13(7), 4571; https://doi.org/10.3390/app13074571 - 4 Apr 2023
Cited by 5 | Viewed by 7724
Abstract
Cross-modal retrieval aims to elucidate information fusion, imitate human learning, and advance the field. Although previous reviews have primarily focused on binary and real-value coding methods, there is a scarcity of techniques grounded in deep representation learning. In this paper, we concentrated on harmonizing cross-modal representation learning and the full-cycle modeling of high-level semantic associations between vision and language, diverging from traditional statistical methods. We systematically categorized and summarized the challenges and open issues in implementing current technologies and investigated the pipeline of cross-modal retrieval, including pre-processing, feature engineering, pre-training tasks, encoding, cross-modal interaction, decoding, model optimization, and a unified architecture. Furthermore, we propose benchmark datasets and evaluation metrics to assist researchers in keeping pace with cross-modal retrieval advancements. By incorporating recent innovative works, we offer a perspective on potential advancements in cross-modal retrieval. Full article
(This article belongs to the Special Issue Advances in Intelligent Information Systems and AI Applications)

20 pages, 3223 KB  
Article
A Novel Cipher-Based Data Encryption with Galois Field Theory
by Mohammad Mazyad Hazzazi, Sasidhar Attuluri, Zaid Bassfar and Kireet Joshi
Sensors 2023, 23(6), 3287; https://doi.org/10.3390/s23063287 - 20 Mar 2023
Cited by 22 | Viewed by 3741
Abstract
Cryptography encompasses both the practice of keeping information secret and the study of how to achieve it. "Information security" refers to the study and use of methods that make data transfers harder to intercept; encrypting and decrypting messages with private keys is part of this procedure. Because of its central role in modern information theory, computer security, and engineering, cryptography is now considered a branch of both mathematics and computer science. The mathematical properties of the Galois field make it useful for encrypting and decrypting information: data can be encoded as a Galois vector, and the scrambling process can apply invertible mathematical operations. While unsafe on its own, this approach forms the foundation of secure symmetric algorithms such as AES and DES when combined with bit-shuffling methods. In the proposed work, a two-by-two encryption matrix protects two data streams, each containing 25 bits of binary information; each cell of the matrix represents an irreducible polynomial of degree 6. Fine-tuning the bit values of the two 25-bit binary data streams using the Discrete Cosine Transform (DCT) with the Advanced Encryption Standard (AES) method yields two polynomials of degree 6. The Black Widow Optimization technique is used to tune key generation in the cryptographic processing, producing two polynomials of the same degree, which was the original aim.
Cryptography also lets users detect tampering, for example, whether a hacker gained unauthorized access to a patient's medical records and altered them. Users can likewise positively identify remote people and objects, which is especially useful for verifying a document's authenticity, since it reduces the possibility of fabrication. The proposed work achieves a higher accuracy of 97.24%, a higher throughput of 93.47%, and a minimum decryption time of 0.0047 s. Full article

22 pages, 394 KB  
Article
Private Key and Decoder Side Information for Secure and Private Source Coding
by Onur Günlü, Rafael F. Schaefer, Holger Boche and Harold Vincent Poor
Entropy 2022, 24(12), 1716; https://doi.org/10.3390/e24121716 - 24 Nov 2022
Cited by 5 | Viewed by 2724
Abstract
We extend the problem of secure source coding by considering a remote source whose noisy measurements are correlated random variables used for secure source reconstruction. The main additions to the problem are as follows: (1) all terminals noncausally observe a noisy measurement of the remote source; (2) a private key is available to all legitimate terminals; (3) the public communication link between the encoder and decoder is rate-limited; and (4) the secrecy leakage to the eavesdropper is measured with respect to the encoder input, whereas the privacy leakage is measured with respect to the remote source. Exact rate regions are characterized for a lossy source coding problem with a private key, remote source, and decoder side information under security, privacy, communication, and distortion constraints. By replacing the distortion constraint with a reliability constraint, we obtain the exact rate region for the lossless case as well. Furthermore, the lossy rate region for scalar discrete-time Gaussian sources and measurement channels is established. An achievable lossy rate region that can be numerically computed is also provided for binary-input multiple additive discrete-time Gaussian noise measurement channels. Full article
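The trade-off behind replacing a distortion constraint with a reliability constraint can be illustrated with the classical rate-distortion function of a binary source (a textbook baseline, not the rate region characterized in this paper): as allowed Hamming distortion D grows, the required rate falls from the lossless limit h(p) down to zero.

```python
import math

def h2(p: float) -> float:
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion_binary(p: float, d: float) -> float:
    """R(D) = h(p) - h(D) for a Bernoulli(p) source under Hamming distortion,
    valid for 0 <= D < min(p, 1 - p); R(D) = 0 beyond that range."""
    if d >= min(p, 1 - p):
        return 0.0
    return h2(p) - h2(d)

# D = 0 recovers the lossless (reliability-constrained) limit R = h(p).
print(rate_distortion_binary(0.5, 0.0))   # 1.0 bit per symbol
print(rate_distortion_binary(0.5, 0.11))  # ~0.5: tolerating 11% errors halves the rate
```

In the paper's setting this single curve becomes a multi-dimensional region, since rate must additionally trade off against secrecy leakage, privacy leakage, and the key rate.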
(This article belongs to the Special Issue Information Theoretic Methods for Future Communication Systems)
