Search Results (104)

Search Parameters:
Keywords = confusion entropy

16 pages, 365 KB  
Article
Disentangling Brillouin’s Negentropy Law of Information and Landauer’s Law on Data Erasure
by Didier Lairez
Entropy 2026, 28(1), 37; https://doi.org/10.3390/e28010037 - 27 Dec 2025
Viewed by 196
Abstract
The link between information and energy introduces the observer and their knowledge into the understanding of a fundamental quantity in physics. Two approaches compete to account for this link: Brillouin's negentropy law of information and Landauer's law on data erasure. The two are often confused. The first, based on Clausius' inequality and Shannon's mathematical results, is very robust, whereas the second, based on the simple idea that information requires a material embodiment (data bits), is now perceived as more physical and therefore prevails. In this paper, we show that Landauer's idea results from a confusion between information (a global emergent concept) and data (a local material object). This confusion leads to many inconsistencies and is incompatible with thermodynamics and information theory. We interpret its prevalence as a symptom of materialism's frequent tendency towards reductionism, which neglects emergence and seeks to eliminate the role of the observer. This trend is paradoxical, considering that it is often accompanied by the materialist idea that all scientific knowledge nevertheless originates from observation. Information and entropy are actually emergent quantities introduced into the theory by convention. Full article

19 pages, 2700 KB  
Article
Content Generation Through the Integration of Markov Chains and Semantic Technology (CGMCST)
by Liliana Ibeth Barbosa-Santillán and Edgar León-Sandoval
Appl. Sci. 2025, 15(23), 12687; https://doi.org/10.3390/app152312687 - 30 Nov 2025
Viewed by 476
Abstract
In today’s rapidly evolving digital landscape, businesses are constantly under pressure to produce high-quality, engaging content for various marketing channels, including blog posts, social media updates, and email campaigns. However, the traditional manual content generation process is often time-consuming, resource-intensive, and inconsistent in maintaining the desired messaging and tone. As a result, the content production process can become a bottleneck, delay marketing campaigns, and reduce organizational agility. Furthermore, manual content generation introduces the risk of inconsistencies in tone, style, and messaging across different platforms and pieces of content. These inconsistencies can confuse the audience and dilute the message. We propose a hybrid approach for content generation based on the integration of Markov Chains with Semantic Technology (CGMCST). Based on the probabilistic nature of Markov chains, this approach allows an automated system to predict sequences of words and phrases, thereby generating coherent and contextually accurate content. Moreover, the application of semantic technology ensures that the generated content is semantically rich and maintains a consistent tone and style. Consistency across all marketing materials strengthens the message and enhances audience engagement. Automated content generation can scale effortlessly to meet increasing demands. The algorithm obtained an entropy of 9.6896 for the stationary distribution, indicating that the model can predict the next word in sequences and generate coherent, contextually appropriate content, supporting the efficacy of the CGMCST approach. The simulation was executed for a fixed time of 10,000 cycles, with weights based on the top three topics; these weights are determined both by the global document index and by term. Among the top keywords ranked by stationary probability, “people” has a stationary probability of 0.004398. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
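The reported entropy is the Shannon entropy of the chain's stationary distribution. A minimal sketch of that computation, assuming a toy three-word transition matrix (all values illustrative, not taken from the paper):

```python
import numpy as np

def stationary_distribution(P: np.ndarray) -> np.ndarray:
    """Left eigenvector of the transition matrix P for eigenvalue 1."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = np.abs(pi)
    return pi / pi.sum()

def entropy_bits(pi: np.ndarray) -> float:
    """Shannon entropy of a distribution, in bits."""
    p = pi[pi > 0]
    return float(-(p * np.log2(p)).sum())

# Toy 3-word chain; rows sum to 1.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
pi = stationary_distribution(P)
print(pi, entropy_bits(pi))
```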

17 pages, 3941 KB  
Article
Deep Learning-Based Citrus Canker and Huanglongbing Disease Detection Using Leaf Images
by Maryjose Devora-Guadarrama, Benjamín Luna-Benoso, Antonio Alarcón-Paredes, Jose Cruz Martínez-Perales and Úrsula Samantha Morales-Rodríguez
Computers 2025, 14(11), 500; https://doi.org/10.3390/computers14110500 - 17 Nov 2025
Viewed by 670
Abstract
Early detection of plant diseases is key to ensuring food production, reducing economic losses, minimizing the use of agrochemicals, and maintaining the sustainability of the agricultural sector. Citrus plants, an important source of vitamin C, fiber, and antioxidants, are among the world’s most significant fruit crops but face threats such as canker and Huanglongbing (HLB), incurable diseases that require management strategies to mitigate their impact. Manual diagnosis, although common, is imprecise, slow, and costly; therefore, efficient alternatives are emerging to identify diseases from early stages using Artificial Intelligence techniques. In this study, we evaluated four convolutional neural network models (DenseNet121, ResNet50, EfficientNetB0, and MobileNetV2) to detect canker and HLB in citrus leaf images. We applied preprocessing and data-augmentation techniques; transfer learning via selective fine-tuning; stratified k-fold cross-validation; regularization methods such as dropout and weight decay; and hyperparameter-optimization techniques. The models were evaluated by the loss value and by metrics derived from the confusion matrix, including accuracy, recall, and F1-score. The best-performing model was EfficientNetB0, which achieved an average accuracy of 99.88% and the lowest loss value of 0.0058 using cross-entropy as the loss function. Since EfficientNetB0 is a lightweight model, these results show that lightweight models can achieve performance comparable to heavier models and can be useful for disease detection in the agricultural sector using portable devices or drones for field monitoring. The high accuracy obtained is mainly because only two diseases were considered; consequently, these results may not hold for a database that includes a larger number of diseases. Full article
(This article belongs to the Section AI-Driven Innovations)
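The evaluation metrics named above all derive from the confusion matrix. A minimal sketch using scikit-learn, with hypothetical label arrays for the three classes (canker, HLB, healthy):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, recall_score, f1_score

# Hypothetical labels for a 3-class problem: 0 = canker, 1 = HLB, 2 = healthy.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])

print(confusion_matrix(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))
print("recall  :", recall_score(y_true, y_pred, average="macro"))
print("F1      :", f1_score(y_true, y_pred, average="macro"))
```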

38 pages, 1324 KB  
Article
A Systematic Approach to Exergy Efficiency of Steady-Flow Systems
by Yunus A. Çengel and Mehmet Kanoğlu
Entropy 2025, 27(11), 1108; https://doi.org/10.3390/e27111108 - 26 Oct 2025
Viewed by 1054
Abstract
Exergy efficiency is a measure of thermodynamic perfection. A device that operates reversibly has an exergy efficiency of 100 percent and is said to be thermodynamically perfect. A reversible process involves zero entropy generation and thus zero exergy destruction, since $X_{\text{destroyed}} = T_0 S_{\text{gen}}$. Exergy efficiency is generally defined as the ratio of exergy output to exergy input, $\eta_{\text{ex}} = X_{\text{output}}/X_{\text{input}} = 1 - (X_{\text{destroyed}} + X_{\text{loss}})/X_{\text{input}}$, or as the ratio of exergy recovered to exergy expended, $\eta_{\text{ex}} = X_{\text{recovered}}/X_{\text{expended}} = 1 - X_{\text{destroyed}}/X_{\text{expended}}$. In this paper, exergy efficiency relations are obtained first for a general steady-flow system using both approaches. Then, explicit general relations are obtained for common steady-flow devices, such as turbines, compressors, pumps, nozzles, diffusers, valves and heat exchangers, as well as heat engines, refrigerators, and heat pumps. For power and refrigeration cycles, five different forms of exergy efficiency relations are developed, and their equivalence is demonstrated. With the unified approach presented here and the insights provided, the controversy and confusion associated with different exergy efficiency definitions are largely alleviated. Full article
(This article belongs to the Section Thermodynamics)
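For reference, both efficiency forms follow from the standard steady-flow exergy balance; a brief sketch of that accounting (textbook form, not a relation quoted from the paper):

```latex
% Steady-flow exergy balance: input is either recovered as output,
% destroyed by irreversibility, or lost to the surroundings.
X_{\text{input}} = X_{\text{output}} + X_{\text{destroyed}} + X_{\text{loss}},
\qquad X_{\text{destroyed}} = T_0 S_{\text{gen}} \ge 0 .
% Dividing through by X_input gives the first efficiency form:
\eta_{\text{ex}} = \frac{X_{\text{output}}}{X_{\text{input}}}
                 = 1 - \frac{X_{\text{destroyed}} + X_{\text{loss}}}{X_{\text{input}}}.
```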

44 pages, 5324 KB  
Article
Secure Chaotic Cryptosystem for 3D Medical Images
by Antonios S. Andreatos and Apostolos P. Leros
Mathematics 2025, 13(20), 3310; https://doi.org/10.3390/math13203310 - 16 Oct 2025
Cited by 1 | Viewed by 621
Abstract
This study proposes a lightweight double-encryption cryptosystem for 3D medical images such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Computed Tomography (CT) scans. The first encryption process uses chaotic pseudo-random numbers produced by a Lorenz chaotic system, while the second applies Cipher Block Chaining (CBC) mode using outputs from a Pseudo-Random Number Generator (PRNG). To enhance diffusion and confusion, additional voxel shuffling and bit rotation operations are incorporated. Various sets of optimized parameters for the Lorenz system are calculated using either a genetic algorithm or a random walk. The master key of the cryptosystem is 672 bits long and consists of two components: the first is the SHA-512 hash of the input image, while the second consists of the initial conditions of the Lorenz chaotic system and is 160 bits long. The master key is processed by a function that generates fourteen subkeys, which are then used in different stages of the algorithm. The cryptosystem exhibits excellent performance in terms of entropy, NPCR, UACI, key sensitivity, security, and speed. It ensures the confidentiality of personal medical data and resilience against advanced computational threats, making it a good candidate for real-time 3D medical image encryption in healthcare systems. Full article
(This article belongs to the Special Issue Mathematical Computation for Pattern Recognition and Computer Vision)
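A minimal sketch of the first (chaotic XOR) stage, assuming SciPy; the initial-condition derivation, transient length, and byte mapping here are illustrative stand-ins, not the paper's scheme:

```python
import hashlib
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z), x * y - beta * z]

def keystream(seed: bytes, n: int) -> np.ndarray:
    """Derive Lorenz initial conditions from a hash of the key material,
    then map the trajectory to bytes. Illustrative only."""
    h = hashlib.sha512(seed).digest()
    x0 = [1 + h[i] / 255.0 for i in range(3)]   # crude IC derivation
    t = np.linspace(0, 50, n + 1000)
    sol = solve_ivp(lorenz, (t[0], t[-1]), x0, t_eval=t, rtol=1e-9)
    samples = sol.y[0][1000:]                   # discard the transient
    return (np.abs(samples) * 1e6 % 256).astype(np.uint8)[:n]

voxels = np.random.randint(0, 256, 64, dtype=np.uint8)  # stand-in image data
cipher = voxels ^ keystream(b"demo-key", voxels.size)   # XOR first stage
```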

17 pages, 2165 KB  
Article
Seizure Type Classification Based on Hybrid Feature Engineering and Mutual Information Analysis Using Electroencephalogram
by Yao Miao
Entropy 2025, 27(10), 1057; https://doi.org/10.3390/e27101057 - 11 Oct 2025
Viewed by 843
Abstract
Epilepsy has diverse seizure types that challenge diagnosis and treatment, requiring automated and accurate classification to improve patient outcomes. Traditional electroencephalogram (EEG)-based diagnosis relies on manual interpretation, which is subjective and inefficient, particularly for multi-class differentiation in imbalanced datasets. This study aims to develop a hybrid framework for automated multi-class seizure type classification using segment-wise EEG processing and multi-band feature engineering to enhance precision and address data challenges. EEG signals from the TUSZ dataset were segmented into 1-s windows with 0.5-s overlaps, followed by the extraction of multi-band features, including statistical measures, sample entropy, wavelet energies, Hurst exponent, and Hjorth parameters. The mutual information (MI) approach was employed to select the optimal features, and seven machine learning models (SVM, KNN, DT, RF, XGBoost, CatBoost, LightGBM) were evaluated via 10-fold stratified cross-validation with a class balancing strategy. The results showed the following: (1) XGBoost achieved the highest performance (accuracy: 0.8710, F1 score: 0.8721, AUC: 0.9797), with γ-band features dominating importance. (2) Confusion matrices indicated robust discrimination but noted overlaps in focal subtypes. This framework advances seizure type classification by integrating multi-band features and the MI method, which offers a scalable and interpretable tool for supporting clinical epilepsy diagnostics. Full article
(This article belongs to the Section Signal and Data Analysis)
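A minimal sketch of MI-based feature selection with scikit-learn, assuming a hypothetical window-by-feature matrix in place of the TUSZ features:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Hypothetical feature matrix: rows are 1-s EEG windows, columns are
# multi-band features (statistics, sample entropy, wavelet energies, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = rng.integers(0, 4, size=500)        # four stand-in seizure types

selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_top = selector.transform(X)           # keep the 10 highest-MI features
print(selector.get_support(indices=True))
```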

16 pages, 1094 KB  
Article
Recognition of EEG Features in Autism Disorder Using SWT and Fisher Linear Discriminant Analysis
by Fahmi Fahmi, Melinda Melinda, Prima Dewi Purnamasari, Elizar Elizar and Aufa Rafiki
Diagnostics 2025, 15(18), 2291; https://doi.org/10.3390/diagnostics15182291 - 10 Sep 2025
Cited by 1 | Viewed by 1395
Abstract
Background/Objectives: An ASD diagnosis from EEG is challenging due to non-stationary, low-SNR signals and small cohorts. We propose a compact, interpretable pipeline that pairs a shift-invariant Stationary Wavelet Transform (SWT) with Fisher’s Linear Discriminant (FLDA) as a supervised projection method, delivering band-level insight and subject-wise evaluation suitable for resource-constrained clinics. Methods: EEG from the KAU dataset (eight ASD, eight controls; 256 Hz) was decomposed with SWT (db4). We retained levels 3, 4, and 6 (γ/β/θ) as features. FLDA learned a low-dimensional discriminant subspace, followed by a linear decision rule. Evaluation was conducted using a subject-wise 70/30 split (no subject overlap) with accuracy, precision, recall, F1, and confusion matrices. Results: The β band (Level 4) achieved the best performance (accuracy/precision/recall/F1 = 0.95), followed by γ (0.92) and θ (0.85). Despite partial overlap in FLDA scores, the projection maximized between-class separation relative to within-class variance, yielding robust linear decisions. Conclusions: Unlike earlier FLDA-only pipelines and wavelet–entropy–ANN approaches, our study (1) employs SWT (undecimated, shift-invariant) rather than DWT to stabilize sub-band features on short resting segments, (2) uses FLDA as a supervised projection to mitigate small-sample covariance pathologies before classification, (3) provides band-specific discriminative insight (β > γ/θ) under a subject-wise protocol, and (4) targets low-compute deployment. These choices yield a reproducible baseline with competitive accuracy and clear clinical interpretability. Future work will benchmark kernel/regularized discriminants and lightweight deep models as cohort size and compute permit. Full article
(This article belongs to the Special Issue Advances in the Diagnosis of Nervous System Diseases—3rd Edition)
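A minimal sketch pairing PyWavelets' shift-invariant SWT with scikit-learn's LDA; the retained levels mirror the paper's choice (3, 4, 6), but the band-energy feature and the synthetic segments are illustrative assumptions:

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical single-channel EEG segments at 256 Hz; length must be
# divisible by 2**level for pywt.swt.
rng = np.random.default_rng(1)
segments = rng.normal(size=(16, 1024))
labels = np.repeat([0, 1], 8)                    # ASD vs. control stand-ins

def swt_band_energy(x, wavelet="db4", level=6, keep=(3, 4, 6)):
    coeffs = pywt.swt(x, wavelet, level=level)   # [(cA_n, cD_n), ...]
    # pywt lists the deepest level first; reverse so details[0] = level 1.
    details = [cD for (_, cD) in coeffs][::-1]
    return [np.sum(details[l - 1] ** 2) for l in keep]

X = np.array([swt_band_energy(s) for s in segments])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(clf.score(X, labels))
```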

16 pages, 1932 KB  
Article
2.5D Deep Learning and Machine Learning for Discriminative DLBCL and IDC with Radiomics on PET/CT
by Fei Liu, Wen Chen, Jianping Zhang, Jianling Zou, Bingxin Gu, Hongxing Yang, Silong Hu, Xiaosheng Liu and Shaoli Song
Bioengineering 2025, 12(8), 873; https://doi.org/10.3390/bioengineering12080873 - 12 Aug 2025
Cited by 1 | Viewed by 1506
Abstract
We aimed to establish non-invasive diagnostic models comparable to pathology testing and explore reliable digital imaging biomarkers to classify diffuse large B-cell lymphoma (DLBCL) and invasive ductal carcinoma (IDC). Our study enrolled 386 breast nodules from 279 patients with DLBCL and IDC, which were pathologically confirmed and underwent ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) positron emission tomography/computed tomography (PET/CT) examination. Patients from two centers were separated into internal and external cohorts. Notably, we introduced 2.5D deep learning and machine learning to extract features, develop models, and discover biomarkers. Performance was assessed using the area under the curve (AUC) and the confusion matrix. Additionally, the Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) techniques were employed to interpret the model. On the internal cohort, the optimal model PT_TDC_SVM achieved an accuracy of 0.980 (95% confidence interval (CI): 0.957–0.991) and an AUC of 0.992 (95% CI: 0.946–0.998), surpassing the other models. On the external cohort, the accuracy was 0.975 (95% CI: 0.913–0.993) and the AUC was 0.996 (95% CI: 0.972–0.999). The optimal imaging biomarker PET_LBP-2D_gldm_DependenceEntropy demonstrated an average accuracy of 0.923/0.937 on internal/external testing. Our study presents an innovative automated model for DLBCL and IDC, identifying reliable digital imaging biomarkers with significant potential. Full article
(This article belongs to the Section Biosignal Processing)

20 pages, 1350 KB  
Article
Beyond the Second Law: Darwinian Evolution as a Tendency for Entropy Production to Increase
by Charles H. Lineweaver
Entropy 2025, 27(8), 850; https://doi.org/10.3390/e27080850 - 11 Aug 2025
Cited by 1 | Viewed by 3037
Abstract
There is much confusion about the apparent opposition between Darwinian evolution and the second law of thermodynamics. Both entropy and entropy production play more fundamental roles in the origin of life and Darwinian evolution than is generally recognized. I argue that Darwinian evolution can be understood as a tendency for entropy production to increase. Since the second law is about the increase in entropy, this hypothesis goes beyond the second law because it is about the increase in entropy production. This hypothesis can explain some aspects of biology that Darwinism struggles with, such as the origin of life, the origin of Darwinism, ecological successions, and an apparent general trend towards biological complexity. Gould proposed a wall of minimal complexity to explain this apparent increase in biological complexity. I argue that the apparent increase in biological complexity can be understood as a tendency for biological entropy production to increase through a broader range of free energy transduction mechanisms. In the context of a simple universe-in-a-cup-of-coffee model, entropy production is proposed as a more quantifiable replacement for the notion of complexity. Finally, I sketch the cosmic history of entropy production, which suggests that increases and decreases of free energy availability constrain the tendency for entropy production to increase. Full article

23 pages, 3561 KB  
Article
Chaos-Based Color Image Encryption with JPEG Compression: Balancing Security and Compression Efficiency
by Wei Zhang, Xue Zheng, Meng Xing, Jingjing Yang, Hai Yu and Zhiliang Zhu
Entropy 2025, 27(8), 838; https://doi.org/10.3390/e27080838 - 6 Aug 2025
Cited by 1 | Viewed by 1163
Abstract
In recent years, most proposed digital image encryption algorithms have primarily focused on encrypting raw pixel data, often neglecting the integration with image compression techniques. Image compression algorithms, such as JPEG, are widely utilized in internet applications, highlighting the need for encryption methods that are compatible with compression processes. This study introduces an innovative color image encryption algorithm integrated with JPEG compression, designed to enhance the security of images susceptible to attacks or tampering during prolonged transmission. The research addresses critical challenges in achieving an optimal balance between encryption security and compression efficiency. The proposed encryption algorithm is structured around three key compression phases: Discrete Cosine Transform (DCT), quantization, and entropy coding. At each stage, the algorithm incorporates advanced techniques such as block segmentation, block replacement, DC coefficient confusion, non-zero AC coefficient transformation, and RSV (Run/Size and Value) pair recombination. Extensive simulations and security analyses demonstrate that the proposed algorithm exhibits strong robustness against noise interference and data loss, effectively meeting stringent security performance requirements. Full article
(This article belongs to the Section Multidisciplinary Applications)
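A minimal sketch of the transform stage this scheme operates on, assuming SciPy; the keyed DC permutation below is a stand-in for the paper's chaotic DC coefficient confusion:

```python
import numpy as np
from scipy.fft import dctn

def block_dct(img, size=8):
    """2D DCT on non-overlapping 8x8 blocks, as in JPEG's transform stage."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(0, h, size):
        for j in range(0, w, size):
            out[i:i+size, j:j+size] = dctn(img[i:i+size, j:j+size], norm="ortho")
    return out

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64)).astype(float)
coeffs = block_dct(img)

# Hypothetical "DC coefficient confusion": permute the DC terms of all
# blocks with a keyed permutation (stand-in for a chaotic sequence).
dc = coeffs[::8, ::8].copy()
perm = rng.permutation(dc.size)
coeffs[::8, ::8] = dc.ravel()[perm].reshape(dc.shape)
```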

27 pages, 4682 KB  
Article
DERIENet: A Deep Ensemble Learning Approach for High-Performance Detection of Jute Leaf Diseases
by Mst. Tanbin Yasmin Tanny, Tangina Sultana, Md. Emran Biswas, Chanchol Kumar Modok, Arjina Akter, Mohammad Shorif Uddin and Md. Delowar Hossain
Information 2025, 16(8), 638; https://doi.org/10.3390/info16080638 - 27 Jul 2025
Cited by 1 | Viewed by 1031
Abstract
Jute, a vital lignocellulosic fiber crop with substantial industrial and ecological relevance, continues to suffer considerable yield and quality degradation due to pervasive foliar pathologies. Traditional diagnostic modalities reliant on manual field inspections are inherently constrained by subjectivity, diagnostic latency, and inadequate scalability across geographically distributed agrarian systems. To transcend these limitations, we propose DERIENet, a robust and scalable classification approach within a deep ensemble learning framework. It is meticulously engineered by integrating three high-performing convolutional neural networks (ResNet50, InceptionV3, and EfficientNetB0), along with regularization, batch normalization, and dropout strategies, to accurately classify jute leaf images into categories such as Cercospora Leaf Spot, Golden Mosaic Virus, and healthy leaves. A key methodological contribution is the design of a novel augmentation pipeline, termed Geometric Localized Occlusion and Adaptive Rescaling (GLOAR), which dynamically modulates photometric and geometric distortions based on image entropy and luminance to synthetically upscale a limited dataset (920 images) into a significantly enriched and diverse dataset of 7800 samples, thereby mitigating overfitting and enhancing domain generalizability. Empirical evaluation, utilizing a comprehensive set of performance metrics (accuracy, precision, recall, F1-score, confusion matrices, and ROC curves), demonstrates that DERIENet achieves a state-of-the-art classification accuracy of 99.89%, with macro-averaged and weighted-average precision, recall, and F1-score uniformly at 99.89%, and an AUC of 1.0 across all disease categories. The reliability of the model is validated by the confusion matrix, which shows that 899 of the 900 test images were correctly identified, with only one misclassification. Comparative evaluations against ensemble baselines such as DenseNet201, MobileNetV2, and VGG16, as well as the individual base learners, demonstrate that DERIENet performs noticeably better than all baseline models. It provides a highly interpretable, deployment-ready, and computationally efficient architecture that is well suited to integration into edge or mobile platforms to facilitate in situ, real-time disease diagnostics in precision agriculture. Full article
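One simple way to fuse three backbones is probability averaging (soft voting); the sketch below is generic, with hypothetical softmax outputs, and does not claim to reproduce DERIENet's exact fusion rule:

```python
import numpy as np

# Hypothetical softmax outputs of three base CNNs (ResNet50, InceptionV3,
# EfficientNetB0) for 4 images over 3 classes; in practice these would
# come from each model's predict(...) call.
p_resnet = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1],
                     [0.3, 0.3, 0.4], [0.2, 0.5, 0.3]])
p_incept = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1],
                     [0.2, 0.2, 0.6], [0.1, 0.6, 0.3]])
p_effnet = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1],
                     [0.2, 0.3, 0.5], [0.3, 0.4, 0.3]])

ensemble = (p_resnet + p_incept + p_effnet) / 3.0  # soft voting
print(ensemble.argmax(axis=1))  # predicted class per image
```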

26 pages, 8232 KB  
Article
A CML-ECA Chaotic Image Encryption System Based on Multi-Source Perturbation Mechanism and Dynamic DNA Encoding
by Xin Xie, Kun Zhang, Bing Zheng, Hao Ning, Yu Zhou, Qi Peng and Zhengyu Li
Symmetry 2025, 17(7), 1042; https://doi.org/10.3390/sym17071042 - 2 Jul 2025
Cited by 1 | Viewed by 1089
Abstract
To meet the growing demand for secure and reliable image protection in digital communication, this paper proposes a novel image encryption framework that addresses the challenges of high plaintext sensitivity, resistance to statistical attacks, and key security. The method combines a two-dimensional dynamically coupled map lattice (2D DCML) with elementary cellular automata (ECA) to construct a heterogeneous chaotic system with strong spatiotemporal complexity. To further enhance nonlinearity and diffusion, a multi-source perturbation mechanism and adaptive DNA encoding strategy are introduced. These components work together to obscure the image structure, pixel correlations, and histogram characteristics. By embedding spatial and temporal symmetry into the coupled lattice evolution and perturbation processes, the proposed method ensures a more uniform and balanced transformation of image data. Meanwhile, the method enhances the confusion and diffusion effects by utilizing the principle of symmetric perturbation, thereby improving the overall security of the system. Experimental evaluations on standard images demonstrate that the proposed scheme achieves high encryption quality in terms of histogram uniformity, information entropy, NPCR, UACI, and key sensitivity tests. It also shows strong resistance to chosen plaintext attacks, confirming its robustness for secure image transmission. Full article
(This article belongs to the Section Computer)
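A minimal sketch of the DNA-encoding step, assuming one fixed complement-respecting rule; in a dynamic scheme like the paper's, the rule choice would itself be driven by the chaotic sequence:

```python
import numpy as np

# One of the eight standard complement-respecting DNA coding rules
# (fixed here for illustration).
RULE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def dna_encode(byte: int) -> str:
    """Encode one pixel byte as four bases, two bits per base (MSB first)."""
    return "".join(RULE[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0))

pixels = np.array([173, 42, 255], dtype=np.uint8)
print([dna_encode(int(p)) for p in pixels])  # e.g. 173 = 0b10101101 -> 'GGTC'
```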

21 pages, 8812 KB  
Article
A Three-Channel Improved SE Attention Mechanism Network Based on SVD for High-Order Signal Modulation Recognition
by Xujia Zhou, Gangyi Tu, Xicheng Zhu, Di Zhao and Luyan Zhang
Electronics 2025, 14(11), 2233; https://doi.org/10.3390/electronics14112233 - 30 May 2025
Viewed by 954
Abstract
To address the issues of poor differentiation capability for high-order signals and low average recognition rates in existing communication modulation recognition techniques, this paper first performs denoising using an entropy-based dynamic Singular Value Decomposition (SVD) method and proposes a three-channel convolutional gated recurrent unit (GRU) model combined with an improved SE attention mechanism for automatic modulation recognition. The model denoises in-phase/quadrature (I/Q) signals using the SVD method to enhance signal quality. By combining one-dimensional (1D) and two-dimensional (2D) convolutions, it employs a three-channel approach to extract spatial features and capture local correlations. A GRU is utilized to capture temporal sequence features and enhance the perception of dynamic changes. Additionally, an improved SE block is introduced to optimize feature representation, adaptively adjust channel weights, and improve classification performance. Experiments on the RadioML2016.10a dataset show that the model achieves a maximum classification recognition rate of 92.54%. Compared with traditional CNN, ResNet, CLDNN, GRU2, DAE, and LSTM2 models, the average recognition accuracy is improved by 5.41% to 8.93%. At the same time, the model significantly enhances the differentiation capability between 16QAM and 64QAM, reducing the average confusion probability by 27.70% to 39.40%. Full article
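A minimal sketch of truncated-SVD denoising, with a fixed energy threshold standing in for the paper's entropy-based dynamic cut-off; the I/Q matrix is a synthetic stand-in:

```python
import numpy as np

def svd_denoise(X: np.ndarray, energy: float = 0.95) -> np.ndarray:
    """Truncated-SVD denoising: keep the leading singular values that
    capture `energy` of the spectrum. The paper picks the cut-off
    dynamically from entropy; a fixed energy ratio stands in here."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Hypothetical I/Q matrix: 2 rows (I and Q) x 128 samples, noisy tone.
t = np.arange(128) / 128
clean = np.stack([np.cos(2 * np.pi * 8 * t), np.sin(2 * np.pi * 8 * t)])
noisy = clean + 0.3 * np.random.default_rng(3).normal(size=clean.shape)
print(np.linalg.norm(svd_denoise(noisy) - clean))
```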

21 pages, 5217 KB  
Article
Gait Phase Recognition in Multi-Task Scenarios Based on sEMG Signals
by Xin Shi, Xiaheng Zhang, Pengjie Qin, Liangwen Huang, Yaqin Zhu and Zixiang Yang
Biosensors 2025, 15(5), 305; https://doi.org/10.3390/bios15050305 - 10 May 2025
Viewed by 1202
Abstract
In the human–exoskeleton interaction process, accurately recognizing gait phases is crucial for effectively assessing the assistance provided by the exoskeleton. However, because muscle activation patterns are similar between adjacent gait phases, recognition accuracy is often low, which can easily lead to confusion in surface electromyography (sEMG) feature extraction. This paper proposes a real-time recognition method based on multi-scale fuzzy approximate root mean entropy (MFAREn) and an Efficient Multi-Scale Attention Convolutional Neural Network (EMACNN), building upon the concept of fuzzy approximate entropy. MFAREn is used to extract the dynamic complexity and energy intensity features of sEMG signals, serving as the input matrix for EMACNN to achieve fast and accurate gait phase recognition. This study collected sEMG signals from 10 subjects performing continuous lower limb gait movements in five common motion scenarios for experimental validation. The results show that the proposed method achieves an average recognition accuracy of 95.72%, outperforming the comparison methods, with a statistically significant difference (p < 0.001). Notably, the recognition accuracy for level walking, stair ascent, and ramp ascent exceeds 95.5%. The method demonstrates high recognition accuracy, enabling sEMG-based gait phase recognition and meeting the requirements for effective human–exoskeleton interaction. Full article
(This article belongs to the Section Wearable Biosensors)
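MFAREn builds on fuzzy approximate entropy; a minimal sketch of the classic single-scale fuzzy entropy (Chen et al.), omitting the paper's RMS weighting and multi-scale extension:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Classic fuzzy entropy; MFAREn extends this idea with RMS
    weighting and multiple scales, which are omitted here."""
    x = np.asarray(x, dtype=float)
    r = r * x.std()
    def phi(m):
        # All length-m templates, baseline (mean) removed per template.
        X = np.array([x[i:i + m] for i in range(len(x) - m)])
        X = X - X.mean(axis=1, keepdims=True)
        d = np.max(np.abs(X[:, None] - X[None, :]), axis=2)  # Chebyshev
        D = np.exp(-(d ** n) / r)                            # fuzzy membership
        return (D.sum() - len(X)) / (len(X) * (len(X) - 1))  # exclude self-matches
    return -np.log(phi(m + 1) / phi(m))

sig = np.random.default_rng(4).normal(size=300)  # stand-in sEMG window
print(fuzzy_entropy(sig))
```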

27 pages, 1843 KB  
Article
Multi-Layered Security Framework Combining Steganography and DNA Coding
by Bhavya Kallapu, Avinash Nanda Janardhan, Rama Moorthy Hejamadi, Krishnaraj Rao Nandikoor Shrinivas, Saritha, Raghunandan Kemmannu Ramesh and Lubna A. Gabralla
Systems 2025, 13(5), 341; https://doi.org/10.3390/systems13050341 - 1 May 2025
Cited by 2 | Viewed by 2831
Abstract
With the rapid expansion of digital communication and data sharing, ensuring robust security for sensitive information has become increasingly critical, particularly when data are transmitted over public networks. Traditional encryption techniques are increasingly vulnerable to evolving cyber threats, making single-layer security mechanisms less effective. This study proposes a multi-layered security approach that integrates cryptographic and steganographic techniques to enhance data protection. The framework leverages advanced methods such as encrypted data embedding in images, DNA sequence coding, QR codes, and least significant bit (LSB) steganography. To evaluate its effectiveness, experiments were conducted using text messages, text files, and images, with security assessments based on PSNR, MSE, SNR, and encryption–decryption times for text data. Image security was analyzed through visual inspection, correlation, entropy, standard deviation, key space analysis, randomness, and differential analysis. The proposed method demonstrated strong resilience against differential cryptanalysis, achieving high NPCR values (99.5784%, 99.4292%, and 99.5784%) and UACI values (33.5873%, 33.5149%, and 33.3745%), indicating robust diffusion and confusion properties. These results highlight the reliability and effectiveness of the proposed framework in safeguarding data integrity and confidentiality, providing a promising direction for future cryptographic research. Full article
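A minimal sketch of the NPCR and UACI measures reported above, assuming two hypothetical ciphertext images produced from plaintexts differing in a single pixel:

```python
import numpy as np

def npcr_uaci(c1: np.ndarray, c2: np.ndarray):
    """NPCR: share of pixel positions that differ between two ciphertexts.
    UACI: mean absolute intensity difference, relative to 255."""
    npcr = 100.0 * (c1 != c2).mean()
    uaci = 100.0 * (np.abs(c1.astype(int) - c2.astype(int)) / 255.0).mean()
    return npcr, uaci

# Stand-in ciphertexts; a good cipher should make them look independent.
rng = np.random.default_rng(5)
c1 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
c2 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(npcr_uaci(c1, c2))  # ideal values: NPCR ~ 99.6%, UACI ~ 33.46%
```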