Search Results (21)

Search Parameters:
Keywords = distance learning systems (DLS)

19 pages, 8922 KiB  
Article
A Two-Stage Time-Domain Equalization Method for Mitigating Nonlinear Distortion in Single-Carrier THz Communication Systems
by Yunchuan Liu, Hongcheng Yang, Ziqi Liu, Minghan Jia, Shang Li, Jiajie Li, Jingsuo He, Zhe Yang and Cunlin Zhang
Sensors 2025, 25(15), 4825; https://doi.org/10.3390/s25154825 - 6 Aug 2025
Abstract
Terahertz (THz) communication is regarded as a key technology for achieving high-speed data transmission and wireless communication due to its ultra-high frequency and large bandwidth characteristics. In this study, we focus on a single-carrier THz communication system and propose a two-stage deep learning-based time-domain equalization method, specifically designed to mitigate the nonlinear distortions in such systems, thereby enhancing communication reliability and performance. The method adopts a progressive learning strategy, in which global channel characteristics are captured first and then refined at the local level. This enables the effective identification and equalization of channel characteristics, particularly the mitigation of nonlinear distortion and random interference, which can otherwise degrade communication quality. In an experimental setting at a frequency of 230 GHz and a channel distance of 2.1 m, the method substantially reduced the system’s bit error rate (BER), with particularly notable gains over the unequalized system. To validate the model’s generalization capability, data collection and testing were also conducted at a frequency of 310 GHz and a channel distance of 1.5 m. Experimental results show that the proposed time-domain equalizer, trained using the two-stage DL framework, achieved BER reductions of approximately 92.15% at 230 GHz (2.1 m) and 83.33% at 310 GHz (1.5 m), compared to the system’s performance prior to equalization. The method exhibits stable performance under varying conditions, supporting its use in future THz communication studies. Full article
(This article belongs to the Section Communications)
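As a rough aid to reading the reported figures, the sketch below shows how a relative BER reduction such as 92.15% is computed; the example BER values are hypothetical placeholders, not measurements from the paper.

```python
# Hypothetical illustration of the relative BER-reduction metric quoted above;
# the numbers are placeholders, not the paper's measured BERs.

def ber_reduction(ber_before: float, ber_after: float) -> float:
    """Fractional reduction in bit error rate achieved by an equalizer."""
    return (ber_before - ber_after) / ber_before

# A drop from 1.0e-2 to 7.85e-4 corresponds to a ~92.15% reduction.
print(f"{ber_reduction(1.0e-2, 7.85e-4):.2%}")
```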

30 pages, 1142 KiB  
Review
Beyond the Backbone: A Quantitative Review of Deep-Learning Architectures for Tropical Cyclone Track Forecasting
by He Huang, Difei Deng, Liang Hu, Yawen Chen and Nan Sun
Remote Sens. 2025, 17(15), 2675; https://doi.org/10.3390/rs17152675 - 2 Aug 2025
Viewed by 202
Abstract
Accurate forecasting of tropical cyclone (TC) tracks is critical for disaster preparedness and risk mitigation. While traditional numerical weather prediction (NWP) systems have long served as the backbone of operational forecasting, they face limitations in computational cost and sensitivity to initial conditions. In recent years, deep learning (DL) has emerged as a promising alternative, offering data-driven modeling capabilities for capturing nonlinear spatiotemporal patterns. This paper presents a comprehensive review of DL-based approaches for TC track forecasting. We categorize all DL-based TC tracking models according to the architecture, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), Transformers, graph neural networks (GNNs), generative models, and Fourier-based operators. To enable rigorous performance comparison, we introduce a Unified Geodesic Distance Error (UGDE) metric that standardizes evaluation across diverse studies and lead times. Based on this metric, we conduct a critical comparison of state-of-the-art models and identify key insights into their relative strengths, limitations, and suitable application scenarios. Building on this framework, we conduct a critical cross-model analysis that reveals key trends, performance disparities, and architectural tradeoffs. Our analysis also highlights several persistent challenges, such as long-term forecast degradation, limited physical integration, and generalization to extreme events, pointing toward future directions for developing more robust and operationally viable DL models for TC track forecasting. To support reproducibility and facilitate standardized evaluation, we release an open-source UGDE conversion tool on GitHub. Full article
(This article belongs to the Section AI Remote Sensing)
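The UGDE metric itself is defined in the paper and its open-source conversion tool; as a hedged sketch of the underlying quantity, the snippet below computes a great-circle (haversine) track error in kilometres between a forecast position and a best-track position. The function name and sample coordinates are illustrative assumptions.

```python
import math

def geodesic_error_km(lat_pred, lon_pred, lat_true, lon_true, radius_km=6371.0):
    """Great-circle (haversine) distance between forecast and best-track positions."""
    phi1, phi2 = math.radians(lat_pred), math.radians(lat_true)
    dphi = math.radians(lat_true - lat_pred)
    dlam = math.radians(lon_true - lon_pred)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2.0 * radius_km * math.asin(math.sqrt(a))

# Hypothetical 24 h forecast position vs. best-track fix
print(round(geodesic_error_km(22.5, 131.0, 22.1, 130.4), 1), "km")
```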

12 pages, 630 KiB  
Systematic Review
Advancing Diagnostic Tools in Forensic Science: The Role of Artificial Intelligence in Gunshot Wound Investigation—A Systematic Review
by Francesco Sessa, Mario Chisari, Massimiliano Esposito, Elisa Guardo, Lucio Di Mauro, Monica Salerno and Cristoforo Pomara
Forensic Sci. 2025, 5(3), 30; https://doi.org/10.3390/forensicsci5030030 - 20 Jul 2025
Viewed by 359
Abstract
Background/Objectives: Artificial intelligence (AI) is beginning to be applied in wound ballistics, showing preliminary potential to improve the accuracy and objectivity of forensic analyses. This review explores the current state of AI applications in forensic firearm wound analysis, emphasizing its potential to address challenges such as subjective interpretations and data heterogeneity. Methods: A systematic review adhering to PRISMA guidelines was conducted using databases such as Scopus and Web of Science. Keywords focused on AI and GSW classification identified 502 studies, narrowed down to 4 relevant articles after rigorous screening based on inclusion and exclusion criteria. Results: These studies examined the role of deep learning (DL) models in classifying GSWs by type, shooting distance, and entry or exit characteristics. The key findings demonstrated that DL models like TinyResNet, ResNet152, and ConvNext Tiny achieved accuracy ranging from 87.99% to 98%. Models were effective in tasks such as classifying GSWs and estimating shooting distances. However, most studies were exploratory in nature, with small sample sizes and, in some cases, reliance on animal models, which limits generalizability to real-world forensic scenarios. Conclusions: Comparisons with other forensic AI applications revealed that large, diverse datasets significantly enhance model performance. Transparent and interpretable AI systems are essential for judicial acceptance and ethical compliance. Despite the encouraging results, the field remains in an early stage of development. Limitations highlight the need for standardized protocols, cross-institutional collaboration, and the integration of multimodal data for robust forensic AI systems. Future research should focus on overcoming current data and validation constraints, ensuring the ethical use of human forensic data, and developing AI tools that are scientifically sound and legally defensible. Full article

17 pages, 2550 KiB  
Article
Solar and Wind 24 H Sequenced Prediction Using L-Transform Component and Deep LSTM Learning in Representation of Spatial Pattern Correlation
by Ladislav Zjavka
Atmosphere 2025, 16(7), 859; https://doi.org/10.3390/atmos16070859 - 15 Jul 2025
Viewed by 270
Abstract
Spatiotemporal correlations between meteo-inputs and wind–solar outputs at an optimal regional scale are crucial for developing robust models, reliable over mid-term prediction horizons. Modelling border conditions is vital for early recognition of progress in chaotic atmospheric processes at the destination of interest. This approach is used in differential and deep learning; artificial intelligence (AI) techniques allow for reliable pattern representation in long-term uncertainty and regional irregularities. The proposed day-by-day estimation of the renewable energy (RE) production potential is based on first data processing in detecting modelling initialisation times from historical databases, considering correlation distance. Optimal data sampling is crucial for AI training in statistically based predictive modelling. Differential learning (DfL) is a recently developed and biologically inspired strategy that combines numerical derivative solutions with neurocomputing. This hybrid approach is based on the optimal determination of partial differential equations (PDEs) composed at the nodes of gradually expanded binomial trees. It allows for modelling of highly uncertain weather-related physical systems using unstable RE. The main objective is to improve its self-evolution and the resulting computation in prediction time. Representing relevant patterns by their similarity factors in input–output resampling reduces ambiguity in RE forecasting. Node-by-node feature selection and dynamical PDE representation of DfL are evaluated along with long short-term memory (LSTM) recurrent processing of deep learning (DL), capturing complex spatio-temporal patterns. Parametric C++ executable software with one-month spatial metadata records is available to compare additional modelling strategies. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Atmospheric Sciences)

17 pages, 16364 KiB  
Article
FedeAMR-CFF: A Federated Automatic Modulation Recognition Method Based on Characteristic Feature Fine-Tuning
by Meng Zhang, Jiankun Ma, Zhenxi Zhang and Feng Zhou
Sensors 2025, 25(13), 4000; https://doi.org/10.3390/s25134000 - 26 Jun 2025
Viewed by 418
Abstract
Modulation recognition technology, as one of the core technologies in the field of wireless communications, holds significant importance for intelligent communication tasks such as link adaptation and for IoT devices. In recent years, deep learning-based automatic modulation recognition (DL-AMR) has emerged as a major research direction in this domain. Existing DL-AMR schemes primarily adopt a centralized training architecture, where a unified model is trained on a central server using local data from terminal devices. Although such methods achieve high recognition accuracy, they carry substantial privacy leakage risks. Moreover, when terminal devices independently train models solely based on their local data, the model performance often suffers due to issues like data distribution disparities and insufficient training samples. To address the critical challenges of high data privacy leakage risks, excessive communication overhead, and data silos in automatic modulation recognition tasks, this paper proposes a federated automatic modulation recognition method based on characteristic feature fine-tuning (FedeAMR-CFF). Specifically, the clients extract representative features through distance-based metric screening, and the server aggregates model parameters via the FedAvg algorithm and fine-tunes the model using the collected features. This method not only safeguards client data privacy but also facilitates effective knowledge transfer across distributed datasets while significantly mitigating the non-independent and identically distributed (non-IID) data problem. Experimental validation demonstrates that FedeAMR-CFF achieves an improvement of 3.43% compared to the best-performing local model. Full article
(This article belongs to the Section Internet of Things)
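The abstract names FedAvg as the server-side aggregation rule; the snippet below is a minimal numpy sketch of sample-size-weighted parameter averaging. The characteristic-feature fine-tuning step is not reproduced here, and the toy parameter values and client sizes are assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Sample-size-weighted average of per-client parameter lists (FedAvg)."""
    total = float(sum(client_sizes))
    return [sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))]

# Two toy clients with one parameter tensor each, 600 vs. 400 local samples
w_a = [np.array([1.0, 2.0])]
w_b = [np.array([3.0, 4.0])]
print(fedavg([w_a, w_b], [600, 400]))  # -> [array([1.8, 2.8])]
```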

28 pages, 57781 KiB  
Article
Edge Computing for Smart-City Human Habitat: A Pandemic-Resilient, AI-Powered Framework
by Atlanta Choudhury, Kandarpa Kumar Sarma, Debashis Dev Misra, Koushik Guha and Jacopo Iannacci
J. Sens. Actuator Netw. 2024, 13(6), 76; https://doi.org/10.3390/jsan13060076 - 6 Nov 2024
Cited by 1 | Viewed by 1578
Abstract
The COVID-19 pandemic has highlighted the need for a robust medical infrastructure and crisis management strategy as part of smart-city applications, with technology playing a crucial role. The Internet of Things (IoT) has emerged as a promising solution, leveraging sensor arrays, wireless communication networks, and artificial intelligence (AI)-driven decision-making. Advancements in edge computing (EC), deep learning (DL), and deep transfer learning (DTL) have made IoT more effective in healthcare and pandemic-resilient infrastructures. DL architectures are particularly suitable for integration into pandemic-compliant medical infrastructures when combined with medically oriented IoT setups. The development of an intelligent pandemic-compliant infrastructure requires combining IoT, edge and cloud computing, image processing, and AI tools to monitor adherence to social distancing norms, mask-wearing protocols, and contact tracing. The proliferation of 4G-and-beyond systems, including 5G wireless communication, has enabled ultra-wide broadband data transfer and efficient information processing, with high reliability and low latency, thereby enabling seamless medical support as part of smart-city applications. Such setups are designed to be ever-ready to deal with virus-triggered pandemic-like medical emergencies. This study presents a pandemic-compliant mechanism leveraging IoT optimized for healthcare applications, edge and cloud computing frameworks, and a suite of DL tools. The system uses a composite attention-driven framework incorporating various DL pre-trained models (DPTMs) for protocol adherence and contact tracing, and can detect certain cyber-attacks when interfaced with public networks. The results confirm the effectiveness of the proposed methodologies. Full article
(This article belongs to the Section Big Data, Computing and Artificial Intelligence)

25 pages, 4824 KiB  
Article
CTRNet: An Automatic Modulation Recognition Based on Transformer-CNN Neural Network
by Wenna Zhang, Kailiang Xue, Aiqin Yao and Yunqiang Sun
Electronics 2024, 13(17), 3408; https://doi.org/10.3390/electronics13173408 - 27 Aug 2024
Cited by 4 | Viewed by 2250
Abstract
Deep learning (DL) has brought new perspectives and methods to automatic modulation recognition (AMR), enabling AMR systems to operate more efficiently and reliably in modern wireless communication environments through its powerful feature learning and complex pattern recognition capabilities. However, convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which are used for sequence recognition tasks, face two main challenges, respectively: the ineffective utilization of global information and slow processing speeds due to sequential operations. To address these issues, this paper introduces CTRNet, a novel automatic modulation recognition network that combines a CNN with Transformer. This combination leverages Transformer’s ability to adequately capture the long-distance dependencies between global sequences and its advantages in sequence modeling, along with the CNN’s capability to extract features from local feature regions of signals. During the data preprocessing stage, the original IQ-modulated signals undergo sliding-window processing. By selecting the appropriate window sizes and strides, multiple subsequences are formed, enabling the network to effectively handle complex modulation patterns. In the embedding module, token vectors are designed to integrate information from multiple samples within each window, enhancing the model’s understanding and modeling ability of global information. In the feedforward neural network, a more effective Bilinear layer is employed for processing to capture the higher-order relationship between input features, thereby enhancing the ability of the model to capture complex patterns. Experiments conducted on the RML2016.10A public dataset demonstrate that compared with the existing algorithms, the proposed algorithm not only exhibits significant advantages in terms of parameter efficiency but also achieves higher recognition accuracy under various signal-to-noise ratio (SNR) conditions. In particular, it performs relatively well in terms of accuracy, precision, recall, and F1-score, with clearer classification of higher-order modulations and notable overall accuracy improvement. Full article
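As a sketch of the sliding-window preprocessing of IQ signals described above; the window size and stride below are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

def sliding_windows(iq, window=32, stride=16):
    """Split a (2, N) I/Q sequence into overlapping (2, window) subsequences."""
    starts = range(0, iq.shape[1] - window + 1, stride)
    return np.stack([iq[:, s:s + window] for s in starts])

iq_frame = np.random.randn(2, 128)     # one RML2016.10A-style frame: I and Q, 128 samples
subseqs = sliding_windows(iq_frame)    # shape (7, 2, 32) with these settings
print(subseqs.shape)
```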

14 pages, 537 KiB  
Article
The Digitisation of Italian Schools and the Pandemic Trigger: Actors and Policies in an Evolving Organisational Field
by Domenico Carbone and Cristina Calvi
Societies 2024, 14(6), 94; https://doi.org/10.3390/soc14060094 - 20 Jun 2024
Cited by 1 | Viewed by 1113
Abstract
This article analyses the ongoing processes in the organisational field of Italian schools in light of the innovations induced by digital education policies. Specifically, it focuses on the relationship between actors and digital policies concerning the experience of distance learning (DL) that characterised the period of the COVID-19 pandemic. The paper reflects on DL outcomes regarding the three expectations that have often characterised the rhetoric associated with the promotion of digital educational policies, namely: the raising of learning levels, the development of digital competences and the increase in school inclusion. Through an analysis of a series of empirical studies, this paper highlights the progress that has been made in digital schooling in Italy and the limitations that remain. The results of the study show both the limits of the effectiveness of educational policies constructed with a top-down approach and the potential for policy recalibration offered by a reorganisation of the decision-making process through the active involvement of all the actors in the educational system. Full article

12 pages, 2534 KiB  
Article
Improved Test Input Prioritization Using Verification Monitors with False Prediction Cluster Centroids
by Hyekyoung Hwang, Il Yong Chun and Jitae Shin
Electronics 2024, 13(1), 21; https://doi.org/10.3390/electronics13010021 - 19 Dec 2023
Cited by 1 | Viewed by 1304
Abstract
Deep learning (DL) systems have been remarkably successful in various applications, but they can exhibit critical misbehaviors. To identify the weakness of a trained model and overcome it with new data collection(s), one needs to figure out the corner cases of a trained model. Constructing new datasets to retrain a DL model requires extra budget and time. Test input prioritization (TIP) techniques have been proposed to identify corner cases more effectively. The state-of-the-art TIP approach adopts a monitoring method and prioritizes inputs based on Gini impurity, which estimates the similarity between a DL prediction probability distribution and the uniform distribution. This letter proposes a new TIP method that uses a distance between false prediction cluster (FPC) centroids in a training set and a test instance in the last-layer feature space to prioritize error-inducing instances among an unlabeled test set. We refer to the proposed method as DeepFPC. Our numerical experiments show that the proposed DeepFPC method achieves significantly improved TIP performance in several image classification and active learning tasks. Full article
(This article belongs to the Special Issue Image/Video Processing and Encoding for Contemporary Applications)
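A minimal sketch of the centroid-distance idea described above, assuming FPC centroids have already been computed from misclassified training samples; the ascending-distance ordering is one reading of the abstract, not necessarily the paper's exact rule, and the feature dimensions are arbitrary.

```python
import numpy as np

def fpc_priority(test_feats, fpc_centroids):
    """Order test inputs by distance to the nearest false-prediction-cluster centroid."""
    dists = np.linalg.norm(test_feats[:, None, :] - fpc_centroids[None, :, :], axis=-1)
    nearest = dists.min(axis=1)
    return np.argsort(nearest)           # smaller distance -> more likely error-inducing

test_feats = np.random.randn(100, 64)    # last-layer features of unlabeled test inputs
centroids = np.random.randn(5, 64)       # centroids of false-prediction clusters
print(fpc_priority(test_feats, centroids)[:10])
```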

12 pages, 1207 KiB  
Article
Introduction of Deep Learning-Based Infrared Image Analysis to Marginal Reflex Distance1 Measurement Method to Simultaneously Capture Images and Compute Results: Clinical Validation Study
by Bokeun Song, Hyeokjae Kwon, Sunje Kim, Yooseok Ha, Sang-Ha Oh and Seung-Han Song
J. Clin. Med. 2023, 12(23), 7466; https://doi.org/10.3390/jcm12237466 - 1 Dec 2023
Cited by 7 | Viewed by 2039
Abstract
Marginal reflex distance1 (MRD1) is a crucial clinical tool used to evaluate the position of the eyelid margin in relation to the cornea. Traditionally, this assessment has been conducted manually by plastic surgeons, ophthalmologists, or trained technicians. However, with the advancements in artificial intelligence (AI) technology, there is a growing interest in the development of automated systems capable of accurately measuring MRD1. In this context, we introduce novel MRD1 measurement methods based on deep learning algorithms that can simultaneously capture images and compute the results. This prospective observational study involved 154 eyes of 77 patients aged over 18 years who visited Chungnam National University Hospital between 1 January 2023 and 29 July 2023. We collected four different MRD1 datasets from patients using three distinct measurement methods, each tailored to the individual patient. The mean MRD1 values, measured through the manual method using a penlight, the deep learning method, ImageJ analysis from RGB eye images, and ImageJ analysis from IR eye images in 56 eyes of 28 patients, were 2.64 ± 1.04 mm, 2.85 ± 1.07 mm, 2.78 ± 1.08 mm, and 3.07 ± 0.95 mm, respectively. Notably, the strongest agreement was observed between MRD1_deep learning (DL) and MRD1_IR (0.822, p < 0.01). In a Bland–Altman plot, the smallest difference was observed between MRD1_DL and MRD1_IR ImageJ, with a mean difference of 0.0611 and ΔLOA (limits of agreement) of 2.5162, which was the smallest among all of the groups. In conclusion, this novel MRD1 measurement method, based on an IR camera and deep learning, demonstrates statistical significance and can be readily applied in clinical settings. Full article
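The Bland–Altman quantities cited above (mean difference and limits of agreement) follow the standard definition; the snippet below sketches that computation on synthetic MRD1 values, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width   # LOA span = 2 * half_width

rng = np.random.default_rng(0)
mrd1_dl = rng.normal(2.85, 1.0, 56)             # synthetic MRD1 (mm), DL method
mrd1_ir = mrd1_dl + rng.normal(0.06, 0.3, 56)   # synthetic MRD1 (mm), IR ImageJ method
print(bland_altman(mrd1_dl, mrd1_ir))
```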

20 pages, 10770 KiB  
Article
Deep-Neural-Network-Based Receiver Design for Downlink Non-Orthogonal Multiple-Access Underwater Acoustic Communication
by Habib Hussain Zuberi, Songzuo Liu, Muhammad Bilal, Ayman Alharbi, Amar Jaffar, Syed Agha Hussnain Mohsan, Abdulaziz Miyajan and Mohsin Abrar Khan
J. Mar. Sci. Eng. 2023, 11(11), 2184; https://doi.org/10.3390/jmse11112184 - 17 Nov 2023
Cited by 11 | Viewed by 2803
Abstract
The exploration of the ocean has led to the deployment of numerous submerged autonomous vehicles and sensors. Hence, there is a growing need for multi-user underwater acoustic communication. On the other hand, due to the limited bandwidth of the underwater acoustic channel, downlink non-orthogonal multiple access (NOMA) is a fundamental technology for addressing this constraint, and it is expected to be beneficial for many modern wireless underwater acoustic applications. NOMA downlink underwater acoustic communication (UWA) is accomplished by broadcasting data symbols from a source station to several users, using superimposed coding with variable power levels to enable detection through successive interference cancellation (SIC) receivers. Nevertheless, comprehensive knowledge of the channel conditions and channel state information (CSI) is essential for SIC receivers, but it can be difficult to obtain, particularly in an underwater environment. To address this critical issue, this research proposes downlink underwater acoustic communication using a deep neural network based on a 1D convolutional neural network (CNN). Two cases are considered for the proposed system: in the first case, two users with different power levels and distances from the transmitter employ BPSK and QPSK modulations to support multi-user communication, while, in the second case, three users employ BPSK modulation. Users far from the base station receive the most power. The base station uses superimposed coding. The BELLHOP ray-tracing algorithm is utilized to generate the training dataset with user depth and range modifications. For training the model, a composite signal passes through the samples of the UWA channel and is fed to the model along with labels. The DNN receiver learns the characteristics of the UWA channel and does not depend on CSI. The testing channel impulse responses (CIRs) are used to evaluate the trained model. The results are compared to the traditional SIC receiver. The DNN-based DL NOMA underwater acoustic receiver outperformed the SIC receiver in terms of BER in simulation results for all the modulation orders. Full article
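As a sketch of the power-domain superposition coding described above: the far user gets the larger power share, as in the abstract, but the exact power split and symbol mappings here are assumed for illustration only.

```python
import numpy as np

def superimpose(sym_near, sym_far, p_near=0.2, p_far=0.8):
    """Power-domain superposition of two downlink NOMA users' symbols."""
    assert abs(p_near + p_far - 1.0) < 1e-9
    return np.sqrt(p_near) * sym_near + np.sqrt(p_far) * sym_far

bpsk_far = np.array([1.0, -1.0, 1.0, 1.0])                       # far user: BPSK
qpsk_near = (np.array([1, -1, 1, -1])
             + 1j * np.array([1, 1, -1, -1])) / np.sqrt(2)        # near user: QPSK
print(superimpose(qpsk_near, bpsk_far))                           # composite transmit signal
```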

28 pages, 4644 KiB  
Article
A Deep-Learning-Based Secure Routing Protocol to Avoid Blackhole Attacks in VANETs
by Amalia Amalia, Yushintia Pramitarini, Ridho Hendra Yoga Perdana, Kyusung Shim and Beongku An
Sensors 2023, 23(19), 8224; https://doi.org/10.3390/s23198224 - 2 Oct 2023
Cited by 13 | Viewed by 1893
Abstract
Vehicle ad hoc networks (VANETs) are a vital part of intelligent transportation systems (ITS), offering a variety of advantages from reduced traffic to increased road safety. Despite their benefits, VANETs remain vulnerable to various security threats, including severe blackhole attacks. In this paper, we propose a deep-learning-based secure routing (DLSR) protocol using a deep-learning-based clustering (DLC) protocol to establish a secure route against blackhole attacks. The main features and contributions of this paper are as follows. First, the DLSR protocol utilizes deep learning (DL) at each node to choose secure routing or normal routing while establishing secure routes. Additionally, we can identify the behavior of malicious nodes to determine the best possible next hop based on its fitness function value. Second, the DLC protocol is considered an underlying structure to enhance connectivity between nodes and reduce control overhead. Third, we design a deep neural network (DNN) model to optimize the fitness function in both DLSR and DLC protocols. The DLSR protocol considers parameters such as remaining energy, distance, and hop count, while the DLC protocol considers cosine similarity, cosine distance, and the node’s remaining energy. Finally, we evaluate the performance of the proposed routing and clustering protocols in terms of packet delivery ratio, routing delay, control overhead, packet loss ratio, and number of packet losses. We also examine the impact of mobility models such as reference point group mobility (RPGM) and random waypoint (RWP) on these network metrics. Full article
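In the paper the fitness function is optimized by a DNN; purely to illustrate the inputs it considers (remaining energy, distance, hop count), here is a hand-weighted toy version. The linear form and the weights are assumptions, not the authors' formula.

```python
def next_hop_fitness(remaining_energy, distance, hop_count,
                     w_energy=0.5, w_dist=0.3, w_hops=0.2):
    """Toy next-hop score: favour high residual energy, short distance, few hops."""
    return w_energy * remaining_energy - w_dist * distance - w_hops * hop_count

# Compare two candidate next hops (energy and distance normalized, hop count raw)
print(next_hop_fitness(0.9, 0.4, 2), next_hop_fitness(0.6, 0.2, 1))
```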

18 pages, 6024 KiB  
Article
Deep Learning-Assisted Transmit Antenna Classifiers for Fully Generalized Spatial Modulation: Online Efficiency Replaces Offline Complexity
by Hindavi Kishor Jadhav and Vinoth Babu Kumaravelu
Appl. Sci. 2023, 13(8), 5134; https://doi.org/10.3390/app13085134 - 20 Apr 2023
Cited by 4 | Viewed by 2166
Abstract
In this work, deep learning (DL)-based transmit antenna selection (TAS) strategies are employed to enhance the average bit error rate (ABER) and energy efficiency (EE) performance of a spectrally efficient fully generalized spatial modulation (FGSM) scheme. The Euclidean distance-based antenna selection (EDAS), a frequently employed TAS technique, has a high search complexity but offers optimal ABER performance. To address TAS with minimal complexity, we present DL-based approaches that reframe the traditional TAS problem as a classification learning problem. To reduce the energy consumption and latency of the system, we present three DL architectures in this study, namely a feed-forward neural network (FNN), a recurrent neural network (RNN), and a 1D convolutional neural network (CNN). The proposed system can efficiently process new data and make predictions with minimal latency, as DL-based modeling is a one-time procedure. In addition, the performance of the proposed DL strategies is compared to two other popular machine learning methods: support vector machine (SVM) and K-nearest neighbor (KNN). When the DL architectures are compared with SVM on the same dataset, the proposed FNN architecture offers a ~3.15% accuracy boost. The proposed FNN architecture achieves an improved signal-to-noise ratio (SNR) gain of ~2.2 dB over FGSM without TAS (FGSM-WTAS). All proposed DL techniques outperform FGSM-WTAS. Full article
(This article belongs to the Special Issue Recent Challenges and Solutions in Wireless Communication Engineering)
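For context on why EDAS is costly, the brute-force sketch below scores every antenna subset by its minimum pairwise receive-vector distance. It is a simplified single-symbol illustration under assumed channel dimensions and does not reproduce FGSM's antenna-index bit mapping.

```python
import itertools
import numpy as np

def edas(H, symbols, k):
    """Brute-force Euclidean-distance-based antenna selection (simplified sketch).

    Chooses the k transmit antennas whose noiseless receive vectors have the
    largest minimum pairwise Euclidean distance.
    """
    best, best_dmin = None, -np.inf
    for subset in itertools.combinations(range(H.shape[1]), k):
        rx = [H[:, a] * s for a in subset for s in symbols]
        dmin = min(np.linalg.norm(u - v) for u, v in itertools.combinations(rx, 2))
        if dmin > best_dmin:
            best, best_dmin = subset, dmin
    return best, best_dmin

H = (np.random.randn(4, 6) + 1j * np.random.randn(4, 6)) / np.sqrt(2)  # 4 rx x 6 tx channel
print(edas(H, symbols=np.array([1.0 + 0j, -1.0 + 0j]), k=2))
```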

17 pages, 5136 KiB  
Article
A Novel Hybrid Approach for a Content-Based Image Retrieval Using Feature Fusion
by Shahbaz Sikandar, Rabbia Mahum and AbdulMalik Alsalman
Appl. Sci. 2023, 13(7), 4581; https://doi.org/10.3390/app13074581 - 4 Apr 2023
Cited by 27 | Viewed by 6494
Abstract
The multimedia content generated by devices and image processing techniques requires high computation costs to retrieve images similar to the user’s query from the database. A traditional annotation-based image retrieval system is not effective because pixel-wise matching of images brings significant variations in terms of pattern, storage, and angle. The Content-Based Image Retrieval (CBIR) method is more commonly used in these cases. CBIR efficiently quantifies the likeness between the database images and the query image. CBIR retrieves images similar to the query image from a huge database and extracts useful features from the image provided as the query. Then, it matches these features against the features of the database images and retrieves the images with similar features. In this study, we introduce a novel hybrid deep learning and machine learning-based CBIR system that uses a transfer learning technique and is implemented using two pre-trained deep learning models, ResNet50 and VGG16, and one machine learning model, KNN. We use the transfer learning technique to obtain the features from the images by using these two deep learning (DL) models. The image similarity is calculated using the machine learning (ML) model KNN and Euclidean distance. We build a web interface to show the result of similar images, and precision is used as the performance measure, on which the model achieved 100%. Our proposed system outperforms other CBIR systems and can be used in many applications that need CBIR, such as digital libraries, historical research, fingerprint identification, and crime prevention. Full article
(This article belongs to the Special Issue Deep Learning for Image Recognition and Processing)
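A minimal sketch of the retrieval step, assuming the fused deep features (e.g. concatenated ResNet50 and VGG16 descriptors) have already been extracted; the feature dimensionality and function name are assumptions.

```python
import numpy as np

def retrieve(query_feat, db_feats, top_k=5):
    """Rank database images by Euclidean distance to the query in fused feature space."""
    dists = np.linalg.norm(db_feats - query_feat[None, :], axis=1)
    return np.argsort(dists)[:top_k]

db_feats = np.random.randn(1000, 2560)   # fused descriptors for 1000 database images (assumed dim.)
query_feat = np.random.randn(2560)
print(retrieve(query_feat, db_feats))    # indices of the top-5 most similar images
```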

14 pages, 2823 KiB  
Article
Automatic Planning Tools for Lumbar Pedicle Screws: Comparison and Validation of Planning Accuracy for Self-Derived Deep-Learning-Based and Commercial Atlas-Based Approaches
by Moritz Scherer, Lisa Kausch, Akbar Bajwa, Jan-Oliver Neumann, Basem Ishak, Paul Naser, Philipp Vollmuth, Karl Kiening, Klaus Maier-Hein and Andreas Unterberg
J. Clin. Med. 2023, 12(7), 2646; https://doi.org/10.3390/jcm12072646 - 2 Apr 2023
Cited by 6 | Viewed by 2806
Abstract
Background: This ex vivo experimental study sought to compare screw planning accuracy of a self-derived deep-learning-based (DL) and a commercial atlas-based (ATL) tool and to assess robustness towards pathologic spinal anatomy. Methods: From a consecutive registry, 50 cases (256 screws in L1-L5) were randomly selected for experimental planning. Reference screws were manually planned by two independent raters. Additional planning sets were created using the automatic DL and ATL tools. Using Python, automatic planning was compared to the reference in 3D space by calculating minimal absolute distances (MAD) for screw head and tip points (mm) and angular deviation (degree). Results were evaluated for interrater variability of reference screws. Robustness was evaluated in subgroups stratified for alteration of spinal anatomy. Results: Planning was successful in all 256 screws using DL and in 208/256 (81%) using ATL. MAD to the reference for head and tip points and angular deviation was 3.93 ± 2.08 mm, 3.49 ± 1.80 mm and 4.46 ± 2.86° for DL and 7.77 ± 3.65 mm, 7.81 ± 4.75 mm and 6.70 ± 3.53° for ATL, respectively. Corresponding interrater variance for reference screws was 4.89 ± 2.04 mm, 4.36 ± 2.25 mm and 5.27 ± 3.20°, respectively. Planning accuracy was comparable to the manual reference for DL, while ATL produced significantly inferior results (p < 0.0001). DL was robust to altered spinal anatomy while planning failure was pronounced for ATL in 28/82 screws (34%) in the subgroup with severely altered spinal anatomy and alignment (p < 0.0001). Conclusions: Deep learning appears to be a promising approach to reliable automated screw planning, coping well with anatomic variations of the spine that severely limit the accuracy of ATL systems. Full article
(This article belongs to the Special Issue Spine Surgery – from Basics to Advances Technology)
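A sketch of the per-screw deviation metrics reported above (point distances for head and tip, plus angular deviation between trajectories); the coordinates are hypothetical and the matching against the two raters' reference plans is omitted.

```python
import numpy as np

def screw_deviation(head_a, tip_a, head_b, tip_b):
    """Head/tip point distances (mm) and angular deviation (degrees) between two screw plans."""
    head_a, tip_a, head_b, tip_b = map(np.asarray, (head_a, tip_a, head_b, tip_b))
    head_dist = np.linalg.norm(head_a - head_b)
    tip_dist = np.linalg.norm(tip_a - tip_b)
    ax_a, ax_b = tip_a - head_a, tip_b - head_b
    cos_ang = np.dot(ax_a, ax_b) / (np.linalg.norm(ax_a) * np.linalg.norm(ax_b))
    angle = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return head_dist, tip_dist, angle

# Hypothetical automatic vs. reference screw (coordinates in mm)
print(screw_deviation([10, 20, 30], [10, 20, 75], [12, 21, 31], [13, 22, 74]))
```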
