Search Results (35)

Search Parameters:
Keywords = on-the-fly network

41 pages, 6955 KiB  
Article
Framework Design for the Dynamic Reconfiguration of IoT-Enabled Embedded Systems and “On-the-Fly” Code Execution
by Elmin Marevac, Esad Kadušić, Nataša Živić, Nevzudin Buzađija and Samir Lemeš
Future Internet 2025, 17(1), 23; https://doi.org/10.3390/fi17010023 - 7 Jan 2025
Cited by 1 | Viewed by 1736
Abstract
Embedded systems, particularly when integrated into the Internet of Things (IoT) landscape, are critical for projects requiring robust, energy-efficient interfaces to collect real-time data from the environment. As these systems become more complex, dynamic reconfiguration, improved availability, and stability become increasingly important. This paper presents the design of a framework architecture that supports dynamic reconfiguration and “on-the-fly” code execution in IoT-enabled embedded systems, including a virtual machine (VM) capable of hot reloads, ensuring system availability even during configuration updates. A “hardware-in-the-loop” workflow manages communication between the embedded components, while an additional abstraction layer, with examples such as MicroPython or Lua, shields developers from low-level coding constraints. The study results demonstrate the VM’s ability to handle serialization and deserialization with minimal impact on system performance, even under high workloads: serialization had a median time of 160 microseconds and deserialization a median of 964 microseconds. Both processes were fast and resource-efficient under normal conditions, supporting real-time updates, with occasional outliers suggesting room for optimization. The results also highlight the advantages of VM-based firmware update methods, which outperform traditional approaches such as Serial and OTA (Over-the-Air, i.e., updating or configuring firmware, software, or devices via a wireless connection) updates by achieving lower latency and greater consistency. Despite these promising results, challenges such as occasional deserialization time outliers and the need for optimization in memory management and network protocols remain for future work. This study also provides a comparative analysis of currently available commercial solutions, highlighting their strengths and weaknesses.
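To make the reported timings concrete, here is a minimal sketch of how such a serialization micro-benchmark can be collected. The JSON payload and field names are illustrative assumptions; the framework's actual wire format is not specified in the abstract.

```python
import json
import statistics
import time

# Hypothetical device-configuration payload; the framework's real schema
# and wire format are assumptions for illustration only.
payload = {"sensor_id": 7, "interval_ms": 250, "thresholds": [0.1, 0.5, 0.9]}

def bench(fn, runs=1000):
    """Return the median wall-clock time of fn() in microseconds."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        times.append((time.perf_counter() - t0) * 1e6)
    return statistics.median(times)

blob = json.dumps(payload).encode()
print(f"serialize   median: {bench(lambda: json.dumps(payload)):.1f} us")
print(f"deserialize median: {bench(lambda: json.loads(blob)):.1f} us")
```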

14 pages, 2453 KiB  
Article
Advancing Persistent Character Generation: Comparative Analysis of Fine-Tuning Techniques for Diffusion Models
by Luca Martini, Saverio Iacono, Daniele Zolezzi and Gianni Viardo Vercelli
AI 2024, 5(4), 1779-1792; https://doi.org/10.3390/ai5040088 - 29 Sep 2024
Viewed by 3391
Abstract
In the evolving field of artificial intelligence, fine-tuning diffusion models is crucial for generating contextually coherent digital characters across various media. This paper examines four advanced fine-tuning techniques: Low-Rank Adaptation (LoRA), DreamBooth, Hypernetworks, and Textual Inversion. Each technique enhances the specificity and consistency of character generation, expanding the applications of diffusion models in digital content creation. LoRA efficiently adapts models to new tasks with minimal adjustments, making it ideal for environments with limited computational resources. It excels in low-VRAM contexts due to its targeted fine-tuning of low-rank matrices within cross-attention layers, enabling faster training and efficient parameter tweaking. DreamBooth generates highly detailed, subject-specific images but is computationally intensive and suited for robust hardware environments. Hypernetworks introduce auxiliary networks that dynamically adjust the model’s behavior, allowing for flexibility during inference and on-the-fly model switching. This adaptability, however, can result in slightly lower image quality. Textual Inversion embeds new concepts directly into the model’s embedding space, allowing for rapid adaptation to novel styles or concepts, but is less effective for precise character generation. This analysis shows that LoRA is the most efficient for producing high-quality outputs with minimal computational overhead. In contrast, DreamBooth excels at high-fidelity images at the cost of longer training. Hypernetworks provide adaptability with some tradeoffs in quality, while Textual Inversion serves as a lightweight option for style integration. These techniques collectively enhance the creative capabilities of diffusion models, delivering high-quality, contextually relevant outputs.
(This article belongs to the Section AI Systems: Theory and Applications)
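As an illustration of the low-rank idea behind LoRA described above, here is a minimal PyTorch sketch: the frozen pretrained weight is augmented with a trainable rank-r product B·A, so only a small fraction of the parameters are updated. The layer size and rank are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W + scale * (B @ A)."""
    def __init__(self, d_in: int, d_out: int, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():          # pretrained weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768, rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 6144 trainable values vs. the 589,824 frozen base weights
```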

19 pages, 2442 KiB  
Article
Prediction of Accident Risk Levels in Traffic Accidents Using Deep Learning and Radial Basis Function Neural Networks Applied to a Dataset with Information on Driving Events
by Cristian Arciniegas-Ayala, Pablo Marcillo, Ángel Leonardo Valdivieso Caraguay and Myriam Hernández-Álvarez
Appl. Sci. 2024, 14(14), 6248; https://doi.org/10.3390/app14146248 - 18 Jul 2024
Cited by 6 | Viewed by 3085
Abstract
Complex AI systems typically operate offline because the training and execution phases are processed separately, often requiring different computing resources due to the high model requirements. A limitation of this approach is the convoluted training process, which must be repeated to obtain models whenever new data are incorporated into the knowledge base. Because the environment is often not static, it is crucial to train models dynamically by integrating new information during execution. In this article, artificial neural networks (ANNs) are developed to predict risk levels in traffic accidents using considerably simpler configurations than a deep learning (DL) model, which is more computationally intensive. The objective is to demonstrate that efficient, fast, and comparable results can be obtained using simple architectures such as the Radial Basis Function neural network (RBFNN). This work led to the generation of a driving dataset, which was subsequently validated for testing ANN models. The driving dataset simulated the dynamic approach by adding new data to the training on the fly, given the constant changes in the drivers’ data, vehicle information, environmental conditions, and traffic accidents. This study compares the processing time and performance of a Convolutional Neural Network (CNN), Random Forest (RF), Radial Basis Function (RBF), and Multilayer Perceptron (MLP), using accuracy, specificity, and sensitivity (recall) as evaluation metrics, to recommend an appropriate, simple, and fast ANN architecture that can be implemented in a secure alert traffic system that uses encrypted data.
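For readers unfamiliar with the RBFNN architecture the paper advocates, a minimal NumPy sketch follows: Gaussian activations over a set of centers feed a linear readout fitted by least squares. The centers, width, and toy data are illustrative stand-ins for the driving dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(X, centers, gamma):
    """Gaussian radial basis activations: exp(-gamma * ||x - c||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Toy stand-in for driving-event features and binary risk labels.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

centers = X[rng.choice(len(X), size=20, replace=False)]  # e.g. k-means in practice
Phi = rbf_features(X, centers, gamma=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)              # linear readout

acc = (((Phi @ w) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```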

29 pages, 1832 KiB  
Article
A Parallel Compression Pipeline for Improving GPU Virtualization Data Transfers
by Cristian Peñaranda, Carlos Reaño and Federico Silla
Sensors 2024, 24(14), 4649; https://doi.org/10.3390/s24144649 - 17 Jul 2024
Viewed by 1362
Abstract
GPUs are commonly used to accelerate the execution of applications in domains such as deep learning. Deep learning applications are applied to an increasing variety of scenarios, with edge computing being one of them. However, edge devices present severe computing power and energy limitations. In this context, the use of remote GPU virtualization solutions is an efficient way to address these concerns. Nevertheless, the limited network bandwidth might be an issue. This limitation can be alleviated by leveraging on-the-fly compression within the communication layer of remote GPU virtualization solutions. In this way, data exchanged with the remote GPU are transparently compressed before being transmitted, thus increasing the effective network bandwidth. In this paper, we present the implementation of a parallel compression pipeline designed to be used within remote GPU virtualization solutions. A thorough performance analysis shows that network bandwidth can be increased by a factor of up to 2×.
(This article belongs to the Section Internet of Things)
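The pipeline idea can be sketched compactly: split each transfer into chunks and compress them in parallel so compression overlaps with transmission. The zlib codec, chunk size, and thread count below are assumptions for illustration; the paper's pipeline lives inside the communication layer of a remote GPU virtualization solution.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB chunks; tuning this is part of the pipeline design

def compress_stream(data: bytes, workers: int = 4):
    """Yield compressed chunks in order while later chunks compress in parallel."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves order, so the receiver can decompress chunks sequentially
        yield from pool.map(zlib.compress, chunks)

payload = bytes(range(256)) * 40960  # ~10 MiB of compressible stand-in data
sent = b"".join(compress_stream(payload))
print(len(payload), "->", len(sent), "bytes")
```

In a real transport the chunk boundaries would be framed so the receiver can decompress each chunk independently; zlib releases the GIL on large buffers, which is what makes the thread pool effective here.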

26 pages, 2861 KiB  
Article
Real-Time On-the-Fly Motion Planning for Urban Air Mobility via Updating Tree Data of Sampling-Based Algorithms Using Neural Network Inference
by Junlin Lou, Burak Yuksek, Gokhan Inalhan and Antonios Tsourdos
Aerospace 2024, 11(1), 99; https://doi.org/10.3390/aerospace11010099 - 22 Jan 2024
Cited by 1 | Viewed by 2299
Abstract
In this study, we consider the problem of motion planning for urban air mobility applications, generating a minimal-snap trajectory and a minimal-time trajectory to a goal location in the presence of dynamic geo-fences and uncertainties in the urban airspace. We developed two separate approaches for this problem because designing an algorithm individually for each objective yields better performance. The first is a decoupled method that designs a recurrent-neural-network-based policy network for a reinforcement learning algorithm and then combines it with an online trajectory generation algorithm to obtain the minimal-snap trajectory for the vehicle. The second is a coupled method that uses a generative adversarial imitation learning algorithm to train a recurrent-neural-network-based policy network and generate the time-optimized trajectory. The simulation results show that our approaches have a short computation time compared to other algorithms with similar performance, while guaranteeing sufficient exploration of the environment. In urban air mobility operations, our approaches are able to provide real-time, on-the-fly motion re-planning for vehicles, and the re-planned trajectories maintain continuity with the executed trajectory. To the best of our knowledge, these are among the first approaches that enable on-the-fly updates of the final landing position and real-time path and trajectory optimization while maintaining exploration of the environment.
(This article belongs to the Special Issue Integrated Airborne Urban Mobility: A Multidisciplinary View)
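As a rough illustration of the on-the-fly replanning idea, the sketch below re-scores an existing tree of candidate states with a learned cost predictor when geo-fences change. The placeholder cost function stands in for neural network inference; the node layout, geo-fence model, and costs are all illustrative assumptions, not the authors' algorithm.

```python
import math

class Node:
    """A tree node from a sampling-based planner (parents kept for path recovery)."""
    def __init__(self, xy, parent=None):
        self.xy, self.parent = xy, parent
        self.cost_to_go = math.inf  # refreshed by the learned model below

def predict_cost_to_go(xy, goal, geofences):
    """Placeholder for policy-network inference: straight-line cost,
    heavily penalized inside any active circular geo-fence."""
    cost = math.dist(xy, goal)
    for (cx, cy, r) in geofences:
        if math.dist(xy, (cx, cy)) < r:
            cost += 1e6
    return cost

def replan(tree, goal, geofences):
    """On-the-fly update: re-score every node, then pick the most promising one."""
    for n in tree:
        n.cost_to_go = predict_cost_to_go(n.xy, goal, geofences)
    return min(tree, key=lambda n: n.cost_to_go)

tree = [Node((x, y)) for x in range(0, 10, 2) for y in range(0, 10, 2)]
best = replan(tree, goal=(9.0, 9.0), geofences=[(5.0, 5.0, 2.0)])
print("best node:", best.xy)
```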

24 pages, 11959 KiB  
Article
Real-Time Processing and High-Quality Imaging of Navigation Strip Data Using SSS Based on AUVs
by Yulin Tang, Junsen Wang, Shaohua Jin, Jianhu Zhao, Liming Wang, Gang Bian and Xinyang Zhao
J. Mar. Sci. Eng. 2023, 11(9), 1769; https://doi.org/10.3390/jmse11091769 - 10 Sep 2023
Cited by 3 | Viewed by 1913
Abstract
Side-scan sonar (SSS) data from Autonomous Underwater Vehicles (AUVs) are, under the prevailing approach, primarily processed and visualized post mission, which fails to meet the timeliness requirements of on-the-fly image acquisition. This paper therefore introduces a novel method for real-time processing and high-quality imaging of SSS navigation strip data aboard AUVs. Initially, a comprehensive description of the real-time processing sequence is provided, encompassing the integration of multi-source navigation data using Kalman filtering, high-pass filtering of attitude and heading data to exclude anomalies, and bidirectional filtering within and between pings, ensuring real-time quality control of the raw data. In addition, this study adopts the U-Net semantic segmentation network for automatic real-time tracking of seafloor lines, devises a real-time correction strategy for radial distortion based on historical echo data, and utilizes the alternating direction method of multipliers for real-time noise reduction in strip images. With the combined application of these four pivotal techniques, we address the primary challenges in real-time navigation data processing. In conclusion, marine tests conducted in Bohai Bay substantiate the efficacy of the methodologies delineated in this research, offering a fresh paradigm for real-time processing and high-quality visualization of SSS navigation strip data on AUVs.
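The first stage, fusing multi-source navigation data with Kalman filtering, can be illustrated with a minimal one-dimensional constant-velocity filter. All matrices, noise levels, and measurements below are illustrative assumptions, not the AUV's actual navigation model.

```python
import numpy as np

# Minimal 1D constant-velocity Kalman filter; the real system fuses
# multi-source navigation data with a richer state and noise model.
dt = 1.0
F = np.array([[1, dt], [0, 1]])      # state transition (position, velocity)
H = np.array([[1.0, 0.0]])           # we observe position only
Q = np.eye(2) * 1e-3                 # process noise (illustrative)
R = np.array([[0.5]])                # measurement noise (illustrative)

x = np.zeros((2, 1))                 # initial state estimate
P = np.eye(2)                        # initial covariance

for z in [0.9, 2.1, 2.9, 4.2, 5.1]:  # noisy position fixes
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    print(f"fix {z:4.1f} -> filtered position {x[0, 0]:.2f}")
```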

17 pages, 61863 KiB  
Article
A Novel 6G Conversational Orchestration Framework for Enhancing Performance and Resource Utilization in Autonomous Vehicle Networks
by Sonia Shahzadi, Nauman Riaz Chaudhry and Muddesar Iqbal
Sensors 2023, 23(17), 7366; https://doi.org/10.3390/s23177366 - 23 Aug 2023
Cited by 2 | Viewed by 2042
Abstract
The 6G vision aims to automate versatile services by eliminating the complexity of human effort in Industry 5.0 applications, resulting in an intelligent environment whose cognitive and collaborative AI conversational orchestration capabilities enable a variety of applications across smart Autonomous Vehicle (AV) networks. In this article, an innovative framework for AI conversational orchestration is proposed, enabling on-the-fly virtual infrastructure service orchestration for Anything-as-a-Service (XaaS) to automate the network service paradigm. The proposed framework will potentially contribute to the growth of 6G conversational orchestration by enabling on-the-fly automation of cloud and network services. The orchestration aspect of the 6G vision is not limited to cognitive collaborative communications but also extends to context-aware personalized infrastructure for 6G automation. The experimental results of the implemented proof-of-concept framework are presented. These experiments not only affirm the technical capabilities of the framework but also point toward several Industry 5.0 applications.
(This article belongs to the Special Issue Fault-Tolerant Sensing Paradigms for Autonomous Vehicles)

19 pages, 713 KiB  
Article
Machine-Learning-Assisted Cyclostationary Spectral Analysis for Joint Signal Classification and Jammer Detection at the Physical Layer of Cognitive Radio
by Tassadaq Nawaz and Ali Alzahrani
Sensors 2023, 23(16), 7144; https://doi.org/10.3390/s23167144 - 12 Aug 2023
Cited by 9 | Viewed by 3096
Abstract
Cognitive radio technology was introduced as a possible solution for spectrum scarcity by exploiting dynamic spectrum access. In the last two decades, most researchers focused on enabling cognitive radios to manage the spectrum. However, due to their intelligent nature, cognitive radios can scan the radio frequency environment and change their transmission parameters accordingly on the fly. Such capabilities make them suitable for the design of both advanced jamming and anti-jamming systems. In this context, our work presents a novel, robust algorithm for spectrum characterisation in wideband radios. The proposed algorithm considers a wideband spectrum sensed by a cognitive radio terminal. The wideband is constituted of different narrowband signals that could either be licit signals or signals jammed by stealthy jammers. Cyclostationary feature detection is adopted to measure the spectral correlation density function of each narrowband signal. Then, cyclic and angular frequency profiles are obtained from the spectral correlation density function, concatenated, and used as the feature set for an artificial neural network, which characterises each narrowband signal as either a licit signal with a particular modulation scheme or a signal jammed by a specific stealthy jammer. The algorithm is tested under both multi-tone and modulated stealthy jamming attacks. Results show that the classification accuracy of our novel algorithm is superior to recently proposed signal classification and jamming detection algorithms. The algorithm has applications in both commercial and military communication systems.
(This article belongs to the Special Issue Security and Privacy in Wireless Communications and Networking)
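To illustrate the cyclostationary features involved, the sketch below estimates a cyclic autocorrelation coefficient, a basic building block of the spectral correlation density: a rectangular-pulse BPSK-like signal shows distinct peaks at cyclic frequencies tied to its symbol rate. The signal parameters are illustrative, not the paper's wideband setup.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, lag):
    """Estimate R_x^alpha(lag) = mean of x[n+lag] * conj(x[n]) * exp(-j*2*pi*alpha*n).
    With a nonzero lag, rectangular-pulse BPSK peaks at alpha = k/T."""
    n = np.arange(len(x) - lag)
    return np.mean(x[n + lag] * np.conj(x[n]) * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(1)
T = 8                                          # samples per symbol
symbols = rng.choice([-1.0, 1.0], size=512)
x = np.repeat(symbols, T) + 0.1 * rng.normal(size=512 * T)

for a in [0.0, 1 / T, 0.37]:                   # 0.37: no cyclic feature expected
    r = cyclic_autocorrelation(x, a, lag=T // 2)
    print(f"alpha={a:.3f}  |R|={abs(r):.3f}")
```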

25 pages, 4434 KiB  
Article
A Novel Dynamic Software-Defined Networking Approach to Neutralize Traffic Burst
by Aakanksha Sharma, Venki Balasubramanian and Joarder Kamruzzaman
Computers 2023, 12(7), 131; https://doi.org/10.3390/computers12070131 - 27 Jun 2023
Cited by 6 | Viewed by 3065
Abstract
Software-defined networking (SDN) has a holistic view of the network. It is highly suitable for handling dynamic loads in the traditional network with minimal updates to the network infrastructure. However, the standard SDN control plane, designed around single or multiple distributed SDN controllers, faces severe bottleneck issues. Our initial research created a reference model for the traditional network using the standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on the network traffic, the reference models consisted of light, modest, and heavy networks depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm was proposed for the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than the standard SDN. However, this enhancement was suitable only for small-scale networks because eSDN does not support dynamic SDN controller mapping in large-scale networks. Often, the same SDN controller becomes overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment without considering flow fluctuations and traffic bursts, which leads to a lack of real-time load balancing among the SDN controllers and eventually increases network latency. Therefore, to maintain Quality of Service (QoS) in the network, it becomes imperative to neutralise the on-the-fly traffic bursts that static SDN controllers cannot handle. Thus, we propose a novel dynamic controller mapping algorithm with multiple-controller placement, named dynamic SDN (dSDN), to solve the identified issues. In dSDN, the SDN controllers are mapped dynamically as the load fluctuates. If any SDN controller reaches its maximum threshold, the remaining traffic is diverted to another controller, significantly reducing delay and enhancing overall performance. Our technique considers the latency and load fluctuation in the network and manages situations where static mapping is ineffective in dealing with dynamic flow variation.
(This article belongs to the Special Issue Software-Defined Internet of Everything)
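The threshold-based diversion at the heart of dynamic mapping can be sketched in a few lines. The controller names, load metric, and threshold below are illustrative assumptions, not the dSDN implementation.

```python
# Minimal sketch of dynamic controller mapping: when the controller a switch
# maps to exceeds its load threshold, divert new flows to the least-loaded
# controller instead of letting one controller become a single point of failure.

THRESHOLD = 100  # illustrative flow-setup load a controller may carry

class Controller:
    def __init__(self, name):
        self.name, self.load = name, 0

def assign_flow(preferred, controllers):
    target = preferred
    if preferred.load >= THRESHOLD:                      # burst detected
        target = min(controllers, key=lambda c: c.load)  # divert dynamically
    target.load += 1
    return target

ctrls = [Controller("c1"), Controller("c2"), Controller("c3")]
ctrls[0].load = 100                        # c1 is saturated by a traffic burst
chosen = assign_flow(ctrls[0], ctrls)
print("new flow handled by", chosen.name)  # -> c2 or c3, not c1
```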

28 pages, 2842 KiB  
Review
A Survey on Parameters Affecting MANET Performance
by Ahmed M. Eltahlawy, Heba K. Aslan, Eslam G. Abdallah, Mahmoud Said Elsayed, Anca D. Jurcut and Marianne A. Azer
Electronics 2023, 12(9), 1956; https://doi.org/10.3390/electronics12091956 - 22 Apr 2023
Cited by 25 | Viewed by 4921
Abstract
A mobile ad hoc network (MANET) is an infrastructure-less network where mobile nodes can share information through wireless links without dedicated hardware handling the network routing. MANET nodes create on-the-fly connections with each other to share information, and they frequently join and leave the MANET at run time. Therefore, MANETs need the flexibility to handle variations in the number of participating nodes. An effective routing protocol is required to route data packets within this dynamic network. The lack of centralized infrastructure in MANETs makes it harder to secure communication between network nodes and leaves the nodes vulnerable to harmful attacks. Testbeds can be used to test MANETs under specific conditions, but researchers prefer simulators for the greater flexibility and lower cost they offer during environment setup and testing. A MANET’s environment depends on the required scenario, so an appropriate choice of simulator that fulfills the researcher’s needs is very important. Furthermore, researchers need to define the simulation parameters as well as the parameters required by the chosen routing protocol. In addition, if the MANET environment includes malicious nodes performing network attacks, the parameters affecting the MANET from the attack perspective need to be understood. This paper collects the environmental parameters needed to set up a required MANET environment. Parameters for evaluating overall network performance under attack are also collected. A survey of the literature is performed based on 50 recent papers, and comparison tables and statistical charts are created to show the literature’s contributions and the parameters used within the scope of the surveyed papers. Results show that the NS-2 simulator is the most popular simulator used in MANET research.
(This article belongs to the Special Issue Advancement in Blockchain Technology and Applications)

34 pages, 8904 KiB  
Article
A Lightweight Authentication MAC Protocol for CR-WSNs
by Bashayer Othman Aloufi and Wajdi Alhakami
Sensors 2023, 23(4), 2015; https://doi.org/10.3390/s23042015 - 10 Feb 2023
Cited by 6 | Viewed by 2409
Abstract
Cognitive radio (CR) has emerged as one of the most investigated techniques in wireless networks, and research into this technology and its potential uses is ongoing. CR makes full use of unused spectrum to address the spectrum shortage caused by the excessive demand for spectrum use in wireless networks. While wireless sensor network deployments across various sectors may have security drawbacks and issues that degrade the network, combining them with CR technology can enhance network performance and improve security. To enhance the performance of wireless sensor networks (WSNs), a lightweight authentication medium access control (MAC) protocol for CR-WSNs that is highly compatible with current WSNs is proposed. Burrows–Abadi–Needham (BAN) logic is used to prove that the proposed protocol achieves secure mutual authentication, and the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool is used to simulate and formally verify the protocol’s security. The results clearly show that the proposed protocol is SAFE under the on-the-fly model-checker (OFMC) backend, meaning it is immune to passive and active attacks such as man-in-the-middle (MITM) and replay attacks. The performance of the proposed protocol is evaluated and compared with related protocols in terms of computational cost, which is 0.01184 s. The proposed protocol provides higher security, which makes it more suitable for the CR-WSN environment and ensures resistance against different types of attacks.
(This article belongs to the Section Sensor Networks)
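For intuition, the sketch below shows a generic nonce-based challenge-response using an HMAC over a pre-shared key, the kind of lightweight symmetric authentication targeted in this setting. It is not the paper's actual message flow, which is specified and verified with BAN logic and AVISPA.

```python
import hashlib
import hmac
import os

# Generic challenge-response with a pre-shared key. This illustrates the
# general idea of lightweight authentication only; the paper's protocol
# messages and fields are not reproduced here.

KEY = os.urandom(16)  # pre-shared key between sensor node and sink

def respond(key: bytes, challenge: bytes) -> bytes:
    """Compute a MAC over the fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Sink -> node: fresh nonce; node -> sink: MAC over it.
nonce = os.urandom(16)
tag = respond(KEY, nonce)

# Sink verifies with a constant-time comparison; the fresh nonce
# defeats simple replay of an old response.
print("authenticated:", hmac.compare_digest(tag, respond(KEY, nonce)))
```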

20 pages, 2332 KiB  
Article
An Automatic Premature Ventricular Contraction Recognition System Based on Imbalanced Dataset and Pre-Trained Residual Network Using Transfer Learning on ECG Signal
by Hadaate Ullah, Md Belal Bin Heyat, Faijan Akhtar, Abdullah Y. Muaad, Chiagoziem C. Ukwuoma, Muhammad Bilal, Mahdi H. Miraz, Mohammad Arif Sobhan Bhuiyan, Kaishun Wu, Robertas Damaševičius, Taisong Pan, Min Gao, Yuan Lin and Dakun Lai
Diagnostics 2023, 13(1), 87; https://doi.org/10.3390/diagnostics13010087 - 28 Dec 2022
Cited by 27 | Viewed by 4800
Abstract
Recent advancements in wearable electrocardiograph (ECG) sensor devices have facilitated the development of automatic, internet-based monitoring and diagnosis systems for cardiac patients, which require patient-specific approaches. Premature ventricular contraction (PVC) is a common chronic cardiovascular condition that can lead to potentially fatal outcomes, so precise PVC detection from ECGs is crucial for the diagnosis of likely heart failure. In clinical settings, cardiologists typically identify PVCs from long-term ECGs, which demands considerable time and effort to assess appropriately and is cumbersome. To address these issues, we investigated a deep learning method that uses a pre-trained deep residual network, ResNet-18, to identify PVCs automatically via transfer learning. Features are extracted automatically by the inner layers of the network, in contrast to hand-crafted feature extraction methods, and the transfer learning mechanism alleviates the large volume of training data a deep model otherwise requires. The pre-trained model is evaluated on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia and Institute of Cardiological Technics (INCART) datasets. First, we used the Pan–Tompkins algorithm to segment 44,103 normal and 6423 PVC beats from the MIT-BIH Arrhythmia dataset and 106,239 normal and 9987 PVC beats from the INCART dataset. The segmented beats were converted into 2D (two-dimensional) images and used as input to the pre-trained model. The method is optimized using weighted random sampling, on-the-fly augmentation, the Adam optimizer, and callbacks. The results demonstrate satisfactory findings without the use of any complex pre-processing or feature extraction techniques and without added model design complexity. Using LOSOCV (leave-one-subject-out cross-validation), the accuracies achieved on MIT-BIH and INCART are 99.93% and 99.77%, respectively, surpassing state-of-the-art methods for PVC recognition on unseen data. This demonstrates the efficacy and generalizability of the proposed method on imbalanced datasets. Because no device-specific (patient-specific) information was used at the evaluation stage on the target datasets, the method might serve as a general approach for situations in which ECG signals are obtained from different patients using a variety of smart sensor devices.
(This article belongs to the Special Issue Implementing AI in Diagnosis of Cardiovascular Diseases)
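A minimal PyTorch/torchvision sketch of the described setup follows: a pretrained ResNet-18 with its final layer replaced for two-class output, on-the-fly augmentation transforms, and a weighted random sampler to counter the normal/PVC imbalance. The dataset wiring, augmentation choices, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torch.utils.data import WeightedRandomSampler
from torchvision import models, transforms

# Pretrained ResNet-18 with a new 2-class head (normal vs. PVC).
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

# On-the-fly augmentation applied each time a beat image is drawn;
# the specific transforms here are illustrative.
augment = transforms.Compose([
    transforms.RandomRotation(5),
    transforms.RandomAffine(0, translate=(0.05, 0.05)),
    transforms.ToTensor(),
])

# Inverse-frequency weights counter the imbalance (44,103 normal vs. 6423 PVC).
labels = [0] * 44103 + [1] * 6423
class_w = {0: 1 / 44103, 1: 1 / 6423}
sampler = WeightedRandomSampler([class_w[y] for y in labels],
                                num_samples=len(labels), replacement=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```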

22 pages, 5465 KiB  
Article
Fast and Interactive Positioning of Proteins within Membranes
by André Lanrezac, Benoist Laurent, Hubert Santuz, Nicolas Férey and Marc Baaden
Algorithms 2022, 15(11), 415; https://doi.org/10.3390/a15110415 - 7 Nov 2022
Cited by 5 | Viewed by 3137
Abstract
(1) Background: We developed an algorithm to perform interactive molecular simulations (IMS) of protein alignment in membranes, allowing on-the-fly monitoring and manipulation of such molecular systems at various scales. (2) Methods: UnityMol, an advanced molecular visualization software; MDDriver, a socket for data communication; and BioSpring, a spring network simulation engine, were extended to perform IMS. These components are designed to communicate easily with each other, adapt to other molecular simulation software, and provide a development framework for adding new interaction models to simulate biological phenomena, such as protein alignment in the membrane, at a rate fast enough for real-time experiments. (3) Results: We describe in detail the integration of an implicit membrane model for Integral Membrane Protein And Lipid Association (IMPALA) into our IMS framework. Our implementation can cover multiple levels of representation, and the degrees of freedom can be tuned to optimize the experience. We explain the validation of this model in interactive and exhaustive search modes. (4) Conclusions: Protein positioning in model membranes can now be performed interactively in real time.
(This article belongs to the Special Issue Algorithms for Computational Biology 2022)

15 pages, 1175 KiB  
Article
Detection of Malicious Network Flows with Low Preprocessing Overhead
by Garett Fox and Rajendra V. Boppana
Network 2022, 2(4), 628-642; https://doi.org/10.3390/network2040036 - 4 Nov 2022
Cited by 7 | Viewed by 4609
Abstract
Machine learning (ML) is frequently used to identify malicious traffic flows on a network. However, the requirement of complex preprocessing of network data to extract features or attributes of interest before applying the ML models restricts their use to offline analysis of previously captured network traffic to identify attacks that have already occurred. This paper applies machine learning analysis for network security with low preprocessing overhead. Raw network data are converted directly into bitmap files and processed through a Two-Dimensional Convolutional Neural Network (2D-CNN) model to identify malicious traffic. The model has high accuracy in detecting various malicious traffic flows, even zero-day attacks, based on testing with three open-source network traffic datasets. The overhead of preprocessing the network data before applying the 2D-CNN model is very low, making it suitable for on-the-fly network traffic analysis for malicious traffic flows.
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management)
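The low-overhead preprocessing step can be sketched directly: pack raw flow bytes into a fixed-size 2D grayscale array that a 2D-CNN can consume, with no feature extraction. The image size and sample bytes below are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def flow_to_bitmap(raw: bytes, side: int = 32) -> np.ndarray:
    """Pack the first side*side raw bytes of a flow into a 2D grayscale
    image (zero-padded if short), ready to feed a 2D-CNN."""
    buf = np.frombuffer(raw[: side * side], dtype=np.uint8)
    img = np.zeros(side * side, dtype=np.uint8)
    img[: len(buf)] = buf
    return img.reshape(side, side)

# Stand-in for captured packet bytes; real input would come from a pcap.
packet = bytes.fromhex("45000054a6f20000400111be0a000001") * 8
print(flow_to_bitmap(packet).shape)  # (32, 32), one grayscale channel
```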

16 pages, 4904 KiB  
Article
Machine Learning with Quantum Matter: An Example Using Lead Zirconate Titanate
by Edward Rietman, Leslie Schuum, Ayush Salik, Manor Askenazi and Hava Siegelmann
Quantum Rep. 2022, 4(4), 418-433; https://doi.org/10.3390/quantum4040030 - 3 Oct 2022
Viewed by 2654
Abstract
Stephen Wolfram (2002) proposed the concept of computational equivalence, which implies that almost any dynamical system can be considered a computation, including programmable matter and nonlinear materials such as so-called quantum matter. Memristors are often used in building and evaluating hardware neural networks, and Ukil (2011) demonstrated a theoretical relationship between piezoelectric materials and memristors. We review that work as necessary background for our exploration of a piezoelectric material for neural network computation. Our method consisted of using a cubic block of unpoled lead zirconate titanate (PZT) ceramic, to which we attached wires to drive the PZT as a programmable substrate. By means of pulse trains, we then constructed, on the fly, internal patterns of regions of aligned polarization and unaligned, or disordered, regions. These dynamic patterns come about through constructive and destructive interference and may be exploited as a type of reservoir network. Using MNIST data, we demonstrate a learning machine.
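A software analogue of the reservoir idea may help: the physical substrate acts as a fixed nonlinear projection of the input, and only a linear readout is trained. The random tanh projection and toy data below are stand-ins for the PZT dynamics and MNIST, under stated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir(X, n_states=300):
    """Fixed, untrained nonlinear expansion standing in for the PZT substrate."""
    W_in = rng.normal(size=(X.shape[1], n_states)) / np.sqrt(X.shape[1])
    return np.tanh(X @ W_in)

X = rng.normal(size=(500, 64))                 # toy inputs (not MNIST)
y = (X[:, :8].sum(axis=1) > 0).astype(float)   # toy binary target

H = reservoir(X)
w, *_ = np.linalg.lstsq(H, y, rcond=None)      # only the readout is trained
print("readout accuracy:", (((H @ w) > 0.5) == y).mean())
```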