Search Results (14)

Search Parameters:
Keywords = native AI enablers

32 pages, 1277 KiB  
Article
Distributed Prediction-Enhanced Beamforming Using LR/SVR Fusion and MUSIC Refinement in 5G O-RAN Systems
by Mustafa Mayyahi, Jordi Mongay Batalla, Jerzy Żurek and Piotr Krawiec
Appl. Sci. 2025, 15(13), 7428; https://doi.org/10.3390/app15137428 - 2 Jul 2025
Viewed by 308
Abstract
Low-latency and robust beamforming are vital for sustaining signal quality and spectral efficiency in emerging high-mobility 5G and future 6G wireless networks. Conventional beam management approaches, which rely on periodic Channel State Information feedback and static codebooks, as outlined in 3GPP standards, are insufficient in rapidly varying propagation environments. In this work, we propose a Dominance-Enforced Adaptive Clustered Sliding Window Regression (DE-ACSW-R) framework for predictive beamforming in O-RAN Split 7-2x architectures. DE-ACSW-R leverages a sliding window of recent angle of arrival (AoA) estimates, applying in-window change-point detection to segment user trajectories and performing both Linear Regression (LR) and curvature-adaptive Support Vector Regression (SVR) for short-term and non-linear prediction. A confidence-weighted fusion mechanism adaptively blends LR and SVR outputs, incorporating robust outlier detection and a dominance-enforced selection regime to address strong disagreements. The Open Radio Unit (O-RU) autonomously triggers localised MUSIC scans when prediction confidence degrades, minimising unnecessary full-spectrum searches and saving delay. Simulation results demonstrate that the proposed DE-ACSW-R approach significantly enhances AoA tracking accuracy, beamforming gain, and adaptability under realistic high-mobility conditions, surpassing conventional LR/SVR baselines. This AI-native modular pipeline aligns with O-RAN architectural principles, enabling scalable and real-time beam management for next-generation wireless deployments. Full article
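The confidence-weighted LR/SVR fusion and dominance-enforced selection described in this abstract can be pictured with a minimal sketch; the window length, SVR kernel, inverse-residual weighting, and disagreement threshold below are illustrative assumptions, not the paper's DE-ACSW-R settings:

```python
# Minimal sketch of confidence-weighted LR/SVR fusion for short-term AoA
# prediction over a sliding window. Window size, SVR kernel, and the
# inverse-residual weighting are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

def predict_next_aoa(t, aoa, t_next):
    """t, aoa: recent sliding-window samples; t_next: prediction instant."""
    X = np.asarray(t, dtype=float).reshape(-1, 1)
    y = np.asarray(aoa, dtype=float)

    lr = LinearRegression().fit(X, y)
    svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)

    # Confidence from in-window residuals: smaller residual -> larger weight.
    eps = 1e-6
    w_lr = 1.0 / (np.mean((lr.predict(X) - y) ** 2) + eps)
    w_svr = 1.0 / (np.mean((svr.predict(X) - y) ** 2) + eps)

    p_lr = lr.predict([[t_next]])[0]
    p_svr = svr.predict([[t_next]])[0]

    # Dominance-enforced selection: if the predictors disagree strongly,
    # fall back to the higher-confidence one instead of blending.
    if abs(p_lr - p_svr) > 5.0:          # degrees; illustrative threshold
        return p_lr if w_lr >= w_svr else p_svr
    return (w_lr * p_lr + w_svr * p_svr) / (w_lr + w_svr)

# Example: a user trajectory sampled every 10 ms with a drifting AoA
t = np.arange(10) * 0.01
aoa = 30.0 + 80.0 * t + np.random.normal(0, 0.2, 10)
print(predict_next_aoa(t, aoa, 0.10))
```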

21 pages, 597 KiB  
Article
Cloud Security Automation Through Symmetry: Threat Detection and Response
by Harshad Pitkar
Symmetry 2025, 17(6), 859; https://doi.org/10.3390/sym17060859 - 1 Jun 2025
Viewed by 1151
Abstract
Cloud security automation has emerged as a critical solution for organizations facing increasingly complex cybersecurity challenges in cloud environments. This study examines the current state of cloud security automation, focusing on its role in symmetry between threat detection and response capabilities. Through analysis of recent market trends and technological developments, this paper explores key technologies, including Security Information and Event Management (SIEM), Extended Detection and Response (XDR), and Security Orchestration, Automation, and Response (SOAR) platforms. The integration of artificial intelligence and machine learning has transformed these systems, enabling real-time threat detection and automated response mechanisms. The research examines real-world applications and highlights that organizations implementing automated security solutions have demonstrated improved incident response times and reduced security breaches. However, challenges remain in terms of the complexity of integration and symmetry between automation and human expertise. As the global AI cybersecurity market is projected to reach $134 billion by 2030, the future of cloud security automation lies in advanced AI-driven solutions and improved threat intelligence integration. Even though cloud platforms are widely used, existing security tools have challenges in identifying real-time threats, the integration of heterogeneous data sources, and actionable intelligence generation. The majority of current solutions are not designed for cloud-native platforms and do not scale or evolve. This paper overcomes these challenges by introducing a scalable and extensible cloud security architecture, which uses sophisticated correlation and threat intelligence to provide increased detection accuracies as well as reduced response times for the challenging environment of advanced cloud-based infrastructures. This research aims to equip organizations with proven methods from real-world use cases and strategies that they can adopt to enable automated threat detection and response. Full article
(This article belongs to the Section Computer)
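No code is published in this listing; as a purely illustrative sketch of the kind of rule-based event correlation an automated detection-and-response pipeline performs, consider the following, where the event schema, threshold, and window are hypothetical:

```python
# Purely illustrative correlation rule: flag a source that produces many
# failed logins followed by a success within a short window. Event schema,
# threshold, and window are hypothetical, not from the paper.
from collections import defaultdict
from datetime import datetime, timedelta

def correlate_bruteforce(events, max_failures=5, window=timedelta(minutes=10)):
    """events: iterable of dicts with 'time', 'source_ip', 'outcome'."""
    failures = defaultdict(list)
    alerts = []
    for ev in sorted(events, key=lambda e: e["time"]):
        ip, ts = ev["source_ip"], ev["time"]
        # keep only failures inside the sliding window
        failures[ip] = [t for t in failures[ip] if ts - t <= window]
        if ev["outcome"] == "failure":
            failures[ip].append(ts)
        elif ev["outcome"] == "success" and len(failures[ip]) >= max_failures:
            alerts.append({"source_ip": ip, "time": ts,
                           "rule": "possible credential brute force"})
    return alerts

t0 = datetime.now()
events = [{"time": t0 + timedelta(seconds=i), "source_ip": "10.0.0.7",
           "outcome": "failure"} for i in range(6)]
events.append({"time": t0 + timedelta(seconds=10), "source_ip": "10.0.0.7",
               "outcome": "success"})
print(correlate_bruteforce(events))
```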

30 pages, 3401 KiB  
Article
Explainable AI Assisted IoMT Security in Future 6G Networks
by Navneet Kaur and Lav Gupta
Future Internet 2025, 17(5), 226; https://doi.org/10.3390/fi17050226 - 20 May 2025
Viewed by 677
Abstract
The rapid integration of the Internet of Medical Things (IoMT) is transforming healthcare through real-time monitoring, AI-driven diagnostics, and remote treatment. However, the growing reliance on IoMT devices, such as robotic surgical systems, life-support equipment, and wearable health monitors, has expanded the attack surface, exposing healthcare systems to cybersecurity risks like data breaches, device manipulation, and potentially life-threatening disruptions. While 6G networks offer significant benefits for healthcare, such as ultra-low latency, extensive connectivity, and AI-native capabilities, as highlighted in the ITU 6G (IMT-2030) framework, they are expected to introduce new and potentially more severe security challenges. These advancements put critical medical systems at greater risk, highlighting the need for more robust security measures. This study leverages AI techniques to systematically identify security vulnerabilities within 6G-enabled healthcare environments. Additionally, the proposed approach strengthens AI-driven security through use of multiple XAI techniques cross-validated against each other. Drawing on the insights provided by XAI, we tailor our mitigation strategies to the ITU-defined 6G usage scenarios, with a focus on their applicability to medical IoT networks. We propose that these strategies will effectively address potential vulnerabilities and enhance the security of medical systems leveraging IoT and 6G networks. Full article
(This article belongs to the Special Issue Toward 6G Networks: Challenges and Technologies)

24 pages, 985 KiB  
Article
Secure Hierarchical Federated Learning for Large-Scale AI Models: Poisoning Attack Defense and Privacy Preservation in AIoT
by Chengzhuo Han, Tingting Yang, Xin Sun and Zhengqi Cui
Electronics 2025, 14(8), 1611; https://doi.org/10.3390/electronics14081611 - 16 Apr 2025
Cited by 1 | Viewed by 761
Abstract
The rapid integration of large-scale AI models into distributed systems, such as the Artificial Intelligence of Things (AIoT), has introduced critical security and privacy challenges. While configurable models enhance resource efficiency, their deployment in heterogeneous edge environments remains vulnerable to poisoning attacks, data leakage, and adversarial interference, threatening the integrity of collaborative learning and responsible AI deployment. To address these issues, this paper proposes a Hierarchical Federated Cross-domain Retrieval (FHCR) framework tailored for secure and privacy-preserving AIoT systems. By decoupling models into a shared retrieval layer (globally optimized via federated learning) and device-specific layers (locally personalized), FHCR minimizes communication overhead while enabling dynamic module selection. Crucially, we integrate a retrieval-layer mean inspection (RLMI) mechanism to detect and filter malicious gradient updates, effectively mitigating poisoning attacks and reducing attack success rates by 20% compared to conventional methods. Extensive evaluation on General-QA and IoT-Native datasets demonstrates the robustness of FHCR against adversarial threats, with FHCR maintaining global accuracy not lower than baseline levels while reducing communication costs by 14%. Full article
(This article belongs to the Special Issue Security and Privacy for AI)
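The retrieval-layer mean inspection (RLMI) idea, filtering client updates that stray too far from the mean before aggregation, can be sketched as follows; the distance statistic and rejection threshold are assumptions, not the paper's exact rule:

```python
# Minimal sketch of a mean-inspection filter over clients' retrieval-layer
# updates: discard updates that deviate too far from the coordinate-wise
# mean before federated averaging. The z-score statistic and threshold are
# illustrative assumptions.
import numpy as np

def filter_and_aggregate(updates, z_thresh=2.5):
    """updates: list of 1-D arrays (flattened retrieval-layer gradients)."""
    U = np.stack(updates)                      # shape: (clients, params)
    mean = U.mean(axis=0)
    dist = np.linalg.norm(U - mean, axis=1)    # each client's distance to mean
    z = (dist - dist.mean()) / (dist.std() + 1e-9)
    keep = z < z_thresh                        # drop outlying (suspect) updates
    return U[keep].mean(axis=0), np.where(~keep)[0]

rng = np.random.default_rng(0)
honest = [rng.normal(0, 0.1, 1000) for _ in range(9)]
poisoned = [rng.normal(5.0, 0.1, 1000)]        # one inflated, malicious update
agg, rejected = filter_and_aggregate(honest + poisoned)
print("rejected client indices:", rejected)
```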

18 pages, 2974 KiB  
Article
Evolving Towards Artificial-Intelligence-Driven Sixth-Generation Mobile Networks: An End-to-End Framework, Key Technologies, and Opportunities
by Zexu Li, Jingyi Wang, Song Zhao, Qingtian Wang and Yue Wang
Appl. Sci. 2025, 15(6), 2920; https://doi.org/10.3390/app15062920 - 7 Mar 2025
Cited by 2 | Viewed by 2960
Abstract
The incorporation of artificial intelligence (AI) into sixth-generation (6G) mobile networks is expected to revolutionize communication systems, transforming them into intelligent platforms that provide seamless connectivity and intelligent services. This paper explores the evolution of 6G architectures, as well as the enabling technologies required to integrate AI across the cloud, core network (CN), radio access network (RAN), and terminals. It begins by examining the necessity of embedding AI into 6G networks, making it a native capability. The analysis then outlines potential evolutionary paths for the RAN architecture and proposes an end-to-end AI-driven framework. Additionally, key technologies such as cross-domain AI collaboration, native computing, and native security mechanisms are discussed. The study identifies potential use cases, including embodied intelligence, wearable devices, and generative AI, which offer valuable insights into fostering collaboration within the AI-driven ecosystem and highlight new revenue model opportunities and challenges. The paper concludes with a forward-looking perspective on the convergence of AI and 6G technology. Full article
(This article belongs to the Special Issue 5G/6G Mechanisms, Services, and Applications)

13 pages, 385 KiB  
Article
Availability, Scalability, and Security in the Migration from Container-Based to Cloud-Native Applications
by Bruno Nascimento, Rui Santos, João Henriques, Marco V. Bernardo and Filipe Caldeira
Computers 2024, 13(8), 192; https://doi.org/10.3390/computers13080192 - 9 Aug 2024
Cited by 3 | Viewed by 4149
Abstract
The shift from traditional monolithic architectures to container-based solutions has revolutionized application deployment by enabling consistent, isolated environments across various platforms. However, as organizations look for improved efficiency, resilience, security, and scalability, the limitations of container-based applications, such as their manual scaling, resource management challenges, potential single points of failure, and operational complexities, become apparent. These challenges, coupled with the need for sophisticated tools and expertise for monitoring and security, drive the move towards cloud-native architectures. Cloud-native approaches offer a more robust integration with cloud services, including managed databases and AI/ML services, providing enhanced agility and efficiency beyond what standalone containers can achieve. Availability, scalability, and security are the cornerstone requirements of these cloud-native applications. This work explores how containerized applications can be customized to address such requirements during their shift to cloud-native orchestrated environments. A Proof of Concept (PoC) demonstrated the technical aspects of such a move into a Kubernetes environment in Azure. The results from its evaluation highlighted the suitability of Kubernetes in addressing such a demand for availability and scalability while safeguarding security when moving containerized applications to cloud-native environments. Full article

34 pages, 2035 KiB  
Review
Nanofibrous Scaffolds in Biomedicine
by Hossein Omidian and Erma J. Gill
J. Compos. Sci. 2024, 8(7), 269; https://doi.org/10.3390/jcs8070269 - 12 Jul 2024
Cited by 11 | Viewed by 2616
Abstract
This review explores the design, fabrication, and biomedical applications of nanofibrous scaffolds, emphasizing their impact on tissue engineering and regenerative medicine. Advanced techniques like electrospinning and 3D printing have enabled precise control over scaffold architecture, crucial for mimicking native tissue structures. Integrating bioactive materials has significantly enhanced cellular interactions, mechanical properties, and the controlled release of therapeutic agents. Applications span bone, cardiovascular, soft tissue, neural regeneration, wound healing, and advanced drug delivery. Despite these advancements, challenges such as scalability, biocompatibility, and long-term stability remain barriers to clinical translation. Future research should focus on developing smart scaffolds and utilizing AI-enhanced manufacturing for more personalized and effective regenerative therapies. Full article

21 pages, 10290 KiB  
Article
Smartphone-Based Citizen Science Tool for Plant Disease and Insect Pest Detection Using Artificial Intelligence
by Panagiotis Christakakis, Garyfallia Papadopoulou, Georgios Mikos, Nikolaos Kalogiannidis, Dimosthenis Ioannidis, Dimitrios Tzovaras and Eleftheria Maria Pechlivani
Technologies 2024, 12(7), 101; https://doi.org/10.3390/technologies12070101 - 3 Jul 2024
Cited by 14 | Viewed by 7610
Abstract
In recent years, the integration of smartphone technology with novel sensing technologies, Artificial Intelligence (AI), and Deep Learning (DL) algorithms has revolutionized crop pest and disease surveillance. Efficient and accurate diagnosis is crucial to mitigate substantial economic losses in agriculture caused by diseases and pests. An innovative Apple® and Android™ mobile application for citizen science has been developed, to enable real-time detection and identification of plant leaf diseases and pests, minimizing their impact on horticulture, viticulture, and olive cultivation. Leveraging DL algorithms, this application facilitates efficient data collection on crop pests and diseases, supporting crop yield protection and cost reduction in alignment with the Green Deal goal for 2030 by reducing pesticide use. The proposed citizen science tool involves all Farm to Fork stakeholders and farm citizens in minimizing damage to plant health by insect and fungal diseases. It utilizes comprehensive datasets, including images of various diseases and insects, within a robust Decision Support System (DSS) where DL models operate. The DSS connects directly with users, allowing them to upload crop pest data via the mobile application, providing data-driven support and information. The application stands out for its scalability and interoperability, enabling the continuous integration of new data to enhance its capabilities. It supports AI-based imaging analysis of quarantine pests, invasive alien species, and emerging and native pests, thereby aiding post-border surveillance programs. The mobile application, developed using a Python-based REST API, PostgreSQL, and Keycloak, has been field-tested, demonstrating its effectiveness in real-world agriculture scenarios, such as detecting Tuta absoluta (Meyrick) infestation in tomato cultivations. The outcomes of this study in T. absoluta detection serve as a showcase scenario for the proposed citizen science tool’s applicability and usability, demonstrating a 70.2% accuracy (mAP50) utilizing advanced DL models. Notably, during field testing, the model achieved detection confidence levels of up to 87%, enhancing pest management practices. Full article
(This article belongs to the Section Information and Communication Technologies)
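As a hypothetical illustration of how a client could submit an image to such a detection REST API and keep only confident detections (the endpoint, field names, and response schema below are invented; the paper's actual API is not specified in this listing):

```python
# Hypothetical client-side sketch: submit a crop photo to a detection REST
# API and keep detections above a confidence threshold. Endpoint URL, field
# names, and response schema are invented for illustration.
import requests

API_URL = "https://example.org/api/v1/detect"   # placeholder endpoint
TOKEN = "<keycloak-access-token>"               # obtained out of band

def detect_pests(image_path, min_confidence=0.5):
    with open(image_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    detections = resp.json().get("detections", [])
    return [d for d in detections if d.get("confidence", 0.0) >= min_confidence]

# e.g. detect_pests("tomato_leaf.jpg") might return
# [{"label": "Tuta absoluta", "confidence": 0.87, ...}]
```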

23 pages, 922 KiB  
Review
Distributed Machine Learning and Native AI Enablers for End-to-End Resources Management in 6G
by Orfeas Agis Karachalios, Anastasios Zafeiropoulos, Kimon Kontovasilis and Symeon Papavassiliou
Electronics 2023, 12(18), 3761; https://doi.org/10.3390/electronics12183761 - 6 Sep 2023
Cited by 2 | Viewed by 2208
Abstract
6G targets a broad and ambitious range of networking scenarios with stringent and diverse requirements. Such challenging demands require a multitude of computational and communication resources and means for their efficient and coordinated management in an end-to-end fashion across various domains. Conventional approaches cannot handle the complexity, dynamicity, and end-to-end scope of the problem, and solutions based on artificial intelligence (AI) become necessary. However, current applications of AI to resource management (RM) tasks provide partial ad hoc solutions that largely lack compatibility with notions of native AI enablers, as foreseen in 6G, and either have a narrow focus, without regard for an end-to-end scope, or employ non-scalable representations/learning. This survey article contributes a systematic demonstration that the 6G vision promotes the employment of appropriate distributed machine learning (ML) frameworks that interact through native AI enablers in a composable fashion towards a versatile and effective end-to-end RM framework. We start with an account of 6G challenges that yields three criteria for benchmarking the suitability of candidate ML-powered RM methodologies for 6G, also in connection with an end-to-end scope. We then proceed with a focused survey of appropriate methodologies in light of these criteria. All considered methodologies are classified in accordance with six distinct methodological frameworks, and this approach invites broader insight into the potential and limitations of the more general frameworks, beyond individual methodologies. The landscape is complemented by considering important AI enablers, discussing their functionality and interplay, and exploring their potential for supporting each of the six methodological frameworks. The article culminates with lessons learned, open issues, and directions for future research. Full article

17 pages, 40887 KiB  
Article
Deep Learning within a DICOM WSI Viewer for Histopathology
by Noelia Vallez, Jose Luis Espinosa-Aranda, Anibal Pedraza, Oscar Deniz and Gloria Bueno
Appl. Sci. 2023, 13(17), 9527; https://doi.org/10.3390/app13179527 - 23 Aug 2023
Cited by 5 | Viewed by 2641
Abstract
Microscopy scanners and artificial intelligence (AI) techniques have facilitated remarkable advancements in biomedicine. Incorporating these advancements into clinical practice is, however, hampered by the variety of digital file formats used, which poses a significant challenge for data processing. Open-source and commercial software solutions have attempted to address proprietary formats, but they fall short of providing comprehensive access to vital clinical information beyond image pixel data. The proliferation of competing proprietary formats makes the lack of interoperability even worse. DICOM stands out as a standard that transcends internal image formats via metadata-driven image exchange in this context. DICOM defines imaging workflow information objects for images, patients’ studies, reports, etc. DICOM promises standards-based pathology imaging, but its clinical use is limited. No FDA-approved digital pathology system natively generates DICOM, and only one high-performance whole slide images (WSI) device has been approved for diagnostic use in Asia and Europe. In a recent series of Digital Pathology Connectathons, the interoperability of our solution was demonstrated by integrating DICOM digital pathology imaging, i.e., WSI, into PACs and enabling their visualisation. However, no system that incorporates state-of-the-art AI methods and directly applies them to DICOM images has been presented. In this paper, we present the first web viewer system that employs WSI DICOM images and AI models. This approach aims to bridge the gap by integrating AI methods with DICOM images in a seamless manner, marking a significant step towards more effective CAD WSI processing tasks. Within this innovative framework, convolutional neural networks, including well-known architectures such as AlexNet and VGG, have been successfully integrated and evaluated. Full article
(This article belongs to the Special Issue Recent Advances in Bioinformatics and Health Informatics)
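A minimal offline sketch of the basic step such a viewer builds on, reading pixel data from a DICOM file and classifying a tile with a pretrained CNN such as AlexNet, might look like this; the tile handling, preprocessing, and class mapping are assumptions, and the paper's web viewer differs:

```python
# Minimal offline sketch: read pixel data from a DICOM file with pydicom and
# classify a tile with a pretrained CNN (AlexNet here). Tile selection,
# normalisation, and the class mapping are illustrative assumptions.
import numpy as np
import pydicom
import torch
from torchvision import models, transforms

ds = pydicom.dcmread("slide_tile.dcm")          # hypothetical WSI tile file
tile = ds.pixel_array.astype(np.uint8)          # assumed H x W x 3 RGB tile

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
with torch.no_grad():
    logits = model(preprocess(tile).unsqueeze(0))
print("predicted class index:", logits.argmax(dim=1).item())
```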

33 pages, 6020 KiB  
Article
ASSIST-IoT: A Modular Implementation of a Reference Architecture for the Next Generation Internet of Things
by Paweł Szmeja, Alejandro Fornés-Leal, Ignacio Lacalle, Carlos E. Palau, Maria Ganzha, Wiesław Pawłowski, Marcin Paprzycki and Johan Schabbink
Electronics 2023, 12(4), 854; https://doi.org/10.3390/electronics12040854 - 8 Feb 2023
Cited by 26 | Viewed by 5421
Abstract
Next Generation Internet of Things (NGIoT) addresses the deployment of complex, novel IoT ecosystems. These ecosystems are related to different technologies and initiatives, such as 5G/6G, AI, cybersecurity, and data science. The interaction with these disciplines requires addressing complex challenges related with the implementation of flexible solutions that mix heterogeneous software and hardware, while providing high levels of customisability and manageability, creating the need for a blueprint reference architecture (RA) independent of particular existing vertical markets (e.g., energy, automotive, or smart cities). Different initiatives have partially dealt with the requirements of the architecture. However, the first complete, consolidated NGIoT RA, covering the hardware and software building blocks, and needed for the advent of NGIoT, has been designed in the ASSIST-IoT project. The ASSIST-IoT RA delivers a layered and modular design that divides the edge-cloud continuum into independent functions and cross-cutting capabilities. This contribution discusses practical aspects of implementation of the proposed architecture within the context of real-world applications. In particular, it is shown how use of cloud-native concepts (microservices and applications, containerisation, and orchestration) applied to the edge-cloud continuum IoT systems results in bringing the ASSIST-IoT concepts to reality. The description of how the design elements can be implemented in practice is presented in the context of an ecosystem, where independent software packages are deployed and run at the selected points in the hardware environment. Both implementation aspects and functionality of selected groups of virtual artefacts (micro-applications called enablers) are described, along with the hardware and software contexts in which they run. Full article
(This article belongs to the Special Issue Large-Scale and Complex Systems: Advances in Modeling and Control)

16 pages, 1100 KiB  
Article
Technological Transformation of Telco Operators towards Seamless IoT Edge-Cloud Continuum
by Kasim Oztoprak, Yusuf Kursat Tuncel and Ismail Butun
Sensors 2023, 23(2), 1004; https://doi.org/10.3390/s23021004 - 15 Jan 2023
Cited by 27 | Viewed by 4175
Abstract
This article investigates and discusses challenges in the telecommunication field from multiple perspectives, both academic and industry sides are catered for, surveying the main points of technological transformation toward edge-cloud continuum from the view of a telco operator to show the complete picture, including the evolution of cloud-native computing, Software-Defined Networking (SDN), and network automation platforms. The cultural shift in software development and management with DevOps enabled the development of significant technologies in the telecommunication world, including network equipment, application development, and system orchestration. The effect of the aforementioned cultural shift to the application area, especially from the IoT point of view, is investigated. The enormous change in service diversity and delivery capabilities to mass devices are also discussed. During the last two decades, desktop and server virtualization has played an active role in the Information Technology (IT) world. With the use of OpenFlow, SDN, and Network Functions Virtualization (NFV), the network revolution has got underway. The shift from monolithic application development and deployment to micro-services changed the whole picture. On the other hand, the data centers evolved in several generations where the control plane cannot cope with all the networks without an intelligent decision-making process, benefiting from the AI/ML techniques. AI also enables operators to forecast demand more accurately, anticipate network load, and adjust capacity and throughput automatically. Going one step further, zero-touch networking and service management (ZSM) is proposed to get high-level human intents to generate a low-level configuration for network elements with validated results, minimizing the ratio of faults caused by human intervention. Harmonizing all signs of progress in different communication technologies enabled the use of edge computing successfully. Low-powered (from both energy and processing perspectives) IoT networks have disrupted the customer and end-point demands within the sector, as such paved the path towards devising the edge computing concept, which finalized the whole picture of the edge-cloud continuum. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)

20 pages, 1114 KiB  
Article
Random-Delay-Corrected Deep Reinforcement Learning Framework for Real-World Online Closed-Loop Network Automation
by Keliang Du, Luhan Wang, Yu Liu, Haiwen Niu, Shaoxin Huang and Xiangming Wen
Appl. Sci. 2022, 12(23), 12297; https://doi.org/10.3390/app122312297 - 1 Dec 2022
Cited by 1 | Viewed by 2306
Abstract
The future mobile communication networks (beyond 5th generation (5G)) are evolving toward the service-based architecture where network functions are fine-grained, thereby meeting the dynamic requirements of diverse and differentiated vertical applications. Consequently, the complexity of network management becomes higher, and artificial intelligence (AI) technologies can improve AI-native network automation with their ability to solve complex problems. Specifically, deep reinforcement learning (DRL) technologies are considered the key to intelligent network automation with a feedback mechanism similar to that of online closed-loop architecture. However, the 0-delay assumptions of the standard Markov decision process (MDP) of traditional DRL algorithms cannot directly be adopted into real-world networks because there exist random delays between the agent and the environment that will affect the performance significantly. To address this problem, this paper proposes a random-delay-corrected framework. We first abstract the scenario and model it as a partial history-dependent MDP (PH-MDP), and prove that it can be transformed to be the standard MDP solved by the traditional DRL algorithms. Then, we propose a random-delay-corrected DRL framework with a forward model and a delay-corrected trajectory sampling to obtain samples by continuous interactions to train the agent. Finally, we propose a delayed-deep-Q-network (delayed-DQN) algorithm based on the framework. For the evaluation, we develop a real-world cloud-native 5G core network prototype whose management architecture follows an online closed-loop mechanism. A use case on the top of the prototype namely delayed-DQN-enabled access and mobility management function (AMF) scaling is implemented for specific evaluations. Several experiments are designed and the results show that our proposed methodologies perform better in the random-delayed networks than other methods (e.g., the standard DQN algorithm). Full article
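The delay-compensation idea, rolling a stale observation forward through the actions already issued using a forward model before the agent selects its next action, can be illustrated with a toy sketch; the dynamics and greedy policy below are assumptions, not the paper's delayed-DQN:

```python
# Toy sketch of delay compensation: the agent's newest observation is several
# steps old, so it is rolled forward through the actions already sent (but
# not yet reflected in observations) using a forward model before action
# selection. Dynamics and policy are illustrative assumptions.
import numpy as np

def forward_model(state, action):
    """Toy one-step dynamics; in practice this would be learned."""
    return state + 0.1 * action

def select_action(q_function, stale_state, pending_actions):
    """Compensate a random observation delay, then act greedily."""
    state = stale_state
    for a in pending_actions:            # actions sent but not yet observed
        state = forward_model(state, a)
    q_values = [q_function(state, a) for a in (-1, 0, 1)]
    return (-1, 0, 1)[int(np.argmax(q_values))]

# Hand-written Q-function that prefers driving the state towards zero
q = lambda s, a: -abs(forward_model(s, a))
print(select_action(q, stale_state=0.4, pending_actions=[1, -1, -1]))
```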

18 pages, 3623 KiB  
Article
Visual Object Detection with DETR to Support Video-Diagnosis Using Conference Tools
by Attila Biró, Katalin Tünde Jánosi-Rancz, László Szilágyi, Antonio Ignacio Cuesta-Vargas, Jaime Martín-Martín and Sándor Miklós Szilágyi
Appl. Sci. 2022, 12(12), 5977; https://doi.org/10.3390/app12125977 - 12 Jun 2022
Cited by 10 | Viewed by 3935
Abstract
Real-time multilingual phrase detection from/during online video presentations—to support instant remote diagnostics—requires near real-time visual (textual) object detection and preprocessing for further analysis. Connecting remote specialists and sharing specific ideas is most effective using the native language. The main objective of this paper is to analyze and propose—through DEtection TRansformer (DETR) models, architectures, hyperparameters—recommendation, and specific procedures with simplified methods to achieve reasonable accuracy to support real-time textual object detection for further analysis. The development of real-time video conference translation based on artificial intelligence supported solutions has a relevant impact in the health sector, especially on clinical practice via better video consultation (VC) or remote diagnosis. The importance of this development was augmented by the COVID-19 pandemic. The challenge of this topic is connected to the variety of languages and dialects that the involved specialists speak and that usually needs human translator proxies which can be substituted by AI-enabled technological pipelines. The sensitivity of visual textual element localization is directly connected to complexity, quality, and the variety of collected training data sets. In this research, we investigated the DETR model with several variations. The research highlights the differences of the most prominent real-time object detectors: YOLO4, DETR, and Detectron2, and brings AI-based novelty to collaborative solutions combined with OCR. The performance of the procedures was evaluated through two research phases: a 248/512 (Phase1/Phase2) record train data set, with a 55/110 set of validated data instances for 7/10 application categories and 3/3 object categories, using the same object categories for annotation. The achieved score breaks the expected values in terms of visual text detection scope, giving high detection accuracy of textual data, the mean average precision ranging from 0.4 to 0.65. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
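A generic DETR inference pass, using the public facebookresearch/detr model pretrained on COCO via torch.hub, is sketched below; detecting the paper's textual object categories would require training on the authors' annotated data, so this only shows the forward pass and confidence filtering:

```python
# Generic DETR inference sketch using the public facebookresearch/detr hub
# model pretrained on COCO. The paper's textual categories would require
# fine-tuning; this only shows the forward pass and confidence filtering.
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load("facebookresearch/detr", "detr_resnet50",
                       pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(800),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("conference_frame.png").convert("RGB")   # hypothetical frame
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))

# Keep predictions whose best non-background class probability exceeds 0.7
probs = out["pred_logits"].softmax(-1)[0, :, :-1]
keep = probs.max(-1).values > 0.7
print("confident detections:", int(keep.sum()),
      "boxes:", out["pred_boxes"][0, keep].shape)
```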
