Future Information & Communication Engineering 2024

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 June 2025 | Viewed by 29883

Special Issue Editors


Guest Editor
Department of Electrical, Electronic and Control Engineering, Hankyong National University, Anseong 17579, Republic of Korea
Interests: compact modeling for circuit simulation; device modeling for TCAD simulation; device characterization; steep-switching device; GAA NW-FET; 2D material transistor; neuromorphic device

Guest Editor
Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Republic of Korea
Interests: aging science; applied artificial intelligence; digital healthcare; human computer interaction; software engineering

Guest Editor
Department of Artificial Intelligence, Silla University, Busan 46958, Republic of Korea
Interests: fuzzy neural network; image processing; medical image recognition; biosignal processing; genetic algorithm; watermarking

Guest Editor
School of IT Convergence, University of Ulsan, Ulsan 44610, Republic of Korea
Interests: virtual/mixed reality; human computer interaction; virtual human

Guest Editor
Department of Business Administration, Seoul Women’s University, Seoul, Republic of Korea
Interests: information systems; e-business and management; business analytics

Special Issue Information

Dear Colleagues,

We are organizing a Special Issue comprising selected original research papers from the 16th International Conference on Future Information and Communication Engineering (ICFICE 2024), covering all technical aspects of computer science, information, and communication engineering. Potential topics include, but are not limited to, the following:

  • Communication systems and applications;
  • Networking and security;
  • AI and intelligent information systems;
  • Multimedia and digital convergence;
  • Semiconductor and communication services;
  • Biomedical imaging and engineering;
  • Ubiquitous sensor networks;
  • Databases and Internet applications;
  • IoT and big data;
  • IT convergence technology;
  • Industrial session.

Prof. Dr. Yun Seop Yu
Prof. Dr. Hee-Cheol Kim
Prof. Dr. Kwang-Baek Kim
Dr. Dongsik Jo
Dr. Jongtae Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • communication systems
  • networking
  • smart security
  • intelligent information systems
  • artificial intelligence
  • machine learning
  • biomedical imaging
  • multimedia and digital convergence
  • semiconductors
  • ubiquitous sensor networks
  • databases
  • Internet applications
  • big data
  • Internet of Things (IoT)
  • information technology (IT) convergence
  • augmented reality (AR)/virtual reality (VR)
  • metaverse

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)

Research

15 pages, 4309 KiB  
Article
Optimizing Agent Behavior in the MiniGrid Environment Using Reinforcement Learning Based on Large Language Models
by Byeong-Ju Park, Sung-Jung Yong, Hyun-Seo Hwang and Il-Young Moon
Appl. Sci. 2025, 15(4), 1860; https://doi.org/10.3390/app15041860 - 11 Feb 2025
Viewed by 819
Abstract
Reinforcement learning is one of the most prominent research areas in the field of artificial intelligence, playing a crucial role in developing agents that autonomously make decisions in complex environments. This study proposes a method to optimize agent behavior in the MiniGrid-Empty-5x5-v0 environment using large language models (LLMs). By leveraging the natural language processing capabilities of LLMs to interpret environmental states and select appropriate actions, this research explores an approach that differs from traditional reinforcement learning methods. Experimental results confirm that LLM-based agents can effectively achieve their goals, and it is anticipated that maximizing the synergy between LLMs and reinforcement learning will contribute to the development of more intelligent and adaptable AI systems. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
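The published abstract does not include the implementation, but the core loop it describes (the LLM reads a textual description of the grid state and returns an action) can be sketched roughly as follows. Everything here is illustrative: query_llm is a placeholder for whatever LLM backend the agent uses, and the action names simply follow the standard MiniGrid action set.

```python
# Illustrative sketch only: an LLM chooses MiniGrid-style actions from a text
# description of the state. query_llm() is a placeholder, not a real API.

MINIGRID_ACTIONS = {"turn left": 0, "turn right": 1, "move forward": 2}

def describe_state(agent_pos, agent_dir, goal_pos):
    """Render the relevant parts of the grid state as natural language."""
    directions = ["east", "south", "west", "north"]
    return (f"You are at {agent_pos}, facing {directions[agent_dir]}. "
            f"The goal is at {goal_pos}. "
            f"Reply with one of: {', '.join(MINIGRID_ACTIONS)}.")

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a chat-completion request)."""
    raise NotImplementedError("plug in an LLM client here")

def choose_action(agent_pos, agent_dir, goal_pos) -> int:
    """Map the LLM's free-text reply onto a discrete MiniGrid action id."""
    reply = query_llm(describe_state(agent_pos, agent_dir, goal_pos)).lower()
    for phrase, action_id in MINIGRID_ACTIONS.items():
        if phrase in reply:
            return action_id
    return MINIGRID_ACTIONS["move forward"]  # fall back to a safe default
```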

19 pages, 8495 KiB  
Article
Design and Development of a Precision Defect Detection System Based on a Line Scan Camera Using Deep Learning
by Byungcheol Kim, Moonsun Shin and Seonmin Hwang
Appl. Sci. 2024, 14(24), 12054; https://doi.org/10.3390/app142412054 - 23 Dec 2024
Viewed by 2162
Abstract
The manufacturing industry environment is rapidly evolving into smart manufacturing. It prioritizes digital innovations such as AI and digital transformation (DX) to increase productivity and create value through automation and intelligence. Vision systems for defect detection and quality control are being implemented across industries, including electronics, semiconductors, printing, metal, food, and packaging. Small and medium-sized manufacturing companies are increasingly demanding smart factory solutions for quality control to create added value and enhance competitiveness. In this paper, we design and develop a high-speed defect detection system based on a line-scan camera using deep learning. The camera is positioned for side-view imaging, allowing for detailed inspection of the component mounting and soldering quality on PCBs. To detect defects on PCBs, the system gathers extensive images of both flawless and defective products to train a deep learning model. An AI engine generated through this deep learning process is then applied to conduct defect inspections. In experiments, the developed high-speed defect detection system achieved an accuracy of 99.5%. This will be highly beneficial for precision quality management in small- and medium-sized enterprises. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
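The abstract does not name the network architecture, so the following PyTorch sketch is only a rough, hypothetical illustration of the general approach it describes (a CNN trained on labeled flawless/defective image patches and then used for inspection); the layer sizes, patch size, and training setup are assumptions for illustration, not the authors' system.

```python
# Minimal sketch of a binary OK/defect image classifier; all architecture
# choices below are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # classes: 0 = OK, 1 = defect

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DefectClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of image patches.
images = torch.randn(8, 3, 128, 128)   # stand-in for line-scan image patches
labels = torch.randint(0, 2, (8,))     # stand-in for OK/defect labels
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```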

23 pages, 1691 KiB  
Article
Adaptive Learning in AI Agents for the Metaverse: The ALMAA Framework
by Yina Xia, Seong-Yoon Shin and Hyun-Ae Lee
Appl. Sci. 2024, 14(23), 11410; https://doi.org/10.3390/app142311410 - 7 Dec 2024
Cited by 1 | Viewed by 2684
Abstract
This study investigates the adaptability of Artificial Intelligence (AI) agents in the Metaverse, focusing on their ability to enhance responsiveness, decision-making, and engagement through the proposed Adaptive Learning Model for AI Agents (ALMAA) framework. The research does not introduce new interventions to existing platforms like Epic Games or AltspaceVR but instead analyzes how their operations align with adaptive learning principles. By examining these platforms, the study demonstrates the alignment between real-world practices and theoretical constructs, offering insights into how adaptive AI systems operate in dynamic virtual environments. Case observations highlight key metrics such as user interaction efficiency, contextual decision accuracy, and predictive engagement strategies. The data, derived from detailed user interaction logs and feedback reports, underscore the practical application of adaptive learning in optimizing user satisfaction and system performance. Statistical analyses reveal notable gains in response speed, predictive precision, and user engagement, validating the theoretical framework’s relevance. This paper positions the ALMAA framework as a critical lens for understanding and analyzing adaptive AI in virtual settings. It emphasizes theoretical exploration rather than experimental application, providing a foundation for future research into scalable, user-centered AI systems tailored for the Metaverse’s evolving demands. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)

22 pages, 6362 KiB  
Article
CGADNet: A Lightweight, Real-Time, and Robust Crosswalk and Guide Arrow Detection Network for Complex Scenes
by Guangxing Wang, Tao Lin, Xiwei Dong, Longchun Wang, Qingming Leng and Seong-Yoon Shin
Appl. Sci. 2024, 14(20), 9445; https://doi.org/10.3390/app14209445 - 16 Oct 2024
Viewed by 1448
Abstract
In the context of edge environments with constrained resources, realizing real-time and robust crosswalk and guide arrow detection poses a significant challenge for autonomous driving systems. This paper proposes a crosswalk and guide arrow detection network (CGADNet), a lightweight visual neural network derived from YOLOv8. Specifically designed for the swift and accurate detection of crosswalks and guide arrows within the field of view of the vehicle, the CGADNet can seamlessly be implemented on the Jetson Orin Nano device to achieve real-time processing. In this study, we incorporated a novel C2f_Van module based on VanillaBlock, employed depthwise-separable convolution to reduce the number of parameters efficiently, utilized partial convolution (PConv) for a lightweight FasterDetect module, and adopted a bounding box regression loss with a dynamic focusing mechanism (WIoUv3) to enhance detection performance. In complex scenarios, the proposed method maintained a stable mAP@0.5 while achieving a 4.1% improvement in mAP@0.5:0.95. The network parameters, floating point operations (FLOPs), and weights were reduced by 63.81%, 70.07%, and 63.11%, respectively. Ultimately, a detection speed of 50.35 FPS was achieved on the Jetson Orin Nano. This research provides practical methodologies for deploying crosswalk and guide arrow detection networks on edge computing devices. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
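One of the parameter-reduction techniques the abstract mentions, depthwise-separable convolution, can be illustrated independently of the paper's CGADNet modules. The following PyTorch sketch (not the authors' code) compares the parameter count of a standard 3x3 convolution with a depthwise-plus-pointwise replacement.

```python
# Sketch of the depthwise-separable convolution idea used for parameter
# reduction (not the paper's C2f_Van or FasterDetect implementation).
import torch.nn as nn

def standard_conv(cin, cout, k=3):
    return nn.Conv2d(cin, cout, k, padding=k // 2)

def depthwise_separable_conv(cin, cout, k=3):
    return nn.Sequential(
        nn.Conv2d(cin, cin, k, padding=k // 2, groups=cin),  # depthwise
        nn.Conv2d(cin, cout, 1),                              # pointwise
    )

def n_params(module):
    return sum(p.numel() for p in module.parameters())

# Example: 64 -> 128 channels with a 3x3 kernel.
print(n_params(standard_conv(64, 128)))             # 73,856 parameters
print(n_params(depthwise_separable_conv(64, 128)))  # 8,960 parameters
```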

15 pages, 1465 KiB  
Article
Alzheimer’s Multiclassification Using Explainable AI Techniques
by Kamese Jordan Junior, Kouayep Sonia Carole, Tagne Poupi Theodore Armand, Hee-Cheol Kim and The Alzheimer’s Disease Neuroimaging Initiative
Appl. Sci. 2024, 14(18), 8287; https://doi.org/10.3390/app14188287 - 14 Sep 2024
Cited by 1 | Viewed by 2458
Abstract
In this study, we address the early detection challenges of Alzheimer’s disease (AD) using explainable artificial intelligence (XAI) techniques. AD, characterized by amyloid plaques and tau tangles, leads to cognitive decline and remains hard to diagnose due to genetic and environmental factors. Utilizing deep learning models, we analyzed brain MRI scans from the ADNI database, categorizing them into normal cognition (NC), mild cognitive impairment (MCI), and AD. The ResNet-50 architecture was employed, enhanced by a channel-wise attention mechanism to improve feature extraction. To ensure model transparency, we integrated local interpretable model-agnostic explanations (LIMEs) and gradient-weighted class activation mapping (Grad-CAM), highlighting significant image regions contributing to predictions. Our model achieved 85% accuracy, effectively distinguishing between the classes. The LIME and Grad-CAM visualizations provided insights into the model’s decision-making process, particularly emphasizing changes near the hippocampus for MCI. These XAI methods enhance the interpretability of AI-driven AD diagnosis, fostering trust and aiding clinical decision-making. Our approach demonstrates the potential of combining deep learning with XAI for reliable and transparent medical applications. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
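The abstract mentions a channel-wise attention mechanism added to ResNet-50 without giving its exact form; a common squeeze-and-excitation-style formulation is sketched below in PyTorch purely as an illustration of the general idea. The channel count and reduction ratio are arbitrary assumptions, not the paper's settings.

```python
# Illustrative squeeze-and-excitation-style channel attention block;
# the paper's exact attention formulation is not given in the abstract.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average per channel
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature channels

feats = torch.randn(2, 256, 14, 14)                  # e.g. a ResNet stage output
print(ChannelAttention(256)(feats).shape)            # torch.Size([2, 256, 14, 14])
```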

15 pages, 2972 KiB  
Article
Robust Bluetooth AoA Estimation for Indoor Localization Using Particle Filter Fusion
by Kaiyue Qiu, Ruizhi Chen, Guangyi Guo, Yuan Wu and Wei Li
Appl. Sci. 2024, 14(14), 6208; https://doi.org/10.3390/app14146208 - 17 Jul 2024
Cited by 1 | Viewed by 1621
Abstract
With the growing demand for positioning services, angle-of-arrival (AoA) estimation or direction-finding (DF) has been widely investigated for applications in fifth-generation (5G) technologies. Many existing AoA estimation algorithms only require the measurement of the direction of the incident wave at the transmitter to obtain correct results. However, for most cellular systems, such as Bluetooth indoor positioning systems, indoor positioning accuracy is severely affected by multipath and non-line-of-sight (NLOS) propagation. In this paper, a comprehensive algorithm is investigated that combines radio measurements from Bluetooth AoA local navigation systems with indoor position estimates obtained using particle filtering. This algorithm allows us to explore new optimized methods to reduce estimation errors in indoor positioning. First, particle filtering is used to predict the rough position of a moving target. Then, an algorithm with robust beam weighting is used to estimate the AoA of the multipath components. Based on this, a system of pseudo-linear equations for target positioning is derived within the probabilistic framework of the particle filter (PF) and the AoA measurements. Theoretical analysis and simulation results show that the algorithm can improve the positioning accuracy by approximately 25.7% on average. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
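The paper's robust beam weighting and pseudo-linear positioning equations are not reproduced in the abstract; the NumPy sketch below only illustrates the basic predict/update/resample structure of a particle filter driven by a single AoA measurement. The anchor position, random-walk motion model, and noise figures are made-up assumptions for illustration.

```python
# Toy particle-filter step for 2D indoor positioning from one AoA measurement.
import numpy as np

rng = np.random.default_rng(0)
anchor = np.array([0.0, 0.0])                       # assumed known locator position
particles = rng.uniform(0, 10, (2000, 2))           # position hypotheses in a 10 m room
weights = np.full(len(particles), 1.0 / len(particles))

def pf_step(particles, weights, aoa_meas, sigma_aoa=np.deg2rad(5)):
    # Predict: random-walk motion model for a slowly moving target.
    particles = particles + rng.normal(0.0, 0.2, particles.shape)
    # Update: weight particles by how well their bearing matches the measured AoA.
    bearings = np.arctan2(particles[:, 1] - anchor[1],
                          particles[:, 0] - anchor[0])
    err = np.angle(np.exp(1j * (bearings - aoa_meas)))   # wrap to [-pi, pi]
    weights = weights * np.exp(-0.5 * (err / sigma_aoa) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles, weights = pf_step(particles, weights, aoa_meas=np.deg2rad(45))
print(np.average(particles, axis=0, weights=weights))     # weighted position estimate
```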

27 pages, 2381 KiB  
Article
Cross-Cultural Intelligent Language Learning System (CILS): Leveraging AI to Facilitate Language Learning Strategies in Cross-Cultural Communication
by Yina Xia, Seong-Yoon Shin and Jong-Chan Kim
Appl. Sci. 2024, 14(13), 5651; https://doi.org/10.3390/app14135651 - 28 Jun 2024
Cited by 16 | Viewed by 15278
Abstract
This research presents the Cross-Cultural Intelligent Language Learning System (CILS), a novel approach integrating artificial intelligence (AI) into language education to enhance cross-cultural communication. CILS utilizes advanced AI technologies to provide adaptive, personalized learning experiences that cater to the unique linguistic and cultural backgrounds of each learner. By dynamically adjusting content and methodology, CILS significantly improves linguistic proficiency and cultural understanding, which are essential for effective global interactions. The implementation of CILS in platforms such as Busuu and HelloTalk has demonstrated marked improvements in engagement and communication skills among learners. Empirical studies validate the system’s effectiveness in real-world settings, showing enhanced learner performance and increased intercultural competence. Additionally, an applied Technology Acceptance Model (TAM) analysis confirms that the usability and perceived usefulness of AI-driven systems strongly influence learner acceptance and sustained use. This study not only underscores the potential of AI in transforming language education but also highlights the critical role of cultural sensitivity in designing educational technologies. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)

13 pages, 1837 KiB  
Article
Indoor Positioning by Double Deep Q-Network in VLC-Based Empty Office Environment
by Sung Hyun Oh and Jeong Gon Kim
Appl. Sci. 2024, 14(9), 3684; https://doi.org/10.3390/app14093684 - 26 Apr 2024
Viewed by 1317
Abstract
Recently, artificial intelligence (AI) has been applied in various industries. One such application is indoor user positioning using Big Data. The traditional method for positioning is the global positioning system (GPS). However, the performance of GPS is limited indoors due to propagation loss. Hence, radio frequency (RF)-based communication methods such as WiFi and Bluetooth have been proposed as indoor positioning solutions. However, positioning inaccuracies arise due to signal interference caused by RF band saturation. Therefore, this study proposes indoor user positioning based on visible light communication (VLC). The proposed method involves the sequential application of fingerprinting and a double deep Q-Network. Fingerprinting is utilized to define the action and state of the double deep Q-Network agent. The agent is designed to learn and locate the reference point (RP) closest to the user’s position in a shorter search time. The core idea of the proposed system is to combine a Cell-ID scheme with fingerprinting. This limits the initial state of the double deep Q-Network agent, which in turn increases the positioning speed. Simulation results show that the proposed scheme attains a positioning resolution of less than 13 cm and achieves a processing time of less than 0.03 s to obtain the final position in VLC-based office environments. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
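As a structural illustration of the double deep Q-Network component (not the paper's implementation; the fingerprinting-based state/action encoding and the Cell-ID initialization are omitted), a minimal PyTorch sketch of the double-DQN target computation is shown below, with an assumed toy state dimension and action set.

```python
# Sketch of the double deep Q-network target computation; the state and
# action definitions here are illustrative assumptions only.
import torch
import torch.nn as nn

def make_qnet(state_dim=4, n_actions=5):
    # e.g. actions = move toward one of several neighbouring reference points
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

online_net, target_net = make_qnet(), make_qnet()
target_net.load_state_dict(online_net.state_dict())

def double_dqn_target(reward, next_state, done, gamma=0.99):
    """Double DQN: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        best_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, best_action).squeeze(1)
    return reward + gamma * next_q * (1.0 - done)

# Dummy transition batch.
next_state = torch.randn(8, 4)
reward = torch.rand(8)
done = torch.zeros(8)
print(double_dqn_target(reward, next_state, done).shape)  # torch.Size([8])
```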

Review

24 pages, 2758 KiB  
Review
A Review of Traffic Flow Prediction Methods in Intelligent Transportation System Construction
by Runpeng Liu and Seong-Yoon Shin
Appl. Sci. 2025, 15(7), 3866; https://doi.org/10.3390/app15073866 - 1 Apr 2025
Viewed by 718
Abstract
With the continuous development of intelligent transportation systems (ITSs), traffic flow prediction methods have become the cornerstone of this technology. This paper comprehensively reviews the traffic flow prediction methods used in ITSs and divides them into three categories: statistics-based, machine learning-based, and deep learning-based methods. Although statistics-based methods have lower data requirements and machine learning methods have faster calculation speeds, this paper concludes that deep learning methods have the best overall effect after a comprehensive analysis of the principles, advantages, limitations, and practical applications of each method. Deep learning methods can overcome many limitations that traditional statistical methods and machine learning methods cannot surpass, such as the ability to model complex nonlinear relationships. Experimental results show that hybrid neural networks are significantly superior to traditional methods in terms of their prediction accuracy and generalization abilities. By combining multiple models and techniques, hybrid neural networks can improve the accuracy of traffic flow prediction under different conditions. Although deep learning methods have achieved remarkable success in short-term prediction, challenges still exist, such as the generalization of models in different traffic scenarios and the difficulty of long-term traffic flow prediction. Finally, this paper discusses future research directions and anticipates the future development of ITS technology. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
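As a small, generic example of the deep learning family of methods the review discusses, the following PyTorch sketch shows an LSTM that predicts next-interval traffic flow from a short history window; the architecture and dimensions are illustrative and not taken from any surveyed model.

```python
# Minimal LSTM-based traffic-flow predictor, illustrative only.
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predict flow at the next time step

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # use the last hidden state

model = TrafficLSTM()
history = torch.randn(16, 12, 1)           # 12 past intervals of flow values
print(model(history).shape)                # torch.Size([16, 1])
```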
