Editorial

Advances in Image Processing, Artificial Intelligence, and Intelligent Robotics

Vladimir Tadic and Peter Odry
1 Department of Mechanical Engineering, Electrical Engineering and Computer Science, Technical College of Applied Sciences in Zrenjanin, Đorđa Stratimirovića 23, 23000 Zrenjanin, Serbia
2 John von Neumann Faculty of Informatics, Óbuda University, Becsi Str. 96/B, H-1034 Budapest, Hungary
3 Symbolic Methods in Material Analysis and Tomography Research Group, Faculty of Engineering and Information Technology, University of Pecs, Boszorkany Str. 6, H-7624 Pecs, Hungary
4 Institute of Information Technology, University of Dunaujvaros, Tancsics M. Str. 1/A, H-2401 Dunaujvaros, Hungary
* Author to whom correspondence should be addressed.
Electronics 2025, 14(1), 19; https://doi.org/10.3390/electronics14010019
Submission received: 16 September 2024 / Accepted: 3 December 2024 / Published: 25 December 2024
(This article belongs to the Section Computer Science & Engineering)

1. Special Issue Summary

In recent years, scientists and engineers have been striving to make digital image processing as efficient as the human visual system [1]. The integration of artificial intelligence, deep learning, and soft computing techniques has led to the development of advanced image processing algorithms [1,2]. Image processing is crucial in intelligent robotics, where the objective is to achieve precise, robust, and intelligent control based on visual data [3,4,5]. This has inspired the use of vision sensors and cameras in robotics, leading to innovative applications across industries such as agriculture, biology, and medicine [6,7,8,9,10]. Modern industrial systems integrate a wide range of imaging devices and sensors, which in turn requires diverse image processing and artificial intelligence-based algorithms that either form the basis of, or assist in, production and control processes. Image processing and artificial intelligence are also of particular importance in medical diagnostics, where they play a significant role in novel medical devices. Recently, the use of artificial intelligence in image processing has attracted significant attention across industries and is currently a very popular research direction in the scientific community.
This Special Issue aims to provide researchers with a platform to present new trends, recent advancements, and future research directions. Contributions that addressed key challenges in image processing, deep learning, soft computing, sensor fusion, and robotic vision, as well as their applications in industrial robotics, were invited and accepted.

2. Summary of the Special Issue Research Works

This Special Issue comprises twelve papers, covering various aspects of image processing algorithms, computer vision, object recognition, artificial intelligence, networks, measurements, and signal processing. The published papers are briefly summarized below.
Tadic [Contribution 1], in his work, presented the use of the ZED 2i depth sensor in a robotic system designed for automatic electric vehicle charging. The ZED 2i depth sensor was used to record scenes under artificial lighting conditions. Subsequently, socket detection and extraction were achieved using a combination of basic image processing techniques and morphological operations within an object extraction algorithm. The method focused on using well-established, straightforward image-processing techniques to ensure the robot’s vision system operates reliably and efficiently in an industrial setting. The experimental results showed that the algorithm successfully extracted the charging socket and accurately determined its slope angles and coordinates under various depth conditions.
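To make the flavor of such a classical pipeline concrete, the following minimal OpenCV sketch thresholds an image, cleans the mask with morphological opening and closing, and recovers the largest blob's centroid and slope angle. All thresholds, kernel sizes, and function choices are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch of a classical extraction pipeline of the kind described:
# threshold an image, clean it with morphological operations, and recover
# the largest blob's orientation and centroid. Parameters are illustrative.
import cv2
import numpy as np

def extract_socket(gray: np.ndarray):
    # Binarize the scene recorded under controlled artificial lighting.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening removes speckle; closing fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep the largest contour as the socket candidate.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    socket = max(contours, key=cv2.contourArea)
    # A fitted rotated rectangle yields the centroid and slope angle.
    (cx, cy), (w, h), angle = cv2.minAreaRect(socket)
    return (cx, cy), angle
```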
Al-Azzawi and Lencse [Contribution 2] examined the security challenges associated with DS-Lite (Dual-Stack Lite), one of the most prominent IPv6 transition technologies. The security vulnerabilities of this technology were analyzed using the STRIDE method, which encompasses Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. A testbed for the DS-Lite topology was constructed using several virtual machines created from CentOS Linux images. Various types of attacks were conducted against the DS-Lite infrastructure, with particular focus on the B4 (Basic Bridging Broadband) and AFTR (Address Family Transition Router) components. The most common attacks to which DS-Lite is vulnerable were identified, and mitigation strategies for these attacks were proposed.
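As a schematic illustration of this kind of analysis, the sketch below organizes threats against the two DS-Lite components as a STRIDE checklist. The component names come from the paper; the specific threat entries are illustrative placeholders rather than the paper's findings.

```python
# Schematic STRIDE checklist for the two DS-Lite components analyzed.
# The six STRIDE categories are standard; the example mappings below are
# illustrative placeholders, not the paper's full findings.
STRIDE = ("Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service",
          "Elevation of Privilege")

dslite_threat_model = {
    "B4":   {"Spoofing": "forged IPv4-in-IPv6 source addresses",
             "Denial of Service": "tunnel flooding from the access side"},
    "AFTR": {"Denial of Service": "NAT table / softwire exhaustion",
             "Information Disclosure": "per-subscriber traffic correlation"},
}

for component, threats in dslite_threat_model.items():
    for category, example in threats.items():
        assert category in STRIDE
        print(f"{component}: {category} -> {example}")
```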
Chen et al. [Contribution 3] proposed a novel transformer network with cross-window aggregated attention. Their network is structured as a generative adversarial model, and the Window Aggregation Transformer (WAT) module is embedded to enhance information aggregation across windows without increasing computational complexity. This approach effectively captures long-range dependencies in images, overcoming the limitation of convolutional operations that extract only local features. The experimental results have verified that the novel Transformer window attention network enriches and naturalizes the structured texture of restored images, particularly when restoring large, broken, or structurally complex images.
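The building block underlying such designs is attention computed inside image windows; a minimal PyTorch sketch is shown below. The actual WAT module additionally aggregates information across windows, which this toy version does not attempt; all shapes and hyperparameters are illustrative.

```python
# Minimal sketch of attention within non-overlapping windows, the building
# block that window-aggregation schemes such as the WAT module extend with
# cross-window information exchange.
import torch
import torch.nn as nn

def window_partition(x: torch.Tensor, ws: int) -> torch.Tensor:
    # (B, H, W, C) -> (B * num_windows, ws*ws, C)
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

class WindowAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor, ws: int = 8) -> torch.Tensor:
        tokens = window_partition(x, ws)      # attend within each window
        out, _ = self.attn(tokens, tokens, tokens)
        return out                            # (B*nW, ws*ws, C)

feat = torch.randn(1, 32, 32, 64)             # toy feature map
print(WindowAttention(64)(feat).shape)        # torch.Size([16, 64, 64])
```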
Han et al. [Contribution 4] proposed a real-time emotion recognition system utilizing photoplethysmography (PPG) and electromyography (EMG) physiological signals. A complex-valued neural network is employed to extract common features from the physiological signals, allowing for successful emotion recognition without interference. The system is divided into three stages: single-pulse extraction, a physiological coherence feature module, and a physiological common feature module. The experimental results demonstrate that the proposed method outperforms alternative approaches in terms of accuracy and recognition interval.
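As a hedged illustration of the front end of such a pipeline, the SciPy sketch below segments single pulses from a synthetic PPG stream and computes a magnitude-squared coherence feature between the two channels. The sampling rate, peak spacing, and the choice of coherence as the feature are assumptions for illustration only.

```python
# Sketch of the front end of a PPG/EMG pipeline in the spirit of the
# three-stage design: segment single pulses, then derive a coherence
# feature between the two physiological channels.
import numpy as np
from scipy.signal import find_peaks, coherence

FS = 250  # assumed sampling rate in Hz

def single_pulses(ppg: np.ndarray):
    # Stage 1: cut the PPG stream into single-pulse segments at its peaks.
    peaks, _ = find_peaks(ppg, distance=FS // 2)
    return [ppg[a:b] for a, b in zip(peaks[:-1], peaks[1:])]

def coherence_feature(ppg: np.ndarray, emg: np.ndarray) -> float:
    # Stage 2: magnitude-squared coherence summarizes how strongly the
    # two channels co-vary across frequency.
    f, cxy = coherence(ppg, emg, fs=FS, nperseg=FS)
    return float(cxy.mean())

t = np.arange(0, 10, 1 / FS)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
emg = 0.5 * ppg + 0.5 * np.random.randn(t.size)
print(len(single_pulses(ppg)), coherence_feature(ppg, emg))
```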
Chen et al. [Contribution 5] applied a novel technique to image inpainting tasks. Specifically, a two-stage image inpainting network that focuses on collaborative subtasks was improved. Additionally, a transformer module based on the cross-aggregation of windows was designed to enhance the acquisition of long-distance contextual information in image inpainting and to improve the structural and textural integrity of images. The improved two-stage network offers significantly better repair performance than common inpainting methods.
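A minimal skeleton of the coarse-to-fine structure described, with two cooperating subnetworks linked through a composited intermediate image, might look as follows; the layers are placeholders, not the paper's architecture.

```python
# Minimal two-stage inpainting skeleton: a coarse network fills the hole,
# a refinement network sharpens texture, and the two subtasks cooperate
# through the composited intermediate result. Purely illustrative layers.
import torch
import torch.nn as nn

class TwoStageInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: masked image (3 channels) + binary mask (1 channel).
        self.coarse = nn.Sequential(nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(32, 3, 3, padding=1))
        self.refine = nn.Sequential(nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, img, mask):
        x = torch.cat([img * (1 - mask), mask], dim=1)
        coarse = self.coarse(x)
        # Composite: keep known pixels, take the coarse prediction in the hole.
        merged = img * (1 - mask) + coarse * mask
        return self.refine(torch.cat([merged, mask], dim=1))

out = TwoStageInpainter()(torch.rand(1, 3, 64, 64), torch.zeros(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```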
Jang et al. [Contribution 6], in their research, presented a novel reinforcement learning (RL)-based autonomous driving system, which combines obstacle-dependent Gaussian (ODG) RL, soft actor-critic (SAC), and meta-learning algorithms. This approach addresses key issues in RL, including the overestimation phenomenon and sparse reward problems, by incorporating prior knowledge from the ODG algorithm. The proposed algorithm was evaluated on official F1 circuits using high-fidelity racing simulations with complex dynamics. The results demonstrate exceptional performance.
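As a loose illustration of how prior knowledge can densify a sparse reward, the sketch below adds Gaussian penalties around obstacles to a progress term, in the spirit of ODG shaping; the scales and the Gaussian width are assumptions, not the paper's formulation.

```python
# Illustrative sketch of injecting prior knowledge into an RL reward:
# obstacles contribute dense Gaussian penalties, mitigating the
# sparse-reward problem. Scales and sigma are assumed values.
import numpy as np

def shaped_reward(pos, goal, obstacles, sigma=2.0):
    progress = -np.linalg.norm(goal - pos)          # dense progress term
    penalty = sum(np.exp(-np.linalg.norm(obs - pos) ** 2 / (2 * sigma ** 2))
                  for obs in obstacles)             # Gaussian obstacle field
    return progress - penalty

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.5]), np.array([7.0, -1.0])]
print(shaped_reward(pos, goal, obstacles))
```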
Peng et al. [Contribution 7] focused on applying an output filtering method to perform system parameter identification for an unstable wheel-driven pendulum system. Their newly developed parameter identification algorithm attenuates the influence of quantization effects and noisy data, thereby achieving more accurate parameter identification results. Comparative simulations reveal that the output filtering method offers superior parameter estimation performance compared to the direct numerical differentiation method.
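The core idea of output filtering can be illustrated on a toy first-order plant: passing the input and the noisy, quantized output through a stable low-pass filter yields the filtered derivative algebraically, so the regression never differentiates noisy data. The plant, filter pole, and noise levels below are illustrative assumptions.

```python
# Output-filter-based identification sketch for y' = -a*y + b*u: the
# state-variable filter xf' = lam*(x - xf) gives filtered signals and a
# filtered derivative without numerical differentiation.
import numpy as np

a_true, b_true, lam, dt = 2.0, 1.0, 5.0, 1e-3
t = np.arange(0, 10, dt)
u = np.sign(np.sin(2 * np.pi * 0.5 * t))          # square-wave excitation

# Simulate the plant, then add measurement noise and quantization.
y = np.zeros_like(t)
for k in range(len(t) - 1):
    y[k + 1] = y[k] + dt * (-a_true * y[k] + b_true * u[k])
y_meas = np.round((y + 0.01 * np.random.randn(t.size)) / 0.01) * 0.01

def lowpass(x):
    xf = np.zeros_like(x)
    for k in range(len(x) - 1):
        xf[k + 1] = xf[k] + dt * lam * (x[k] - xf[k])
    return xf

yf, uf = lowpass(y_meas), lowpass(u)
dyf = lam * (y_meas - yf)                          # filtered derivative
# Least squares on dyf = -a*yf + b*uf.
theta, *_ = np.linalg.lstsq(np.column_stack([-yf, uf]), dyf, rcond=None)
print(theta)                                       # approaches [a_true, b_true]
```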
Ha et al. [Contribution 8], in their work, applied photon-counting imaging technology exclusively to the areas in which objects were located, effectively eliminating background noise. They demonstrated that the image quality achieved with the proposed method is superior to that of conventional methods, and that the object recognition rate is also higher. Optical experiments were conducted to validate the denoising performance of the proposed method. Additionally, the structural similarity index measure (SSIM) was used as a performance metric, and the YOLOv5 model was applied to assess the object recognition rate. It is anticipated that the proposed method will accelerate advancements in astrophotography and medical imaging technologies.
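A hedged sketch of the core idea follows: photon arrivals are modeled as Poisson draws whose rate tracks the normalized scene irradiance, counting is restricted to an object mask, and SSIM scores the estimate. The photon budget is illustrative, and the mask is hard-coded here rather than detected.

```python
# Region-restricted photon-counting sketch: Poisson photon events follow
# the normalized irradiance, and counting is applied only inside an
# (assumed) object mask so the background contributes nothing.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
scene = rng.random((128, 128))                    # toy irradiance image
mask = np.zeros_like(scene, dtype=bool)
mask[32:96, 32:96] = True                         # assumed object region

n_photons = 5_000
rate = scene / scene.sum()                        # normalized irradiance
counts = rng.poisson(n_photons * rate)            # photon-counting model
estimate = np.where(mask, counts, 0).astype(float)
estimate /= estimate.max()

# SSIM against the masked reference, mirroring the paper's evaluation metric.
print(structural_similarity(np.where(mask, scene, 0), estimate,
                            data_range=1.0))
```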
Vizvari et al. [Contribution 9] integrated basic and applied research, combining their respective strengths to develop innovative and precise methodologies. These methodologies enable the direct modeling of electrical impedance (EI) measurements, avoiding the inaccuracies commonly associated with numerical approaches. Owing to the method's efficiency and robustness, the conductivity of both the material and the electrodes is represented by a common piecewise function, which is used to solve the differential equation governing EI measurements. The simulation results indicated that their novel method is a powerful tool for biological research, with applications ranging from in vitro experiments to animal studies and human applications.
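A one-dimensional finite-difference sketch of this idea is given below: a single piecewise conductivity profile covers both the electrode zones and the material, and the governing equation d/dx(σ(x) du/dx) = 0 is solved directly for the potential. The geometry and conductivity values are illustrative assumptions.

```python
# Minimal 1D sketch: a piecewise conductivity profile spanning electrodes
# and material, used to solve d/dx(sigma(x) du/dx) = 0 for the potential u.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
# Piecewise conductivity: highly conductive electrode zones at both ends.
sigma = np.where((x < 0.1) | (x > 0.9), 100.0, 1.0)

# Finite-difference assembly using interface conductivities.
s_half = 0.5 * (sigma[:-1] + sigma[1:])
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[-1] = 1.0                                      # u(0) = 0 V, u(1) = 1 V
for i in range(1, n - 1):
    A[i, i - 1], A[i, i + 1] = s_half[i - 1], s_half[i]
    A[i, i] = -(s_half[i - 1] + s_half[i])
u = np.linalg.solve(A, b)
print(u[n // 2])                                 # potential at the midpoint
```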
Zhang et al. [Contribution 10], in their paper, proposed an image retrieval model for lunar complex craters, termed LC2R-Net. The model integrates visual and depth features to improve retrieval performance. For depth feature extraction, a Swin Transformer is employed as the core architecture, while a triplet loss function is used to generate highly discriminative image embeddings, optimizing the embedding space for similarity retrieval. For visual feature extraction, Local Binary Patterns (LBP) and Hu moments are utilized to capture the texture and shape features of crater images. These features are weighted and fused, with Principal Component Analysis (PCA) applied for dimensionality reduction, effectively combining visual and depth features and improving retrieval efficiency. Finally, cosine similarity is used to calculate the similarity between query images and database images, and the most similar images form the retrieval results.
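The hand-crafted visual branch lends itself to a compact sketch: LBP texture histograms and Hu shape moments are fused with weights, reduced with PCA, and ranked by cosine similarity. The feature weights, dimensions, and random toy gallery below are illustrative, and the depth branch (Swin Transformer embeddings) is omitted.

```python
# Sketch of the visual branch described for LC2R-Net: LBP texture
# histograms and Hu shape moments, fused, PCA-reduced, and ranked by
# cosine similarity against a toy gallery.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

def visual_feature(img: np.ndarray, w_tex=1.0, w_shape=1.0) -> np.ndarray:
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    tex, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    shape = cv2.HuMoments(cv2.moments(img.astype(np.float32))).ravel()
    return np.concatenate([w_tex * tex, w_shape * shape])

rng = np.random.default_rng(0)
gallery = rng.integers(0, 256, size=(20, 64, 64), dtype=np.uint8)
feats = np.stack([visual_feature(im) for im in gallery])
feats = PCA(n_components=8).fit_transform(feats)   # dimensionality reduction

query = feats[:1]
ranking = cosine_similarity(query, feats)[0].argsort()[::-1]
print(ranking[:5])                                 # most similar images first
```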
Ji et al. [Contribution 11] proposed a novel Rapid Point Cloud Stitching–Constrained Particle Filter (RPCS-CPF) method in their work. Random noise is introduced to simulate servo motor rotational errors, and both local and global point clouds are sequentially processed using RPCS-CPF operations. This results in smooth and continuous coal pile point clouds. Additionally, a coal pile boundary detection method based on gradient region growing clustering is presented. The experimental results demonstrated that the RPCS-CPF method effectively generates smooth and continuous coal pile point clouds, even in the presence of servo motor rotational errors.
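As a rough illustration of the particle-filter correction at the heart of such a method, the sketch below estimates a simulated servo rotation error: particles propose angle offsets, are weighted by how well the de-rotated local scan matches the global cloud, and are resampled. The noise levels, weighting scheme, and two-dimensional toy data are assumptions, not the RPCS-CPF algorithm itself.

```python
# Toy particle filter estimating a servo rotation error between a local
# scan and a global point cloud. Hypothesized angles are weighted by
# match quality after de-rotation, then resampled and jittered.
import numpy as np

rng = np.random.default_rng(1)
true_err = np.deg2rad(1.5)                         # simulated servo error

def rotate(pts, a):
    c, s = np.cos(a), np.sin(a)
    return pts @ np.array([[c, -s], [s, c]]).T

global_pts = rng.uniform(-1, 1, size=(200, 2))     # toy point-cloud slice
local_pts = rotate(global_pts, true_err) + 0.005 * rng.normal(size=(200, 2))

particles = rng.normal(0.0, np.deg2rad(3.0), size=500)  # angle hypotheses
for _ in range(5):
    # Weight: negative mean squared distance after undoing each hypothesis.
    w = np.array([-np.mean((rotate(local_pts, -a) - global_pts) ** 2)
                  for a in particles])
    w = np.exp((w - w.max()) * 1e4)                # sharpen, then normalize
    w /= w.sum()
    idx = rng.choice(particles.size, size=particles.size, p=w)
    particles = particles[idx] + rng.normal(0, np.deg2rad(0.1), particles.size)

print(np.rad2deg(particles.mean()))                # close to 1.5 degrees
```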
Huang et al. [Contribution 12] introduced a defect detection algorithm based on YOLOv7. To improve the detection of small objects and streamline the model, the efficient channel attention (ECA) mechanism is incorporated into the backbone of the network. Considering the small sizes of defect targets on automotive parts and the complexity of their backgrounds, the neck portion of the model is redesigned. This redesign integrates the Bidirectional Feature Pyramid Network (BiFPN) feature fusion module to enhance feature fusion, aiming to reduce missed detections and false alarms. The results show improvements compared to the original YOLOv7 network.
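The ECA block itself is compact enough to sketch: global average pooling produces a per-channel descriptor, a cheap one-dimensional convolution models local cross-channel interaction, and a sigmoid gate rescales the feature map. The kernel size follows the common ECA default; this is not the authors' exact code.

```python
# Sketch of an efficient channel attention (ECA) block: global average
# pooling, a 1-D convolution across channels, and a sigmoid gate.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> per-channel descriptor -> gate -> rescale.
        y = x.mean(dim=(2, 3))                     # global average pooling
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # cross-channel 1-D conv
        return x * torch.sigmoid(y)[:, :, None, None]

feat = torch.randn(2, 64, 40, 40)
print(ECA()(feat).shape)                           # torch.Size([2, 64, 40, 40])
```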
The aim of this Special Issue is to offer a broad and timely contribution to the current body of literature. The featured methodologies are both valuable and engaging, and are expected to attract attention from industry and the scientific community alike. The novel strategies presented are intended to inspire researchers in multiple multidisciplinary fields and foster further exploration in image processing, artificial intelligence, and intelligent robotics. Future research may focus on leveraging advanced techniques to enhance the image processing algorithms, to improve the capabilities of artificial intelligence methods, and to optimize the vision-based control algorithms used in robotics.

Author Contributions

Conceptualization, V.T.; writing—original draft preparation, V.T.; writing—review and editing, V.T., P.O.; supervision, P.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The Guest Editors wish to express their appreciation and deep gratitude to all of the authors and reviewers who contributed to this Special Issue. This research was a part of projects 2020-1.1.2-PIACI-KFI-2020-00173 of the University of Dunaújváros, and GINOP_PLUSZ-2.1.1-21-2022-00249 and 2023-1.1.1-PIACI_FÓKUSZ-2024-00011 of the Óbuda University.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Contributions

References

  1. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Pearson: New York, NY, USA, 2018. [Google Scholar]
  2. Tadic, V.; Odry, A.; Toth, A.; Vizvari, Z.; Odry, P. Fuzzified Circular Gabor Filter for Circular and Near-Circular Object Detection. IEEE Access 2020, 8, 96706–96713. [Google Scholar] [CrossRef]
  3. Tadic, V.; Toth, A.; Vizvari, Z.; Klincsik, M.; Sari, Z.; Sarcevic, P.; Sarosi, J.; Biro, I. Perspectives of RealSense and ZED Depth Sensors for Robotic Vision Applications. Machines 2022, 10, 183. [Google Scholar] [CrossRef]
  4. Wang, S.; Chen, R.; Wu, H.; Li, X.; Feng, Z. YOLOH: You Only Look One Hourglass for Real-Time Object Detection. IEEE Trans. Image Process. 2024, 33, 2104–2115. [Google Scholar] [CrossRef] [PubMed]
  5. Kurka, P.R.G.; Salazar, A.A.D. Applications of image processing in robotics and instrumentation. Mech. Syst. Signal Process. 2019, 124, 142–169. [Google Scholar] [CrossRef]
  6. Holtzman-Gazit, M.; Kimmel, R.; Peled, N.; Goldsher, D. Segmentation of thin structures in volumetric medical images. IEEE Trans. Image Process. 2006, 15, 354–363. [Google Scholar] [CrossRef] [PubMed]
  7. Zhong, S.; Wang, W.; Feng, Q.; Zhang, Y.; Ning, Z. Cross-view discrepancy-dependency network for volumetric medical image segmentation. Med. Image Anal. 2024, 99, 103329. [Google Scholar] [CrossRef] [PubMed]
  8. Ribeiro, J.B.; da Silva, R.R.; Dias, J.D.; Escarpinati, M.C.; Backes, A.R. Automated detection of sugarcane crop lines from UAV images using deep learning. Inf. Process. Agric. 2023, 11, 385–396. [Google Scholar] [CrossRef]
  9. Puchkov, E. Image Analysis in Microbiology: A Review. J. Comput. Commun. 2016, 4, 8–32. [Google Scholar] [CrossRef]
  10. Xue, X.; Ma, T.; Han, Y.; Ma, L.; Liu, R. Learning Deep Scene Curve for Fast and Robust Underwater Image Enhancement. IEEE Signal Process. Lett. 2023, 31, 6–10. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
