
Advances in Machine Vision for Industry and Agriculture

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 July 2025 | Viewed by 6182

Special Issue Editors


Guest Editor
Department of Electrical & Computer Engineering, Faculty of Engineering, University of Western Macedonia, 50100 Kozani, Greece
Interests: machine vision; recycling; robotics; renewable energy sources; embedded systems; environmental sustainability; neural networks; waste management; convolutional neural networks; object detection

Guest Editor
Department of Communication and Digital Media, University of Western Macedonia, 52100 Kastoria, Greece
Interests: virtual and augmented reality; computer vision; image and video processing; image analysis; machine learning; pattern recognition with applications in medical image analysis and biometrics

Guest Editor
Laboratory of Robotics, Embedded and Integrated Systems, Department of Electrical and Computer Engineering, University of Western Macedonia, 50131 Kozani, Greece
Interests: computer architecture; robotics; embedded and cyber-physical systems; gamification; Internet of Things; security; hardware/software co-synthesis

Special Issue Information

Dear Colleagues,

Machine vision has become increasingly useful in recent years, particularly in the industrial and agricultural sectors, where significant scope remains for further research. This Special Issue invites researchers and practitioners to showcase their latest advancements, methodologies, and insights in these areas, fostering collaboration and knowledge exchange to address current challenges and drive innovation forward.

Prospective authors are invited to submit original research articles, review papers, and short communications relevant to the theme of this Special Issue. Submissions will undergo a rigorous peer-review process to ensure the quality and significance of accepted contributions. Contributions are invited on a broad spectrum of topics related to machine vision in industry and agriculture, including, but not limited to, the following:

  1. Machine vision in precision agriculture;
  2. Automated visual inspection;
  3. Novel computational methods and methodologies in machine vision;
  4. Experimental techniques and validation studies for machine vision in industry and agriculture;
  5. Pattern recognition;
  6. Robot applications utilizing machine vision;
  7. Image analysis for machine vision;
  8. Robot control using machine vision;
  9. Agrobots utilizing machine vision;
  10. Case studies, practical applications, and real-world challenges in the sector of machine vision.

Dr. Dimitris Ziouzios
Dr. Michalis Vrigkas
Dr. Minas Dasygenis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine vision
  • precision agriculture
  • recycling
  • agrobots
  • industry 4.0
  • smart systems
  • vision algorithms
  • robotics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

17 pages, 10577 KiB  
Article
Research on the Method of Crop Pest and Disease Recognition Based on the Improved YOLOv7-U-Net Combined Network
by Wenchao Xiang, Zitao Du, Xinran Liu, Zehui Lu and Yuna Yin
Appl. Sci. 2025, 15(9), 4864; https://doi.org/10.3390/app15094864 - 27 Apr 2025
Viewed by 81
Abstract
This paper proposes an improved YOLOv7-U-Net combined network for crop pest and disease recognition, aiming to address the insufficient accuracy of existing methods. For the YOLOv7 network, a self-attention mechanism is integrated into the SPPCSPC module to dynamically adjust channel weights and suppress redundant information, while the PAFPN structure is optimized to enhance cross-scale feature fusion and improve small-object detection capabilities. For the U-Net network, the CBAM attention module is added before the decoder skip connections, and depthwise-separable convolutions replace traditional kernels to strengthen feature fusion and attention to detail. Experimental results show the improved algorithm achieves 97.49% detection accuracy, with mean average precision (mAP) reaching 96.91% and detection speed increasing to 90.41 FPS. The loss function of the improved U-Net network decreases towards 0 over training iterations, validating its effectiveness. The study shows that the improved YOLOv7-U-Net combined network provides a more effective solution for crop pest and disease detection.
(This article belongs to the Special Issue Advances in Machine Vision for Industry and Agriculture)
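The abstract above swaps the U-Net's standard convolution kernels for depthwise-separable ones. A minimal sketch of why that substitution is attractive, counting the learnable weights of a standard 3×3 convolution versus a depthwise-separable replacement (the channel sizes are illustrative, not taken from the paper):

```python
# Parameter counts (weights only, biases ignored) for a standard k x k
# convolution versus a depthwise-separable one, as commonly used to
# lighten U-Net-style decoders.

def standard_conv_params(c_in, c_out, k=3):
    # one k x k filter per (input channel, output channel) pair
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    # depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 convolution that mixes channels
    return c_in * k * k + c_in * c_out

c_in, c_out = 64, 128  # hypothetical channel sizes
print(standard_conv_params(c_in, c_out))        # 73728
print(depthwise_separable_params(c_in, c_out))  # 8768
```

For these illustrative sizes the separable variant uses roughly 8.4× fewer weights, which is the usual motivation for the swap.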

21 pages, 7718 KiB  
Article
Effect of Selected Optical Navigation Methods on the Energy Consumption of Automated Guided Vehicles
by Krzysztof Balawender, Artur Jaworski, Mirosław Jakubowski and Hubert Kuszewski
Appl. Sci. 2025, 15(5), 2494; https://doi.org/10.3390/app15052494 - 26 Feb 2025
Viewed by 521
Abstract
The study compares the performance parameters of an Automated Guided Vehicle (AGV) equipped with optical navigation, focusing on two solutions: one utilizing reflective optocouplers and the other employing a camera. These components, commonly used in AGV optical navigation systems, differ in factors such as cost and the sophistication of control methods. The primary objective of the research was to evaluate the performance criteria of the analyzed optical navigation methods, with particular attention paid to electricity consumption, power profiles during specific transit tasks, and total transit time. The analysis also investigated two potential installation locations for the reflective optocouplers and the camera on the vehicle. The results indicate that the camera-based optical navigation method is more efficient. Specifically, the average energy consumption was approximately 26% lower when using the camera compared to the reflective optocouplers. Furthermore, the study revealed that the location of the camera had minimal influence on the vehicle’s energy consumption, whereas the location of the reflective optocouplers significantly affected energy usage.

20 pages, 8703 KiB  
Article
Depth-Oriented Gray Image for Unseen Pig Detection in Real Time
by Jongwoong Seo, Seungwook Son, Seunghyun Yu, Hwapyeong Baek and Yongwha Chung
Appl. Sci. 2025, 15(2), 988; https://doi.org/10.3390/app15020988 - 20 Jan 2025
Viewed by 1316
Abstract
With the increasing demand for pork, improving pig health and welfare management productivity has become a priority. However, it is impractical for humans to manually monitor all pigsties in commercial-scale pig farms, highlighting the need for automated health monitoring systems. In such systems, object detection is essential. However, challenges such as insufficient training data, low computational performance, and generalization issues in diverse environments make achieving high accuracy in unseen environments difficult. Conventional RGB-based object detection models face performance limitations due to brightness similarity between objects and backgrounds, new facility installations, and varying lighting conditions. To address these challenges, this study proposes a DOG (Depth-Oriented Gray) image generation method using various foundation models (SAM, LaMa, Depth Anything). Without additional sensors or retraining, the proposed method utilizes depth information from the testing environment to distinguish between foreground and background, generating depth background images and establishing an approach to define the Region of Interest (RoI) and Region of Uninterest (RoU). By converting RGB input images into the HSV color space and combining HSV-Value, inverted HSV-Saturation, and the generated depth background images, DOG images are created to enhance foreground object features while effectively suppressing background information. Experimental results using low-cost CPU and GPU systems demonstrated that DOG images improved detection accuracy (AP50) by up to 6.4% compared to conventional gray images. Moreover, DOG image generation achieved real-time processing speeds, taking 3.6 ms on a CPU, approximately 53.8 times faster than the GPU-based depth image generation time of Depth Anything, which requires 193.7 ms.
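The DOG construction described above (HSV-Value plus inverted HSV-Saturation, with the depth-derived background suppressed) can be sketched in a few lines of NumPy. The equal-weight fusion and the hard background zeroing are assumptions for illustration; the abstract does not give the paper's exact combination rule:

```python
import numpy as np

def dog_image(rgb, depth_bg_mask):
    # rgb: H x W x 3 float array in [0, 1];
    # depth_bg_mask: H x W bool, True where the depth background image
    # marks background (the RoU).
    mx = rgb.max(axis=2)                 # HSV Value channel
    mn = rgb.min(axis=2)
    # HSV Saturation, guarded against division by zero for black pixels
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)
    # assumed equal-weight fusion of Value and inverted Saturation
    gray = 0.5 * (mx + (1.0 - sat))
    gray[depth_bg_mask] = 0.0            # suppress background via depth info
    return gray
```

A saturated orange pixel such as (1.0, 0.5, 0.0) maps to 0.5 here, while any pixel flagged as background by the depth mask is forced to 0, regardless of its colour.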

19 pages, 6836 KiB  
Article
Mobile Augmented Reality Application to Evaluate the River Flooding Impact in Coimbra
by Mehdi Lamrabet, Rudi Giot, Jorge Almeida and Mateus Mendes
Appl. Sci. 2024, 14(21), 10017; https://doi.org/10.3390/app142110017 - 2 Nov 2024
Viewed by 2012
Abstract
The downtown area of the city of Coimbra, Portugal, is at low altitude and has historically suffered floods that have caused serious economic losses. The present research proposes a mobile augmented reality (MAR) application aimed at visualising the effect of possible scenarios of flooding in an area of higher risk in the city. A realistic 3D model of the city was created, using data extracted with BLosm and processed through Blender, followed by its integration into Unity with Vuforia for AR visualisation. The methodology encompasses the extraction and simplification of 3D models, mapping real-world coordinates in Unity, analysing several datasets, obtaining a model through regression and implementing a workflow to manage interactions between various Unity objects. The MAR application enables users to visualise potential flood impacts on buildings, utilising colour-coded indicators to represent different levels of water contact. The system’s efficacy was evaluated by simulating various use-case scenarios, demonstrating the application’s capability to provide real-time, interactive flood risk assessments. The results underline the potential of integrating AR and machine learning for enhancing urban flood management and prevention.
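The colour-coded water-contact indicators mentioned above can be sketched as a simple mapping from a simulated water level to a per-building colour. The three-band scheme and the 50% threshold are hypothetical; the abstract does not specify the application's actual colour rules:

```python
def flood_indicator(base_elevation_m, building_height_m, water_level_m):
    # Classify how far a simulated water level reaches up a building.
    contact = water_level_m - base_elevation_m
    if contact <= 0.0:
        return "green"                   # no water contact
    if contact < 0.5 * building_height_m:
        return "yellow"                  # partial contact (assumed threshold)
    return "red"                         # severe contact

# a building whose base sits at 20 m elevation, 10 m tall:
print(flood_indicator(20.0, 10.0, 18.0))  # green
print(flood_indicator(20.0, 10.0, 22.0))  # yellow
print(flood_indicator(20.0, 10.0, 27.0))  # red
```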

18 pages, 8143 KiB  
Article
Fuzzy Classification of the Maturity of the Orange (Citrus × sinensis) Using the Citrus Color Index (CCI)
by Marcos J. Villaseñor-Aguilar, Miroslava Cano-Lara, Adolfo R. Lopez, Horacio Rostro-Gonzalez, José Alfredo Padilla-Medina and Alejandro Israel Barranco-Gutiérrez
Appl. Sci. 2024, 14(13), 5953; https://doi.org/10.3390/app14135953 - 8 Jul 2024
Cited by 2 | Viewed by 1699
Abstract
The orange (Citrus × sinensis) is a fruit of the Citrus genus, which is part of the Rutaceae family. The orange has gained considerable importance due to its extensive range of applications, including the production of juices, jams, sweets, and extracts. The consumption of oranges confers several nutritional benefits, including flavonoids, vitamin C, potassium, beta-carotene, and dietary fiber. It is crucial to acknowledge that the primary quality criterion employed by consumers and producers is maturity, which is correlated with the visual quality associated with the color of the epicarp. This study proposes the implementation of a computer vision system that estimates the degree of ripeness of Valencia oranges using fuzzy logic (FL); the soluble solids content was determined by refractometry, while the firmness of the fruit was evaluated through the fruit firmness test. The proposed method was divided into five distinct steps. The initial stage involved the acquisition of RGB images. The second stage presents the segmentation of the fruit, which entails the removal of extraneous noise and backgrounds. In the third and fourth steps, the centroid of the fruit was determined and five regions of interest, ranging from 3 × 3 to 11 × 11 pixels, were obtained around the centroid to compute the Citrus Color Index (CCI). Finally, in the fifth step, a model was created to estimate maturity, °Brix, and firmness using Matlab 2024 and the Fuzzy Logic Designer and Neuro-Fuzzy Designer applications. Consequently, a statistically significant correlation, with a value greater than 0.9, was established between maturity, °Brix, and firmness using the CCI, which reflects the physical–chemical changes that occur in the orange.
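The Citrus Color Index used above is conventionally defined in CIELAB coordinates as CCI = 1000·a*/(L*·b*), with negative values indicating green fruit and positive values orange fruit. A minimal sketch pairing it with a triangular fuzzy membership function of the kind built in Fuzzy Logic Designer (the membership breakpoints below are hypothetical, not the paper's):

```python
def citrus_color_index(L, a, b):
    # Standard CCI in CIELAB space: 1000 * a / (L * b)
    return 1000.0 * a / (L * b)

def triangular(x, lo, peak, hi):
    # Triangular fuzzy membership: 0 outside [lo, hi], rising to 1 at the peak
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

cci = citrus_color_index(L=60.0, a=10.0, b=50.0)  # about 3.33
ripe = triangular(cci, 0.0, 5.0, 10.0)            # membership in a "ripe" set
print(round(cci, 2), round(ripe, 2))
```

A full fuzzy classifier would evaluate several such membership functions (e.g. unripe, semi-ripe, ripe) over the CCI and combine them with rules; the sketch shows only the core index and one membership evaluation.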
