Article

Sustainable Automation of Monitoring and Production Accounting in Greenhouse Complexes Using Integrated AI, Robotics, and Data Systems

1 Joint Institute for Nuclear Research, 6 Joliot-Curie, 141980 Dubna, Russia
2 Engineering Modeling Studio, Dubna State University, Universitetskaya str. 19, 141980 Dubna, Russia
* Author to whom correspondence should be addressed.
Sustainability 2026, 18(7), 3620; https://doi.org/10.3390/su18073620
Submission received: 4 March 2026 / Revised: 1 April 2026 / Accepted: 3 April 2026 / Published: 7 April 2026

Abstract

Production greenhouse complexes increasingly require automation and digitalization to address rising labor costs, improve productivity, and support sustainable resource use. However, most existing solutions target isolated tasks and lack a unified framework for continuous monitoring and production-oriented accounting at facility scale. This paper proposes a system-level architecture that integrates robotic monitoring platforms, AI-based perception, and cloud-based data management into a coherent operational framework. The robotic monitoring platforms operate on rails and concrete surfaces and are capable of elevating cameras and sensors up to 5 m to support plant-health assessment, environmental monitoring, and production accounting. Aggregated data are incorporated into a digital twin that supports spatial traceability, historical analysis, and decision support. The proposed approach enables continuous inspection, improves early detection of crop stress, reduces repetitive manual scouting, and supports targeted interventions. The framework provides a scalable foundation for sustainable, data-driven greenhouse management and practical deployment of robotic monitoring systems in industrial production environments.

1. Introduction

Controlled-environment agriculture has become a key component of modern food production systems in Europe, Asia, and North America. Large industrial facilities rely on standardized infrastructures such as tube-rail transport systems, trellis-based crop cultivation, and centralized climate control. Rising labor costs, continuous production requirements, and the need for more efficient use of fertilizers and pesticides have intensified the demand for automation and digitalization in controlled-environment agriculture.
As of 2024, the total greenhouse area in the Russian Federation exceeded 3460 hectares across more than 100 industrial enterprises, with tomatoes and cucumbers as the dominant crops [1,2,3]. The sector is undergoing rapid technological modernization, creating demand for scalable monitoring and management solutions.
Despite this growth, operational monitoring in most facilities remains predominantly manual. Agronomists inspect crop rows sequentially, often requiring several days for full coverage. This limits inspection frequency and delays the identification of early stress symptoms. Pest detection is similarly constrained, as insects may remain concealed within dense canopies and are difficult to detect through visual assessment alone [4,5].
Accounting tasks present an additional challenge. Production managers must routinely count the number of plants per row, estimate the number of developing tops, and quantify fruit load and the number of harvest-ready units. These procedures require repetitive manual labor, are highly subjective, and their precision deteriorates under fatigue and time pressure.
Modern industrial greenhouse complexes typically consist of multiple greenhouse blocks containing hundreds of crop rows. Individual row lengths often exceed 100 m. Such spatial scale and structural density create significant challenges for operational tasks. Representative examples of these environments are shown in Figure 1.
As a result, manual monitoring is labor-intensive, inconsistent, and temporally sparse. Long intervals between inspection rounds reduce information freshness, allowing diseases, wilting, nutrient stress, or pest proliferation to develop unnoticed. A transition toward sustainable, high-efficiency food production therefore requires continuous, high-frequency observation and formal accounting, supported not only by human expertise but also by intelligent autonomous systems capable of uninterrupted operation.
Early greenhouse automation focused primarily on industrial process control, with climate computers regulating environmental parameters such as temperature, ventilation, and irrigation [6]. Subsequent developments introduced distributed sensing and IoT technologies for continuous microclimate monitoring [7,8,9], followed by computer vision systems for crop assessment [10,11] and, more recently, mobile robotic platforms for autonomous scouting and data acquisition [12,13,14]. Despite this technological progress, most deployed solutions remain task-specific and operate as independent subsystems. As a result, greenhouse facilities still lack integrated, production-wide digital models that support continuous monitoring, traceability, and data-driven decision making.

Related Research

Research on autonomous systems for greenhouse complexes has intensified in recent years, with substantial advances in robotic mobility, navigation, and AI-driven perception. Comprehensive reviews have synthesized this rapidly evolving field and consistently highlight both the diversity of applications and the fragmentation of existing technical solutions [15,16].
The main application directions of greenhouse robotics include phenotyping and crop monitoring, transplanting and grafting, spraying and pest control, harvesting, environmental monitoring, and internal logistics. In phenotyping and crop monitoring, robotic platforms have been developed for automated acquisition of morphological and physiological traits, enabling non-destructive stress detection and growth analysis [17]. Transplanting and grafting tasks have been addressed through precision cutting and high-throughput robotic mechanisms designed for commercial-scale nurseries, demonstrating improved accuracy and reduced manual labor [18,19].
For spraying and pest control, mobile manipulators and autonomous vehicles have combined navigation, perception, and targeted pesticide application to enable precision treatment within narrow greenhouse rows [20,21]. Harvesting robots have reached relatively high technological maturity, with several systems demonstrating autonomous fruit detection and picking under commercial greenhouse conditions, including pipe-rail mobile platforms and continuum-arm manipulators tailored for sweet-pepper and tomato harvesting [22,23]. However, these systems remain largely task-specific and focus exclusively on fruit collection.
Robust navigation and environmental monitoring have been supported by LiDAR-based SLAM and multi-sensor fusion approaches that combine laser, inertial, and wheel-odometry data for reliable localization under dense canopy and dynamic conditions [24,25]. Internal logistics has also been explored through intelligent vehicles capable of transporting crops on greenhouse rails and concrete floors, integrating semantic perception and autonomous rail-following strategies [26,27].
These studies demonstrate strong progress in robotic mobility, perception, and task-oriented manipulation. However, most solutions remain vertically integrated and address only a single operational function. Their interoperability is limited, and integration into unified production-wide data ecosystems remains minimal.
From a practical deployment perspective, many robotic prototypes rely on custom and costly hardware that is difficult to maintain and poorly compatible with existing greenhouse infrastructure. Critical issues such as sanitation during row transitions, reliability in humid environments, and interoperability with climate-control and enterprise systems are often insufficiently addressed. This is particularly important given the well-known risk of mechanical pathogen transmission inside greenhouses [28,29,30,31].
Review studies further emphasize that high system cost, energy consumption, maintenance demands, and the need for specialized technical personnel hinder wide industrial adoption. Moreover, most reported solutions remain at low Technology Readiness Levels (TRL 2–4), with validation often limited to laboratory experiments or small pilot trials rather than large operating facilities. Publicly documented systems approaching near-commercial readiness therefore remain rare in the open scientific literature [13].
Overall, most existing solutions remain task-specific and focus on individual operational functions such as harvesting, spraying, phenotyping, or environmental monitoring. These systems typically emphasize either robotic mobility or perception performance, while their integration into unified data management and decision-support frameworks remains limited. As a result, existing approaches are generally fragmented and do not provide continuous, system-level monitoring and accounting at facility scale. In addition, aspects such as multi-height canopy inspection and biosecurity-aware operation (e.g., automated sanitation during inter-row transitions) are rarely addressed in existing implementations.
In contrast, the proposed framework introduces a system-level approach that integrates hybrid rail–ground robotic mobility, multi-level sensing using a lift mechanism, onboard AI-based perception, environmental monitoring, and a cloud-based digital twin within a single operational architecture. Unlike existing solutions, these components are designed to operate as a unified system, enabling continuous monitoring, spatial traceability, and production-oriented accounting at facility scale. This combination of hybrid mobility, multi-height sensing, sanitation-aware operation, and integrated digital twin functionality has not been jointly addressed in existing studies.
Owing to ongoing development and intellectual property considerations, certain low-level technical specifications are not disclosed in detail. Nevertheless, the work contributes a practical robotic platform and a complete software–hardware architecture validated through deployment in operational greenhouse facilities. The architecture and prototype implementation are described in the following sections.

2. Conceptual Framework for Automated Monitoring and Accounting in Greenhouse Complexes

The proposed framework for automated monitoring and accounting in greenhouse complexes is guided by a set of system-level design principles that define its architecture, operational behavior, and sustainability objectives. To improve clarity, these principles can be conceptually grouped into three categories: system integration and scalability, operational continuity and robustness, and data-driven decision support.
From a system perspective, robotic mobility, sensing, AI-based perception, data management, and operator-facing analytics are implemented within a unified cyber–physical architecture. This design enables consistent data flow from field sensing to production-oriented accounting and avoids the fragmentation typical of task-specific solutions.
In terms of scalability, multiple platforms operate simultaneously within the same facility, distributing monitoring tasks spatially and temporally and enabling scalable fleet-based deployment in large industrial complexes. The architecture supports incremental expansion across compartments and facilities. Additional robots, sensors, and analytical modules can be integrated without redesign of the overall system.
Operational continuity and robustness are addressed through several design choices. Continuous automated observation replaces episodic manual scouting, increasing inspection frequency and temporal resolution and supporting earlier detection of crop stress and disease. Robot movement between rows incorporates sanitation and contamination-control procedures to reduce the risk of pathogen transfer within biologically sensitive greenhouse environments.
Finally, the framework emphasizes data-driven decision support. Historical and real-time data are combined within the digital twin to forecast stress, disease, and yield risks, enabling proactive rather than purely reactive decision making. Monitoring outputs are structured to quantify plant counts, fruit load, and productivity metrics, supporting objective yield estimation, planning, and traceable documentation of production states. All sensing and analytical results are integrated into a continuously updated spatial model of the greenhouse, providing visualization, historical analysis, and decision-support capabilities. Spatially localized diagnostics further enable targeted interventions, reducing unnecessary chemical and resource use while maintaining compatibility with existing greenhouse infrastructure.
Together, these principles establish a scalable and production-oriented foundation for integrated greenhouse automation.

2.1. System Architecture Overview

Based on the above design principles, the proposed framework uses a distributed cyber–physical architecture. Mobile robots perform localized sensing and inference, while centralized infrastructure handles aggregation, storage, digital-twin management, and decision-support analytics. This design ensures continuous information flow between the physical greenhouse environment and its digital management layer, while balancing onboard autonomy with centralized coordination.
At the physical level, the system consists of a fleet of autonomous monitoring platforms deployed across greenhouse compartments. Each unit performs navigation, sensing, and local processing, traversing rows using hybrid rail–ground mobility compatible with heating-pipe rails, trolley tracks, and concrete floors. An integrated lift mechanism enables vertical positioning of cameras and sensors up to 5 m, supporting full-canopy inspection across growth stages.
Each platform integrates visual, depth, and environmental sensors that generate spatially referenced observations of plant appearance, canopy geometry, and microclimate conditions. Measurements are synchronized with onboard localization and mapped to a greenhouse coordinate system.
Communication with the backend is realized through bidirectional wireless links. To reduce bandwidth and ensure robustness, primary perception and accounting tasks are executed locally, while only summarized analytical outputs are transmitted to the server. Conversely, the backend provides mission schedules, task updates, and model parameters. This selective transmission strategy maintains operation even under temporary connectivity degradation.
The centralized backend aggregates data from multiple robots and performs cross-platform fusion and higher-level analytics, generating spatial health maps, stress indicators, production summaries, and historical archives for long-term trend analysis and risk assessment. In this way, onboard inference supports local decision-making, while the backend operates at facility scale.
At the application level, the system provides production-oriented monitoring and accounting services. Plant counting, fruit detection, and local yield estimation are performed on the robots, whereas aggregated statistics, spatial production maps, and temporal productivity indicators are maintained centrally and delivered to agronomists through dashboards and analytical interfaces.
All sensing and analytical results are integrated into a cloud-based digital twin that represents greenhouse geometry, crop layout, plant health, environmental fields, and historical production data within a unified spatial–temporal model.
This layered separation of physical execution, local inference, centralized aggregation, and management logic enables scalability, modular expansion, and straightforward integration of additional robots, sensors, and models without structural redesign (Figure 2).

2.2. Robotic Platform Layer

The robotic platform layer provides the physical foundation of the proposed framework, directly implementing the mobility and sensing capabilities defined by the system architecture. Each platform operates as an autonomous mobile unit that integrates hybrid mobility, vertical canopy access, onboard perception and accounting, multi-battery power supply, and biosecurity-aware operation.
Mobility is realized through a hybrid rail–ground configuration that combines rail-guided traversal along heating-pipe tracks with omnidirectional ground motion. Ground mobility is implemented using Mecanum wheels, which enable lateral translation and in-place rotation, allowing efficient maneuvering within service corridors that typically reach widths of approximately 3 m. This omnidirectional capability simplifies repositioning and row alignment and provides fine positioning accuracy to compensate for rail misalignment and long-term mechanical wear. The hybrid locomotion module is illustrated in Figure 3.
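To make the omnidirectional control principle concrete, the sketch below applies the standard Mecanum inverse-kinematics model to convert a commanded body velocity into individual wheel speeds. Only the 152 mm wheel diameter is taken from the prototype description in Section 3; the chassis half-dimensions are hypothetical placeholders, not measured parameters of the platform.

# Minimal sketch of Mecanum-wheel inverse kinematics for omnidirectional motion.
WHEEL_RADIUS = 0.076   # m, from the 152 mm ground wheels; other values assumed
HALF_LENGTH = 0.55     # m, hypothetical distance from chassis center to axle
HALF_WIDTH = 0.40      # m, hypothetical distance from chassis center to wheel

def mecanum_wheel_speeds(vx: float, vy: float, wz: float):
    """Map body velocity (vx forward, vy lateral, wz yaw rate, SI units) to wheel
    angular speeds in rad/s: front-left, front-right, rear-left, rear-right."""
    k = HALF_LENGTH + HALF_WIDTH
    w_fl = (vx - vy - k * wz) / WHEEL_RADIUS
    w_fr = (vx + vy + k * wz) / WHEEL_RADIUS
    w_rl = (vx + vy - k * wz) / WHEEL_RADIUS
    w_rr = (vx - vy + k * wz) / WHEEL_RADIUS
    return w_fl, w_fr, w_rl, w_rr

# Pure lateral translation at 0.2 m/s: left and right wheel pairs counter-rotate
# and the body slides sideways without changing heading.
print(mecanum_wheel_speeds(vx=0.0, vy=0.2, wz=0.0))
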
Navigation relies primarily on RGB–depth perception (Intel RealSense, Intel Corporation, Santa Clara, CA, USA) combined with the structured geometry of greenhouse infrastructure. Rather than employing computationally intensive SLAM or LiDAR-based mapping, which perform poorly in repetitive greenhouse environments, the system exploits fixed row layouts and rail tracks for stable guidance. QR markers placed at row entrances provide absolute localization and eliminate long-term drift, while hardware limits and visual monitoring enforce safe motion.
Vertical canopy access is provided by a scissor-type lift mechanism capable of elevating cameras and sensors up to 5 m, enabling inspection of the full plant profile across growth stages. Compared with commercially available telescopic towers, which are often costly and less mechanically robust under continuous greenhouse operation, the scissor configuration offers improved stability, durability, and cost-effectiveness while remaining easily adaptable to different crop heights. A schematic of the lift mechanism is shown in Figure 4.
Power is supplied through distributed multi-battery modules to improve weight balance and operational endurance. Embedded computing hardware performs primary perception and accounting tasks directly on the platform, including disease detection, stress recognition, plant counting, and fruit detection. This edge-processing strategy reduces communication load and allows autonomous operation during temporary connectivity degradation, while only inference results and selected imagery are transmitted to the backend.
To address greenhouse biosecurity requirements, the platform incorporates an automated sanitation mechanism that disinfects critical components during row-to-row transitions, reducing the risk of pathogen transfer between crop zones (Figure 5).
The robotic platforms provide reliable low-speed autonomous motion, multi-height sensing, onboard inference, and localized data buffering, enabling continuous monitoring and production-oriented accounting under typical greenhouse conditions characterized by high humidity, dense vegetation, and variable illumination. The complete platform configuration is illustrated in Figure 6.

2.3. Sensing and Data Acquisition Layer

Building on the robotic platform layer, the sensing and data acquisition layer defines how environmental and plant-related information is captured and structured. It supplies spatially and temporally referenced observations that continuously update the system’s digital twin and support monitoring, production-oriented accounting, and higher-level analytics under real operating conditions.
Sensor configuration was refined through multiple deployments in commercial greenhouses characterized by dense vegetation, narrow clearances, and highly variable illumination. Early multi-camera lateral layouts increased occlusions and reduced the reliability of accounting tasks. Consequently, the final configuration employs two RGB–depth cameras mounted centrally on the lift and oriented obliquely (≈45°) toward both sides of the row. This geometry improves visibility of lateral foliage surfaces and enhances robustness of plant-top and fruit detection compared with purely frontal or nadir viewpoints (Figure 7).
RGB–depth sensing (e.g., Intel RealSense) combines visual appearance with geometric information, enabling separation of overlapping plant structures and improving detection accuracy under partial occlusion [32,33]. This multimodal approach is particularly beneficial in dense canopies typical of mature tomato and cucumber crops.
To address lighting variability caused by mixed natural and artificial illumination, the platforms incorporate dedicated onboard lighting that provides controlled, localized illumination during image capture. This reduces dependence on ambient conditions and improves the stability of visual features used by perception models.
Beyond vision-based sensing, each platform integrates a modular environmental sensing block for localized microclimate monitoring. Measured parameters include air temperature, relative humidity, illumination intensity and spectrum, and gas concentrations such as CO2 and CH4, with optional integration of additional sensors. The modular design facilitates adaptation to different sensing requirements and straightforward sensor replacement under greenhouse conditions. A promising direction for further extension of the modular sensor suite is the integration of electronic-nose modules for early detection of pest infestation and plant stress [34,35,36]. Spectral light measurements may further assist interpretation of plant responses in facilities equipped with adjustable artificial lighting systems [37,38,39]. All sensor data are synchronized with platform localization and lift position to ensure precise spatial registration within the greenhouse coordinate system.
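As a minimal sketch of how such a modular block can be organized in software, the following Python fragment defines a uniform sensor interface that allows individual modules to be swapped without changing the acquisition pipeline. All class names and placeholder readings are illustrative assumptions rather than the deployed drivers.

from abc import ABC, abstractmethod

# Sketch of a modular sensing interface: each module exposes a uniform read()
# contract, so sensors can be replaced without touching the pipeline.
class SensorModule(ABC):
    @abstractmethod
    def read(self) -> dict:
        """Return a dict of named measurements."""

class ClimateSensor(SensorModule):
    def read(self) -> dict:
        # Placeholder values; a real driver would query the hardware bus.
        return {"air_temp_c": 21.4, "rel_humidity": 78.0}

class GasSensor(SensorModule):
    def read(self) -> dict:
        return {"co2_ppm": 650.0, "ch4_ppm": 2.1}

def sample_environment(modules, position):
    """Aggregate all module readings and tag them with the spatial reference."""
    sample = {"position": position}
    for m in modules:
        sample.update(m.read())
    return sample
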
Data acquisition follows a selective, event-driven strategy: high-frequency streams are processed locally on the robot, while only semantically relevant outputs are transmitted to the backend. These summarized observations update the digital twin and support a closed perception–decision–action workflow.
The sensing layer prioritizes robustness and trend detection rather than laboratory-grade precision; relative changes and spatial patterns are typically more informative for early anomaly identification than absolute measurements [40,41,42]. This multimodal configuration provides a reliable and computationally efficient foundation for AI-based perception and continuous greenhouse monitoring.

2.4. Data Management, Digital Twin, and Decision-Support Layer

At the system level, collected data are processed and integrated within the data management and digital twin layer, which provides centralized coordination and decision support. Communication with the backend is implemented through application programming interfaces (APIs) that enable scalable and asynchronous data exchange between mobile units and the cloud infrastructure.
Data transmission follows a selective, event-driven strategy. Perception, stress detection, and accounting tasks are executed onboard the robots, while only semantically relevant outputs—such as detected anomalies, plant and fruit counts, confidence scores, telemetry, and representative imagery—are transmitted to the server. This approach reduces network load, supports operation under intermittent connectivity, and enables scalable deployment across large facilities.
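For illustration, a summarized row report of the kind described above might resemble the following Python structure; all field names and values are hypothetical and serve only to demonstrate the selective transmission principle, not the system's actual schema.

# Hypothetical example of a summarized observation transmitted to the backend.
row_report = {
    "robot_id": "unit-03",
    "facility": "compartment-B",
    "row_id": 47,
    "timestamp": "2025-11-18T06:42:00Z",
    "plant_count": 112,
    "fruit_count": 861,
    "anomalies": [
        {"type": "wilt", "position_m": 36.5, "confidence": 0.91,
         "image_ref": "frames/row47_0365.jpg"},
    ],
    "environment": {"air_temp_c": 21.4, "rel_humidity": 78.0, "co2_ppm": 650},
}
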
On the server, incoming data are organized using a multi-tenant model that reflects the hierarchical structure of greenhouse operations, supporting multiple organizations, facilities, and robotic platforms with role-based access control. The backend continuously monitors robot status, including connectivity, localization, battery level, and task progress, enabling operators to assess fleet readiness and identify abnormal behavior.
Task coordination is performed through a server-side task constructor that allows authorized users to define and schedule missions by specifying inspection zones, row sequences, monitoring frequency, lift parameters, and priorities. This mechanism enables flexible adaptation of monitoring strategies without direct hardware interaction.
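A mission produced by the task constructor can be pictured as a structured document similar to the hedged sketch below; the field names are illustrative assumptions that mirror the parameters listed above (zones, row sequences, frequency, lift settings, priorities).

# Hypothetical mission definition as it might leave the task constructor.
mission = {
    "mission_id": "daily-scout-017",
    "zone": "compartment-B",
    "rows": list(range(40, 60)),            # inspect rows 40-59 in sequence
    "passes_per_row": 4,                    # multi-height inspection strategy
    "lift_heights_m": [1.0, 2.2, 3.5, 4.8], # within the 5 m lift range
    "frequency": "daily",
    "priority": "normal",
}
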
A central component of the layer is the greenhouse digital twin, which maintains a spatially explicit and continuously updated representation of facility geometry, crop layout, plant condition, environmental measurements, accounting results, and robot activity. Rows are treated as primary spatial entities, allowing precise association of observations and metrics with specific crop segments. Within this framework, physical entities including rows, plants, sensors, and robotic platforms are mapped to their virtual counterparts and continuously updated through incoming data streams, enabling not only visualization but also contextual interpretation of observations and coordinated system operation, including spatially targeted monitoring, task planning, and interaction with robotic platforms at facility scale.
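The row-centric organization of the digital twin can be sketched as a simple data model, as in the following illustrative Python fragment; attribute names are assumptions chosen to mirror the entities named above, not the deployed schema.

from dataclasses import dataclass, field
from typing import Dict, List

# Sketch of a row-level digital-twin entity updated by incoming robot reports.
@dataclass
class RowTwin:
    row_id: int
    compartment: str
    length_m: float
    crop: str                                  # e.g., "tomato" or "cucumber"
    plant_count: int = 0
    fruit_count: int = 0
    health_status: str = "ok"                  # e.g., "ok", "attention"
    environment: Dict[str, float] = field(default_factory=dict)
    history: List[dict] = field(default_factory=list)

    def update(self, report: dict) -> None:
        """Apply an incoming robot report and archive it for trend analysis."""
        self.plant_count = report.get("plant_count", self.plant_count)
        self.fruit_count = report.get("fruit_count", self.fruit_count)
        if report.get("anomalies"):
            self.health_status = "attention"
        self.environment.update(report.get("environment", {}))
        self.history.append(report)
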
Monitoring results are presented through map-based dashboards with row-level status indicators, anomaly markers, and environmental heatmaps. Historical data support temporal trend analysis and evaluation of interventions, while aggregated indicators provide management-level insights into production performance and risk distribution.
The system is designed as a decision-support environment rather than an autonomous control system. It enhances situational awareness and planning while leaving final agronomic and operational decisions to human operators.
This layer provides the informational backbone that integrates robot coordination, digital twin management, and analytical services required for scalable and continuous greenhouse monitoring.

2.5. Sustainability Impact and Deployment Considerations

Beyond technical implementation, the proposed framework has broader implications for sustainability and practical deployment in greenhouse environments.

2.5.1. Environmental, Economic, and Social Implications

Beyond its technical architecture, the proposed framework contributes to sustainability primarily through improved information quality and monitoring continuity rather than through direct actuation. Continuous robotic inspection enables early detection and spatial localization of crop stress and disease, allowing interventions to be limited to affected rows or zones instead of entire compartments, thereby reducing unnecessary chemical applications [43,44].
Although nutrient and climate regulation remain governed by existing greenhouse control systems, timely and spatially explicit diagnostics can support more informed management decisions. The environmental benefit therefore arises from earlier and more targeted responses rather than direct control over resource flows [45].
From an operational perspective, automated monitoring improves labor efficiency. Manual scouting in large facilities often limits inspection frequency to once per week, whereas robotic platforms can operate continuously, including nights and weekends. Based on traversal speed and inspection time measurements obtained during experimental deployment (see Section 3), two platforms are estimated to be sufficient to inspect an entire compartment within approximately one day, enabling daily full-facility coverage. This higher temporal resolution reduces repetitive walking inspections and allows agronomists to focus on diagnosis and intervention.
Production-oriented accounting further improves predictability by providing automated estimates of plant counts, fruit load, and harvest readiness, supporting planning of harvesting, logistics, and workforce allocation [46,47].
Socially, the framework functions as a human-centered decision-support system. By reducing physical workload and providing structured, spatially explicit information, it enhances situational awareness while preserving the central role of agronomists in decision making.

2.5.2. Challenges and Limitations

Although greenhouse infrastructures are often standardized, practical deployment remains constrained by facility-level variability in compartment layouts, crop types, canopy management practices, and the placement of auxiliary equipment. Irregular rail connections, shared corridors, and local obstacles complicate navigation, requiring robotic platforms to maintain operational flexibility.
Environmental conditions further challenge system reliability. Elevated humidity, condensation, and exposure to agrochemicals can reduce the long-term durability of sensors and electronics. At the same time, dense canopies, occlusions, and variable illumination introduce uncertainty into perception tasks. These factors necessitate careful sensor positioning, regular calibration, and cautious interpretation of model predictions.
Finally, organizational factors are critical. Effective adoption requires integration with existing workflows and acceptance by personnel; without routine operational use, the benefits of automated monitoring may not be fully realized.

2.5.3. Opportunities for Future Deployment

The modular architecture of the framework supports gradual enhancement as sensing technologies and analytical methods evolve. Improvements in camera quality, multispectral or hyperspectral imaging, and advanced environmental or volatile-compound sensors may further strengthen early stress detection [34,35,36,37,38,39].
Continuous data accumulation also enables higher-level analytics, including plant-level growth modeling, anomaly prediction, and probabilistic yield forecasting, supporting strategic planning and risk assessment.
While the system could technically support automated actuation, such as spraying or harvesting, these operations remain safety-critical and are better retained under human supervision. Accordingly, the framework prioritizes information and analytics rather than full automation of crop handling.
The platform is intended to evolve as a scalable information and decision-support system that enhances human expertise and promotes sustainable, data-driven greenhouse management.

3. Prototype Implementation and Experimental Validation

3.1. Prototype Architecture and Hardware Implementation

The proposed framework is the result of an iterative process of experimentation and development. Several variants of the robotic test platform were developed and deployed to evaluate mobility, sensing, navigation, onboard perception, and system integration under real greenhouse operating conditions.
While the latest industrially tested prototype does not implement all conceptual elements described in Section 2 in full detail, it represents a practical realization of the framework’s core principles and enables validation of key architectural components in both testbed and production environments.
The platform is designed as a mobile monitoring unit capable of operating within greenhouse rows equipped with tube-rail infrastructure. Its architecture follows a modular design, integrating a hybrid mobility subsystem, a vertical lift mechanism, a multimodal sensing suite, on-board computing hardware, power supply, and wireless communication modules. The prototype prioritizes stable motion, repeatable positioning, and reliable data acquisition, reflecting its primary role as a monitoring and accounting system rather than a manipulation platform.
The current prototype has a footprint of approximately 90 cm in width and 140 cm in length, with an overall height of 90 cm when the lift mechanism is fully folded. The fully equipped platform has a total mass of approximately 150 kg. Power is supplied by three independent, custom-assembled battery modules. Each module consists of 156 battery cells and integrates load-balancing circuitry, a controller unit, and a digital display for local monitoring of operating status. The total energy capacity of the power system is approximately 8541 Wh, supporting extended autonomous operation during monitoring missions. The evolution of the robotic platform from the previously evaluated prototype to the current experimental configuration is illustrated in Figure 8.
Mobility is implemented using a hybrid rail–ground configuration compatible with standard production greenhouses. Rail-based motion is achieved through a dedicated wheel assembly designed to operate directly on heating-pipe rails, providing stable and repeatable traversal along crop rows. To enable transitions between rows and movement within service corridors, the platform is additionally equipped with 152 mm omnidirectional ground wheels, each driven by a 200 RPM DC geared motor, allowing controlled motion on concrete or similar surfaces. Switching between rail and ground modes enables flexible repositioning without manual handling and ensures compatibility with heterogeneous greenhouse layouts.
Vertical access to tall crops is provided by an integrated scissor-type lift mechanism capable of elevating the sensor mast to heights of up to approximately 5 m. This range enables full-canopy inspection of crops such as tomatoes and cucumbers throughout all growth stages. The lift structure is constructed from aluminum profiles, bearings, gas springs, and a 1.3 t automotive winch. The mechanism is designed to maintain structural stability during motion and data acquisition, and lift position is synchronized with the robot’s localization system to support accurate spatial referencing of collected data.
A sanitation subsystem is integrated into the platform to support biosecurity-aware operation when transitioning between rows. The current implementation consists of a compact air compressor, electric faucet, pressure sensor, liquid canister, and a network of hoses and spray nozzles, enabling localized disinfection of components that may come into contact with the greenhouse floor or plant debris.
Control of the platform’s subsystems is distributed across multiple microcontrollers coordinated by a central ATmega2560 (Microchip Technology Inc., Chandler, AZ, USA). All functional modules are implemented as ROS 2 nodes, enabling modular software development, inter-process communication, and integration with higher-level planning and monitoring services.
The sensing configuration focuses on robust visual and environmental monitoring under dense canopy conditions. The primary visual sensing subsystem consists of two cameras mounted on the lift mast and oriented approximately 45° toward the left and right sides of the row. This dual-camera configuration provides overlapping oblique views of the canopy, improving coverage of lateral plant structures and reducing occlusion effects during fruit and plant-top detection. An onboard illumination unit supplies controlled local lighting during image acquisition, reducing the influence of ambient lighting variability. Depth data from the RealSense cameras enable RGB–depth data fusion, improving spatial reasoning during perception and accounting tasks.
Environmental monitoring is performed using a compact sensor block mounted at the top of the lift. The block is based on a Raspberry Pi 4 and supports measurement of air temperature, relative humidity, illumination intensity, and selected gas concentrations. The sensor module is mechanically protected and sealed to withstand high humidity, condensation, and exposure to agrochemicals. Its modular design allows rapid replacement of individual sensing elements without affecting overall system operation.
Navigation and localization are implemented using a vision-based approach tailored to the structured greenhouse environment. The system relies on video streams from the RealSense camera combined with classical computer vision methods and lightweight segmentation models to detect greenhouse structural elements such as rails, row boundaries, and corridor geometry (Figure 9). Semantic segmentation is used to identify navigable regions and relevant visual landmarks, enabling stable row-following behavior without reliance on full-scale SLAM.
To improve positional reliability and support absolute spatial referencing, QR codes are placed at the entrances of each row. These markers are detected by the onboard vision system and used to correct the robot’s position within the greenhouse coordinate system, enabling reliable association of collected data with specific rows and compartments. This hybrid navigation strategy, combining continuous visual guidance with discrete visual markers, provides sufficient accuracy for monitoring and accounting tasks while remaining robust to changing canopy density and lighting conditions.
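A minimal sketch of this marker-based correction, assuming OpenCV's QRCodeDetector and a hypothetical "row:<id>" payload convention, is given below; the reset logic is illustrative rather than the deployed protocol.

import cv2

detector = cv2.QRCodeDetector()

def check_row_marker(frame):
    """Detect a row-entrance QR code and return the encoded row identifier."""
    data, points, _ = detector.detectAndDecode(frame)
    if data.startswith("row:"):
        # On detection, along-row odometry would be reset to the known row
        # origin, eliminating drift accumulated since the previous marker.
        return int(data.split(":")[1])
    return None
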
On-board computing is implemented using an embedded edge-processing workstation (Lenovo platform equipped with an Intel Xeon E3-1245 v6 CPU, 16 GB DDR4 RAM, and NVIDIA Quadro M1200M GPU), capable of executing AI-based perception and navigation models in real time. Core tasks such as disease and stress detection, plant and fruit identification, accounting-related feature extraction, and navigation inference are performed locally on the platform. This design reduces communication load and allows continued operation during temporary network disruptions. Processed results, system telemetry, and selected image fragments associated with detected anomalies are transmitted to the backend infrastructure via wireless communication, while mission updates and task parameters are received from the server.
At its current stage, the prototype supports autonomous traversal of greenhouse rows, vertical lift operation, synchronized multimodal data acquisition, vision-based navigation with QR-code-assisted localization, and bidirectional communication with the cloud-based management system.

3.2. Perception Models, Sensor Configuration, and Software Interfaces

The perception and software stack of the prototype were designed to support robust monitoring and production-oriented accounting under real greenhouse conditions, characterized by dense canopy structures, frequent occlusions, and highly variable illumination. In contrast to laboratory-oriented systems, the design emphasis was placed on robustness, interpretability, and operational reliability, rather than on maximizing performance on narrowly defined benchmark datasets.
Visual perception tasks are primarily addressed using object detection and lightweight segmentation models from the YOLO family (YOLOv12s), selected to balance inference accuracy with real-time execution constraints on embedded computing hardware. The core perception objectives include detection of plant tops, identification of fruits, recognition of visually observable stress symptoms, and extraction of accounting-related features at the row level. Evaluation on previously collected and manually annotated greenhouse datasets demonstrated high detection accuracy, with plant-top detection achieving mAP@0.5 values above 0.95 for both tomato and cucumber crops, and cucumber wilt/stress detection achieving mAP@0.5 above 0.94. The datasets used for training and evaluation were collected under real greenhouse conditions and included annotated samples with at least 400 labeled objects for each perception task.
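For illustration, detection inference of this kind can be run with the ultralytics Python package as sketched below; the weights file name, input image, and class labels are hypothetical stand-ins for the authors' trained models.

from ultralytics import YOLO

# Hedged sketch of plant-top detection with a YOLO-family model.
model = YOLO("plant_tops_yolo12s.pt")   # hypothetical fine-tuned weights

results = model.predict("row47_frame0365.jpg", conf=0.5, imgsz=640)
for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{cls_name}: ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), "
          f"conf={float(box.conf):.2f}")
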
Production-oriented accounting, specifically plant counting and fruit load estimation, is implemented using Python 3.9 and C++17 object-tracking scripts that follow detected instances across consecutive frames and along the robot trajectory, enabling consistent aggregation of counts while reducing double-counting and missed detections. Plant-top detection and counting demonstrated stable performance during continuous row traversal, with no missed detections observed in the recorded experimental sequences. Quantitative evaluation of fruit-counting error is ongoing; however, stable tracking behavior was observed during continuous traversal.
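The aggregation principle can be illustrated with a deliberately simplified tracker: detections are matched to existing tracks by bounding-box overlap, and each unmatched detection opens a new track, so the final number of tracks approximates the object count. The threshold is illustrative, and the production scripts are substantially more elaborate.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0

def count_objects(frames, iou_thresh=0.3):
    """frames: list of per-frame detection lists, each box as (x1, y1, x2, y2)."""
    tracks = []                      # last known box of every track
    for detections in frames:
        unmatched = list(range(len(tracks)))
        for det in detections:
            best, best_iou = None, iou_thresh
            for i in unmatched:
                overlap = iou(det, tracks[i])
                if overlap > best_iou:
                    best, best_iou = i, overlap
            if best is not None:
                tracks[best] = det   # continue the matched track
                unmatched.remove(best)
            else:
                tracks.append(det)   # new track: one more object counted
    return len(tracks)
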
All perception and accounting tasks are executed locally on the robotic platform. Runtime performance tests showed that GPU-accelerated Python inference exceeded 30 frames per second, while optimized C++ CPU implementations achieved approximately 15 frames per second. These processing rates substantially exceed the requirements imposed by the robot traversal speed (0.5 m/s), ensuring reliable real-time operation without frame drops during monitoring. Representative examples of detection, tracking, and production-oriented accounting outputs obtained during greenhouse operation are shown in Figure 10.
However, as data acquisition is performed automatically and under varying illumination and canopy density, potential robustness issues may arise in more complex scenarios. For this reason, the system architecture is designed to transition toward full RGB–depth perception as the primary sensing modality in future iterations.
Depth information acquired from the Intel RealSense D455 cameras can play a critical role in improving perception robustness. Depth cues are fused with RGB imagery to support spatial reasoning in situations where visual appearance alone is ambiguous. In dense canopies, overlapping leaves and fruits frequently obscure each other, making purely RGB-based detection unreliable. Depth data enable estimation of relative distances, separation of foreground and background structures, and improved filtering of detections originating from adjacent rows. The dual-view configuration also improves temporal consistency of object tracking and reduces missed detections caused by self-occlusion of fruits and leaves, which is common in dense tomato and cucumber canopies.
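A hedged sketch of such depth-based background suppression, assuming the pyrealsense2 package and an illustrative 2 m cut-off between the inspected row and its neighbors, is shown below.

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
align = rs.align(rs.stream.color)   # register depth pixels to the color frame

try:
    frames = align.process(pipeline.wait_for_frames())
    depth_m = np.asanyarray(frames.get_depth_frame().get_data()) * depth_scale
    color = np.asanyarray(frames.get_color_frame().get_data())
    # Keep only pixels within the current row; detections whose boxes fall
    # mostly outside this mask can be discarded as adjacent-row background.
    near_mask = (depth_m > 0) & (depth_m < 2.0)
finally:
    pipeline.stop()
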
To ensure reliable absolute localization and consistent data association, QR codes placed at row entrances are detected by the onboard vision system and used as discrete reference points. In addition, wheel encoders and electronic estimation of platform lift height are used to synchronize the robot’s position within the greenhouse coordinate system. This combination allows perception outputs and accounting results to be consistently linked to specific rows and compartments, which is essential for longitudinal analysis and integration with the digital twin.
Interaction between the robotic platforms and the backend infrastructure is realized through a set of application programming interfaces (APIs) designed for event-driven communication. These interfaces support transmission of (i) the current operational status and state of each robot, (ii) detailed summaries of analyzed rows, (iii) environmental sensor measurements, and (iv) structured descriptions of detected anomalies and problem areas within rows. In addition, robotic platforms periodically query the backend for updates on new mission assignments, scheduling changes, and configuration parameters.
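The exchange pattern can be sketched as follows; the base URL, endpoint paths, and authentication scheme are hypothetical placeholders rather than the system's actual API.

import requests

BASE = "https://greenhouse-backend.example.com/api/v1"  # hypothetical

def push_row_report(report: dict, token: str) -> None:
    """Transmit a summarized row analysis to the backend."""
    requests.post(f"{BASE}/rows/reports", json=report,
                  headers={"Authorization": f"Bearer {token}"}, timeout=10)

def poll_missions(robot_id: str, token: str) -> list:
    """Periodically query the backend for new or updated mission assignments."""
    resp = requests.get(f"{BASE}/robots/{robot_id}/missions",
                        headers={"Authorization": f"Bearer {token}"}, timeout=10)
    resp.raise_for_status()
    return resp.json()
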
On the server side, incoming data are ingested into the data management and digital twin layer described previously. The backend maintains the current operational state of each robot, aggregates perception and accounting results, and provides task definitions to the platforms through a task-constructor interface. This bidirectional interaction enables flexible scheduling of monitoring missions, dynamic adjustment of inspection priorities, and coordination of multiple robotic units operating within the same facility. Several main interfaces are presented in Figure 11.
The perception and software architecture of the prototype reflect a pragmatic balance between algorithmic sophistication and operational reliability. The system provides a robust foundation for continuous monitoring and accounting in industrial greenhouse environments while remaining extensible for future enhancements.

3.3. Experimental Setup, Evaluation Methodology, and Results

Experimental evaluation of the developed robotic platform is conducted as an ongoing, iterative process accompanying system development. Evaluation activities are organized into complementary stages that support continuous refinement rather than a single, finalized validation cycle. To date, testing has been carried out both on a dedicated experimental testbed and in real industrial greenhouse environments. The testbed stage enables controlled assessment of individual subsystems and long-duration stress testing, while greenhouse deployments provide validation of system behavior under practical operating conditions.

3.3.1. Testbed Evaluation

A dedicated testbed was constructed to evaluate mechanical, electrical, and control components of the platform both individually and in combination. The testbed includes several segments of tube-rail infrastructure representative of production greenhouse layouts, as well as concrete surfaces for ground-based motion. This setup enabled repeated execution of controlled experiments and stress tests without interfering with active crop production. Representative views of the testbed configurations used during development and stress testing are shown in Figure 12.
Testbed experiments confirmed stable and reliable platform motion on concrete surfaces using omnidirectional wheels, with no loss of traction or control instability observed during prolonged operation. Rail-based mobility was also validated, including traversal over tube rails with minor obstacles and surface irregularities. Extended stress tests combining continuous rail motion and lift actuation were conducted for several hours of uninterrupted operation, during which no mechanical failures were observed. Manual entry onto tube rails was performed reliably, and automated rail entry achieved consistent performance under standard operating conditions. To further enhance reliability in dynamically changing lighting environments, integration of depth-camera data is planned to complement the existing vision-based alignment approach.
Automated inter-row transitions operated consistently during experimental deployment. Ongoing algorithmic development targets enhanced absolute localization accuracy and improved platform boundary awareness, including optimization of visual marker design, refined detection of corridor edges, and more robust verification of correct rail engagement. The scissor-type lift mechanism operated consistently throughout the test campaign, providing stable elevation of the sensor mast; occasional interference with flexible cable conduits was observed, suggesting minor mechanical refinements, but no critical failures occurred. Power consumption measurements aligned with design expectations, and no thermal or electrical anomalies were detected. Long-duration stress tests further demonstrated energy autonomy and mechanical robustness. The platform operated continuously for up to 48 h without recharging while performing repeated monitoring and navigation cycles, indicating that multi-day operation is feasible without manual battery intervention.
While the testbed enables controlled and repeatable evaluation of mechanical, electrical, and control subsystems, it does not fully reproduce the environmental conditions of operational greenhouses, such as high humidity, elevated CO2 levels, and complex lighting variations. Therefore, testbed experiments are primarily intended for subsystem validation and stress testing, while full system performance under realistic environmental conditions requires dedicated greenhouse deployment.

3.3.2. Greenhouse Evaluation

Following testbed validation, the platform was deployed in an operational industrial greenhouse cultivating cucumbers and tomatoes. The most recent experimental campaign was conducted in late 2025 at the Podmoskovye greenhouse complex operated by the Eco-Cultura company (Russia). The experimental campaign was conducted on approximately 20 greenhouse rows under active production conditions. The evaluated crops were in the fruiting stage, although most fruits had not yet reached harvest maturity. The tomato height during the experiments reached up to approximately 2.5 m, resulting in dense canopy conditions representative of industrial greenhouse operation.
This stage of evaluation focused on assessing system behavior under real production conditions, including dense canopy structures, variable natural and artificial lighting, and the presence of agricultural personnel (Figure 13).
During greenhouse deployment, the platform successfully performed autonomous traversal of crop rows, synchronized lift operation, and data acquisition. Wireless connectivity inside the greenhouse proved sufficient for near–real-time data transmission to an external server, enabling continuous interaction with the cloud-based management system. Storage and charging logistics were also evaluated, and placement of charging stations directly inside the greenhouse was identified as the most practical solution, as it eliminates the need for repeated sanitation procedures associated with external storage and transport.
The experiments confirmed the feasibility of continuous monitoring workflows, including repeated row inspections and aggregation of data for digital twin integration. At the same time, greenhouse operation revealed several design limitations that informed subsequent refinement of both the conceptual framework and later prototype iterations. These included excessive platform width relative to standard greenhouse corridors, suboptimal placement of cameras and illumination modules affecting field of view and lighting uniformity, and limitations in obstacle detection and rail-termination awareness. Additional improvement directions were identified, including further reduction in lift width, integration of forward obstacle sensing, refinement of the lift sanitation mechanism, enhancement of waterproofing for lower platform components, expansion of the environmental sensor block with additional spectral and gas-analysis capabilities, and systematic evaluation of alternative camera configurations and onboard computing requirements.
Operational performance metrics were also measured during testbed and greenhouse trials. The platform achieved a ground-travel speed of approximately 1.0 m/s on concrete surfaces and a stable rail-travel speed of 0.5 m/s on tube-rail infrastructure. Full deployment or retraction of the scissor-lift mechanism required no more than 5 s. From the moment a row entrance QR marker is detected, automatic rail engagement typically requires approximately 20 s. Repositioning between adjacent rows without turning maneuvers requires no more than 10 s, while transitions between rows located on opposite sides of the concrete corridor require less than 40 s.
During greenhouse deployment, perception models were validated at the real operating speed of 0.5 m/s. Plant-top detection and counting remained stable with no missed detections observed during continuous row traversal, confirming that the achieved inference rates are sufficient for practical operation.
Industrial greenhouse compartments considered in this study can reach row lengths of up to 120 m and may contain approximately 100–120 rows. To ensure complete canopy coverage at multiple heights, each row is inspected using a multi-pass strategy in which the platform traverses the row several times while adjusting the lift position between passes. In the current configuration, a full inspection cycle involves four traversals of the row combined with brief lift adjustments. At the measured rail speed, traversal of a 120 m row requires approximately 240 s per pass, resulting in an effective inspection time of approximately 16–17 min per row, including lift actuation and repositioning. Consequently, monitoring of large greenhouse facilities can be distributed across multiple robotic units operating in parallel. Based on the measured inspection time, a compartment containing 100–120 rows requires approximately 28–34 h for complete coverage by a single platform under continuous operation. Deployment of two robotic units is expected to reduce this time to less than 24 h, enabling at least daily full-facility monitoring. The reported traversal speed, inspection time, and system operation metrics are based on direct experimental measurements, whereas the full-facility coverage time and multi-robot deployment scenarios represent analytical estimates derived from these measurements. These estimates indicate the potential scalability of the proposed fleet-based architecture for industrial-scale greenhouse environments.
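These estimates can be reproduced directly from the measured quantities, as in the short calculation below; the per-row overhead term is an assumed value chosen to match the reported 16–17 min per row.

# Coverage-time estimate from measured speed and the multi-pass strategy.
RAIL_SPEED_MS = 0.5       # m/s, measured rail-travel speed
ROW_LENGTH_M = 120.0      # m, longest rows considered
PASSES_PER_ROW = 4        # multi-height inspection passes
ROW_OVERHEAD_S = 60.0     # assumed lift actuation and repositioning per row

pass_time_s = ROW_LENGTH_M / RAIL_SPEED_MS                    # 240 s per pass
row_time_s = PASSES_PER_ROW * pass_time_s + ROW_OVERHEAD_S    # ~17 min per row
for n_rows in (100, 120):
    single_h = n_rows * row_time_s / 3600.0
    print(f"{n_rows} rows: {single_h:.1f} h with one robot, "
          f"{single_h / 2:.1f} h with two")
# Prints roughly 28.3 h and 34.0 h for one platform (14.2 h and 17.0 h for two),
# matching the 28-34 h single-robot and sub-24 h two-robot figures in the text.
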
Greenhouse deployment validated the practical feasibility of the proposed monitoring approach while simultaneously highlighting design trade-offs and improvement pathways inherent to operating robotic platforms in real industrial greenhouse environments.
While the presented results demonstrate the practical feasibility and operational potential of the proposed system, they should be interpreted with consideration of the current stage of development. The experimental validation was conducted under a limited set of greenhouse conditions and operational scenarios and therefore does not yet fully capture the variability of different crop types, layouts, and long-term operational factors. Such extended validation requires significant time and operational resources; the present study therefore focuses on demonstrating the system architecture, implementation, and initial performance under representative conditions. Further large-scale and extended-duration experiments are identified as a direction for future work, including validation across diverse greenhouse conditions and quantitative assessment of system performance and sustainability-related effects.

4. Discussion

This study demonstrates the feasibility of integrating robotic platforms, AI-based perception, and cloud-based data management into a unified framework for continuous greenhouse monitoring and production-oriented accounting. In contrast to task-specific robotic solutions reported in the literature, the proposed approach emphasizes system-level integration, spatial traceability, and operational continuity at facility scale.
Experimental deployments demonstrate that hybrid rail–ground mobility, lift-based multi-height sensing, and onboard inference can operate reliably under dense canopy conditions typical of commercial greenhouses. The achieved inspection throughput indicates that a small fleet of platforms can provide daily coverage of large compartments, substantially increasing temporal resolution compared with manual scouting routines. This higher monitoring frequency enables earlier detection of localized stress and supports more targeted interventions.
Practical operation also reveals several constraints, including limited navigation robustness near rail transitions, reduced sensor durability under high-humidity conditions, and the necessity of careful integration with existing workflows. These observations emphasize that successful greenhouse automation depends not only on perception performance but also on mechanical reliability, system maintainability, and organizational acceptance.
The results suggest that meaningful sustainability gains may arise primarily from improved information quality and timeliness rather than from full task automation. Continuous, spatially explicit monitoring enhances human decision making while avoiding the complexity and risks associated with fully autonomous crop manipulation.
In addition to the above observations, several practical limitations of the current study should be explicitly noted. First, experimental validation was conducted within a limited number of greenhouse environments and crop configurations, which may not fully represent the variability of industrial greenhouse systems. Second, the perception models were evaluated primarily on tomato and cucumber crops under specific lighting and canopy conditions, and their performance across other crop types and growth stages requires further investigation. Third, while the proposed architecture supports multi-robot deployment, large-scale coordination and long-term continuous operation were not yet validated under full production conditions. Finally, the current evaluation focuses on feasibility and operational performance and does not yet include comprehensive statistical analysis or long-term reliability assessment.
These limitations highlight the need for further experimental validation across diverse greenhouse environments and extended-duration deployments, as well as additional investigation of perception robustness and system-level scalability.

5. Conclusions

This work presented and experimentally validated a system-level framework for automated monitoring and production-oriented accounting in industrial greenhouse complexes, integrating mobile robotic platforms, onboard AI perception, and cloud-based digital twin management. A functional prototype demonstrated reliable hybrid rail–ground mobility, multi-height sensing up to 5 m, real-time perception, and automated production accounting under real greenhouse operating conditions. The results show that continuous robotic inspection, automated accounting, and structured data integration enable daily facility-scale monitoring and provide a practical foundation for scalable and sustainable greenhouse digitalization.
Beyond technical feasibility, the proposed framework has the potential to deliver practical sustainability benefits. Continuous automated inspection reduces reliance on repetitive manual scouting, increases monitoring frequency from episodic to daily coverage, and enables spatially targeted interventions that can reduce unnecessary chemical and resource use. By improving information quality and timeliness rather than replacing human expertise, the system supports more efficient, data-driven, and resilient greenhouse operations.
Future work will focus on addressing the identified limitations through large-scale and long-term deployments across diverse greenhouse environments, as well as systematic evaluation of perception robustness across different crop types and growth stages. Additional efforts will be directed toward improving navigation reliability, expanding multimodal sensing capabilities, and validating multi-robot coordination under full production conditions, with the goal of transitioning the system from prototype validation to routine industrial operation.

Author Contributions

Conceptualization, A.U.; methodology, A.U.; software, A.U. and A.D.; validation, A.U., L.T., A.D. and M.I.; formal analysis, A.U.; investigation, A.U., A.D., L.T. and M.I.; data curation, A.U.; writing—original draft preparation, A.U.; writing—review and editing, A.U., L.T., A.D. and M.I.; visualization, A.U. and M.I.; supervision, A.U.; project administration, A.U. and L.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restrictions related to ongoing research and potential commercial applications.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Typical industrial greenhouse environments used in this study: (A) aerial view of a greenhouse complex in the Moscow region; (B) interior view showing long crop rows; and (C) entrance to a crop row with tube-rail infrastructure and dense canopy.
Figure 2. Conceptual architecture of the proposed distributed greenhouse monitoring framework.
Figure 3. Schematic realization of the hybrid locomotion module: (A) assembled unit; (B) disassembled unit showing the wheel, drive, and rail-guidance components.
Figure 4. Schematic realization of the scissor-lift mechanism: (A) folded configuration; (B) 1/3 extended configuration.
Figure 5. Schematic realization of the automated sanitation mechanism: (A) standalone disinfectant delivery module; (B) system integrated into the robotic platform positioned to disinfect the lift mechanism and other contamination-prone surfaces.
Figure 6. Overall mechanical layout of the robotic monitoring platform: (A) front view; (B) side view.
Figure 7. Sensor and lighting configuration of the monitoring platform: (A) oblique view of the sensing mast and embedded electronics; (B) frontal view.
Figure 8. Robotic monitoring platform prototypes: (A) version evaluated during the previous greenhouse deployment; (B) updated configuration without protective enclosures.
Figure 9. Vision-based navigation pipeline: (A) segmented navigable area and structural elements; (B) rail-line extraction and QR-marker detection for row alignment and localization.
Figure 10. Examples of perception and production-oriented accounting. (A) Cucumber plant-top detection and counting; (B) fruit detection with multi-object tracking and fruit-load estimation; (C) tomato plant-top detection and counting.
Figure 11. Frontend interface of the proposed greenhouse monitoring system. (A) Digital twin visualization showing row-level monitoring and spatial localization of detected anomalies. (B) Robot control interface with live camera feed and remote operation panel.
Figure 12. Experimental testbed used for subsystem validation and stress testing. (A) Initial testbed configuration with three tube-rail rows; (B) extended evaluation setup with a two-row configuration used for long-duration stress tests and integrated system validation.
Figure 13. Robotic monitoring platform prototype and deployment in greenhouse conditions. (A) Side view of the complete platform; (B) underside view during rail engagement; (C) autonomous operation inside a crop row during monitoring and data acquisition.